Can computer algorithms learn to fight wars ethically?
The Pentagon is investing in weapons that can decide when to kill on the battlefield. If it succeeds, it remains an open question whether these autonomous weapons will make moral decisions better than humans do, or become the kind of nightmare long depicted in science fiction. Zachary Fryer-Biggs considers the question: "The scale of the exercises at West Point, in which roughly 100 students have participated so far, is small, but the dilemmas they present are emblematic of how the US military is trying to come to grips with the likely loss of at least some control over the battlefield to smart machines. The future may well be shaped by computer algorithms dictating how weapons move and target enemies. And the cadets’ uncertainty about how much authority to give the robots and how to interact with them in conflict mirrors the broader military’s ambivalence about whether and where to draw a line on letting war machines kill on their own."
From The Washington Post