Pentagon adopts ‘AI Ethical Principles’ for its killer robots

Trusting a machine to scan aerial imagery in search of targets is a legal and ethical minefield


As the United States looks to develop artificial intelligence weapons to keep up with Russia and China, the Pentagon has adopted a set of guidelines that it says will keep its killer androids under human control.

The Department of Defense adopted its “Ethical Principles for Artificial Intelligence” on Monday, following recommendations made by the Defense Innovation Board last October.

“AI technology will change much about the battlefield of the future, but nothing will change America’s steadfast commitment to responsible and lawful behavior,” Defense Secretary Mark Esper said in a statement.

According to the Pentagon, its future AI projects will be “Responsible, Equitable, Traceable, Reliable,” and “Governable,” as the Defense Innovation Board recommended. Some of these five principles are straightforward, once you decipher the Pentagon-speak: “Governable,” for example, means that humans must be able to flip the off switch on “systems that demonstrate unintended behavior.” But others are more ambiguous.

What exactly the department will do to “minimize unintended bias in AI capabilities,” its stated means of keeping these systems “equitable,” is vague, and may cause problems down the line if left undefined. Trusting a machine to scan aerial imagery in search of targets is a legal and ethical minefield; in 2018, Google pulled out of a Pentagon project that would have used machine learning to improve the targeting of drone strikes.

Similarly, the Pentagon’s promise that its staff will “exercise appropriate levels of judgment and care” when developing and fielding these new weapons is a lofty but ultimately meaningless pledge.

The adoption of a loose set of ethical principles instead of an outright ban will leave some campaigners unsatisfied. Many leading figures in AI, including Demis Hassabis of Google DeepMind and Elon Musk of SpaceX, are among more than 2,400 signatories to a pledge opposing the development of autonomous weapons. Numerous other open letters and petitions against military AI have circulated worldwide in recent years.

Resistance from the tech industry presents the Pentagon with a practical dilemma, as well as an ethical one. Despite pumping increasing sums of money into developing AI systems, the US believes Russia and China are ahead and will extend their lead in this domain if the Defense Department can’t recruit the talent needed to compete.

To counter the brain drain, the Trump administration’s proposed $4.8 trillion 2021 budget would hike the Defense Advanced Research Projects Agency’s (DARPA) funding for AI-related research from $50 million to $249 million, and increase the National Science Foundation’s funding from $500 million to $850 million, with $50 million set aside specifically for AI.

Whatever devices DARPA comes up with, if this set of guidelines is followed, at least they’ll have an ‘off’ switch.


