
Restricting the Robotic Arms Race

Killer robots: They have certainly made their debut as the stars of popular science fiction novels and films. But could they ever be a real threat to humanity? Many ethicists and military officials say yes — and soon.

As early as World War I, the US military began testing prototypes of drones, the pilotless robotic bombers now used by militaries around the world. Although mankind’s increasing reliance on drones has accelerated the mechanization of warfare, even the most advanced drones still require humans to make the decision to kill. Drones fly unmanned, but human operators and military officers control their flights to varying degrees and dictate when to strike.

As technology evolves, this may be changing. Drones and other robotic weapons are becoming increasingly autonomous, or independent of human decision-making. According to a recent report by Human Rights Watch, robotic weapons that select and attack targets without any human control could be feasible within the next 20 to 30 years.

“Turning decisions to take life over to machines is a route we shouldn’t go down,” said Wendell Wallach, a lecturer at the Yale Interdisciplinary Center for Bioethics. Wallach belongs to a growing group of ethicists and military officials around the world who are morally opposed to robotic weapons selecting and killing targets without humans directly behind the controls. He has issued a proposal to set clear limits on autonomous killer robots before they can be deployed.


An Ethicist’s Idea

“I never think of myself as passionate about robotics,” said Wallach. Despite his background as founder and president of two computer consulting companies (Farpoint Solutions and Omnia Consulting, Inc.), he remains an ethicist at heart rather than a technology enthusiast.

Wallach has chaired the Technology and Ethics working group at the Yale Bioethics Center since 2005. His research at the Center has helped establish a new field of inquiry: machine ethics, which explores the potential for robots to make moral decisions.

Wallach first became involved in weapons talks in 2010, a year after co-authoring the book Moral Machines: Teaching Robots Right from Wrong. He was invited to a workshop in Berlin, Germany, organized by the International Committee for Robot Arms Control. After this first meeting, which brought together some of the most renowned ethicists and activists in military robotics, international arms control negotiators, and experts in international humanitarian law, Wallach remained involved in discussions about the limits of robotic weaponry. Two years later, on a return trip from a conference on directed energy weapons at Arizona State University, it occurred to him how best to turn his thoughts into action.

“I was looking over at the Capitol during a changeover at the Reagan Airport, and an idea flashed in my mind,” said Wallach. He realized that a good place to start setting safe limits on robotic weapons would be the White House. He went on to draft a proposal for an executive order limiting killer robots, which has since been circulating in military and executive circles.

Needless to say, no prior policy, military or otherwise, has ever dealt with decision-making robots. “In effect, we are writing ethics as we go along,” said Wallach. To set a firm foundation for arms agreements, Wallach’s proposal calls for President Obama to make a clear declaration that allowing robots to make decisions to kill would violate existing international humanitarian law.


The Shift Toward Autonomy

As of now, no military has developed or deployed a fully autonomous killing machine. However, some robotic models are showing a shift toward greater autonomy. The US Navy has designed the X-47B, which can fly and land on aircraft carriers, taking directions from onboard computers rather than humans with remote controls. While Navy officials have stated that there is no intention to use the aircraft for combat, the model includes two weapons bays with the capacity to carry bombs or missiles.

The UK’s Taranis aircraft is designed to identify and strike targets at distances as far as separate continents. Although the machine will be monitored by human crews who must approve its attacks, Taranis is one of many bombers marking a shift toward more autonomous weaponry.
IMAGE COURTESY OF BUSINESSINSIDER.COM

Other countries’ military technology is headed in a similar direction. The UK’s 2010 Taranis prototype also holds two weapons bays and is built to strike targets from across a continent. Although the UK Ministry of Defence has stated that the aircraft will have human operators, it has been described as a “fully autonomous intelligent system,” and it is unclear whether future models will be built to strike of their own volition. Meanwhile, Israel is developing the Harpy, an unmanned aircraft that flies independently of human control, patrolling areas and attacking when hostile radar signals are detected.

This shift toward autonomy can be detected even at the level of semantics. Current semi-autonomous robotic weapons are typically termed “human in the loop,” meaning that a human must order a lethal strike before the machine may act. However, the US Air Force has switched to the phrase “human on the loop” to describe future weapons. In “human on the loop” systems, robots will strike targets autonomously, but human overseers will be watching, poised to veto their decisions.

In a recent paper, Wallach raised two issues with moving humans outside the decision-making loop: speed and culpability. Although the “on the loop” model assumes that humans will intervene if a robot makes a bad decision, humans are unlikely to keep up with the speed of autonomous robots’ computing and actions as models become more advanced. Allowing robots to decide when to kill also means removing blame from humans when something goes wrong; it would be nearly impossible to put a machine on trial for breaking the laws of war.

Although competition among nations is a significant pull, the race toward robot autonomy also rides on human welfare considerations. “A lot of arguments for lethal autonomous weapons are coming from people who want to save the lives of soldiers,” said Wallach. “But this benefit could be at a great price.”

While substituting robots for people would keep some soldiers off the battlefield, it could also result in a greater loss of civilian lives. In a current semi-autonomous system, a human operator might see that a drone is targeting a site near a funeral gathering, judge that the number of potential civilian casualties is too great, and decide not to strike. Robots would likely be unable to make such nuanced decisions — at least not until technology has developed much further.

More autonomous robots could also mean more war. If we could cut down the expense of war by decreasing the number of human fighters involved, would we be quicker to enter wars? And if we could place the burden of deciding to kill entirely on machines, would we be quicker to kill?


The Resistance

Wallach’s proposal began circulating in February 2012. Early in 2013, a group of 48 non-governmental organizations in 23 countries mobilized to form the Campaign to Stop Killer Robots. While this larger campaign lobbies for an international ban on killer robots, Wallach continues to believe that an executive order from the President at the national level would be an important first step.

“My proposal is out there, but it’s small potatoes compared to the international campaign,” said Wallach. Even so, he sees strategic value in the US taking the initiative to set a ban.

“The US isn’t likely to go along with any campaign unless it has an early say,” said Wallach. Although the drive to deploy autonomous robots is a global issue, he believes that starting with a statement from the US President would help counteract resistance to banning lethal autonomous weapons.

It is unclear as of now whether plans for the US military point toward a ban. On November 19, 2012, the US Department of Defense issued a directive stating its intention to expand autonomous robotic weaponry, but some readers interpreted the directive as a promising requirement that humans stay “in the loop.” Wallach’s proposal remains in circulation and has gained the support of several former military generals.

Technological races in warfare have gotten disastrously out of hand in the past. The nuclear escalation of the Cold War and the ongoing proliferation of cruise missiles have already demonstrated the danger of building weapons that escalate tensions.

Robotic warfare may be another arms race of the modern age, but history need not repeat itself if nations can agree on clear restrictions. In the words of Wallach, “There is an opportunity to put a line in the sand now.”


About the Author: Jessica Hahne is a junior English major in Silliman College. She was the 2013 Editor-in-Chief of the Yale Scientific Magazine.

Acknowledgements: The author would like to thank Wendell Wallach for sharing his knowledge of machine morality and for voicing his ethical opinions on robotic warfare.

Further Reading:

  • Wendell Wallach, “Terminating the Terminator: What to Do About Autonomous Weapons,” Science Progress, 2013.
  • Wendell Wallach and Colin Allen, “Framing Robot Arms Control,” Ethics and Information Technology 15 (2012): 125-135. doi: 10.1007/s10676-012-9303-0.
  • Human Rights Watch and International Human Rights Clinic, “Losing Humanity: The Case Against Killer Robots,” 2012.
  • Christof Heyns, “Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions,” United Nations General Assembly, 2013.
  • Wendell Wallach and Colin Allen, Moral Machines: Teaching Robots Right from Wrong (New York: Oxford University Press, 2009).
  • Campaign to Stop Killer Robots. “Call to Action.” Last Modified 2013. https://www.stopkillerrobots.org/call-to-action/