Machine Morality: Computing Right and Wrong

Imagine a future in which artificial intelligence can match human intelligence, and advanced robotics is commonplace: robotic police guards patrol the streets, smart cars yield to one another, and robotic babysitters care for children. Such a world may appear to lie in the realm of science fiction, but many of its features are increasingly realistic. While the benefits of such a world are enticing and fascinating, advanced artificial intelligence and robots bring a whole set of ethical challenges.

Wendell Wallach, a scholar at Yale’s Interdisciplinary Center for Bioethics, researches the ethical challenges posed by future technologies and how society can accommodate the changes they may bring. Wallach is a leading researcher in the field of machine ethics, also known as robot ethics, machine morality, or friendly AI. The central question of machine ethics is: “How can we implement moral decision-making in computers and robots?” This inherently interdisciplinary field lies at the interface of philosophy, cognitive science, psychology, computer science, and robotics.

The da Vinci Surgical System, an example of robotics performing a highly critical role. Courtesy of Wikipedia.

Different Levels of Moral Agency

As artificial intelligence and robotics continue to advance, we approach the possibility of computer systems making morally significant decisions on their own as artificial moral agents (AMAs). Wallach proposes a continuum of moral agency for all technology, from everyday objects completely lacking agency to full-fledged sentient robots with full moral agency. The continuum runs along two dimensions: autonomy, which indicates what the technology has the power to do on its own, and ethical sensitivity, which reflects what inputs the technology can use to make decisions. For example, a hammer has no autonomy and no sensitivity, while a thermostat has sensitivity to temperature and the autonomy to turn on a furnace or a fan.
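As a rough illustration of this two-dimensional continuum, the sketch below places a few technologies on invented autonomy and sensitivity scales. The scores are hypothetical and only meant to show relative positions; they are not taken from Wallach's framework.

```python
# Illustrative only: invented scores placing technologies on the two dimensions.
from dataclasses import dataclass

@dataclass
class Technology:
    name: str
    autonomy: float      # 0..1: what the technology has the power to do on its own
    sensitivity: float   # 0..1: which ethically relevant inputs it can respond to

examples = [
    Technology("hammer",          autonomy=0.0, sensitivity=0.0),
    Technology("thermostat",      autonomy=0.1, sensitivity=0.1),
    Technology("caregiver robot", autonomy=0.6, sensitivity=0.5),
]

# Moving up and to the right on both scales corresponds to greater moral agency.
for t in sorted(examples, key=lambda t: t.autonomy + t.sensitivity):
    print(f"{t.name:15s} autonomy={t.autonomy:.1f} sensitivity={t.sensitivity:.1f}")
```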

As robots gain greater autonomy and sensitivity, they also take on greater moral agency. Wallach explains that even the most advanced machines today have only operational morality: the moral significance of their actions lies entirely with the humans involved in their design and use, far from full moral agency. The scientists and software architects designing today’s robots and software can generally anticipate all the possible scenarios the robot will encounter. Consider a robot caregiver taking care of the elderly. The designers of the robot can anticipate possible ethically charged situations, such as a patient refusing to take medication. Because the robot’s autonomy and sensitivity are limited, the designers can feasibly account for all possible situations, and the desired behavior in expected situations can be programmed directly.
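A minimal sketch of what that might look like for a hypothetical caregiver robot: every ethically charged situation the designers anticipated maps directly to a prescribed response, so the machine itself makes no moral judgment. The situations and responses below are invented for illustration.

```python
# Operational morality, hard-coded: the "morality" lives entirely in the
# designers' choices, not in the robot itself.
ANTICIPATED_RESPONSES = {
    "patient_refuses_medication": "remind the patient, then notify a human nurse",
    "patient_falls": "call emergency services and alert family",
    "patient_requests_privacy": "leave the room and pause recording",
}

def respond(situation: str) -> str:
    # Anything the designers did not anticipate is escalated to a person.
    return ANTICIPATED_RESPONSES.get(situation, "defer to a human operator")

print(respond("patient_refuses_medication"))
print(respond("some_unanticipated_situation"))
```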

But what happens when the designers can no longer predict the outcomes? When both autonomy and sensitivity increase, greater moral agency and more complex systems arise. Functional morality refers to the ability of an AMA to make moral judgments when deciding a course of action without direct instructions from humans.

Moral agency increases as autonomy and ethical sensitivity increase. Courtesy of Moral Machines (Oxford University Press).

Wallach explains that there are two basic approaches to implementing machine morality, top-down and bottom-up, as well as a hybrid of the two. In a top-down approach, a limited number of rules or principles governing moral behavior are prescribed and implemented. The top-down approach characterizes most moral frameworks in philosophy, such as Kant’s Categorical Imperative, utilitarianism, the Ten Commandments, or Isaac Asimov’s Three Laws of Robotics.
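As a toy illustration of the top-down idea, the sketch below screens a proposed action against a small prescribed rule set in priority order, loosely modeled on Asimov's laws. The action attributes and rule tests are hypothetical.

```python
# Top-down sketch: prescribed rules, checked in priority order.
RULES = [
    ("do not harm humans",    lambda a: not a.get("harms_human", False)),
    ("obey human orders",     lambda a: not a.get("disobeys_order", False)),
    ("protect own existence", lambda a: not a.get("self_destructive", False)),
]

def first_violation(action: dict):
    """Return the highest-priority rule the action violates, or None if allowed."""
    for name, test in RULES:
        if not test(action):
            return name
    return None

proposed = {"harms_human": False, "disobeys_order": True}
print(first_violation(proposed))   # -> "obey human orders"
```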

Bottom-up approaches, which take their inspiration from evolutionary and developmental psychology as well as game theory, instead have the system learn appropriate responses to moral considerations. Rather than selecting a specific moral framework, the objective is to provide an environment in which appropriately moral behavior develops, roughly analogous to how most humans “learn” morality: growing children gain a sense of what is right and wrong from social context and experience. Techniques such as evolutionary algorithms, machine learning, or direct manipulation to optimize a particular outcome can then be applied to help the machine achieve a goal.
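As a rough sketch of the bottom-up idea, the toy agent below has no rules at all; it simply adjusts its preference for each response according to approval or disapproval from an invented "social environment," loosely analogous to a child learning from social reactions. The responses, feedback, and learning rate are all hypothetical.

```python
import random

# Bottom-up sketch: preferences are shaped by feedback, not by encoded rules.
responses = ["help", "ignore", "ask_permission"]
preference = {r: 0.0 for r in responses}

def social_feedback(response: str) -> float:
    # Invented environment: approves of helping or asking, disapproves of ignoring.
    return 1.0 if response in ("help", "ask_permission") else -1.0

random.seed(0)
for _ in range(200):
    # Occasionally explore; otherwise act on the current preferences.
    if random.random() < 0.1:
        choice = random.choice(responses)
    else:
        choice = max(preference, key=preference.get)
    # Nudge the chosen response's preference toward the feedback received.
    preference[choice] += 0.1 * (social_feedback(choice) - preference[choice])

print(preference)  # "help" and "ask_permission" are reinforced; "ignore" is not
```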

Wallach notes that both approaches have their weaknesses. The broad principles in top-down approaches may be flexible, but they can also be too broad or abstract, making them less applicable to specific scenarios. Bottom-up approaches are good at combining different inputs, but they can be difficult to guide toward an explicitly ethical goal. Ultimately, AMAs will need both top-down principles as an overall guide as well as the flexible and dynamic morality of bottom-up approaches.
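One way to picture such a hybrid, as a rough sketch only: a learned, bottom-up score proposes the most desirable action, while fixed, top-down principles veto any candidate that violates them. The scoring function and constraint below are stand-ins, not anything from Wallach's work.

```python
# Hybrid sketch: bottom-up scoring proposes, top-down rules dispose.
def learned_score(action: dict) -> float:
    # Stand-in for a bottom-up, learned estimate of how desirable an action is.
    return action.get("expected_benefit", 0.0)

def violates_principles(action: dict) -> bool:
    # Stand-in for a fixed, top-down constraint such as "do not harm humans".
    return action.get("harms_human", False)

def choose(actions):
    allowed = [a for a in actions if not violates_principles(a)]
    return max(allowed, key=learned_score) if allowed else None

candidates = [
    {"name": "administer_double_dose", "expected_benefit": 0.9, "harms_human": True},
    {"name": "call_nurse",             "expected_benefit": 0.6, "harms_human": False},
]
print(choose(candidates)["name"])   # -> "call_nurse"
```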

The creators of ASIMO hope that their robot can assist people in the home, even those confined to a bed or wheelchair. Courtesy of Honda.

Challenges in Machine Morality

Two main challenges stand in the way of implementing moral decision-making. The first is implementing the chosen approach computationally. For example, utilitarianism might look attractive because it is inherently computational: choose the action that produces the result with the highest utility. But what is the stopping point for what counts as a result of an action? How far into the future is an AMA expected to calculate? Furthermore, how does one computationally define utility, and how does an AMA evaluate the utility of different outcomes? The difficulty of computationally instantiating decision-making is also showcased in the short stories of Isaac Asimov, in which robots obey three laws in order of priority: 1) do not harm humans, 2) obey humans, and 3) protect their own existence. Asimov wrote more than 80 short stories exploring how many unexpected and potentially dangerous situations arise from the combination of these rules. Furthermore, to function properly in a society of humans, AMAs may require the computational instantiation of human capabilities beyond reason, many of which we take for granted, such as emotions, social intelligence, empathy, and consciousness.
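The stopping-point question can be made concrete with a toy calculation: in the sketch below, which action an act-utilitarian calculation favors depends entirely on how far into the future the consequences are summed. The actions and utility numbers are invented for illustration.

```python
# Predicted utilities of two hypothetical actions over successive time steps.
consequences = {
    "action_A": [5, 1, 1, 1],    # good immediately, modest benefits later
    "action_B": [-2, 4, 4, 4],   # costly immediately, better in the long run
}

def total_utility(action: str, horizon: int) -> int:
    # Sum consequences only up to the chosen stopping point.
    return sum(consequences[action][:horizon])

for horizon in (1, 2, 4):
    best = max(consequences, key=lambda a: total_utility(a, horizon))
    print(f"horizon={horizon}: best action is {best}")
# With a horizon of 1 or 2 steps the calculation favors action_A; with a
# horizon of 4 it favors action_B. The "right" answer shifts with the
# stopping point, which is exactly the difficulty raised above.
```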

The second problem for implementing moral decision-making is what Wallach calls the “frames problem.” How does a system even know it is in a morally significant situation? How does it determine which information is morally relevant for making a decision and whether sufficient information is available? How does the system realize it has applied all considerations appropriate for the situation?
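A small sketch of why this is hard: a system can only flag a situation as morally significant using features someone has already told it to watch for, so anything outside that list is silently missed. The feature names below are hypothetical and deliberately incomplete.

```python
# Hypothetical allow-list of features the designers thought to include.
MORALLY_RELEVANT = {"involves_injury", "involves_consent", "involves_deception"}

def morally_significant(situation_features: set) -> bool:
    # The system can only recognize features it was told to watch for.
    return bool(situation_features & MORALLY_RELEVANT)

print(morally_significant({"involves_injury", "indoors"}))        # True: anticipated
print(morally_significant({"involves_financial_exploitation"}))   # False: missed
```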

Practical Difficulties

With all of these complicated questions, one might wonder just how far along modern-day technology is. Wallach explains that while we are far away from any machines with full moral agency, it is not too early to give serious consideration to these ethical questions. “We are beginning to have driverless cars on the road, and soon there will be surveillance drones in domestic airspace. We are not far away from introducing a robot to take care of the elderly at home. We already have low-budget robots that entertain: robot nannies and robopets.”

With the advent of robots in daily life, many security, privacy, and legal quagmires remain unresolved. Robots placed in domestic environments raise privacy concerns: to perform their jobs, they will likely need to record and process private information, and if they are connected to the Internet, they can potentially be hacked. Security is even more critical for robotic systems performing life-critical roles, such as pacemakers, cars, or planes, where failure could be catastrophic and directly result in deaths.

Google’s self-driving cars, which are being piloted in Nevada, pose legal issues as well. How do we legally resolve a complicated accident involving a self-driving car? What should a self-driving car do if a situation forces it to choose between two options that both might cause loss of human life? Wallach poses a question: suppose self-driving cars are found to cause 50 percent fewer accidents than human drivers. Should we reward the robot companies for reducing deaths, or sue them for the accidents in which robot cars are still involved? Wallach says, “If you can’t solve these ethical problems of who’s culpable and who’s liable, you’ll have public concern about letting robots into the commerce of daily life. If you can, new markets open up.”

Google’s self-driving car is already on the roads in Nevada. Legal and ethical hurdles are still in the way of mass adoption and acceptance. Courtesy of Gawker.

The Future of Machine Morality

Wallach ultimately tries to anticipate what sort of frameworks could be put in place to minimize the risks and maximize the benefits of a robot-pervasive society. Wallach points out that considering the ethical implications of AMAs falls into the broader discipline of engineering ethics and safety. Engineers need to be sensitive to these ideas when they think about the safety of their systems. Balancing safety and societal benefit has always been a core responsibility of engineering; today’s systems, however, are rapidly approaching the complexity where the systems themselves will need to make moral decisions. Thus, Wallach explains that “moral decision making can be thought of as a natural extension to engineering safety for systems with more autonomy and intelligence.”

When asked whether ethics should be a priority, Wallach responds with fervor: “I think it will have to be. There remain some technical challenges, but we will have to think through these problems eventually for our society to be comfortable accepting robots and AI as part of everyday life.”

As robots become more advanced and involved in our lives, tackling potential ethical issues also becomes more important. Courtesy of NASA.

About the Author
Sherwin Yu is a senior in Morse College studying Computer Science and Molecular Biophysics & Biochemistry.

Acknowledgements
The author would like to thank Wendell Wallach for his help in writing this article.

Further Reading
Wendell Wallach and Colin Allen. Moral Machines: Teaching Robots Right from Wrong. Oxford University Press, 2009.
Wendell Wallach. Robots to Techno Sapiens: Ethics, Law and Public Policy in the Development of Robotics and Neurotechnologies. Law, Innovation and Technology (2011) 3:185-207.