Science vs. The Apocalypse: Robot Takeover

Image courtesy of Flickr.

In science fiction movies such as The Terminator and The Matrix, robots develop such sophistication that they overpower the humans who invented them. Amid rapidly advancing robot technology, such a scenario may seem less like a fictional trope and more like a real possibility. How will humans interact with our increasingly lifelike robots? Using a game theory experiment involving human-robot cooperation, University of Plymouth scientist Debora Zanatto and her team endeavored to find out.

When we interact with other humans, we seek empathy and positive feelings. The instinct of human-human interaction is to reward others’ cooperation with cooperation of our own and to punish their selfishness with selfishness of our own. Zanatto’s team set up a scenario to investigate the implications of this instinct for human-robot interactions.

Participants in the study played an investment game in which collaborating with a robotic teammate was essential to success. The human and the robot teammate each started the game with a set amount of virtual money, and each independently decided how much of it to invest with a robot banker. Both were then given the opportunity to alter their choices after seeing the other’s investment, with the human participant making the final choice. Invested money could be returned to each teammate with a profit, but the robot banker deducted money whenever the robot’s and human’s investments differed by a substantial amount.
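To make the payoff structure concrete, here is a minimal sketch of one round, written in Python. The exact amounts, the profit formula, and the mismatch penalty are assumptions of this sketch; the article specifies only the two return ranges (described in the next paragraph) and that the banker deducted money when the investments differed substantially.

```python
import random

# A "generous" banker returns fifty to eighty percent, a "mean" banker
# zero to thirty percent (the two conditions described in the study).
# Since the article says invested money "could be returned with a
# profit," this sketch reads the percentage as a profit rate on top of
# the investment; the study's exact formula is an assumption.
def banker_payout(investment, generous):
    rate = random.uniform(0.5, 0.8) if generous else random.uniform(0.0, 0.3)
    return investment * (1.0 + rate)

# The threshold and penalty values are hypothetical: the article says
# only that the banker deducted money whenever the two investments
# differed "by a substantial amount."
def round_payoffs(human_inv, robot_inv, generous,
                  mismatch_threshold=20.0, penalty=10.0):
    human_gain = banker_payout(human_inv, generous)
    robot_gain = banker_payout(robot_inv, generous)
    if abs(human_inv - robot_inv) > mismatch_threshold:
        human_gain -= penalty
        robot_gain -= penalty
    return human_gain, robot_gain
```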

Three factors varied. The robot banker was either “generous,” returning fifty to eighty percent of each investment to the players, or “mean,” returning zero to thirty percent. The robot teammate was programmed to follow either a collaborative strategy, adapting its investment to match the human participant’s, or a fixed strategy, investing regardless of the human’s choice. Finally, the robot teammate was either immobile and mute or anthropomorphized: it looked at the human participant, followed a verbal script, and pointed to its investment choices on the screen. Each human participant played two games with the same robot, which used the collaborative strategy in one and the fixed strategy in the other.
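Read as an experimental design, this is a two-by-two-by-two manipulation in which the teammate’s strategy varied within participants (each person faced both strategies) while the robot’s embodiment stayed constant for a given participant. Whether banker generosity varied within or between participants is not stated; the sketch below, with condition names of my own choosing, treats it as a between-participant factor.

```python
from itertools import product

# The three manipulated factors, as described in the article.
BANKER = ("generous", "mean")              # 50-80% vs. 0-30% return
STRATEGY = ("collaborative", "fixed")      # adapts to the human vs. ignores them
EMBODIMENT = ("anthropomorphic", "inert")  # gaze, speech, pointing vs. mute and immobile

# Each participant kept the same banker and the same robot embodiment,
# but played one game against each teammate strategy.
def games_for(banker, embodiment):
    return [{"banker": banker, "embodiment": embodiment, "strategy": s}
            for s in STRATEGY]

# Across participants, banker x embodiment yields four groups,
# each playing two games.
schedule = [games_for(b, e) for b, e in product(BANKER, EMBODIMENT)]
```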

When the payoff was high, with a “generous” robot banker, participants were more likely to cooperate with the mute, collaborative robot, suggesting a preference for subservient, less humanoid qualities when there was more to be gained. When the payoff was low and the robot banker was “mean,” however, participants cooperated more with the anthropomorphic, collaborative-strategy partner, suggesting a preference for empathy and anthropomorphism in times of difficulty.

“Everything was related to the criticality of the situation. When the payoff was low, the participant had a tendency to cooperate with the confederate showing more human-like behavior. On the opposite, when the payoff was high, the participants couldn’t care less about the confederate’s social skills—they even seemed to prefer a robot that wore robotic features expected of non-human agents,” Zanatto explained.

Furthermore, after the two games, participants completed a multi-part questionnaire assessing their perceptions of the credibility, trust, animacy, and likeability of the robot teammates. They ranked the fixed-strategy anthropomorphic robot highest in every category. Together, the results point to two distinct conclusions. How humans collaborate with a robot depends on the difficulty of the challenge: we always value robotic cooperativeness, but depending on the scenario, we value either anthropomorphism or machine-like stillness. Positive perception of a robot, by contrast, favors anthropomorphism and behavioral consistency regardless of the scenario.

Could this be cause to fear future manipulation at the hands of robots? Zanatto believes human-robot interactions have the power to improve not only our lives, but also our understanding of our own behavior. Yet in some ways, she suggests, a pernicious robotic future may already be here: “Some will trust fake news and incorrect medical information on the internet, but not a plastic robot, because the internet follows our stereotypical view of non-human technology,” Zanatto said.

In any case, we should be cognizant of the power and problems of anthropomorphic technology in other situations. “Robots are programmed; they can break and cease to work. If we become too trusting of the broken code and machine, we might follow its incorrect suggestions. It is like the problem we face with human interactions, following cues that lead us to trust someone we should not,” Zanatto said.