Cooperating to Understand Cooperation

In a prisoner’s dilemma, virtual agents use either intuition or deliberation to decide how to act in one-shot or repeated interactions. The decisions they make provide insight into when people cooperate and when they act selfishly.

Four people work in a group, each given a sum of money, and each is asked whether to contribute some of it to a public pot. The choice is thus made clear: selflessly contribute, letting the pooled money (typically multiplied by some factor) be divided evenly among everyone, or selfishly keep the money while still sharing in what the others contribute.

Such is the setup of a public goods game, which, along with other game-theoretic scenarios like the prisoner’s dilemma, provides insight into inherent human tendencies toward cooperation and selfishness. These games have inspired the research of Adam Bear, a fourth-year Ph.D. candidate in Psychology, and David Rand, Associate Professor of Psychology, Economics, and Management.
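As a rough illustration, the payoff structure of such a public goods game can be sketched in a few lines of code. The endowment and pot multiplier below are illustrative assumptions, not values from the study:

```python
def public_goods_payoffs(contributions, endowment=10.0, multiplier=2.0):
    """Return each player's payoff: what they kept of their endowment,
    plus an even share of the multiplied public pot."""
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)
    return [endowment - c + share for c in contributions]

# Three players contribute everything; one free-rides.
payoffs = public_goods_payoffs([10, 10, 10, 0])
# The free-rider ends up ahead of the contributors,
# which is exactly the tension the game is built around.
```

The dilemma is visible in the numbers: everyone is collectively better off if all contribute, but each individual does better by keeping their money and sharing in the others' contributions.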

Basing their work on empirical data from these games, Bear and Rand developed a theoretical game theory model showing that people who cooperate intuitively, but who can also deliberate their way to selfishness, succeed from an evolutionary perspective.

To build the model, Bear and Rand used MATLAB to construct agent-based simulations in which virtual agents, representing every permutation of behavior and thinking style, interact with one another in various environments over many generations. They then performed mathematical calculations to confirm the accuracy of their simulations.

“The idea of these simulations is they’re meant to model some kind of evolution either over biological time or over cultural time,” said Bear. “People who tend to do well when they play their strategies are more likely to survive in the next generation than people who do poorly, so these agents interact on a set of generations, and once you do well, [you] tend to stay in the game; once you do poorly, you tend to die out.”
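The selection dynamic Bear describes, in which high-earning strategies spread while low earners die out, can be sketched as a payoff-proportional resampling step. This is a hypothetical simplification, not the actual update rule from their simulations:

```python
import random

def next_generation(population, payoffs):
    """Resample the population with probability proportional to payoff,
    so strategies that do well spread and strategies that do poorly
    tend to die out over generations."""
    total = sum(payoffs)
    weights = [p / total for p in payoffs]
    return random.choices(population, weights=weights, k=len(population))
```

Iterating this step over many generations is what lets a simulation "evolve": whichever strategies consistently earn higher payoffs come to dominate the population.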

The model itself takes several factors, such as type of thinking and environment, into consideration.

With thinking, agents can either follow their intuition or use deliberation. Bear describes intuition as a form of cognition that uses heuristics, or mental shortcuts, to reach answers quickly. This way of thinking is efficient but can lead to errors in reasoning because it ignores the details of the situation at hand. Deliberation, on the other hand, lets agents take time to reason and make more accurate decisions. In the model, agents can strategize, choosing how much intuition and deliberation to use.

For the environment, agents vary in how often they engage in one-shot interactions, in which they interact with each other only once, versus repeated interactions, in which they may establish a relationship; the environment itself refers to the proportion of repeated to one-shot interactions.
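A minimal sketch of how such a dual-process agent might behave, under the hypothetical assumption that each agent is described by an intuitive default response and a probability of deliberating, and that deliberation reveals whether the interaction is one-shot or repeated:

```python
import random

def agent_action(intuitive_coop, deliberation_prob, is_repeated):
    """A dual-process agent: it usually plays its fast, intuitive
    response, but with some probability it pauses to deliberate,
    observes the interaction type, and cooperates only when the
    interaction is repeated."""
    if random.random() < deliberation_prob:
        # Deliberation reveals the context: be nice to a partner you
        # will see again, act selfishly toward a one-time stranger.
        return "cooperate" if is_repeated else "defect"
    # Intuition ignores context and applies the default response.
    return "cooperate" if intuitive_coop else "defect"

def play_round(agent, p_repeated):
    """The environment is the chance p_repeated that a given
    interaction turns out to be repeated rather than one-shot."""
    is_repeated = random.random() < p_repeated
    return agent_action(*agent, is_repeated)
```

An agent here is just a pair like `(True, 0.3)`: an intuitive cooperator that deliberates 30 percent of the time. These names and parameters are illustrative, not taken from Bear and Rand's code.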

From an evolutionary standpoint, success seems built on self-interest. “Say we’re in a repeated interaction: it’s better if I’m nice to you if I’m going to see you again,” said Bear. “But if I’m never going to see you again… I’m better off being selfish if it would cost me a lot to be nice to you.”

The conclusions of Bear and Rand’s model focused on environments dominated by repeated interactions, which more realistically reflect the real world. Bear described the best kind of agent in their model: one with a fast, cooperative intuitive response, but a selfish response upon deliberation in one-shot interactions.

Their research, however, drew critiques from others working in the field, notably Kristian Myrseth, Associate Professor in Behavioral Science and Marketing at Trinity College Dublin, and Conny Wollbrant, Assistant Professor in Economics at the University of Gothenburg. “The model makes this crucial claim that evolution never favors strategies… where deliberation increases your prosocial behavior,” Myrseth and Wollbrant say, adding later, “The problem is that we know today that … people are often behaving prosocially, not for strategic reasons, but because they feel that’s the right thing that they should do.” In other words, they argue, the model does not allow inherently prosocial strategies to survive, even though people with such behavior evidently exist in the world today.

Nevertheless, according to Bear, they have continued to develop new versions of the model to make it more realistic, incorporating the idea that intuition and deliberation may not distinguish between environments in such a black-and-white way.

“There was this fun process of discovery in the model and learning what the model was actually showing,” said Bear. “It’s cool because you know you think when you model something, maybe it’ll be obvious what you’re going to find, but actually, you discover these interesting things that you didn’t necessarily anticipate before modeling.”