No cooperation without open communication: Scientists develop new and more realistic model of human interaction

Indirect reciprocity is a model of how humans act when their reputation is at stake, and of which social norms people use to evaluate the actions of others. A key question in this area is: which social norms lead to cooperation in a society? Previous studies have always assumed that everyone in the population has all the relevant information and that everyone agrees on who is good and who is bad. These assumptions are at odds with the reality we live in. In a new, more realistic model, Christian Hilbe, Laura Schmid, Josef Tkadlec, and Professor Krishnendu Chatterjee at the Institute of Science and Technology Austria (IST Austria), together with Professor Martin Nowak of Harvard University, explore what happens when information is incomplete and people make mistakes. In their model, previously successful strategies do not lead to sustained cooperation, and in most cases do not evolve at all. Their results will be published today in the journal PNAS.

In the world of game theory, indirect reciprocity is played out between two randomly selected individuals in a population: one donor, one recipient. The donor needs to decide whether or not to help the recipient. That decision may depend on the reputations of the two individuals and on the social norm the donor employs (for example, a donor might only help recipients with a good reputation). Meanwhile, the rest of the population is watching: after the donor’s decision, they update their opinions of the donor based on their own social norms. Past models assumed that everyone agreed on the reputations of everyone else and that everyone witnessed all interactions. Under these assumptions, studies identified eight “leading” social norms or “strategies” that lead to stable cooperation in a population. But what happens when people make mistakes and differences of opinion develop? “We wanted to explore how the leading eight strategies fared when faced with incomplete, noisy information,” explains Laura Schmid, a PhD student in the Chatterjee group. What the team found surprised them: none of the strategies led to high levels of cooperation, and many were unstable or did not evolve at all.
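To make this setup concrete, the sketch below simulates the donation game in Python. It is a minimal illustration, not the authors’ actual model: the population size N, number of rounds, and error rate EPS are arbitrary, the norm shown is “stern judging” (one of the leading eight), and donors are assumed to play a simple discriminator strategy. Crucially, each individual keeps a private image of everyone else, so noisy observations let opinions drift apart.

```python
import random

N = 50          # population size (illustrative)
EPS = 0.05      # assumed rate at which an observer misperceives an action
ROUNDS = 10_000

# Each individual keeps a *private* image of everyone else:
# image[i][j] is True if i currently considers j "good".
image = [[True] * N for _ in range(N)]

def stern_judging(recipient_good: bool, cooperated: bool) -> bool:
    """One of the leading-eight norms: helping a good recipient or refusing
    to help a bad one is judged good; anything else is judged bad."""
    return cooperated if recipient_good else not cooperated

cooperations = 0
for _ in range(ROUNDS):
    donor, recipient = random.sample(range(N), 2)
    # Discriminator strategy: help only recipients the donor privately sees as good.
    cooperated = image[donor][recipient]
    cooperations += cooperated
    # Every observer independently updates their private opinion of the donor,
    # misperceiving the action with probability EPS -- so opinions can diverge.
    for observer in range(N):
        perceived = cooperated if random.random() > EPS else not cooperated
        image[observer][donor] = stern_judging(image[observer][recipient], perceived)

print(f"cooperation rate: {cooperations / ROUNDS:.2f}")
```

With EPS set to zero, everyone perceives every action correctly, all private images stay synchronized, and cooperation remains at 100%. With noise, private images drift apart and cooperation erodes, in line with the finding described above.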

Modeling these interactions is mathematically demanding, and the previous assumptions made the analysis easier. “When you consider all the details, you need to rely on simulations, and those just take a lot of time,” says postdoc Christian Hilbe. Still, the simulations showed that even a single difference of opinion in the population could have drastic effects. If the donor thinks the recipient is bad, but the rest of the population thinks the recipient is good, then the donor’s decision not to give causes the donor’s reputation to drop, resulting in a ripple effect throughout the population. Josef Tkadlec, another PhD student working with Professor Chatterjee, described mathematically how differences of opinion spread and divide a population. “For some strategies, even a single disagreement could lead to populations that were split into two polarized subgroups,” Tkadlec says. “Other strategies could recover, but it might take them a long time.”
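The ripple effect can be seen directly by seeding a single disagreement into an otherwise unanimous population and measuring how far private opinions diverge. The self-contained sketch below is again an illustrative experiment under the same assumed norm and discriminator strategy, not the paper’s analysis; all parameters are arbitrary.

```python
import random

N = 50  # population size (illustrative)

# Start from perfect agreement, then flip one private opinion:
# individual 0 alone considers individual 1 "bad".
image = [[True] * N for _ in range(N)]
image[0][1] = False

def stern_judging(recipient_good: bool, cooperated: bool) -> bool:
    # Same illustrative leading-eight norm as in the sketch above.
    return cooperated if recipient_good else not cooperated

def disagreement() -> float:
    """Fraction of (observer pair, target) triples that disagree about the target."""
    total = mismatches = 0
    for target in range(N):
        for a in range(N):
            for b in range(a + 1, N):
                total += 1
                mismatches += image[a][target] != image[b][target]
    return mismatches / total

print(f"initial disagreement: {disagreement():.3f}")

# No perception noise here: the single seeded disagreement is the only
# possible source of divergence, yet judgments can still spread it.
for _ in range(5000):
    donor, recipient = random.sample(range(N), 2)
    cooperated = image[donor][recipient]  # discriminator strategy
    for observer in range(N):
        image[observer][donor] = stern_judging(image[observer][recipient], cooperated)

print(f"disagreement after 5000 interactions: {disagreement():.3f}")
```

Because the updates here are deterministic, any divergence the final measurement reports traces back to that one seeded disagreement, which is exactly the ripple effect described above.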

The team already has further modifications in mind. For instance, in previous simulations everyone in the population was connected to everyone else; what would happen if the population had a particular network structure? Moreover, individuals formed their opinions independently; what would happen if they could communicate? The team has already found numerical evidence suggesting that communication among individuals reduces errors and increases cooperation. “Seen from this angle,” concludes Hilbe, “our findings highlight the importance of communication and coordination for building and maintaining cooperation in a society.”
