Everyone seems interested in human cooperation, even though humans are not as cute as other animals, like bats. But I figured I would write a blog post mainly about cooperation in humans (…and other animals too).

Imagine you and I are playing a trust game, an anonymous game played over a computer. We both start with $100. As the “investor”, I can give you any amount of money from zero to 100 dollars. You are the banker: whatever amount of money I give you, you can triple it instantly. So if I donate 10 dollars to you, you can make it 30. In theory then, I could give you all 100, you could turn it into 300, and then you could split that 300 between us so we each get 150 of it. You end with 250 and I end with 150. We both win.
If you were even more generous, you could divide your total of 400 (your original 100 plus the tripled 300) such that we both have 200. We both win equally. And if you were an extreme altruist, you could even give me back the entire 300 (in effect performing your tripling service for free). I win.
However, this is all in theory. I have no way of enforcing any of this. You could also just keep the 300 for yourself and give me nothing, so that you have 400 and I have 0. You win. And there’s nothing I could do about it. I don’t even know who you are, and I’ll never see you again. So maybe I should just keep the 100 dollars to myself…
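To make the arithmetic concrete, here is a minimal sketch of the payoffs in this version of the game. This is just my own illustration in Python; the function name and the way I parameterize the banker’s return are mine, not part of any standard experiment.

```python
def trust_game(investment, returned, endowment=100, multiplier=3):
    """Payoffs in the one-shot trust game described above.

    investment: dollars the investor sends (0 to endowment)
    returned:   dollars the banker sends back (0 to multiplier * investment)
    """
    tripled = multiplier * investment               # the banker's tripling service
    investor = endowment - investment + returned    # investor's final total
    banker = endowment + tripled - returned         # banker's final total
    return investor, banker

# The scenarios described above:
print(trust_game(100, 150))  # split the 300 evenly   -> (150, 250)
print(trust_game(100, 200))  # equalize our totals    -> (200, 200)
print(trust_game(100, 300))  # extreme altruism       -> (300, 100)
print(trust_game(100, 0))    # banker keeps it all    -> (0, 400)
print(trust_game(0, 0))      # I never invest         -> (100, 100)
```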
Well, imagine you are playing this game. What would you do as the investor? As the banker?
I know what I would do. If I were the investor, I would invest at least 50 dollars, maybe all 100. Surely, the other person would return my favor. And if I were the banker, I would return at least 50% of the investment, and probably more, so that our totals at the end are equal: 200/200. I would definitely NOT take all the money as the banker. And if you’re like most people tested, you would do about the same thing for the same reason. Chances are, you too would be “irrationally” altruistic.
Why? The most obvious answer is that it feels good to be nice. Engaging in win/win cooperation with other individuals activates the reward centers of the brain more so than winning alone. Helping others feels good. And the more help the person needs, the better it feels to help them. Or maybe not…

If you’ve been trained in economics, you might not cooperate in these games, because you will have learned about rational self-interest and what the economically rational thing to do is. In the one-shot ultimatum game, trust game, and prisoner’s dilemma, no cooperation will occur if both participants are rational and self-interested. In fact, altruistic individuals are easily exploited by rational selfish agents in any public goods game. So in the above game, a rational agent would be more likely to just keep her money to herself in either role. In one sense, this is the rational thing to do. Rational selfish agents will always win these games.
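For what it’s worth, here is that logic as a quick sketch, reusing the hypothetical trust_game() function from the earlier sketch: whatever I invest, a selfish banker maximizes her payoff by returning nothing, and anticipating that, a selfish investor maximizes his payoff by investing nothing.

```python
# A sketch of the "rational self-interest" argument for the trust game above,
# reusing the hypothetical trust_game() function from the earlier sketch.

def selfish_banker_return(investment):
    # For any investment, the banker's payoff is highest when she returns nothing.
    options = range(0, 3 * investment + 1)
    return max(options, key=lambda r: trust_game(investment, r)[1])

def selfish_investor_investment():
    # Anticipating a selfish banker, the investor does best by investing nothing.
    return max(range(0, 101),
               key=lambda i: trust_game(i, selfish_banker_return(i))[0])

print(selfish_banker_return(100))     # 0 -- keep all 400
print(selfish_investor_investment())  # 0 -- so never invest in the first place
```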

So why did natural selection shape human nature to be so “irrationally” nice in these games? Are humans really THAT nice? Before we even answer this question, we must remember that a limitation of interpreting such lab games, like the prisoner’s dilemma, the trust game, and the ultimatum game, is that they are indeed games. Moreover, they are interesting precisely because they are weird. Since they pose unnatural situations, they don’t necessarily mimic how people would behave in real social situations. To (supposedly) remove the incentives for selfish behavior, the experimenter can tell the participants that their actions will be anonymous, that no one will see what they do, and that their interaction with their social partner is a one-shot interaction. But when was the last time you had a completely anonymous social interaction with another person you were completely sure you would never see again? If you have had such interactions, you probably don’t even think of them as social. There are few interactions at a college that can truly be regarded as “anonymous” and “one-shot”, yet study participants are often college undergraduates making decisions in a laboratory setting where they really have no idea who’s watching them or how they are being evaluated. Participants might think, consciously or unconsciously: “What if I am discovered being ‘selfish’? Is that bad? Will that be embarrassing if my peers find out? How will that look to this professor?” It’s impossible for people to turn off all those cues and act “as if” a social interaction were truly anonymous and one-shot. One way around this is to make the stakes more real or to change the way the game is presented to test alternative hypotheses. But back to “irrational” human generosity….
These problems aside, there is little doubt that people are still way nicer than the rational selfish agent model predicts. It’s a current craze in popular books on human decision-making to try to explain why people are “irrational” in various ways: with their money, their decisions, and their altruistic behavior. But saying that people are “irrationally” nice depends on what you mean by “rational”. Rather than irrational, one could also think of the design of the human mind as “super-rational”. It’s bad at maximizing profits in contrived economic games and thought experiments, but better at navigating real social life for the purpose of maximizing inclusive fitness (passing on copies of your genes).
(note: I just checked to see if “super-rational” is a word, and apparently it is a term used to describe strategies in economic games! I’m not sure if my definition is the same or different, so I’ll just say that I’m not referring to the other definition, just my own.)
First of all, how rational (or successful) a given strategy is depends on the opponent’s strategy. Who is more likely to really win an argument: a rational person, or an irrational person who cares more about winning the argument than about being rational? Or how about an irrational person who can’t control their emotions and might punch you in the face if you offend them?
I once witnessed a customer get into the most unreasonable argument at a restaurant (she complained that the burger she ordered was too small, and therefore it should be free). The manager of the restaurant tried unsuccessfully to reason with her, but then, seeing how irrational the customer was being, she eventually just conceded and offered her the burger for free. In many cases, an irrational, emotional person can’t be reasoned with and is more likely to get their way. The only way out is to calm them down by conceding. Does that make emotional hotheads irrational? Or super-rational?
People behave weirdly in economic games because the games remove many of the normal factors that would make our decisions rational in the real world. Take, for instance, the notion of helping other people in need at a small cost to yourself. In the trust game, a ruthlessly selfish person would win the game. But in the real world, this is less likely to happen. The first reason is that, in the real world, social interactions are repeated, not one-shot. It has long been understood that if such economic games are played repeatedly, cooperative strategies beat non-cooperative ones.
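As a toy illustration of that old result (in the spirit of Axelrod’s iterated prisoner’s dilemma tournaments, with standard textbook payoffs that I’m assuming here, not numbers from any particular study): a strategy that reciprocates racks up far more points across a population of partners than one that always defects, even though the defector “wins” any single pairing.

```python
# Toy iterated prisoner's dilemma with the usual textbook payoffs (assumed for illustration).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def always_defect(my_moves, partner_moves):
    return "D"

def tit_for_tat(my_moves, partner_moves):
    # Cooperate first, then copy whatever the partner did last round.
    return partner_moves[-1] if partner_moves else "C"

def play(strategy_a, strategy_b, rounds=200):
    moves_a, moves_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(moves_a, moves_b)
        b = strategy_b(moves_b, moves_a)
        pay_a, pay_b = PAYOFF[(a, b)]
        score_a += pay_a
        score_b += pay_b
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (600, 600): cooperation every round
print(play(always_defect, always_defect))  # (200, 200): mutual defection every round
print(play(tit_for_tat, always_defect))    # (199, 204): the defector edges out this one match,
                                           # but across both partners tit-for-tat totals 799 vs 404
```

This is, of course, the simplest possible version of the point; real tournaments involve many more strategies, noise, and partner choice.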
In the real world, a ruthlessly selfish person is more likely to build a reputation as a scrooge or jerk if their behavior is outside a social norm of cooperation. In fact, if I played the trust game with you, and you took all my money and didn’t split it, then two things would happen. I would not invest in you again. Also, I would have to fight the subtle urge to punch you in the face. In the real world in which our minds evolved, this would not be a good reaction to elicit in others.
And that is the normal reaction. In many cases, people *expect* to have their social investments rewarded; otherwise they feel cheated and upset. For example, imagine that tomorrow your best friend or significant other treats you like a total stranger, not badly, just completely neutral, like someone who doesn’t know you at all. Even though they didn’t do anything offensive per se, you would assume that they were “punishing” you for something you must have done, because you expect more from them than from the average person on the street. Once we invest in people, we expect something in return, even if we try not to.
When people feel cheated, they will punish others even at a cost to themselves. So a perfect “rational selfish agent” who lived in the real world would not be considered perfectly socially intelligent: they would annoy lots of people, make enemies, and have few friends. So the rational selfish strategy of Homo economicus turns out to be a pretty bad strategy in a world of passionate, irrational Homo sapiens who would often prefer to cooperate together rather than achieve an equal economic outcome alone.
Overall, the human mind has evolved many layers of social emotions to navigate and negotiate a social world full of both cooperation and exploitation. We find opportunities for mutually beneficial cooperation wherever they might exist, and we avoid social exploitation. We are shaped to be optimally nice for whatever world we evolved in. On average, we are “Goldilocks” nice: not so nice as to be exploited by others, but not so selfish as to miss the benefits of cooperation.
From an evolutionary standpoint, humans are not the champions of altruism. That title would go to organisms like eusocial insects and slime molds, where individuals routinely kill themselves to help their neighbors. In biology, the term “altruism” means something very different from its everyday meaning: it means evolved behavior that on average reduces the lifetime number of offspring you produce (your Darwinian fitness) while increasing the lifetime number of offspring of the recipient of your altruism. So altruism only involves behaviors that on average make the actor die sooner, have fewer offspring, or both. Most of the behaviors that we think of as socially or psychologically altruistic (like giving to charity for completely selfless reasons) have unclear average effects on one’s lifetime number of offspring, so we wouldn’t want to conclude right away that they are evolutionarily altruistic. Plus, such specific behaviors themselves didn’t evolve, just the brain systems and emotions underlying them.
The only way that biological altruism (as defined above) can evolve is when the altruism is directed at genetic relatives and the benefit to those relatives (in offspring), weighted by your genetic relatedness to them, is greater than the cost to you (in offspring). This is known as Hamilton’s rule. If you don’t like that, you’ll have to just use a different definition of altruism than the one used by evolutionary biologists. Otherwise, confusion arises.
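To put that in symbols: altruism toward a relative can be favored when r × b > c, where r is your genetic relatedness to the recipient, b is the benefit to them in offspring, and c is the cost to you in offspring. The numbers in the sketch below are made up purely for illustration.

```python
def hamiltons_rule_favors(r, b, c):
    """Hamilton's rule: altruism can evolve when relatedness * benefit > cost.
    r: coefficient of relatedness; b: recipient's gain in offspring; c: actor's loss in offspring."""
    return r * b > c

# Illustrative (made-up) numbers:
print(hamiltons_rule_favors(r=0.5, b=3, c=1))    # full sibling: 1.5 > 1   -> True
print(hamiltons_rule_favors(r=0.125, b=3, c=1))  # first cousin: 0.375 > 1 -> False
```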
Most non-biologists are not interested in this strict reproductive altruism, but rather in cooperation more generally. Individuals help others at a short-term cost to themselves, yet we know that this tendency must bring lifetime direct fitness benefits (“nice guys finish first”) since it appears in every human culture without being eroded by natural selection. How do we explain that? And there’s no evidence that such human cooperation is something new, especially since other primates are highly cooperative, and the more general phenomenon of biological cooperation is almost as old as life itself. The first replicating entities (genes) had to cooperate to form genomes, genomes cooperate in cells, cells cooperate in organisms, organisms cooperate in groups and societies. Members of different species and different kingdoms cooperate. Cooperation is the major evolutionary force that allows the building up of complexity in life from genes to societies.
Humans are social animals, and like all animals, our behavior is shaped by a mix of our genetic predispositions, which were built by natural selection, and the past and present cues in our environment. Learning is important in humans, but learning requires genetic instructions about what to learn. In an extremely simplified sense, we can think of our biology as telling us what social and environmental information to extract and how we should use it. For example, human faces are extremely salient, even to babies. There is an entire region of the brain devoted to recognizing and processing human faces, which is why we see human faces in clouds, mountains, and burnt toast. It’s like our brains are saying, “Ok, these things are important, extract information from them, learn these particular faces, learn what these expressions mean.” Another example is a biological mechanism for learning language that allows toddlers to distinguish language sounds from all the other sounds they hear, which is why 3-year-olds learn to speak human language but don’t mimic crickets chirping, traffic sounds, dogs barking, alarm clocks, or birds singing in the morning.
Even when we are learning how to cooperate, our biology is constraining us in some directions and pushing us forward in others: it’s difficult to teach a psychopath empathy or to teach normal people to ignore their prosocial tendencies. But cooperation in humans and most animals is expected to be fine-tuned to the range of circumstances we encounter. There are a number of social factors that, consciously or subconsciously, affect how likely we are to help someone else in a given situation. Here are just a few, in no particular order:
Framing effects: If you, as the experimenter, prime your human subjects (either subconsciously, by pitting groups against each other, or simply by saying that their partners are “collaborators” or on the same “team”), the subjects will be more likely to cooperate. If you make them think of each other as “competitors”, or as being on different teams, they will be less likely to cooperate.
The rest of these are important in at least some other animal species:
Audience effects: People’s cooperative actions are sensitive to whether someone might be watching them. If you have people play cooperation games in the presence of a pair of dots that look like eyes, they will be more cooperative. If you turn the dots into flowers by adding petals, the effect goes away. This concept of reputation may seem like something that only humans could respond to, but there is an excellent example of audience effects on the behavior of cooperative cleaner fish.
Amount of past help from your social partner: People will make larger cooperative investments when they are likely to get the most return on their investment. One way to do that is to help people who have helped you before. With each investment, you can trust them more and make larger investments, a phenomenon called “raising the stakes”.
Amount of past help received from anyone: People are more generous to a recipient when others have been generous to them, even when the recipient is not one of those who helped them. This is called generalized reciprocity. It’s also found in rats (see below).
Cost of helping: If the cost of helping is low, you are more likely to help someone.
Benefit to the recipient: You are more likely to help someone if they will benefit more from it.
Kinship: If someone is in your family, you are more likely to help them. And if someone you don’t know looks more like they could be related to you, you are more likely to help them.
Number of other potential helpful social partners: If there is only one individual that can help you, you are likely to invest more in that one individual. If there are more, this creates a biological market, which leads to the next one…
Social partner’s ability to help you relative to other potential partners: If one particular individual has an ability to help you more so than others, you are likely to invest more in that individual. This is tied to the last factor. In one terrific study, experimenters created a food apparatus that only a low-ranking vervet monkey could operate, delivering abundant food to everyone. The amount of social grooming that monkey received increased significantly. When the experimenters made it so that another monkey could also operate the food delivery device, that second monkey also received more grooming, while the first monkey’s grooming decreased a bit.
Partner’s ability to punish you for not cooperating: Punishment of non-cooperators is important in both humans and other animals. Obviously, this will have an effect on your propensity to cooperate… unless you decide to punish this punisher and so on. Primates often direct their cooperative behaviors and social grooming up a hierarchy. In some cooperatively breeding birds, mammals, and fish, subordinate individuals help the dominant male or female raise their offspring; otherwise, they might suffer a serious beat-down and/or be evicted from the group. Because actual punishment is costly, the threat of punishment can be more important than the actual thing.
And there are obviously many others… Which of these are most important? How do they interact? Which cues overshadow others? Which ones are found mainly in humans? Which are found in most animals? I would love to know.

What about mechanisms? We know that some of these social cues have measurable effects on important hormones and neurotransmitters in the brain. These mechanisms are complicated and not yet well understood, but we have some interesting evidence. For example, oxytocin is a hormone and neuropeptide that some have even called “the moral molecule” because it affects social behavior so much. It’s more commonly known for its role in the female reproductive system, but it also has powerful effects on social bonding, social risk, and cooperative behavior. In most animals, it has the effect of reducing “social aversion”, allowing mating, parental care, or other social behavior to occur. When given to humans, it increases their generosity, trust in others, and propensity to cooperate. On the other hand, its actions are very context-specific, and it can also increase in-group/out-group biases, leading to attitudes of social defensiveness or ethnocentrism. So it’s probably too soon to call it the “moral molecule”. You can deliver oxytocin to someone by spraying it up their nose. You can also just have them release and take up their own oxytocin by having them watch a sappy movie. This, too, will make them more cooperative, generous, and trusting when you test them in lab games. But what determines your behavior is not just the amount of oxytocin in your brain; the density and location of your oxytocin receptors are likely even more important in shaping your “cooperative personality”.
While human cooperation is often studied using economic lab games, cooperation in other animals is often studied by training the animals to pull a lever or rope to deliver food to themselves and/or a social partner. The experimenter can ask the same question: When will subjects pay a cost to help another individual?
As you can imagine, the results of these studies often depend on the details of the situation. For example, in one study design, chimpanzees will be selfish. In another study design, they will be generous. This style of study has attracted some criticism because it involves trained animals performing trained behaviors, and, like I said above, such setups might remove many of the important aspects of natural social situations.
However, the controlled nature of these studies also makes them very informative. For example, there has been a series of very interesting studies on cooperation in rats from Michael Taborsky’s lab. Taborsky’s students have trained a number of rats to operate an apparatus that lets them deliver food to other, unrelated rats. The rats understand how the whole thing works, and that they can “pull” to deliver food to others but not to themselves. He then asks: what factors make rats more likely to deliver food to unrelated partners?

First, he showed that when rats receive food from another rat, they will be more likely to pull (deliver food) for that rat. Anonymous help from another partner increases pulling by 20%. Receiving help from the same partner increases it by an additional 51%. Of course, the rats pull the most for themselves.
Second, the lab found that rats pulled less when it took more energy to pull. When the experimenters increased the resistance on the pulling mechanism, the rats hesitated before pulling for rats that had not helped them before, but they did not hesitate to help the nice rats that had helped them previously.
Third, the rats pulled more often for partners that were heavier. Perhaps they were more eager to help bigger rats that might be able to punish them?
Interestingly, when the experimenters made the partners hungry by fasting them, the effect reversed: now the subject rats pulled more for the lighter rats, which were more in need. The experimenters don’t yet know whether the rats were truly responding to the increased need of the lighter rats. If they were, that would satisfy some concepts of animal empathy, something that has also been putatively demonstrated in rats. But that’s another story entirely.
I look forward to hearing more from this lab!
In the next post, I’ll discuss more about what I think our studies on vampire bats can bring to the table, and why I’m so psyched for our next experiments!