What are “prosocial” preferences? (+ news and updates)

Recent news

  • Vampire bat food sharing is more complicated than a strict short-term exchange of a single commodity. The bats do not use previous short-term experience as the sole predictor of sharing (a literal interpretation of tit-for-tat). Food sharing is based on long-term social bonds, which are fairly consistent and robust to experimental perturbations among females (manuscript in prep, more later)
  • Intranasal oxytocin affects vampire bat social behavior (manuscript in prep, more later)
  • Vampire bats perform more social grooming than four other bat species (Artibeus, Carollia, Eidolon, Rousettus) housed at the Organization for Bat Conservation’s Bat Zone (manuscript in prep, more later)
  • Robert Trivers wrote me an email saying he “loved” my review paper, The Reciprocity Controversy, and he called it an “excellent paper.” It is nice to get an affirming email from “one of the great thinkers in the history of Western thought” (to quote Steven Pinker). But shortly thereafter, we seemed to disagree on Rob Kurzban’s book on cognitive modularity, written for a popular audience. (I was introduced to his work by a really smart field assistant I had, named Adi Shaked, who later went into psychology and co-authored one of my favorite recent psych papers). I really liked Kurzban’s book, and I thought it presented an alternative way of thinking about Trivers’ theory of self-deception (which he also wrote a non-technical book  about). But Trivers did not agree with Kurzban’s take on the whole matter. Oh well. This made me want to read more of the self-deception and modularity literature. But this is not even my field, and it’s not really helping me graduate any faster.
  • A paper recently came out (with about 40 co-authors, including myself), entitled Acoustic sequences in non-human animals: a tutorial review and prospectus, which was the result of a workshop at NIMBioS. It’s a huge understatement to say that I learned far more than I contributed to this paper.

Other recent papers 

Some thoughts on “prosocial preferences” (the topic of the last 2 papers). 

In the last few years, there have been several animal behavior papers using the term “prosocial.” On this topic, I am often puzzled; I really just don’t “get it.” And I’m not even sure what it is that I don’t get, which makes it hard to discuss. I’ve never tried to write out all my thoughts on this topic in one place (though there are bits here and there in this blog). So I’m going to try to do that now, at the risk of sounding stupid or hostile (that’s certainly not my intention). My goal is to honestly discuss why I don’t understand the way this topic is discussed in the literature (what’s the big deal? what’s the question?).

Before I launch into this discussion, please bear with me because I want to say a few more things to reduce the tension I feel right now and make it clear that I’m not intending to attack, belittle, or criticize other people’s hard work. I have felt this way about many big emerging topics in my field over the last few years: behavioral syndromes, social networks, generalized reciprocity, and all manner of controversial topics from the social sciences. After reviewing the literature, I eventually settle into having a real opinion, but until that moment I never know whether I truly understand an idea that seems to me obvious, simplistic, or even wrong, but that many smart, experienced people find useful and insightful. One sociological problem is that puzzled people (like myself) tend to discuss the topic less, not more, and if the puzzled remain silent, researchers (even entire fields) don’t get the feedback they need. There is no way to know whether the ideas are being communicated clearly and convincingly, or whether the whole framing sounds unconvincing. Everyone recognizes that science runs on this kind of skepticism.

Sometimes it’s hard to know if I actually contest the facts or their interpretation, or if I just disagree with the best framing of the issue, or if I’m merely interested in different aspects of the world than other researchers. For example, it took me a while to understand how multi-level selection and inclusive fitness could each be more useful for different kinds of questions. And it took me a while to understand how social network plots and metrics might provide more insight than the traditional analysis of a sociomatrix. Even in high school, I remember harassing my math teacher about relatively simple concepts, such as “instantaneous rate of change” in pre-calculus (“how can there be a rate of change in a single frozen moment?!?”). The only way to appreciate new concepts or perspectives that seem strange, non-intuitive, or wrong is to talk to people who think about them more clearly. So if someone wants to correct my thinking on this topic, I welcome your feedback.

One reason I’ve been thinking about this is that Joan Silk visited University of Maryland and I was able to talk with her a bit about my work on vampire bats. I’m extremely interested in the extent to which social investments are contingently enforced in long-term social relationships. Silk is probably the leading expert on social bonds in primates and other mammals, and she’s done some of the very best studies on the fitness consequences of social relationships in nature, which I think are among the most important studies done in social behavior. Silk and my friend Jennifer Vonk (who studied with Silk) have also done a bunch of work on primate prosocial behavior.

In a typical prosocial test, an animal subject has to choose between two tokens (or two levers or two strings): one that rewards only herself (1/0) and one that rewards both herself and another nearby subject (1/1). This latter option is called the prosocial option. In other versions, a person faces an economic dilemma like the following: she receives $100, and can choose to split her winnings with a partner or keep them for herself. There is a subtle implicit social expectation that she should share (because the partner might be called an “investor”), but she is told there are no consequences for not sharing.
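For concreteness, the payoff structure of such a token test can be sketched like this (a toy model; the names and values are mine, and real tests vary by study):

```python
# Toy payoffs in a two-option prosocial choice test.
# Each option is (reward to actor, reward to partner).
SELFISH = (1, 0)    # "1/0": only the actor is rewarded
PROSOCIAL = (1, 1)  # "1/1": actor and partner are both rewarded

def partner_gain(choice):
    """What the nearby partner receives from the actor's choice."""
    return choice[1]

# The actor's own payoff is identical either way, so any consistent
# bias toward the 1/1 option is read as an other-regarding preference.
assert SELFISH[0] == PROSOCIAL[0]
assert partner_gain(PROSOCIAL) > partner_gain(SELFISH)
```

The key design feature is that the actor pays nothing extra for the prosocial option; the test asks only whether the partner’s outcome influences the actor’s choice at all.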

Such tests are designed to tell us if humans or chimps are prosocial, meaning that they have preferences for outcomes that benefit others. People then ask: Are humans prosocial? Are chimps? Are vampire bats? When did prosocial preferences evolve? Joan Silk said she believes that prosocial preferences originated in humans after they diverged from other apes.


First, before we ask if humans are prosocial in general, what does it even mean to be prosocial in general?  In the last paper listed above, prosocial behaviour is defined as “any behaviour performed by one individual to alleviate another’s need or improve their welfare.” Does that mean maternal care by alligators is prosocial? I have a hard time understanding how the notion of prosocial behavior relates to either ultimate evolutionary questions (What’s the role of kin selection? Are there direct fitness benefits to the cooperative trait? If so, are they enforced?) or proximate mechanism questions (What’s the role of oxytocin or receptor density? What are the cues for social recognition? What information does the animal use to decide whether to help?).

Second, does a prosocial preference refer to an animal’s hidden internal motivation (like empathy or a general concern for others), or to the observable behavior (the act of helping others in any situation)? In many cases it’s clearly the former, but I sometimes feel that authors switch back and forth between these two meanings, which obscures the immense central difficulty of animal cognition: translating [what animals do] into facts about [how animals think].

Third, there are clearly many ways humans are unique. So one can certainly define prosocial preferences in ways that stress how humans are different from other animals. Researchers define language this way: other animals communicate, but they don’t use language. But this requires, in my mind, a very different approach, one that emphasizes the study of human nature specifically. And the question of “when did human prosocial preferences evolve?” answers itself by definition.

If “prosocial preference” does indeed refer to an internal motivation, then it is clearly not some simple discrete trait; it is not even an observable behavior. It appears to be something like the desire (the motivation) of an animal to help others, regardless of the context. This is why prosocial tests involve exchanging tokens or pulling levers and strings. The tests allow control over the costs and benefits, but they also seek to show that the motivation to help is transferable to a novel situation (because it’s about helping another in whatever way, not about performing an instinctual act).

That seems to make sense at first, but it quickly falls apart. Consider for a moment what it means to have a prosocial preference that is unaffected by context (blind to the setting, partner(s), situation, etc). This is an organism that is simply and always motivated to help others. What others? Perhaps familiar others? Or do you just need to be the same species? Or maybe just alive? Is a prosocial chimpanzee expected to care about just other chimpanzees? Or how about kittens? Or mice? Flies? Plants? What if a chimpanzee only really cares about her offspring? Or her immediate family? Is she still prosocial? Or is that not caring enough to count as prosocial?

An adaptationist view is that context should matter a lot. Cooperative decisions should be based on contextual factors that would be useful (for inclusive fitness) in the typical situation in which that species evolved. This view suggests that as a product of natural selection, your emotions should direct you to be nice in pretty biased ways: to your offspring, to your relatives, to socially dominant individuals, to individuals you will meet again, or to individuals that you depend on in some other way. And you should be nice under certain conditions such as when it would help your reputation, or when your help will directly or indirectly come back to help you in return. Indeed, the contextual cues that your mind uses to make these decisions might be really complex, because the evolution of cooperation involves coevolutionary arms races between social strategies seeking to make the best social investments that yield the highest social returns, while simultaneously avoiding exploitation by strategies that are themselves evolving to be better at subtle exploitation. This process is ultimately limited by the complexity of the organisms. But even fungi make strategic cooperative investments that depend on context. To the extent that prosocial preferences are adaptive, they should be pretty context-specific.

In fact, even if prosocial preferences are non-adaptive byproducts (like how people like to have pets), then, assuming we are not just looking at random noise or mistakes, these preferences must be byproducts of something adaptive if they have any kind of design to them at all. So even then, they should still be somewhat context-dependent and triggered by certain stimuli. There is no theory that generosity should be context-free. And it never is.

In sharp contrast, many researchers seem to think about our own species (the supposed exemplars of prosocial preference) in a manner something like this: humans, unlike other animals, care about other people in general (and we will help people in need across many contexts), except perhaps psychopaths who do not care about other people (they might help others but only when it benefits them). Most of us are not psychopaths; humans in general have prosocial preferences.

Think I’m making this up? Is anyone really writing real scientific papers about how humans or other animals are just nice in general? Yes, there are lots. Here’s just one example: the abstract of a recent paper published in the journal Nature Communications entitled “Humans display a ‘cooperative phenotype’ that is domain general and temporally stable”:

Understanding human cooperation is of major interest across the natural and social sciences. But it is unclear to what extent cooperation is actually a general concept. Most research on cooperation has implicitly assumed that a person’s behaviour in one cooperative context is related to their behaviour in other settings, and at later times. However, there is little empirical evidence in support of this assumption. Here, we provide such evidence by collecting thousands of game decisions from over 1,400 individuals. A person’s decisions in different cooperation games are correlated, as are those decisions and both self-report and real-effort measures of cooperation in non-game contexts. Equally strong correlations exist between cooperative decisions made an average of 124 days apart. Importantly, we find that cooperation is not correlated with norm-enforcing punishment or non-competitiveness. We conclude that there is a domain-general and temporally stable inclination towards paying costs to benefit others, which we dub the ‘cooperative phenotype’.

I just don’t get it. This conclusion is warm and fuzzy for sure. While I do like the data, I think the interpretation is weird. Cooperation here is being measured on a continuous scale from perfectly selfish to perfectly prosocial/fair/cooperative based on how individuals behave in economic games and thought experiments. People have different and consistent personality traits (surprising?), and we are, on average, not perfectly and rationally selfish in economic games (surprising?). Ok, but here’s the problem. The authors label anything other than being perfectly selfish as “cooperative” and so individuals who give anything other than zero in an economic game are called “cooperators”. So you would have to be perfectly selfish to not be considered cooperative.

For example, imagine I conduct an online public good experiment where I ask 4 people to each put in $10 and the total will then be doubled and split between them (so if each person puts in their $10, then 40*2=80 and 80/4=20, so each person would get $20 back). However, since nobody will know what others put in to the public pot, people can and might free-ride. Investing requires trust and the desire to cooperate. That’s the idea at least.

Joe expects everyone else to put in $10, but he considers putting in only $1, thinking he will make even more money by exploiting other people’s investments. He does the calculation. The pot will then be $31, doubled to $62, and each person gets back 62/4 or $15.50. Everyone else will end up with $15.50, and Joe will have $15.50 plus the $9 he kept, or $24.50. Haha, suckers! That’s his plan. It does not occur to Joe that putting in zero would be even more rationally selfish: because the pot is doubled and then split four ways, each dollar he contributes returns him only 50 cents, so contributing nothing maximizes his payoff no matter what the others do. Joe is both a bit irrational and competitive; he wants to “win” the game, but he also has this gut feeling that putting in zero “looks bad” and increases the chances he will be “caught”, even though he is told nobody is watching and nobody will know. According to the authors, since Joe did not put in zero like a truly rational selfish agent, he should be labeled “cooperative” and his actions show that he has a “cooperative phenotype”.
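Applying the doubling rule from the setup above, the payoffs for Joe’s options can be checked with a short sketch (the function name and structure are mine):

```python
def payoff(my_contribution, others_contributions, multiplier=2.0, endowment=10):
    """Payoff in a linear public goods game: the pot is multiplied,
    then split equally among all players; kept money stays yours."""
    players = 1 + len(others_contributions)
    pot = (my_contribution + sum(others_contributions)) * multiplier
    return (endowment - my_contribution) + pot / players

others = [10, 10, 10]     # Joe expects everyone else to invest fully
joe_plan = payoff(1, others)    # 9 kept + (31 * 2) / 4 = 24.50
full_coop = payoff(10, others)  # (40 * 2) / 4 = 20.00
free_ride = payoff(0, others)   # 10 kept + (30 * 2) / 4 = 25.00
```

Note that contributing zero dominates: with a multiplier of 2 split among 4 players, each contributed dollar returns only 50 cents to the contributor, whatever the others do.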

The authors must know that this is just lowering the bar for what counts as “cooperative” (anything above perfect selfishness qualifies), because they even created a new category for individuals who are perfectly cooperative (in my example, putting in the full $10, trusting and expecting to get $20 back). These players are called “super-cooperators”, and in the public goods game described above, they make up only about 1 out of 7 players. The most common amount given was zero (about 1 out of 3 players), but since the remaining people invested something other than zero, the authors considered this support for the “cooperative phenotype.”

The authors could have written the same results very differently. They could have called anything other than perfectly prosocial action “selfishness”. The exact same result would then be written like this:

We found that humans are in general not cooperative (shock!) and that this bias is consistent over time (shock!). So we have discovered that selfishness is domain-general and humans have a temporally stable inclination towards benefitting themselves more than what is best for everyone, which we dub the “selfish phenotype”.
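To make the relabeling problem concrete, here is a toy sketch (the contribution values and cutoffs are mine, not the paper’s): the same data support either headline, depending on where the cutoff falls.

```python
# Hypothetical contributions in a $0-$10 public goods game.
contributions = [0, 0, 3, 5, 7, 10]

# Implicit rule in the paper: anything above perfectly selfish ($0)
# counts as "cooperative".
cooperators = [c for c in contributions if c > 0]

# The mirror-image rule: anything below perfectly prosocial ($10)
# counts as "selfish".
selfish = [c for c in contributions if c < 10]

# Most players are "cooperative"... and most players are "selfish".
print(f"{len(cooperators)} of {len(contributions)} are 'cooperative'")
print(f"{len(selfish)} of {len(contributions)} are 'selfish'")
```

The same six numbers yield a “cooperative phenotype” under one cutoff and a “selfish phenotype” under the other; the label is a choice, not a finding.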

I think it’s safe to say that no one would like this paper and no prestigious journal would publish it. Even before this paper came out, it was understood that much of this kind of evidence for prosocial preferences in humans has been interpreted incorrectly. I wrote a post about that here. These kinds of experiments are important for economic theory, but not for evolutionary theories of human cooperation.

One big problem is that we don’t know what the actions in the games actually mean to the players. I participated in a public goods game like this as an undergraduate in an economics class, and I was apparently one of the “super-cooperators” because I cooperated 100%, but not because I was being nice. After all, the money was imaginary, and I thought we might have to later raise our hands to show our choice (which we did!). Then everyone looked around and saw who was “cooperative” and who was “selfish”. It was an environmental economics class, and at least from the raised hands it seemed like all the environmental science majors chose the “prosocial” option whereas the economics majors chose the “selfish” one, probably because each group thought that was, like, the “correct” answer. In retrospect, I cared much more about what people thought of me (or might think of me) than about the imaginary money. I even noticed that a girl in the class (who I later married) picked the same prosocial choice as me. Maybe I also wanted to make a point that people are not selfish. Maybe I wanted to feel good about myself. I don’t remember. But I do remember that all this sort of stuff was much more important to me than the actual payoffs that were the focus of the game.

My point is that, again, context matters. Your choice in the game depends entirely on how you interpret the game. An even better example: Bailey House, Joan Silk, and others conducted a prosocial test with kids and found that the kids liked to choose the non-prosocial option (I get $1, you get nothing) over the prosocial option (we both get $1) because they thought it was funny. To detect a prosocial preference in these kids, the researchers had to remove all the data from trials where the kids were laughing, indicating they were not playing the game “correctly”. With other animals, it’s even less obvious how they interpret the game.

Why are we even trying to figure out if humans are “prosocial” or not? The funding for the “cooperative phenotype” paper comes from the Templeton Foundation, which is all about the “big questions” bridging the gap between science and spirituality. This apparently includes silly, simplistic questions and answers like, “Are people cooperative? Yes or No.”

Of course some people are consistently nicer than others. But even the most remarkable generosity of the kindest person is context-dependent. If Jesus or Buddha cared about every person unconditionally and equally across every possible context, then that’s just another way they are different from everyone else. For the rest of us, when we lack the proper contextual cues, we show the selfish preferences of a psychopath. For example: Do I care about starving children? Of course. I am certain that I do. I would help feed a starving child that was right in front of me. Yet, every day of my life, I undeniably refuse to help millions of starving children simply because those children are out of my sight. And it’s not because I’m making a mistake or because I keep forgetting. Nope, I could be making a donation right now to save a starving child, and instead, I’m writing this blogpost. Does that mean that my preference is to write opinionated rants rather than help starving children? Does that mean I value rambling over human life? Am I a psychopath with zero concern for human life? Maybe.

My desire to help others relies on some external stimuli triggering my empathy (that is, empathy is context-dependent). Hit this nerve hard enough, and I get out my wallet and start making donations. To feel sympathy for a starving child, I have to at least picture in my mind’s eye a sad child’s face. This empathetic response is triggered by the contextual cues (the real or imagined sights and sounds of a starving child), not by the mere knowledge of the relative payoffs of helping or not helping. I do not have a defined and discoverable preference for how much I want to help starving kids, so there is no real answer to the simplistic question of whether or not I care about starving children.

Human prosocial preference, like the weather, is unpredictable, because so many factors influence it. This is because social decisions are complex. I can have a stable preference for, say, eating apples or not, but that’s very different. Apples are not agents. Apples won’t try to eat me back if I make the wrong decision, apples don’t influence my future wellbeing, and eating apples does not lead to other people judging or punishing me. Change any of those variables, and my apple preference should become pretty complex and context-specific. How much do we care about the wellbeing of rats? It depends on the context. Are they pets or pests? Or are they food or are they subjects in an experiment?

Say a person comes up to me tomorrow and asks me for a donation to help a starving child. Would I do it? And how much would I donate? I don’t know. It depends. Most importantly, it depends on all kinds of weird stuff that should be irrelevant if I had a certain stable level of concern for starving kids.

So if an experimenter tests your “prosocial preference” in an experiment (Will you help a starving child?), the actual outcome is going to depend on how well the experimental conditions trigger your empathy, on whether you feel that you’re being watched or judged, on how much time and energy it will take you to perform the helping act, on what the expected rewards will be, and so on. And perhaps none of those considerations will even be conscious. All of this before you even consider the monetary payoffs to you and the child (say $1 for one meal). The ease of manipulating human preferences using such conscious or subconscious contextual cues is obvious to anyone who knows anything about marketing. What someone is willing to pay for object X is not really matched to that person’s internal value for object X.

Even worse for the prospect of correctly testing my prosocial preference, within a single social decision, different parts of my mind will have different preferences, and different contexts (e.g. setting, audience) or internal conditions (e.g. hormones, blood sugar) will engage these different parts to different extents. The human mind is not a singular rational actor with a unified singular goal. Even if some parts of me care deeply about others, there are other parts that are only concerned with my own selfish needs.

My point is not that people are, underneath it all, really selfish. My points are (1) that studying preferences is actually more difficult than it may seem at first, and (2) that human social preferences should have an adaptive design that is context-specific, just like the social preferences of any other animal. We should expect human nature to be adaptive for a particular social environment, not “cooperative” or “selfish”. Humans are just nice enough to readily cooperate in the social environments in which we evolved, but not so nice as to be exploited by others in those same social environments. When humans act non-adaptively nice or selfish in a laboratory experiment, we should first suspect it is because the experimental context is not ecologically relevant. Humans evolved moral emotions to help them solve the very complex problem of getting along with others in the real social world, not to be rational selfish agents or good consistent utilitarian moral philosophers. That is why context matters.

For all these reasons, measuring/testing for a general human prosocial preference (to the extent that there is such a thing) is actually a much more difficult task than merely describing natural patterns of a specific kind of human prosocial behavior (like when an anthropologist studies food sharing patterns in a human tribe) or testing the contextual cues that promote it (like an experiment testing the effects of audience cues or the facial similarity of the recipient).

In my mind, measuring preferences in general can only mean testing them across a wide range of standardized contexts and situations. Otherwise, we are either testing preferences under a single context, or just assuming that preferences rely on only a few variables, like the relative payoffs of the outcome of the actions (like incorrectly assuming that my decision to help a starving child just depends on the relative amounts of money that it costs me and what the child receives).

All of this is true for nonhuman animals as well. Is species X prosocial? Will members of species X help a starving individual? It will depend on the context, and probably on many different aspects of the context. The food sharing behavior of a particular vampire bat depends not just on the identity and hunger state of the donor and recipient; it is also highly sensitive to whether the donor is stressed or not, whether it is in a familiar setting or not, and whether it is separated from the recipient by cage bars or not. It would not be that surprising to me if sharing also depends somewhat on how much the donor has been helped by other bats (called generalized reciprocity), if only because this provides some information about the possibilities for direct reciprocity. And I also think the donor might give more or less depending on the number of alternative partners around her, either in the present or in the recent past.

We already know humans and chimps and vampire bats cooperate under some circumstances but not others. The question is: what are those circumstances? We do not need studies on whether animals are “good” or “bad” or “prosocial” or “selfish” in ways that are “domain-general”, whatever that really means. We need studies on what contextual cues trigger prosocial behavior in the first place. We can predict these cues using evolutionary theory (e.g. inclusive fitness, reciprocity, costly signaling, biological market theories) and the ecology and life history of the organism.

Like humans, animal motivations to help should be triggered by the sights and sounds of distress, and other situational cues, and not by some kind of conceptual understanding of the payoffs or the insight that someone is in need. After all, we humans have this conceptual understanding, yet we ignore it every time we drink a latte at Starbucks instead of donating that money to help other people in need. If our empathy was not limited in this way, I’m not sure we could live healthy lives.

Moreover, the vast majority of human cooperation (such as obeying social norms and laws, paying membership fees to non-profits, setting up automated donations to charities, and buying and selling goods and services) does not require any sort of empathy or “prosocial” inclination at all. We just follow the social norms. Our lives are embedded in designed systems that nudge and coerce us into behaving cooperatively. If I donate blood, I’m not always sitting there thinking about, and motivated by, all the people out there who need blood. Perhaps I’m not feeling empathy at all. I might even be thinking about what a good person I am. Maybe I even get a sticker to show people that I donated blood. The behavior and the motivation are not so clearly linked.

If you want to see real behavior motivated by real empathy or internal “prosocial preferences” in people (rather than people trying to adhere to their belief systems or other social norms or expectations), I don’t think you can just put people in abstract economic games or thought experiments. The best approach in my opinion would be to create a realistic naturalistic situation that evokes the proper prosocial emotions (like having a person collapse on the street and seeing the response of bystanders). Then the question would be: What factors can turn the “prosocial behavior dial” up and down? The same logic applies to other animals.


About Gerry Carter

I study the behavioral, sensory, and social ecology of vampire bats. http://socialbat.org.
This entry was posted in About cooperation.
