Some thoughts on “prosocial preferences” (the topic of the last 2 papers).
In the last few years, there have been several animal behavior papers using the term "prosocial". On this topic, I am often puzzled; I really just don't "get it". And I'm not even sure what it is that I don't get, which makes it hard to discuss. I've never tried to write out all my thoughts on this topic in one place (though there are bits here and there in this blog). So I'm going to try and do that now, at the risk of sounding stupid or hostile (that's certainly not my intention). My goal is to honestly discuss why I don't understand the way this topic is discussed in the literature (what's the big deal? what's the question?).
Before I launch into this discussion, please bear with me because I want to say a few more things to reduce the tension I feel right now and make it clear that I'm not intending to attack, belittle, or criticize other people's hard work. I have felt this way about many big emerging topics in my field over the last few years: behavioral syndromes, social networks, generalized reciprocity, and all manner of controversial topics from the social sciences. After reviewing the literature, I eventually settle into having a real opinion, but up until that moment I never know if I truly understand an idea that seems to me obvious, simplistic, or even wrong, but that many smart, experienced people find useful and insightful. One sociological problem is that puzzled people (like myself) tend to discuss the topic less, not more, and if the puzzled remain silent, researchers (even entire fields) don't get the feedback they need. There is then no way to know whether the ideas are being communicated clearly and convincingly, or whether the whole thing sounds unconvincing. Everyone recognizes that science runs on this kind of skepticism.
Sometimes it's hard to know if I actually contest the facts or their interpretation, or if I just disagree with the best framing of the issue, or if I'm merely interested in different aspects of the world than other researchers. For example, it took me a while to understand how multi-level selection and inclusive fitness could each be more useful for different kinds of questions. And it took me a while to understand how social network plots and metrics might provide more insight than the traditional analysis of a sociomatrix. Even in high school, I remember harassing my math teacher about relatively simple concepts, such as "instantaneous rate of change" in pre-calculus ("how can there be a rate of change in a single frozen moment?!?"). The only way to appreciate new concepts or perspectives that seem strange/non-intuitive/wrong is to talk to people who think about them more clearly. So if someone wants to correct my thinking on this topic, I welcome your feedback.
One reason I’ve been thinking about this is that Joan Silk visited University of Maryland and I was able to talk with her a bit about my work on vampire bats. I’m extremely interested in the extent to which social investments are contingently enforced in long-term social relationships. Silk is probably the leading expert on social bonds in primates and other mammals, and she’s done some of the very best studies on the fitness consequences of social relationships in nature, which I think are among the most important studies done in social behavior. Silk and my friend Jennifer Vonk (who studied with Silk) have also done a bunch of work on primate prosocial behavior.
In a typical prosocial test, an animal subject has to choose between two tokens (or two levers or two strings): one that rewards only herself (1/0) and one that rewards both herself and another nearby subject (1/1). This latter option is called the prosocial option. In other versions, a person faces an economic dilemma like the following: she receives $100, and can choose to split her winnings with a partner or keep them all for herself. There is a subtle implicit social expectation that she should share (because the partner might be called an "investor"), but she is told there are no consequences for not sharing.
Such tests are designed to tell us if humans or chimps are prosocial, meaning that they have preferences for outcomes that benefit others. People then ask: Are humans prosocial? Are chimps? Are vampire bats? When did prosocial preferences evolve? Joan Silk said she believes that prosocial preferences originated in humans after they diverged from other apes.
First, before we ask if humans are prosocial in general, what does it even mean to be prosocial in general? In the last paper listed above, prosocial behaviour is defined as “any behaviour performed by one individual to alleviate another’s need or improve their welfare.” Does that mean maternal care by alligators is prosocial? I have a hard time understanding how the notion of prosocial behavior relates to either ultimate evolutionary questions (What’s the role of kin selection? Are there direct fitness benefits to the cooperative trait? If so, are they enforced?) or proximate mechanism questions (What’s the role of oxytocin or receptor density? What are the cues for social recognition? What information does the animal use to decide whether to help?).
Second, does a prosocial preference refer to an animal’s hidden internal motivation (like empathy or a general concern for others), or to the observable behavior (the act of helping others in any situation)? In many cases it’s clearly the former, but I sometimes feel that authors switch back and forth between these two meanings, which obscures the immense central difficulty of animal cognition: translating [what animals do] into facts about [how animals think].
Third, there are clearly many ways humans are unique. So one can certainly define prosocial preferences in ways that stress how humans differ from other animals. Researchers define language this way: other animals communicate, but they don't use language. But this requires, in my mind, a very different approach– one that emphasizes the study of human nature specifically. And the question of "when did human prosocial preferences evolve?" answers itself by definition.
If "prosocial preference" does indeed refer to an internal motivation, then it is clearly not some simple discrete trait; it is not even an observable behavior. It appears to be something like the desire (the motivation) of an animal to help others, regardless of the context. This is why prosocial tests involve exchanging tokens or pulling levers and strings. The tests allow control over the costs and benefits, but they also seek to show that the motivation to help is transferable to a novel situation (because it's about helping another in whatever way, not about performing an instinctual act).
That seems to make sense at first, but it quickly falls apart. Consider for a moment what it means to have a prosocial preference that is unaffected by context (blind to the setting, partner(s), situation, etc). This is an organism that is simply and always motivated to help others. What others? Perhaps familiar others? Or do they just need to be the same species? Or maybe just alive? Is a prosocial chimpanzee expected to care about just other chimpanzees? Or how about kittens? Or mice? Flies? Plants? What if a chimpanzee only really cares about her offspring? Or her immediate family? Is she still prosocial? Or is that not caring enough to count as prosocial?
An adaptationist view is that context should matter a lot. Cooperative decisions should be based on contextual factors that would be useful (for inclusive fitness) in the typical situation in which that species evolved. This view suggests that as a product of natural selection, your emotions should direct you to be nice in pretty biased ways: to your offspring, to your relatives, to socially dominant individuals, to individuals you will meet again, or to individuals that you depend on in some other way. And you should be nice under certain conditions such as when it would help your reputation, or when your help will directly or indirectly come back to help you in return. Indeed, the contextual cues that your mind uses to make these decisions might be really complex, because the evolution of cooperation involves coevolutionary arms races between social strategies seeking to make the best social investments that yield the highest social returns, while simultaneously avoiding exploitation by strategies that are themselves evolving to be better at subtle exploitation. This process is ultimately limited by the complexity of the organisms. But even fungi make strategic cooperative investments that depend on context. To the extent that prosocial preferences are adaptive, they should be pretty context-specific.
In fact, even if prosocial preferences are non-adaptive byproducts (like how people like to have pets), then, assuming we are not just looking at random noise or mistakes, these preferences must be byproducts of something adaptive if they have any kind of design to them at all. So even then, they should still be somewhat context-dependent and triggered by certain stimuli. There is no theory that generosity should be context-free. And it never is.
In sharp contrast, many researchers seem to think about our own species (the supposed exemplars of prosocial preference) in a manner something like this: humans, unlike other animals, care about other people in general (and we will help people in need across many contexts), except perhaps psychopaths who do not care about other people (they might help others but only when it benefits them). Most of us are not psychopaths; humans in general have prosocial preferences.
Think I’m making this up? Is anyone really writing real scientific papers about how humans or other animals are just nice in general? Yes, there are lots. Here’s just one example: an abstract for a recent paper published in the journal Nature Communications entitled “Humans display a ‘cooperative phenotype’ that is domain general and temporally stable”
Understanding human cooperation is of major interest across the natural and social sciences. But it is unclear to what extent cooperation is actually a general concept. Most research on cooperation has implicitly assumed that a person’s behaviour in one cooperative context is related to their behaviour in other settings, and at later times. However, there is little empirical evidence in support of this assumption. Here, we provide such evidence by collecting thousands of game decisions from over 1,400 individuals. A person’s decisions in different cooperation games are correlated, as are those decisions and both self-report and real-effort measures of cooperation in non-game contexts. Equally strong correlations exist between cooperative decisions made an average of 124 days apart. Importantly, we find that cooperation is not correlated with norm-enforcing punishment or non-competitiveness. We conclude that there is a domain-general and temporally stable inclination towards paying costs to benefit others, which we dub the ‘cooperative phenotype’.
I just don’t get it. This conclusion is warm and fuzzy for sure. While I do like the data, I think the interpretation is weird. Cooperation here is being measured on a continuous scale from perfectly selfish to perfectly prosocial/fair/cooperative based on how individuals behave in economic games and thought experiments. People have different and consistent personality traits (surprising?), and we are, on average, not perfectly and rationally selfish in economic games (surprising?). Ok, but here’s the problem. The authors label anything other than being perfectly selfish as “cooperative” and so individuals who give anything other than zero in an economic game are called “cooperators”. So you would have to be perfectly selfish to not be considered cooperative.
For example, imagine I conduct an online public goods experiment where I ask 4 people to each put in $10; the total will then be doubled and split between them (so if each person puts in their $10, then 40*2=80 and 80/4=20, so each person would get $20 back). However, since nobody will know what others put into the public pot, people can and might free-ride. Investing requires trust and the desire to cooperate. That's the idea at least.
Joe expects everyone else to put in $10, but he considers putting in only $1, thinking he will make even more money by exploiting other people's investments. He does the calculation. The pot will then be $31, doubled to $62, and each person gets back 62/4 = $15.50. Everyone else will end up with $15.50, while Joe will have $15.50 plus the $9 he kept, or $24.50. Haha, suckers! That's his plan. It does not occur to Joe that putting in zero is even more rationally selfish: every dollar he contributes returns only 50 cents to him, and putting in nothing also protects him from losing money if everyone else puts in zero. Joe is both a bit irrational and competitive; he wants to "win" the game, but he also has this gut feeling that putting in zero "looks bad" and increases the chances he will be "caught", even though he is told nobody is watching and nobody will know. According to the authors, since Joe did not put in zero like a truly rational selfish agent, he should be labeled "cooperative" and his actions show that he has a "cooperative phenotype".
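The payoff arithmetic under the rules above (pot doubled, then split evenly) can be sketched in a few lines. This is just a sanity check on the toy example; the parameters are mine, not from the paper:

```python
def public_goods_payoffs(contributions, endowment=10, multiplier=2):
    """Each player keeps (endowment - contribution); the pooled
    contributions are multiplied and split evenly among all players."""
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)
    return [endowment - c + share for c in contributions]

# Everyone cooperates fully: pot = 40*2 = 80, so each ends with $20.
print(public_goods_payoffs([10, 10, 10, 10]))  # [20.0, 20.0, 20.0, 20.0]

# Joe free-rides with $1 while the others put in $10:
# pot = 31, doubled to 62, shares of $15.50 each.
print(public_goods_payoffs([10, 10, 10, 1]))   # [15.5, 15.5, 15.5, 24.5]
```

Note that with a multiplier of 2 and 4 players, each dollar contributed returns only 50 cents to the contributor, so a purely selfish player always contributes zero.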
The authors must know that this is just “lowering the bar” for what’s considered selfish, because they even created a new category for any individuals that are perfectly cooperative (in my example, putting in the full $10 and trusting and expecting to get $20 back). These players are called “super-cooperators” and in the public goods game described above, they make up only about 1 out of 7 players. The most common amount given was zero (about 1 out of 3 players), but since the remaining people invested something other than zero, the authors considered this support for the “cooperative phenotype.”
The authors could have written the same results very differently. They could have called anything other than perfect prosocial action “selfishness”. The exact same result would then be written like this:
We found that humans are in general not cooperative (shock!) and that this bias is consistent over time (shock!). So we have discovered that selfishness is domain-general and humans have a temporally stable inclination towards benefitting themselves more than what is best for everyone, which we dub the “selfish phenotype”.
I think it’s safe to say that no one would like this paper and no prestigious journal would publish it. Even before this paper came out, it was understood that much of this kind of evidence for prosocial preferences in humans has been interpreted incorrectly. I wrote a post about that here. These kinds of experiments are important for economic theory, but not for evolutionary theories for human cooperation.
One big problem is that we don't know what the actions in the games actually mean to the players. I participated in a public goods game like this as an undergraduate in an economics class, and I was apparently one of the "super-cooperators" because I cooperated 100%, but not because I was being nice. After all, the money was imaginary and I thought we might have to later raise our hands to show our choice (which we did!). Then everyone looked around and saw who was "cooperative" and who was "selfish". It was an environmental economics class, and at least from the raised hands it seemed like all the environmental science majors chose the "prosocial" option whereas the economics majors chose the "selfish" one, probably because we both thought that was, like, the "correct" answer. In retrospect, I cared much more about what people thought of me (or might think of me) than about the imaginary money. I even noticed that a girl in the class (who I later married) picked the same prosocial choice as me. Maybe I also wanted to make a point that people are not selfish. Maybe I wanted to feel good about myself. I don't remember. But I do remember that all this sort of stuff was much more important to me than the actual payoffs that were the focus of the game.
My point is that, again, context matters. Your choice in the game depends entirely on how you interpret the game. An even better example: Bailey House, Joan Silk, and others conducted a prosocial test with kids and found that the kids liked to choose the non-prosocial option (I get $1, you get nothing) over the prosocial option (we both get $1) because they thought it was funny. To detect a prosocial preference in these kids, the researchers had to take out all the data from trials where the kids were laughing, indicating they were not playing the game "correctly". With other animals, it's even less obvious how they interpret the game.
Why are we even trying to figure out if humans are "prosocial" or not? The funding for the "cooperative phenotype" paper comes from the Templeton Foundation, which is all about the "big questions" bridging the gap between science and spirituality. This apparently includes silly simplistic questions and answers like, "Are people cooperative? Yes or No."
Of course some people are consistently nicer than others. But even the most remarkable generosity of the kindest person is context-dependent. If Jesus or Buddha cared about every person unconditionally and equally across every possible context, then that's just another way they are different from everyone else. For the rest of us, when we lack the proper contextual cues, we show the selfish preferences of a psychopath. For example: Do I care about starving children? Of course. I am certain that I do. I would help feed a starving child that was right in front of me. Yet, every day of my life, I undeniably refuse to help millions of starving children simply because those children are out of my sight. And it's not because I'm making a mistake or because I keep forgetting. Nope, I could be making a donation right now to save a starving child, and instead, I'm writing this blogpost. Does that mean that my preference is to write opinionated rants rather than help starving children? Does that mean I value rambling over human life? Am I a psychopath with zero concern for human life? Maybe.
My desire to help others relies on some external stimuli triggering my empathy (that is, empathy is context-dependent). Hit this nerve hard enough, and I get out my wallet and start making donations. To feel sympathy for a starving child, I have to at least picture in my mind's eye a sad child's face. This empathetic response is triggered by the contextual cues–the real or imagined sights and sounds of a starving child– not by the mere knowledge of the relative payoffs of helping or not helping. I do not have a defined and discoverable preference for how much I want to help starving kids, so there is no real answer to the simplistic question of whether or not I care about starving children.
Human prosocial preference, like the weather, is unpredictable, because so many factors influence it. This is because social decisions are complex. I can have a stable preference for, say, eating apples or not, but that’s very different. Apples are not agents. Apples won’t try to eat me back if I make the wrong decision, apples don’t influence my future wellbeing, and eating apples does not lead to other people judging or punishing me. Change any of those variables, and my apple preference should become pretty complex and context-specific. How much do we care about the wellbeing of rats? It depends on the context. Are they pets or pests? Or are they food or are they subjects in an experiment?
Say a person comes up to me tomorrow and asks for a donation to help a starving child. Would I do it? And how much would I donate? I don't know. It depends. Most importantly, it depends on all kinds of weird stuff that should be irrelevant if I had a certain stable level of concern for starving kids.
So if an experimenter tests your "prosocial preference" in an experiment (Will you help a starving child?), the actual outcome is going to depend on how well the experimental conditions trigger your empathy, on whether you feel that you're being watched or judged, on how much time and energy it will take you to perform the helping act, on what the expected rewards will be, and so on. And perhaps none of those considerations will even be conscious. All of this before you even consider the monetary payoffs to you and the child (say $1 for one meal). The ease of manipulating human preferences using such conscious or subconscious contextual cues is obvious to anyone who knows anything about marketing. What someone is willing to pay for object X is not really matched to their internal value for object X.
Even worse for the prospect of correctly testing my prosocial preference, within a single social decision, different parts of my mind will have different preferences, and different contexts (e.g. setting, audience) or internal conditions (e.g. hormones, blood sugar) will engage these different parts to different extents. The human mind is not a singular rational actor with a unified singular goal. Even if some parts of me care deeply about others, there are other parts that are only concerned with my own selfish needs.
My point is not that people are, underneath it all, really selfish. My points are (1) that studying preferences is actually more difficult than it may seem at first, and (2) that human social preferences should have an adaptive design that is context-specific, just like the social preferences of any other animal. We should expect human nature to be adaptive for a particular social environment, not “cooperative” or “selfish”. Humans are just nice enough to readily cooperate in the social environments in which we evolved, but not so nice as to be exploited by others in those same social environments. When humans act non-adaptively nice or selfish in a laboratory experiment, we should first suspect it is because the experimental context is not ecologically relevant. Humans evolved moral emotions to help them solve the very complex problem of getting along with others in the real social world, not to be rational selfish agents or good consistent utilitarian moral philosophers. That is why context matters.
For all these reasons, measuring/testing for a general human prosocial preference (to the extent that there is such a thing) is actually a much more difficult task than merely describing natural patterns of a specific kind of human prosocial behavior (like when an anthropologist studies food sharing patterns in a human tribe) or testing the contextual cues that promote it (like an experiment testing the effects of audience cues or the facial similarity of the recipient).
In my mind, measuring preferences in general can only mean testing them across a wide range of standardized contexts and situations. Otherwise, we are either testing preferences under a single context, or just assuming that preferences rely on only a few variables, like the relative payoffs of the outcome of the actions (like incorrectly assuming that my decision to help a starving child just depends on the relative amounts of money that it costs me and what the child receives).
All of this is true for nonhuman animals as well. Is species X prosocial? Will members of species X help a starving individual? It will depend on the context, and probably on many different aspects of the context. The food sharing behavior of a particular vampire bat depends not just on the identity and hunger state of the donor and recipient; it is also highly sensitive to whether the donor is stressed or not, in a familiar setting or not, or separated from the recipient by cage bars or not. It would not be that surprising to me if it also depends somewhat on how much she has been helped by other bats (called generalized reciprocity), if only because this provides some information about the possibilities for direct reciprocity. And I also think the donor might give more or less depending on the number of alternative partners around her, either in the present or in the recent past.
We already know humans and chimps and vampire bats cooperate under some circumstances but not others. The question is, what are those circumstances? We do not need studies on whether animals are "good" or "bad" or "prosocial" or "selfish" in ways that are "domain-general" (whatever that really means). We need studies on what contextual cues trigger prosocial behavior in the first place. We can predict these cues using evolutionary theory (e.g. inclusive fitness, reciprocity, costly signaling, biological market theories), and the ecology and life history of the organism.
Like humans, animal motivations to help should be triggered by the sights and sounds of distress, and other situational cues, and not by some kind of conceptual understanding of the payoffs or the insight that someone is in need. After all, we humans have this conceptual understanding, yet we ignore it every time we drink a latte at Starbucks instead of donating that money to help other people in need. If our empathy was not limited in this way, I’m not sure we could live healthy lives.
Moreover, the vast majority of human cooperation (such as obeying social norms, laws, paying membership fees to non-profits, automated donations to charities, and the buying and selling of goods/services) does not require any sort of empathy or “prosocial” inclination at all. We just follow the social norms. Our lives are embedded in designed systems that nudge and coerce us into behaving cooperatively. If I donate blood, I’m not always sitting there thinking about, and motivated by, all the people out there who need blood. Perhaps I’m not feeling empathy at all. I might even be thinking about what a good person I am. Maybe I even get a sticker to show people that I donated blood. The behavior and the motivation are not so clearly linked.
If you want to see real behavior motivated by real empathy or internal "prosocial preferences" in people (rather than people trying to adhere to their belief systems or other social norms or expectations), I don't think you can just put people in abstract economic games or thought experiments. The best approach in my opinion would be to create a realistic naturalistic situation that evokes the proper prosocial emotions (like having a person collapse on the street and seeing the response of bystanders). Then the question would be: What factors can turn the "prosocial behavior dial" up and down? The same logic applies to other animals.
Thanks to the generosity of Robert Baker, I was invited to visit with faculty and students at Texas Tech University and give 3 talks– a biology seminar, a family-friendly outreach talk, and a brief show-n-tell to an introductory biology class regarding my work with vampire bats.
Thank you to everyone who met with me and showed me around. I had some really interesting discussions with faculty and graduate students. It makes me realize I should put more time and effort into having these kinds of informal but stimulating conversations with people at my own school.
Texas Tech now probably has the largest collection of bat-focused faculty researchers of any school I've visited, with Tigga Kingston, Robert Baker, Richard Stevens, and now Liam McGuire. Liam (who I've known since 2007) studies links between physiology, ecology, and behavior in bats, with a focus on migration and hibernation. He has just set up his new lab and has a really, really cool research program planned. While looking for a link, though, I see that he still needs to make a lab website! I look forward to some amazing work coming out of his lab in the future! Also, his kids are adorable.
I had an interesting discussion about transposable elements and gene duplication with David Ray and Neal Platt.
Tigga Kingston is a conservation biologist who has conducted long-term ecological studies of bats in Southeast Asia. She has a terrific group of graduate students doing a remarkably wide variety of conservation-relevant projects around the world. I was particularly excited about Marina's work on the SEABCRU online bat database. These kinds of scientific contributions (where a huge amount of data is made available to many people) are not given the due academic credit that they deserve– they are way more important than a single paper. I'm glad many people in science are creating incentives for sharing data and not simply papers. I was also really impressed by the work of Kendra Phelps, Joe Huang, and Julie Senawi.
In the bat world, Texas Tech is synonymous with Robert Baker, who has been there for 46 years– more than half of the duration of the school’s existence. He has mentored about 100 graduate students in that time. Baker has spent much of his career studying the diversity and evolution of the phyllostomid bats, and as early as the 1960s he understood the importance of using genes and chromosomes (rather than just morphology) for constructing phylogenetic trees. In 2003, he published an influential phyllostomid phylogeny that assessed 48 of the 53 identified genera. I enjoyed talking systematics and phylogenetics with his graduate students Julie Parlos, Howie Huynh, and Cibele Caio. Julie was an extremely generous (and organized) host.
Julie and Kendra took me along on their acoustic survey, listening for bats along a transect while driving around Lubbock at 20 mph. We heard no bats. But it made me realize that simply cruising around, hanging out, and chatting about science could provide some useful data if you make it a recurring event and attach a bat detector to the roof of your car. Why didn't I think of that? Also, I wish I lived someplace that had a "prairie dog town" as part of the local park.
It was the most enjoyable time I’ve had visiting a university.
In response to a talk I gave at the bat meetings, some people saw a problem in the experimental design of my partner choice tests, because I had a condition where a bat can’t reciprocate, but not a condition where a bat won’t reciprocate. I do know that hungry bats will beg other hungry bats, but the argument is that I don’t know if they will treat a simultaneously hungry bat that can’t reciprocate in the same way as a bat that won’t reciprocate. And perhaps bats would only “punish” or “abandon” partners that choose not to reciprocate, not those unfortunate bats who fail to reciprocate due to being repeatedly absent or starved themselves.
The first obvious logistical problem is that creating a situation where a bat can reciprocate but won’t is extremely difficult and maybe impossible in practice. But I don’t think this even matters that much because a bat should respond to both can’ts and won’ts, and here’s why.
Imagine you’re a female vampire bat maintaining cooperative social relationships with several other bats. Your time and energy are limited and you should therefore choose wisely with regards to which individuals you target with your social investments (i.e. food sharing and social grooming). Assume that you are equally related to Bat A and B, and that Bat A consistently feeds you when you are hungry. Under which of the following scenarios, should you begin to invest more in Bat A and less in Bat B?
1. Bat B never feeds you even when she has food to give you.
2. Bat B never feeds you because she is always hungry herself.
3. Bat B is never around when you are hungry.
The answer, which seems obvious to me, is that you should prefer Bat A over B in all 3 scenarios. That is, under all 3 scenarios, you should invest less time and energy feeding Bat B and instead use that time and energy feeding and grooming Bat A. Now there might be differences between the three scenarios in how long it takes you to start preferring A over B. You might be far less forgiving in scenario 1 than in scenarios 2 and 3, but the basic point is that you should reduce your investment in Bat B in all 3 cases.
The problem here is that people sometimes think that you, as the female bat, should only care about scenario 1, because that matches people’s notions of “cheating,” whereas the others are accidental: Bat B has an excuse, so it’s not cheating. People think it matters a lot whether a bat won’t reciprocate versus whether it can’t. I agree there’s a difference, but the difference is quantitative, not qualitative.
To the extent that you are a Darwinian agent, your only concern is the probability that an investment leads to a fitness return. If a bat won’t reciprocate, this information will certainly change your prediction of how likely that bat is to reciprocate in the future. If a bat can’t reciprocate and has an excuse (“I was not around”; “I did not feed either”), you should judge this as less informative about whether that bat will reciprocate in the future. The kind of information you glean is the same; only its weight differs. The partner that won’t reciprocate is probably a bad social investment, whereas the partner that can’t reciprocate *might* be a bad investment (or the failure might just be a fluke). In either case, you should remember the event, and it should damage (if only very slightly) your relationship with that partner.
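To make the quantitative-not-qualitative point concrete, here is a toy sketch. It is my own invention, with made-up weights and names; nothing here comes from a published model. A focal bat discounts its estimate of a partner’s reliability after every failure to share, with a “won’t” discounted more steeply than a “can’t”:

```python
# Toy sketch of "won't" vs "can't" as quantitatively different evidence.
# All names and weights are made up for illustration; this is not a
# published model. A focal bat tracks an estimated probability that a
# partner will reciprocate, and discounts it after every failure to share,
# with deliberate refusals discounted more steeply than excused failures.

def update_estimate(p, informativeness):
    """Shift the estimated reciprocation probability toward zero by a
    fraction proportional to how informative the failure was."""
    return p * (1 - informativeness)

WONT_WEIGHT = 0.5  # partner had food but refused: strong evidence
CANT_WEIGHT = 0.1  # partner was absent or starved: weak evidence (maybe a fluke)

p_wont = p_cant = 0.9  # initial trust in two hypothetical partners
for _ in range(3):     # three consecutive failures to reciprocate
    p_wont = update_estimate(p_wont, WONT_WEIGHT)
    p_cant = update_estimate(p_cant, CANT_WEIGHT)

# Both estimates fall (same kind of information, different weight),
# but trust in the "won't" partner collapses much faster.
print(p_wont < p_cant)  # True
```

The only point of the sketch is that both failure types carry the same kind of information and both should erode the relationship; the difference is in the size of the discount, not its existence.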
If you simulated cheating in Bat B by consistently removing (or fasting) Bat B whenever the subject bat was hungry (i.e. bat B can’t reciprocate), then biological market theory suggests you should certainly see a response, because you have made that partner look worse relative to others. The partner switching response might be faster if the bat won’t reciprocate, but it should come eventually in any case.
What’s a cheat? In the evolution of cooperation literature, authors often talk about “cheats” (reviewed here) to describe an individual that exploits the cooperation of others by gaining the fitness benefits without paying the fitness costs. Talking about “cheats” is often very useful for explaining why evolutionarily stable cooperation often requires some form of conditional enforcement or discrimination. But the term can also create unnecessary confusion, for at least two reasons. First, “cheat” sounds like a discrete type, behavior, or trait, whereas much of the time it’s used to describe scenarios where individuals vary continuously. Imagine if we talked about human prosocial behavior using the term “cooperators” for anyone who cooperates more than average and “cheats” for anyone who cooperates less than average. Most people (who cooperate to roughly an average degree) would be ambiguous cases. You can’t model this the same way you might model conflict between truly discrete “types,” like males and females or discrete reproductive strategies. This is a problem I see in the behavioral syndromes literature too.
Second, people often assume “cheating” refers to something cognitive or intentional, rather than just variation in a cooperative trait. This is not such a problem in bacteria or plants, but it can be a big problem in animal cooperation studies. For a (hypothetical) example, in a group of vampire bats clustering for warmth, a “cheat” could simply be an individual that maintains a slightly lower body temperature, allowing itself to be warmed by an adjacent body. Two normal bats that cluster together each pay some cost and receive the benefit of the other’s warm body. But a normal bat that clusters with a cold cheat is not receiving that benefit; only the cold cheat benefits, and the normal bat might be paying a larger cost. In reality, “cold cheats” probably do not exist because of physiological constraints, but the logic still applies. Cheating in this scenario is not a strategic behavior; it’s just a physiological trait.
To be clear, I’m not saying that the distinction between can’t help versus won’t help doesn’t matter. Some authors writing on this topic have argued that it would be cognitively difficult to distinguish between partners that can’t and won’t reciprocate, and so this might be a very rare distinction for animals to make. But precisely such a distinction has been demonstrated in cooperatively mobbing birds.
Experimenters first used fake owls to induce cooperative mobbing among triads of pied flycatcher mated pairs, nesting in three equidistant nestboxes. One pair (the subject pair) was exposed to a fake owl near their nestbox to induce mobbing. The second pair (the defector pair) was held captive (either nearby in a blind or trapped inside their own nestbox) and hence prevented from mobbing. The third pair (the helper pair) was left untreated, so that the helper pair always helped the subjects with mobbing while the defector pair could not. The authors then simultaneously presented the helper and defector pairs with owls and tested which nestbox the subject pair would choose to help. The subjects helped the helper pair more often. In a follow-up experiment, the defector pair was presented with an owl. In most trials, the helper pair, but not the subject pair, joined the defector pair in mobbing. This makes sense because the defectors had only defected against the subjects, not the helpers.
Then, using the same setup, the experimenters showed that the degree of reciprocity was also sensitive to whether a partner’s failure to mob was excused by its absence (“the excuse principle”). To simulate voluntary defection, the experimenters removed the defector pair but played their alarm calls to make them seem present. To simulate involuntary absence, the experimenters completely removed the pair during the predator presentation, leaving no sign of their presence at all. When the captured birds appeared present but unwilling to help, the subjects later reciprocated help in only 2 of 20 cases; when the captured pair was completely absent, the subjects reciprocated help in 20 of 21 cases.
This is a great experiment, but it does not suggest that the birds will forgive indefinitely. I think we can be fairly confident that if the absent birds were consistently and repeatedly absent whenever an owl showed up, the subjects would begin to reduce their help towards those partners as well, because repeated “voluntary defection” and repeated “unintentional absence” are just two different ways of being a bad cooperative partner.
Alongside a partner’s capacity to help, many other factors should influence the degree of contingency in a cooperative exchange. One is the cost-benefit ratio of the social investment. For example, mobbing birds only help past helpers at fairly distant nestboxes; when responding to an owl at a very close neighboring nestbox, they always help unconditionally (see here). This is because mobbing an owl that is very close to your own nestbox has a large immediate selfish benefit (which by itself outweighs the cost of doing nothing), whereas mobbing a more distant owl imposes large costs that have to be compensated by the relationship you build with the neighboring pair.
Kinship is another factor that can influence the degree of experience-based contingency in helping decisions. So is the number of currencies: social bonds consisting of multiple cooperative services should decrease the contingency that can easily be measured within any one service, because asymmetries in one service can be balanced out by other services.
So now all these interactions begin to get very messy, because we have multiple factors moving the degree of contingency in opposing directions. Vampire bats make larger social investments in highly bonded partners, which should make contingency very strong. But highly bonded partners might have multiple ways of helping each other (multiple currencies), which makes the measurable contingency within each currency very weak. Moreover, social bonds tend to form between close relatives, and kinship could conceivably decrease contingency (because investors are compensated through indirect fitness) or increase it (because larger unreciprocated investments in kin are worse losses than smaller ones). Hopefully, models that integrate kinship, partner choice, and exchange of multiple services will increasingly tackle such complexities and give us some clear predictions.
The one thing we can say for sure is that “tit-for-tat” (where a bat remembers only the last round with a single partner and makes a binary “yes” or “no” helping decision within a single service) is not a good model for predicting the pattern of cooperation in vampire bats, or probably in any long-lived social animal.
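As a rough illustration of why (this is my own toy simulation, not a model from the literature, and all the numbers are arbitrary): when partners occasionally fail to help through no fault of their own, a strict last-round copier echoes each accident into lasting defection, while a strategy that remembers a partner’s overall helping rate absorbs the accidents:

```python
# Toy simulation (my own illustration; all numbers are arbitrary) of why a
# strict last-round rule handles involuntary failures badly. With a small
# chance that a willing partner is "absent" each round, tit-for-tat echoes
# each accident into lasting defection, while a strategy that tracks the
# partner's overall helping rate absorbs accidents and keeps cooperating.

import random

def simulate(strategy, rounds=200, error=0.05, seed=1):
    random.seed(seed)
    history_a, history_b = [], []  # what each bat has done so far
    helpful_acts = 0
    for _ in range(rounds):
        a = strategy(history_b)  # each bat decides from the partner's record
        b = strategy(history_a)
        # occasionally a willing partner fails to help through no fault of its own
        if random.random() < error:
            a = False
        if random.random() < error:
            b = False
        history_a.append(a)
        history_b.append(b)
        helpful_acts += a + b
    return helpful_acts / (2 * rounds)  # fraction of acts that were helpful

def tit_for_tat(partner_history):
    # copy the partner's single most recent move
    return partner_history[-1] if partner_history else True

def longer_memory(partner_history):
    # help as long as the partner has helped most of the time overall
    if not partner_history:
        return True
    return sum(partner_history) / len(partner_history) > 0.5

print(simulate(tit_for_tat))    # low: accidents echo and lock in defection
print(simulate(longer_memory))  # high: accidents are absorbed
```

Because an “absence” here only ever turns helping off, tit-for-tat can never recover once both partners’ last moves are defections, which is exactly the fragility that makes it a poor caricature of long-term social bonds.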
Did you know that Oct 26–Nov 1 is National Bat Week?
October 23. Bat Meetings in Albany, NY. My 15-min talk is “Complex Cooperation: Food Sharing in Vampire Bats is Not Simply ‘Tit for Tat’”.
October 30. Public outreach event at the Museum of Texas Tech University, Lubbock, Texas entitled Vampire Bats: The Secret Lives of Real Vampires. A kid-friendly talk with games and activities. 6-8 pm in the Helen DeVitt Jones Sculpture Court. With cookies apparently! https://www.depts.ttu.edu/museumttu/programscal14.html#oct13
Also, there’s an ongoing exhibit, “Vampire Bats – The Good, the Bad, and the Amazing” at this museum. I’ll be staying in Lubbock, Texas until Nov 1.
I also recently wrote this paper on why behavioral ecologists don’t agree on the importance of reciprocity in animal social behavior. In a nutshell, it’s because different authors use the term in contradictory ways (only some of which make sense). This paper was generously invited by my friend and collaborator Dr. Jennifer Vonk, who studies animal cognition. Here’s the abstract:
Reciprocity (or reciprocal altruism) was once considered an important and widespread evolutionary explanation for cooperation, yet many reviews now conclude that it is rare or absent outside of humans. Here, I show that nonhuman reciprocity seems rare mainly because its meaning has changed over time. The original broad concept of reciprocity is well supported by evidence, but subsequent divergent uses of the term have relied on various translations of the strategy ‘tit-for-tat’ in the repeated Prisoner’s Dilemma game. This model has resulted in four problematic approaches to defining and testing reciprocity. Authors that deny evidence of nonhuman reciprocity tend to (1) assume that it requires sophisticated cognition, (2) focus exclusively on short-term contingency with a single partner, (3) require paradoxical evidence for a temporary lifetime fitness cost, and (4) assume that responses to investments are fixed. While these restrictions basically define reciprocity out of existence, evidence shows that fungi, plants, fish, birds, rats, and primates enforce mutual benefit by contingently altering their cooperative investments based on the cooperative returns, just as predicted by the original reciprocity theory.
Today, after manually entering >3,000 rows of behavioral observations into Excel from paper scoresheets, I’ve decided to record observations on computers whenever possible. Makes me wonder what kinds of apps are available for data scoring?
Finally, I should know by the end of the day whether my oxytocin treatments worked. The treatments seemed to slightly increase grooming and food sharing in the first pilot study with a small group of females, but they did not seem to work in a second small study with male bats and their moms. I’ll see very soon how it turned out.
Why biologists say group selection is wrong, but it’s not, but it is… kinda.
Whenever I talk about vampire bat food sharing to a public audience, someone will inevitably say something like, “Wow! It’s amazing that vampire bats will feed each other to perpetuate their species” or “It’s so interesting how vampire bats will act for the good of the group” (this despite the fact that a main point of my talk is that they don’t act for the good of the group). The idea is pervasive, “Animal X does Y to perpetuate the species/group/population/ecosystem.” It originates, I presume, from years of Disney animal documentaries on how lions eat zebras to keep the circle of life going. Little do people realize that if you make this simple innocent statement in the presence of a talkative biologist, it will induce a frustrated sigh followed by a boring and condescending monologue that begins something like:
“Aaactually… that’s not really how evolution works… [bla bla bla]”.
Behavioral ecologists refer to this popular idea, which they love to hate, as “group selection” and many consider it to be an out-of-date theory, or biological myth, akin to Lamarck’s famously wrong idea that giraffe necks are long because giraffes keep stretching them to reach stuff. Richard Dawkins is well-known among biologists not for being an outspoken atheist, but because he wrote a book, The Selfish Gene, that could just as well have been called The Group Selection Delusion.
In modern evolutionary biology, there is nothing controversial about the existence of group selection. It occurs when individuals live in social groups and those groups go extinct or proliferate at different rates. You can easily create group selection in the lab and show that it produces certain traits, and stable social groups in the wild can clearly have differing rates of extinction.
Yet, for many evolutionary biologists, talking about group selection is almost like talking about gender politics, gun control, or the Israeli-Palestinian conflict. You better tread carefully. Group selection has been a controversial topic in evolution since this 1964 exchange in the journal Nature, and biologists are still reading and writing exchanges on it. Researchers who invoke the theory of group selection in their work are always on the defensive, wary of being dismissed as someone who doesn’t really understand social evolution.* Why all the controversy?
The real controversial question is: Under what conditions does group selection lead to group-level adaptation? That is, when should we expect individuals to act or possess traits “for the good of the group”?
The textbook answer is basically: never. Adaptive behavior maximizes gene propagation, not the success of social groups. But a more correct answer is that individuals can be said to act for the good of the group under two specific conditions. First, when group members are genetically identical (in this case, group selection can be equivalently viewed as kin selection). Second, when all competition exists between groups rather than within groups (i.e. “altruism” within the group can be equivalently understood as cooperation with group members in collective competition with everyone else in the population). This is what happens when you perform group selection in the lab: you are effectively suppressing genetic competition within groups and creating genetic competition between groups.
If either of these two conditions is met, individuals can eventually possess traits that appear to exist for the good of the group, even at the expense of their own reproduction (such as worker bees that sacrifice their lives for the colony). In many cases, both conditions are met. For example, all your (non-cancerous) cells act for the good of the group (i.e. your body) because the cells share the same genome (more or less). But in addition, competition between your cells is suppressed: the best way for each cell to compete with all the other living cells in the world is not to selfishly replicate as a rogue individual (like a cancer cell), but to cooperatively coordinate its actions with the other cells in your body to make babies. As an animal, there’s a limit on how many cells your body can make through growth, but there’s essentially no limit on how many new bodies (and hence cells) it can make through reproduction.**
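These two conditions can be read straight off the standard multilevel partition of selection, the Price equation (I’m adding it here as background; it is not from the spider paper discussed below):

```latex
% Multilevel Price equation: the change in the population mean trait z
% splits into a between-group term and a within-group term.
\bar{w}\,\Delta\bar{z}
  = \underbrace{\mathrm{Cov}\!\left(w_k,\, \bar{z}_k\right)}_{\text{between-group selection}}
  + \underbrace{\mathrm{E}\!\left[\mathrm{Cov}_k\!\left(w_{ik},\, z_{ik}\right)\right]}_{\text{within-group selection}}
```

Clonal groups make the within-group term zero because the trait does not vary within groups; suppressed within-group competition makes it zero because fitness does not covary with the trait within groups. Either way, only the between-group term is left to drive evolution.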
OK, now on to the spiders!
In summary, evolutionary theory tells us that group selection does not really lead to group-level adaptation, except under the strict conditions when the groups are clonal or devoid of any within-group competition. Theorists have shown this many times in many ways.***
But a recent (soon to be controversial) Nature paper claims to have done the impossible: demonstrated that group selection has led to a group-level adaptation even in groups that are neither clonal nor devoid of within-group competition. The paper, entitled Site-specific group selection drives locally adapted group compositions, is fascinating. I posted the PDF here (because the journal Nature charges you money to read an article that the author gives away for free).
Before you get lost on the internet and destroy an hour reading about how interesting social spiders are, here’s the basic group selection story in a nutshell. In these amazingly cool group-living spiders (Anelosimus studiosus), individuals can have two distinct personalities: aggressive or docile. Each group of spiders has a stable ratio of aggressive to docile spiders, and the optimal ratio differs across environments (just like an optimal phenotypic trait, such as skin color, differs across environments). This ratio persists when a spider group is moved to a different environment (just like how your skin color largely remains the same when you move to a new environment). Each group has some evolved optimal ratio of aggressive to docile spiders, and the actions of the individual spiders somehow move the group towards that optimal ratio and maintain it there. The conclusion: this optimal ratio is a group-level adaptation driven by group selection.
In the authors’ own words:
Our observation that groups matched their compositions to the one optimal at their site of origin (regardless of their current habitat) is particularly important given that many respected researchers have argued that group selection cannot lead to group adaptation except in clonal groups and that group selection theory is inefficient and bankrupt.
Our study shows group selection acting in a natural setting, on a trait known to be heritable, and that has led to a colony-level adaptation.
Does this mean that all these evolutionary theorists were wrong? No. I think (and please someone correct me if I’m wrong) that the contradiction comes from two different meanings of group-level adaptation. When I think of a “group-level adaptation” I think of a phenotypic trait of an individual bee that maximizes the success of the bee colony. And I think this is how most theorists were using the term as well. When you think in terms of inclusive fitness (fitness is a property of individuals not groups), you also tend to think of “adaptive traits” as properties of individuals rather than groups (one level up) or cells (one level down).
A different meaning of a group-level adaptation–the one used by these authors in the spider study–is a “trait” of a colony that cannot be reduced to the traits of the individuals. For example, this could be the size of a bee colony, the ratio of types in a spider colony, or the diversity of personalities in a human tribe.
So really, we have two different types of group-level adaptation. It seems that group selection is not a good explanation for the first kind (heritable traits of individuals that maximize group survival), but it can explain (and is perhaps the only explanation for) the other kind of group adaptation (heritable “traits” of entire groups). It’s important not to confuse these two different things that are often labeled by the same term. In the latter case, does it really make sense to ignore the individuals and talk about group-level fitness or group-level adaptations? I don’t know, maybe.
I used to be really annoyed by group selection, because it always seemed to gloss over what I thought were the interesting aspects of behavior (the strategies and interactions of individuals within the groups). Just showing that cooperative groups outperform non-cooperative groups does not explain what makes cooperation stable within each group. That seemed like the real difficult question. In my mind, groups don’t perform “behaviors”, individuals do. So talking about collective actions of groups without understanding what the individuals are doing just seemed confusing to me.
But now, after reading this spider paper and others like it, I am beginning to see more and more that being able to “zoom out” and see groups as having “traits” might be useful in some cases. This is not necessarily just taking the group mean and ignoring the variance (as you do when you talk about things like “bat roost 1 kinship vs bat roost 2 kinship” or “boy height vs girl height” or “white wealth vs black wealth”, where you reduce populations to a single average value). Just as in a t-test, we can simplify matters by talking about mean differences between groups without losing sight of the variation within groups.
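For instance (made-up numbers, just to make the t-test analogy concrete), comparing the mean “group trait” of colonies from two sites keeps the within-group variation explicitly in the calculation instead of discarding it:

```python
# Made-up numbers, just to make the t-test analogy concrete: comparing the
# mean "group trait" of colonies from two sites, where the within-group
# variation stays in the calculation instead of being discarded.

import math
import statistics

site1 = [0.30, 0.35, 0.40, 0.45, 0.50]  # e.g. a colony-level trait at site 1
site2 = [0.55, 0.60, 0.65, 0.70, 0.75]  # the same trait at site 2

m1, m2 = statistics.mean(site1), statistics.mean(site2)
v1, v2 = statistics.variance(site1), statistics.variance(site2)
n1, n2 = len(site1), len(site2)

# Welch's t-statistic: the group-level comparison (difference in means)
# is scaled by the individual-level variation within each group.
t = (m2 - m1) / math.sqrt(v1 / n1 + v2 / n2)
print(round(t, 2))  # 5.0
```

The point is just that “zooming out” to group-level traits doesn’t require throwing away the individual-level variation; the variance stays in the denominator.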
But I still think it’s crucial to figure out what the individuals are actually doing to create this emergent behavior. In this case, we still don’t know. What exactly are the individual spiders doing to reach their optimal ratio? Are they monitoring and policing certain types? Are certain types leaving the group to start new groups? Are they switching groups? I don’t feel like I can understand what’s going on until I can answer these questions. The trick of using group selection theory is to figure out when it’s really solving a puzzle or when it’s just obscuring it.
Should we always try to reduce the properties of groups to the individuals and their interactions? Or should we allow ourselves to think of groups as “individuals”?
I think both. The semantics are confusing, but there’s no contradiction whatsoever between these perspectives. Even if one can’t keep both perspectives in focus at once (I can’t), one should be able to think in both ways and switch between them. If you can’t explain the behavior of the whole using the interactions between the parts, then you don’t really understand the whole. But if you can’t switch perspectives and zoom out to see the whole as an individual node in an even larger network, then you can’t see the bigger picture. (If you like that idea, you’ll love this.)
* At conferences, I’ve had conversations with graduate students working on group selection who were obviously trying to suss out where I stand on the issue, to figure out “Is this a safe place to discuss group selection?” And on the other side of the coin, I’ve heard one researcher say something like, “Did he just invoke group selection? Jesus Christ.”
**I guess there’s actually an interesting exception: the immortal HeLa cancer cells, which came from one woman’s cervix, are now in labs all over the world, and collectively weigh more than 20 tons. If these human cells were all in one place, there would be a giant human cervical tumor weighing the equivalent of 250 humans. So this woman’s cancer cells were much more evolutionarily successful than her other cells, whose DNA is diluted by half every generation.
***By “demonstrate” I mean using models and theory laden with partial differential equations and other mathematical arguments that most biologists–like myself–don’t fully follow, but we will never admit that because we are too embarrassed by our mathematical incompetence and want to think we are real scientists too. We read the words in the introduction and discussion, and we get the gist. We know statistics. That’s math, ok?