Last month, I moved my vampire bats to the Organization for Bat Conservation in Bloomfield Hills, Michigan. I’m now working on a group of more than 30 bats.
I recently attended a small conference on The Evolution of Morality at Oakland University.
My favorite talk was by Robert Kurzban on the evolution of third-party punishment (in humans). Kurzban started his talk with the crucial point that morality and altruism are not the same thing. Altruism is mostly about helping other people. Morality is largely about judging people’s actions. There is an obvious adaptive reason to judge our own actions (mostly because other people will also judge our actions and reward and punish us accordingly). But here’s the real mystery: why do we morally condemn other people’s actions? Even the actions of people who have no effect on us? And why do we do evolutionarily nonsensical things like condemn the actions of our friends and family, even when those actions do not hurt us? Imagine a scenario between a bully and a victim. The aspects of human nature that can drive bullying make evolutionary sense. The aspects of human nature underlying the actions of the victim typically make evolutionary sense too. But now consider a third-party observer who is unaffected by the dispute. Most third-party observers don’t involve themselves in a typical dispute (for better or worse), but interestingly, almost everyone passes judgment. That is, almost everyone mentally chooses a side. Why do people always engage in moral condemnation (even though most people will never take any moral action)?
There are many ideas about this, but most of them don’t predict the weird ways people make moral decisions to condemn or not condemn others. And they don’t answer why we should even take the time to pass judgment at all. Kurzban makes a very compelling argument that the adaptive function of moral condemnation is for each third-party bystander to choose the larger side in a dispute. All bystanders want a consensus to be reached on who is right in a dispute so that there won’t be a long, drawn-out conflict. And they also want to be allied with the larger, winning side. But how do we know which will be the larger side? Well, let’s examine some possible strategies.
One strategy would be to always side with your friends and family. Here the problem is that many other people will be against you. If your friend murders someone and you take his side, now you have to defend yourself against the victim’s family and friends. The result is a lot of fighting. Not good. Another option: you could always side with the most powerful people. The disadvantage here is that it will quickly lead to a situation where certain people will dominate and repress everyone, including you. Not the best outcome long-term. Kurzban argues that people have evolved to conform to an implicit set of rules within each group for coordinating whose side to take. Conforming to these rules itself requires cooperation, but it’s easy to see how it would benefit everyone to have this set of rules. Crucially, the rules have to depend on people’s actions, not their identities. For instance, a rule might be something like: “If person A tries to kill person B for personal gain, then person A is wrong; take the side of person B,” or “If person A steals from person B, then take the side of person B,” and so on.
Obviously, the actual moral rules we follow are more complicated and also not always so explicit or conscious. We often don’t know what they are, except through gut feeling. We can’t articulate why certain things just “feel wrong.” Compare moral rules with language. In language acquisition, we each adopt words and rules from a local language, but we all have an instinct to learn a language, a spot in our mind that says “language goes here.” Likewise, humans have an instinct to adopt a local set of moral norms and to want other people to agree with them. Those moral rules might differ a bit from place to place, but they will serve the same functions and so have many similarities. That is how you get the evolution of morality. And this view explains why people make moral judgments in some very strange ways, based on rules rather than always on actual welfare. Now to be honest, I don’t know how much of what I just wrote is Kurzban’s ideas versus my own, extrapolating from what Kurzban and others have said. But I think it’s a very interesting view and I will be interested to read more of his papers.
I also enjoyed talks by Sarah Brosnan (moral emotions in primates) and David Buss (human sexuality), and I thought Randy Thornhill’s talk on his theory that pathogens have largely shaped human social behavior was really interesting. I enjoyed talking with Peggy Mason about her prosocial rats, and with Brock Brothers, Chinmay Aradhye, and Mark McCoy about which personality traits predict religiosity and why the incentive structures in academia are often bad for science. I also had interesting discussions with Michael Pham and Sylis Nicolas (who also study human sexuality) about all kinds of things I won’t discuss here, with Alexis Garland about counting in animals, and about all kinds of stuff with Bailey House, who has studied the development of reciprocity in children. I had other discussions with many other people that I can’t recall right now. It was a fun conference.
The most memorable talk was by a philosopher named David Benatar, who made the most cringe-inducing (but intellectually brave) philosophical argument I’ve ever heard in my life. I don’t have time to go into it here, but you can read about it in this review, which summarizes the argument (and by coincidence, my response to it was fairly close to what the reviewer said).
I was particularly impressed by the way that Todd Shackelford and Jennifer Vonk were very supportive of their graduate students. Young faculty tend to promote their graduate students a lot, and this is always heartwarming to see.