Talk: Cooperative food-sharing in the vampire bat
Today! Monday August 31st @ 3 PM
Location: Zoology Part II Lecture Theatre, Cambridge University, Downing Street, Cambridge, UK CB2 3EJ
Thanks to Neeltje for organizing this talk.
In an upcoming paper, I show that when a female bat feeds another bat, this allows her to add another possible donor to her own ‘social safety net’. There’s an obvious benefit to her: bats with larger sharing networks are more successful at getting fed.
But there’s potentially a more subtle benefit. If a hungry bat is fed by only 1 donor (say her daughter), then her daughter pays the entire cost of feeding her. But if that hungry bat is fed by, say, 5 donors (her daughter, her sister, and three non-relatives), then the feeding costs are reduced for her daughter and sister.
Sharing investments in non-kin do not necessarily detract (at least directly) from sharing investments in kin, unless both kin partners and non-kin partners are in need on the same day.
Kin selection might therefore favor bats that establish non-kin bonds, because doing so reduces the burden on their close relatives. If so, that would be an interesting means by which kin selection could paradoxically favor non-kin helping.
To make an anthropomorphic analogy, this would be like saying that natural selection rewards families where the children quickly develop strong cooperative relationships with others in the tribe outside the family, so that they won’t be 100% socially dependent on their parents (and their parents can therefore invest more in other kids).
Or am I making a logical error? Has anyone made a formal social evolution model of something like that? It seems like someone would have. Let me know in the comments.
This monster post has been sitting on my computer hard drive for a few months (seriously). For a while, I was too scared to publish it. What I’ve written below is based on a (very) informal talk I gave at a graduate student seminar series at University of Maryland. To get the gist, the slides for that talk are below (all the way down) or here.
It’s also based on my stewing thoughts in response to dozens of conversations I’ve had about science and academia over the last year or so. My question is: does being a “good academic” and being a “good scientist” ever conflict? And if so, how often? And more importantly, can we fix academia (or science) to eliminate this conflict?
I’ll get to that in a moment, but I’m going to start with a related discussion of the journal PLOS One. If you are thinking, “Ugh, PLOS One? Bleh” then good, this essay is for you!
I like PLOS One. I’ve been a big fan of Public Library of Science (PLOS) since I first heard of them, and PLOS One is their cross-disciplinary, innovative, transformative (and sadly controversial) online journal. It focuses on rapid publication and article-level metrics, and it has sparked many similar journals. Since PLOS One was launched in 2006, it has quickly become the largest journal in the world, with a respectable impact factor ((# 2014 citations) / (# 2012+2013 papers) = 3.2). It appeals especially to younger scientists, who are publishing their first papers and haven’t fully adopted the norms of how traditional publishing works. As I’ve written previously, if you look at traditional science publishing with a beginner’s mind, it makes no sense. PLOS One intended to change that.
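For readers unfamiliar with the metric, here is a minimal sketch of how a two-year impact factor is computed. The counts below are made up purely for illustration; they are not PLOS One’s actual figures.

```python
def impact_factor(citations_this_year: int, papers_prior_two_years: int) -> float:
    """Two-year journal impact factor: citations received in year Y
    to items published in years Y-1 and Y-2, divided by the number
    of citable items published in those two years."""
    return citations_this_year / papers_prior_two_years

# Illustrative (invented) numbers: 160,000 citations in 2014
# to 50,000 papers published in 2012-2013 gives an IF of 3.2.
print(impact_factor(160_000, 50_000))  # 3.2
```

The point of the formula is simply that it is a journal-level average: it says nothing about how often any individual paper in that journal is cited.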
But scientists are still quite divided on this journal and the publishing approach it takes. In fact, many of us have been warned against publishing there or in similar open-access “mega-journals”. The internet is full of posts like this:
“Why I published in PLoS ONE. And why I probably won’t again for awhile”. This one is largely about how an author was judged by others for publishing there. He writes,
Even though I personally like PLoS ONE and read a lot of ecology papers they publish, you won’t be seeing my name in there again any time soon. I’m just not brave enough.
This article states:
…you’ve heard rumors that they’re not peer reviewed, or that they’re “peer-review lite” journals. You’re concerned they’re journals of last resort, article dumping grounds. You’re worried your co-authors will balk, that your work won’t be read, or that your CV will look bad…Well, you’re not the only one. And it’s true: although they’ve got great potential for science as a whole, megajournals… carry some potential career liabilities.
For some people, PLOS One even appears to evoke ill-concealed disgust. “It’s a trash-bin journal” say some. I’ve heard that “publishing a paper there is like throwing it in the trash”. Much of this attitude comes from some unfair (albeit unsurprising) criticism from competing journals or publishing experts who too often miss the whole point. But I would argue that some of the negative backlash also comes from the fact that PLOS One violates some of the social norms of academia. And like most social norms, academics follow these with the utmost seriousness even when they serve no real scientific purpose. More specifically, I will argue that PLOS One has made itself more useful scientifically but less useful academically.
Explicit criticisms of PLOS One tend to take two basic forms: that the journal is full of bad science, and that publishing there is bad for your career.
Let’s consider them separately.
Is PLOS One full of bad science? No. Well… maybe, but only in the sense that the world is full of bad science. The myth of PLOS One being particularly bad comes from its “objective” peer-review system (which lacks any subjective evaluation of importance). This means submissions are judged solely on the validity of their methods and results, not on their predicted influence, importance, novelty, or ‘cool’ factor.
In fact, it seems virtually impossible to get a paper rejected from PLOS One: all you have to do is make sure your conclusions match your data and methods. You did a study and didn’t find anything interesting? PLOS One is fine with that. This means PLOS One has a very high acceptance rate of ~70%.
I’ve reviewed a number of PLOS One papers, some of them pretty bad, but all of them ultimately useful, and I learned something from each. If the authors have made inappropriate claims, you just tell them that they need to reanalyze or reinterpret their data. “You didn’t find what you were looking for. That’s about it. No, you can’t say the groups are equal just because you found no difference. Rewrite the results.” It’s rare that the data themselves are completely worthless. Bad science is not boring data; it’s making false claims. There is no evidence (that I’m aware of) that papers in PLOS One have a higher rate of false claims than other journals. After all, the whole incentive structure of the journal is designed to prevent that from happening. You are way more likely to make false or overconfident claims when the publisher requires that your findings be novel, revolutionary, and surprising. You are less likely to be academically dishonest when writing for a journal that expects nothing in the way of novelty and impact.
Another interesting and paradoxical prediction is that PLOS One papers might even be more reliable and yet simultaneously less convincing than papers in other journals. Why? Because to get a paper in a high-impact journal you spend more time refining and strengthening the story and making it sound better. A PLOS One paper can highlight all the limitations and still be published with ease. This leads me to issue #2.
Is publishing in PLOS One terrible for your career because it does not give your work a good stamp of approval? Maybe. I don’t know. I hope not. This idea is perhaps best summarized in an interesting and thoughtful article by Anurag Agrawal. He advises young scientists:
Consider the risks. Because it typically takes some years for most articles to achieve citations, evaluators of academic CVs often use journal metrics as a proxy for quality or likely impact. Although nothing can replace reading and directly evaluating a study, removing the standards associated with selective journals introduces ambiguity to a publication record, especially for young scientists looking for jobs. In other words, when a hiring committee examines a junior scientist’s CV, a publication in a traditional journal carries with it the weight associated with the journal’s reputation for selectivity, rigor, novelty, and yes, likely impact. On the surface, a publication in an open-access journal only imparts ‘not scientifically flawed’.
He then concludes,
…we have not yet arrived at an alternative model of publishing that suits the primary goals of scientists.
My only complaint is that I would argue that he should change the phrase “primary goals of scientists” to “primary goals of academics”, because the “goals” he is talking about are related to getting an academic job. If I were a non-academic scientist developing the cure for Ebola, I would not be worrying about the “ambiguity of my publishing record”. That’s an academic concern based on academic goals, and it’s a concern for all academics, from Professors of Civil Engineering to Historians of French Literature, regardless of how much science they do.
But practical and purely academic career advice for junior scientists is becoming more common and more necessary. If you ask successful younger scientists for advice, they have a lot to say about how to survive in today’s highly competitive academic environment, which some say is the worst research funding environment in 50 years. According to a 2007 study by the US government, professional scientists spend about 40% of their time trying (and largely failing) to get grants. It’s tough right now.
But if you ask for advice from older professors closer to retirement (who came up during a very different era), they tend to give you a completely different kind of professional advice: scientific and philosophical suggestions about replication, experimental design, and what to be careful about when drawing conclusions. And the biologists talk more about the animals :-). The prolific animal behavior experimentalist Jeff Galef told me,
“Science is a marathon, not a sprint.”
“One step at a time, experiment after experiment, frequently replicating your main effect, until you understand what you set out to understand and can be quite sure that, when others attempt to repeat your procedures, they will get the same results you did. And if not, you will know why not.”
I think that’s great advice. But then sadly he added that this is “not exactly the sort of approach likely to reap accolades today” and that “I doubt that I would fare particularly well in today’s academic environment.” And that is very sad in my opinion.
And it rings true. I have never met a young, successful scientist in my field who was not something of a good strategic careerist [I would like to use a softer, less judgmental synonym here, but I can’t think of one. So I’ll use this term assuming that it’s possible to use it in a respectful and affectionate manner, i.e. in a way in which I would also hope to apply it to myself.]
If we all think embracing PLOS One is tantamount to “career suicide”*, then perhaps we need to think less like good academics and more like good scientists. Because quite frankly, by thinking this way we are (ever-so-slightly) creating the problem. We need other methods for third-party validation; journal brands are just not designed well for that purpose. And that change has to come from the scientific community itself.
[*A reader pointed out this phrase was a bit of an exaggeration, which is true. Obviously, having a PLOS One paper is not bad for your career. But if that’s the only journal you publish in, you won’t be able to get a job. That’s what I meant here.]
A more basic problem people have with PLOS One (that no one wants to admit) is that we want to be able to judge a paper just by reading the citation. Here’s how it works:
Smith. 2015. Vampire bats have empathy. Nature. 123:23-24. ==> “Wow, sounds fascinating!”
Smith. 2015. Vampire bats have empathy. Journal of Peruvian Nature Studies. 123:23-27. ==> “Oh, I doubt that! Also, that’s boring.”
If PLOS One has its way and dominates the publishing world, it will be harder to think this way. And scientifically, that’s a good thing. [“Wait, what? You mean I have to read the paper?”]
In fact, I think PLOS One has marked the turning point to a world without journals, where all articles are online and evaluated on a completely individual level.
PLOS is trying to solve big problems, and they are doing it better than most. Many of us in academia complain about academic publishing, but how many of us are also participating and reinforcing the system that we criticize? We say we should publish all our negative results (even preliminary negative results), but who has time for that? We say we should cite and evaluate papers by their content, not by their covers, but we often fail to do that too. We say we should publish in open access journals, but we often don’t want to foot the bill (I certainly don’t want to). How many people are actively trying to fix academic publishing, rather than just complaining about it? (As you can see, I’m more of a complainer than a fixer myself). And how many of those people are actually fixing it successfully?
To fix things, we need to change large-scale incentive structures. That’s how you change behavior, not by asking everyone to voluntarily make sacrifices. PLOS is one of the leading institutions that is actually doing this and pushing science publishing forward in a good direction. They have helped start an important conversation, they sparked the open access revolution, and they changed our expectations of publishers. I support all of that 100%.
So that’s my love letter to PLOS and PLOS One.
Let’s back up though. What exactly is this huge problem I’m claiming that they are solving? And why does it exist? The real issue with publishing is part of a much larger elephant in the room…
Academic incentives often hurt the quality of our science. We don’t want to talk about this because most of us scientists are also academics. Science might be our first love, but success in academia is what pays our bills.
In case you don’t know what I’m talking about yet, I’ll take a moment to explain my language. By “academics” I mean the process of successfully building one’s career in teaching and research at universities. And by “science” I mean the rigorous and systematic process of learning about the world with the highest reasonable standards of logic and evidence. A good scientist pursues falsifiability, repeatability, rigor, and measures of uncertainty. A good academic pursues scholarly impact and prestige. Most of us pursue both these things simultaneously. And ideally, they go together. For the most part, good science leads to impact and prestige. True that.
But does anyone doubt that academic and scientific incentives are also sometimes at odds? I don’t mean tradeoffs in time spent doing research and teaching, nor do I mean conflicts between humanities and the sciences. I mean that: scientific success is not the equivalent of academic success. Being falsifiable, repeatable, and correct is just not the same as being influential, recognized, promoted, and funded.
I would guess that most scientists have a little lab-coat scientist on one shoulder whispering in one ear (“Maybe you should take longer and replicate that result. Technically, you just inflated your alpha! Double-check those analyses. Maybe you should just re-run this experiment.”) and a little tweed-jacket academic on the other shoulder whispering in the other ear (“Publish this before you get scooped. I already know the best spin. You need this on your CV for the next grant application.”)
To put it bluntly, academia has some perverse incentive structures that we would never have intentionally built into science as a process. And we ignore this conflict at our own peril. Science is largely a set of intelligently designed incentives to keep people’s investigations as honest and rigorous as possible. In science, we have the double-blind experimental design, in which neither the observer nor the subject knows which treatments are experimental versus placebo. We have the standard of replication by independent labs. We have requirements to report statistical uncertainty, to report detailed methods, and to calculate the statistical power of our inferences. We have peer review, which forces us to convince not only our fans, but also our competitors. We have a culture of open and constructive criticism–an environment where a young undergraduate can challenge and question the ideas of the head of the lab. We have a standard of rigor.
These are just some of the useful norms that are built into the structure and culture of science. Of course, not all science adheres to these best practices, but to the extent that something is “scientific” this is what we should mean. These scientific norms, from peer-review to p-values, are designed and revised by a global community of science with the collective purpose of aiding scientific goals.
The same is just not true for academic incentives. Consider for instance the various metrics that we use in hiring: the h-index, journal impact factor, and number of papers on one’s CV. These emerge from necessity in a world of limited time and money, with intense economic competition at every level of academic life. There is competition between publishing companies, between individuals, between labs, between departments, between universities, and between academia and other state-funded institutions. In this context, being a successful academic, like being a successful business, means having a successful brand that gives people confidence in what you say. Being a successful scientist, on the other hand, means not fooling yourself and not trusting your intuitions so you can follow the evidence wherever it leads. Not exactly two sides of the same coin.
For a clear example of how these academic incentives can erode science, consider how they can affect peer review. Several times as a reviewer, I have disagreed with another reviewer on whether a paper should be published even though we agreed on the scientific validity and utility of the article. This typically happens because the other reviewer believes the paper is not “important” or “prestigious” enough. In at least two cases, the other reviewer explicitly agreed with me that the paper was correct and useful for their own research, but then pointed out in private conversation that they had submitted a paper of similar “importance” and it was rejected, so why should this paper be accepted?
This way of thinking only makes sense if papers are viewed entirely as accolades in an academic competition. For working researchers, papers are sources of data and information. For academics, they are accolades. They cannot fulfill both purposes equally well.
In my opinion, other researchers are the most important audience of a paper. As a researcher, I want access to everything. I don’t want just the take-home message or the results in good-story format. Once a paper comes out, it can later be highlighted, summarized, and explained to all other audiences. But other researchers want the data fast, and they don’t need all the spin. Papers are how we communicate ideas and data, so they should all be published if they are scientifically valid and clear, regardless of their importance.
But let’s play devil’s advocate and take this to the extreme. Imagine that I do a “study” where I just measure the legs on 10 beetles of some particular species. That’s it. Just a list of 60 numbers. Surely, this is not meaningful enough to be published. Am I saying that this should be published? Actually, yes! Put it online. It takes no printed space. Maybe someone will find it useful, maybe not. But put it out there if you are not going to do anything else with it. People should share their data. They should share all their findings, their pilot findings, and their doubts. The more data and publications are out there, the better it is for every scientist. Can you imagine doing a search in the future for lengths of legs of beetle X and getting some raw data (with links to the methods and who collected them)?
The fear that we will be inundated to the point of paralysis with terribly boring data and “too much information” is unwarranted. That would have happened by now, and it hasn’t, because the more data and information we have, the more ways we develop for sorting it, filtering it, and parsing the best from the rest. Does a dataset on beetle leg length require peer review? Probably not. That depends on what kind of conclusions we’re drawing. But why not just allow the data and findings to be published, and then allow further open peer review over time (a continuous review period) so that a paper’s ‘impact’ is flexible? Basically, we need to make publishing easier and peer review more rigorous.
One worry people have about open data, open access, and open everything is that the quality will get worse. But ask yourself: what kind of “quality” are we talking about? With more open access, the scientific quality (meaning the integrity of the data) can only get better, because there will be more transparency, and hence more and easier replication. What we have instead is a system that allows for, even encourages, sacrifices in real quality for perceived quality. Ironically, the easier it is to hide caveats and doubts, the more easily a story can influence others. Daniel Kahneman described a now well-known cognitive bias he called “what you see is all there is”, which explains why a shorter story with less detail can actually be more convincing (and hence higher impact) than a more nuanced, detailed one.
So as scientific quality decreases, academic “quality” can increase. And that’s a problem. Imagine for example that a scientific paper gets picked up by the New York Times. Say someone found a gene linked to psychopathy. At this point, the academic impact will skyrocket as more people are exposed to the story and it becomes famous. But the scientific quality doesn’t change and it may even be diluted as the message becomes exaggerated and distorted by the media over time. As we all know, the take home message can quickly go from “gene weakly linked to some psychopathic traits but more study needed” to “Eureka! Scientists find the serial killer gene!”.
This is not just a problem with how the public perceives science. The same thing happens on a much smaller scale when a scientist takes their results and packages them for Nature or Science, because they have to first sell the story to the journal editors in the same way that the NY Times has to sell a story to its readers. Now, to be clear: I am NOT trying to say that Nature or Science papers are all exaggerated or that they have bad science! In fact, as an academic, I’m always trying very hard to get a Nature or Science paper. It’s my dream to publish in these prestigious journals. But this is not just a bitter rant based on jealousy. My point here is that any desire to be radically and truly intellectually honest and skeptical about one’s findings has to be largely internal, because there are not enough structural incentives. So instead of talking about what’s the best scientific approach, we often talk about doing what’s necessary to convince the reviewers or “getting it past the reviewers”. That language gives insight into how we are often thinking like academics rather than scientists.
Let’s talk about unethical cheating in science. The main incentive for avoiding academic dishonesty is that if you get caught, your career is over (a reality which we hope every scientist realizes and digests). But take my situation. How many other people are going to replicate my work on vampire bats? Hopefully, it will happen soon, but I’m not holding my breath, because it took 30 years for me to come along and replicate Wilkinson’s original work on reciprocity. Too often, young researchers don’t realize that before doing the next-step experiment, you have to start by replicating the original finding. This is before we even get to the problem of an undergraduate science education that rewards students who conform their lab exercise results to the ‘correct’ answer. So should we be that surprised that serious scientific dishonesty is a growing problem? And that we have a replication crisis in various fields? Read this excellent story and wonder how often such dishonesty might go undetected. As the author writes,
Not only are most experiments not reproduced, most are probably not reproducible. This statement will shock only those who have never worked in a wet lab. Those who have will already suspect as much.
A few years ago, Glenn Begley put this suspicion to the test. As head of cancer research for pharmaceutical giant Amgen, he attempted to repeat 53 landmark experiments in that field, important work published in some of the world’s top science journals. To his horror, he and his team managed to confirm only six of them. That’s a meagre 11%. Researchers at Bayer set up a similar trial and were similarly depressed by the results. Out of 67 published studies into the therapeutic potential of various drugs (mostly for the treatment of cancer), they were able to reproduce less than a quarter.
As scientists, the incentive is, and should be, to make data and knowledge and ideas as open, accessible, and reproducible as possible. That’s the best way to ensure honesty in science. But it’s also just the best way to make normal everyday science faster and easier.
For that goal, journals themselves don’t really do anything useful. We, the scientists, do the writing, the reviewing, the editing, and even the tedious figure and text formatting. In this age of the internet, journals are merely brands that do not serve their original purpose of helping to disseminate information by printing lots of pages and mailing them out. Journal brands now serve a pivotal academic purpose, but they serve no scientific purpose whatsoever, because ideas in science should be evaluated based on their logic and evidence, not on their journal covers.
Obviously, we do need metrics of quality. Article-level metrics are important. We need to reduce papers to a single number (something like a Rotten Tomatoes score). The idea of a personal brand (an individual-level metric like an h-index) makes some sense too for hiring purposes, although there are other problems. We do need some kind of metric to compare scientists and their past accomplishments. But metrics of quality at the journal level (impact factors) are just needlessly uninformative. It’s been explained many times, so I won’t repeat it here. Impact factors are increasingly being influenced by gimmicks like publishing controversial papers that draw criticism or extending publication delay times.
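As a concrete example of an individual-level metric, here is a minimal sketch of the h-index: the largest number h such that the author has h papers each cited at least h times. The citation counts are invented for illustration.

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that h papers each have at least h citations."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank  # the paper at this rank still has >= rank citations
        else:
            break
    return h

# Five papers cited [10, 8, 5, 4, 3] times: four papers have >= 4
# citations each, but not five papers with >= 5, so h = 4.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

Note how a single blockbuster paper barely moves this number, which is exactly why it measures something different from a journal’s impact factor.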
There are also some old-fashioned journal requirements that no longer serve a real scientific purpose. One example is ink and paper publishing. My personal favorite journal is Proceedings B (sorry PLOS One). Every time an issue of Proc B comes out, I see something cool, interesting, and relevant to my interests. Plus all the articles are open access after the first year. But when I write a Proceedings B manuscript, I have to write two separate things. First, I have to write the article, which is limited to 6 manuscript pages. Then I write the online “supplement”, which contains all the details I can’t fit into those 6 pages. But the article is supposed to stand alone. And so I have to go back and forth moving text and revising to make it fit without the two being too dependent or too redundant. That’s how I spend (waste) much of the time when writing and revising. Because if you go over the 6 pages, you have to pay for each additional page. And paper is expensive. So is color ink. So yep, make a black-and-white version of every graph.
But wait a sec– why is there a paper version? Does anyone even read paper articles in paper journals?! In fact, why are there even articles that can only be found on paper? Can someone please just start uploading all those to the cloud?
When I write a PLOS One paper, there’s no page limit. And I don’t feel I have to “sell” the paper. I just say: OK, here’s what I did and here’s what I found. Here’s why it’s interesting and here are all the limitations. That’s it. And that’s what scientific writing should be. Whether it’s really brilliant, exciting, or interesting is something I should let someone else decide. As the author, I’m probably in the worst possible position to know its real ‘importance’, especially so soon after I write it. Again, peer review is very important for that reason. We need more peer review and better peer review. That was one of the goals of PLOS One from the start. Why stop at two reviewers selected by some random editor who may or may not be my friend or my competitor? PLOS One encourages the notion that every paper is the start of a conversation, not the final word. When you publish there, you get an email stating:
“When your paper is published, you will be able to comment on the paper and respond to any reader comments. We hope that you and your co-authors will participate as widely as possible, as your contributions will be valuable to the community.”
That is great. All papers should be like that.
What we need is a central open-access repository of all curated data and peer-reviewed papers, with the peer-reviews and any revisions attached. Just let those peer reviews pile up (like movie or book reviews). In my imaginary future world, the role of journals would then be to pick through that pile of articles and repackage the best ones into magazine articles, which review a whole series of studies and that are aimed at larger audiences.
This hasn’t happened yet. There’s a perfect storm of converging interests for established journals to continue to act as the current method of third-party validation, which requires restricting publications to high-impact stories. Consider the interests of each stakeholder:
From the publisher’s point of view, only the papers that are likely to be cited by many other people are worthy of publication, because only those papers will increase the journal’s brand. Most publishers are businesses, and their brand is all they really have. (No offense to Nature, but no one is submitting papers there because the font is nice or the editorials are so great. It’s because a paper in Nature is… a Nature paper.)
For academic authors, these elite brands are hugely important because the journal name acts as a third-party seal of approval. They become the building blocks of one’s personal brand, which will determine all aspects of one’s future career.
To understand the reviewer’s point of view, one must first see that most reviewers are authors who also publish in the journal they are reviewing for. Any article that lowers the journal’s brand value will also lower the brand value of their own articles in that journal. That means the reviewer will want new articles to be at least as impactful as the articles the reviewer has published there. The idea is: “Why should I allow articles in journal X that are not as good as my own article in journal X? They should be as good or better.”
So the collective interest of all three parties (reviewer, author, and publisher) is to keep the number of publications low, and the journal impact high. And to maintain high impact, you need to tell a good story with certainty and confidence. This leads to some irony. I have even seen reviewers (the supposed gatekeepers) who want the authors to simplify or sell the story better by removing inconvenient statistical results and analyses. This makes great academic sense (clearer story, more impact), but it makes no scientific sense (less information).
Science publishing is not about telling stories; it’s about describing a complex world riddled with uncertainty. The easy-to-read stories should come afterwards. The details need to be put out there first and foremost, maybe before the story even makes sense.
Academic incentives that lead science authors to sell their conclusions as confidently as possible cannot add to scientific goals; they can only detract. When articles are rejected on purely subjective measures of impact, they are usually sent to “lower” journals, where they will eventually be published anyway, just after a longer delay. Or they might end up as an unpublished manuscript buried on a computer’s hard drive. I can think of two manuscripts off the top of my head that never saw the light of day because the authors could not find a ‘suitable’ journal. In one case, after redoing the statistics, I tried to convince the lead author not to make such dramatic claims and to just say what the data showed most clearly. His response was to send the manuscript to two high-impact journals, and then to just forget about it and move on. In his rush to publish, he even included my name as a co-author when I had not approved (or even seen) the final version! (That’s a gross contract violation, but that’s another story.) The bigger loss is that these data were actually pretty interesting and were never published. That’s a scientific loss, even if it’s a small one. And those losses add up over time. Whenever papers are rejected or discarded purely on the basis of predicted “low impact,” the result is less information available.
Meta-analyses as a case study of problematic academic incentives
I’ll give a more subtle example of how incentives might shape science in unintended ways. Many a high-impact paper in organismal biology is a meta-analysis looking at patterns across many species. Examples would be comparative studies of cooperative breeding in birds, mating calls in frogs, or social interactions in primates. I love meta-analyses. Aren’t we all most interested in the “big picture”? Yet such studies rely on hundreds or thousands of published data points extracted from multi-year field studies of single species, each of which was not itself “high-impact” and might not even have been considered publishable by itself. These studies might involve a student sitting in the hot sun and watching a baboon scratch itself for hours (because that’s how those data on social behavior are collected). Then later someone comes along, runs computer code, and puts all that hard-won data to work*. That is all good.
The first tragedy is that meta-analyses are affected by publication bias: people publish positive results more than negative ones. Much has already been written about that. The second tragedy is that the original studies providing the data are often not themselves cited in the references of meta-analyses. Instead, they are often hidden in the “supplement” because the journal does not want to use the page space to list them all. What gets cited instead are more theories and meta-analyses (which are, again, higher impact). As a consequence, the hard fieldwork that allowed the meta-analysis to take place is often not given academic credit or recognition. It does not even get counted in metrics like impact factor or the h-index.
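To see how strongly that publication filter can distort a literature, here is a minimal simulation (my own sketch in Python, with made-up numbers, not taken from any study mentioned here): every simulated study estimates the same modest true effect with sampling noise, but only the significantly positive estimates get “published”, so the mean of the published literature overshoots the truth.

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.2   # the real (modest) effect size
SE = 0.3            # standard error of each study's estimate

# Each study reports the true effect plus sampling noise.
estimates = [random.gauss(TRUE_EFFECT, SE) for _ in range(2000)]

# Publication filter: only estimates that are significantly
# positive (estimate > 1.96 * SE) make it into the literature.
published = [e for e in estimates if e > 1.96 * SE]

print("mean of all studies:      ", round(statistics.mean(estimates), 2))
print("mean of published studies:", round(statistics.mean(published), 2))
```

A meta-analysis that can only see the published subset would conclude the effect is several times larger than it really is, which is exactly why the unpublished negative results matter.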
[*A reader pointed out that this wording is pretty unfair and dismissive to all the work involved in a meta-analysis, which is a valid point. A meta-analysis is a lot of work. And my point was not to say that analyzing data is always easier than collecting it. This also applies to my argument below about theory vs empirical work. You need both.]
Too much theory as a case study of poor incentive structures
A similar unintended bias exists toward biological theory as opposed to descriptive natural history work (and I’m talking here about studies in ecology, evolution, or behavior where the natural history really matters). It is virtually impossible to build a career in biology by making rigorous but simple natural history observations, regardless of how much you learn about the natural world. It is much easier to make a career out of publishing risky, ambitious theoretical models, which may all turn out to be wrong in a few years. What’s the incentive for doing the hard (but crucial) field-based observations (sitting and watching baboons)? Any existing incentives for collecting those kinds of data are dwindling fast. And that’s a problem, because biology is not physics. There are not a few simple general laws that explain everything (beyond evolution by natural selection). Biology is largely complexity. There’s no simple formula that explains how a cell works. It’s all messy, complicated details down and down.
Building a house in science involves both constructing the scaffolding (the theory) and adding the bricks (natural observations). Constructing scaffolding is highly rewarded, yet making the actual bricks is not. Was this planned? Perhaps somewhat. But mostly, I don’t think so; I think it’s an unintended byproduct of academic incentive structures. For scientific progress, we need both scaffolding and bricks. Without theory scaffolding, we have no direction and no big picture. Without enough bricks, we are left with flimsy but popular theories that are supported by many citations but little evidence. (I got this scaffolding-and-bricks metaphor from Bernard Crespi, by the way.) But academia favors impact, which favors theory over facts.
Academia without science: a world to avoid
It’s getting much better, but the ‘tall-tower-with-no-foundational-bricks’ problem has long plagued the social sciences (think Freud). And it’s a large part of the reason the humanities have been in decline.
In a nutshell, there are a lot of very influential people in the humanities with lots of devoted followers and citations, but with no actual evidence or logic supporting their ideas (think Derrida). In these cases, the humanities can have all the problems of academia with none of the self-correcting practices of science. These influential academics are successful due to a positive feedback loop: being cited, then being cited because others cite you, then being considered important because you are often cited, and so on. And nobody points out that the ‘emperor has no clothes’. (Well, until they do.) Of course, this self-perpetuating echo chamber is the sweet spot that all successful brands hope to hit. When you are riding on your own reputation, you don’t need to invest any more in improving your product quality (think Pepsi).
Needless to say, there should be no place for this process in science. In science we have a different ethic. We do have to deal with imperfection, because science is an endeavor performed by flawed, biased, and irrational humans. But that doesn’t mean we can’t talk about it more openly and work harder to reduce these problems instead of ignoring them. As the writer Sam Harris put it so eloquently,
We need systems that are wiser than we are. We need institutions and cultural norms that make us better than we tend to be. It seems to me that the greatest challenge we now face is to build them.
PLOS is an organization that is doing a lot of good. And because of them, there is an increasing number of people talking about these issues. The success of PLOS One and similar journals shows that many of us see the problems and want to fix them. We could move things forward by adopting a new set of social norms that would make PLOS One a typical journal rather than a strange one. For example, here are some norms I wish we were all incentivized to follow:
The reason doing the right thing scientifically is difficult is that those who do it first will be punished and ostracized, at least until these actions become the new norm. It’s a coordination problem. The first dissenter who says “Hey, the emperor has no clothes” suffers all the potential risk without enjoying any additional benefit. Yet we would all be better off if the collective norm were shifted.
So ok, let’s get this academic culture moving forwards and evolving. Let’s question academic incentives and ask if they make the science better or worse. And when groups like PLOS are trying to fix things, let’s embrace that. Let’s think carefully before we advise others by saying, “but it’s better for your academic career if you do it the traditional way…”
Most of the time you have to adapt yourself to the world, but sometimes it’s the world that needs to be changed. And I’m hopeful because science changes faster than any other world I know.
Here are slides from the informal discussion at University of Maryland this essay is based on:
(Feel free to leave comments below. I have to approve them before they will show up. Otherwise, there will be spam ads for viagra.)
This photo, taken by Jerry Wilkinson, shows a tight cluster of bat pups in Africa (Rhinolophus darlingi). Several species of bats leave their pups behind in tight clusters like this, called crèches. The smaller image shows one adult female greater spear-nosed bat (Phyllostomus hastatus) left behind with several pups unrelated to her. Both images are from our book chapter on the social lives of bats.
Not all bats leave their pups in these clusters. In other species, mothers leave solitary pups behind in the roost, and the isolated pups often go into torpor to save energy. And other bats even take their pups with them while foraging. In the tropics, you sometimes catch tiny bats with surprisingly large pups clinging to them.
However, it’s unclear what factors determine these different maternal care strategies. It would be interesting to put them on a phylogeny and see what ecological factors predict them.
If this crèching behavior is cooperative, there might be an opportunity for some bats to exploit the ‘public good’ of passive warming from others’ warm bodies by spending less of their own energy to actively warm themselves. David Haig called this the ‘huddler’s dilemma’. From another bat’s point of view, this would be like a colder bat snuggling up next to you and ‘stealing your warmth’, which could lead to pups moving away and seeking out warmer bodies (a form of partner choice). Is it possible to experimentally make a bat’s body cooler to test this hypothesis?
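To make the logic of the huddler’s dilemma concrete, here is a toy payoff model (my own illustrative sketch in Python with made-up numbers, not from Haig’s paper): each pup either actively thermoregulates, paying an energetic cost, or stays passive and free-rides, while the warmth produced is shared equally across the huddle.

```python
def payoff(i_warm: bool, n_other_warmers: int, huddle_size: int,
           benefit: float = 10.0, cost: float = 3.0) -> float:
    """Net energy payoff to one pup in a huddle.

    Warmth produced by active warmers is a public good shared
    equally by all huddle members; only warmers pay the cost.
    """
    warmers = n_other_warmers + (1 if i_warm else 0)
    shared_warmth = benefit * warmers / huddle_size
    return shared_warmth - (cost if i_warm else 0.0)

n = 5  # a huddle of five pups
for others in range(n):
    # Free-riding beats warming whenever cost > benefit / huddle_size,
    # no matter what the other pups do -- that is the dilemma.
    assert payoff(False, others, n) > payoff(True, others, n)

# Yet a huddle of all warmers still beats a huddle of all free-riders:
assert payoff(True, n - 1, n) > payoff(False, 0, n)
```

With these particular numbers, passive free-riding dominates individually even though everyone warming is collectively best, which is what would create selection to detect and move away from ‘warmth thieves’.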
This post is going to be about a recent paper about cancer co-authored by my PhD advisor. But to explain why I’m excited about it, let me start at the beginning…
I love it when I’m writing about a topic I know so well that I can drop the references into place as quickly as I can type (Smith et al. 2010). Sometimes that’s not the case (Smith & whatshisname? 197-something).
But here’s my guilty confession: when I’m writing a research proposal (as I’m supposed to be doing right now), I sometimes type a sentence with a scientific “fact” that I know must be true, but I don’t know the citation off the top of my head. So then I go fishing for a supporting reference to confirm my pre-existing belief. (Yes, I know this is how the internet becomes full of misinformation. And yes I know this is exactly how people irrationally form beliefs and opinions about morality and just about everything else when they are not thinking like a proper scientist.) But, it’s really never that bad. For example, I might write the following:
Most studies on spatial memory have tested rats as subjects.
…then I pause. That’s true right? Or is it humans? It can’t be pigeons. Definitely not. What about bees? No, probably rats. Then I think: Can I just find someone saying that on Google Scholar and cite them?
or I might write,
While in South America, Charles Darwin observed a vampire bat feeding on a horse.
Now, I’m pretty sure that’s true. But I have no idea where I learned it, so I’m not 100% sure it’s true. So I have to look it up. Ok, I just looked it up. Yes, it’s true (scroll to bottom).
As you might expect, I often learn that a supposed “fact” I believed has little or no evidence behind it. I actually learn quite a bit trying to confirm stuff that’s not right.
But it gets worse. Sometimes, when I’m getting dangerously close to the BS zone, I look for a review paper (a hypothetical one) on a whole body of scientific work that I just assumed must exist to support some generalization I think must be true in an area I don’t know much about. For example, I might write:
“Social evolution theory has provided useful insights into cancer by viewing cancerous cells as ‘cheats’ surrounded by cooperative and altruistic normal somatic cells.”
Ok, I’ve actually written and then tried to support this statement several times, back when I started my PhD. At that time, I was just learning, writing, and reading about social evolution more generally. And I was always trying to explain why all these cool ideas (inclusive fitness theory, evolutionary game theory, and so on) would help to explain everything and make the world a better place: help save the environment, end war, cure cancer, fix the economy, and, of course most importantly of all, explain with the utmost rigor why vampire bats regurgitate into each other’s mouths.
But I was seriously astonished that there wasn’t already a huge, obvious, well-developed body of work more explicitly linking cancer and social evolution. I was expecting to find piles of books, and instead I recall finding only one. I remember searching for “social evolution” and “cancer,” and very few relevant papers came up. But I was pretty sure that my statement above was true. Or at least it should be true. How could it not be? But really, I had no idea whether anyone had developed this obvious notion of cancer cells as cheats into a rigorous scientific theory that makes useful predictions. Of course, there were scattered papers here and there. But nothing truly integrative like what I was imagining. Why not? Cancer is surely one of the most relevant and important topics one could explore with evolutionary theory!
Well, as an academic reader and consumer of science, I can proudly say that my prayers have been answered! In the last few years, there has been a new research center and conference series, a recent special review issue about this topic here and just recently here.
I learned all this just now because C. Athena Aktipis and colleagues, including my PhD advisor (while on his sabbatical), recently published a paper in one of these issues entitled “Cancer across the tree of life: cooperation and cheating in multicellularity,”
which was just picked up by this NY Times Article.
which was then picked up by this middle school science teacher who writes songs about science:
And my old advisor just emailed us to tell us this story. And my friend and fellow lab alumnus Kyle replied:
Ah, the old “sabbatical-collaboration-to-published-paper-to-popular-press-coverage-to-nerdy-song-parody” sequence. What a tired cliche.
So I just had to share that with the world…But it doesn’t seem as funny now as when it happened. Oh well. More about vampire bats in the next blogpost after I finish this proposal.
Do you know anyone who might be interested in a position studying the foraging behavior of frog-eating bats? If so, keep reading this update to my last post.
First, some background: the position involves working with two of the best young scientists studying bat behavior. Yossi Yovel’s lab in Tel Aviv, Israel has invented the most exciting new technologies for studying bat behavior that I’ve ever seen. His lab is continually developing miniaturized, modular tracking devices that can be mounted on free-ranging bats. The devices can integrate GPS, ultrasound microphones, barometric pressure, and even EEG signals from the bat’s brain. Yossi is deploying these devices on bat species all over the world that vary in their foraging ecology (from aerial-hawking insectivores, to fishing bats that hunt over the ocean, and now to bats that glean large insects and frogs). Yossi’s ability to integrate engineering and robotics with bat ecology and neuroscience leads to some jaw-dropping results.
Rachel Page’s lab in Panama is developing one of the best long-term model systems in cognitive ecology: the frog-eating bat. She does some of the cleverest and most theoretically interesting cognition experiments with bats that I know of. Read more about that in my last post.
Their idea was simple: apply Yossi’s novel methods to Rachel’s well-developed study system. But you need someone to lead this project.
This last month I helped Yossi and members of Rachel Page’s lab to conduct a pilot study where we mounted these tracking devices on frog-eating bats. We tracked 3 bats for 2-3 days. I was worried that the GPS signals would not work through the forest canopy, but they performed better than I expected. Here is a quick image Yossi threw together after quickly scanning the data from one of the bats foraging on 3 nights.
Wow! From just these 3 days, one can already learn so much about the behavior of a wild frog-eating bat. Where did it go? When was it feeding? Where was it communicating with others? The next big step is to conduct field experiments that manipulate their behavior in the wild to tell us about the flexibility of their foraging strategies. We already know this is possible because Rachel’s lab uses playbacks to call the bats into their nets to capture them.
Yossi and Rachel had originally hired me to lead this project, but alas– I ultimately decided to continue my work on sensory and social cognition in vampire bats (more on that project later). But this was an excruciating decision, because I still believe this is a great opportunity.
So now Yossi and Rachel are looking to hire someone else. Ideally, this person would start by or before Spring 2016. The best possible applicant would be someone who has neotropical field experience with bats and radiotracking, and has the ability (or interest in developing the ability) to conduct data analyses in Matlab.
If you know anyone that is interested, pass this information along. If you are interested, feel free to contact Yossi and Rachel directly via their websites.
I just arrived in Panama and I’m very excited to be here.
I recently joined a collaboration between Rachel Page’s lab in Gamboa, Panama and Yossi Yovel’s lab in Tel Aviv, Israel. Rachel studies the fringe-lipped or frog-eating bat, Trachops cirrhosus, a bat that eavesdrops on the mating signals of its prey, frogs and katydids. This system is a classic story in behavioral ecology and an important model for understanding the evolution of behavioral and cognitive traits under both natural and sexual selection. Mike Ryan has long studied the dilemma faced by male túngara frogs: to attract mates, the males must call conspicuously, but the call types that are more attractive to female frogs are also more attractive to the bats. Rachel and her lab have used controlled experiments to study the bat side of this story. She has shown that the bats can (1) rapidly alter associations between prey cues and prey quality, (2) adaptively switch between social and asocial learning strategies, and (3) learn from each other in such a way that information can spread quickly via cultural transmission.
Yossi has developed miniaturized tracking devices (<4 g) that can be mounted on wild bats. These sophisticated tags contain a tiny GPS coupled with miniaturized ultrasonic microphones. They measure several behaviors that are crucial for describing foraging strategy, including the bat’s own biosonar signals and the signals of nearby conspecifics. The device will also record the vocalizations of frogs being targeted, and it will even record the chewing sounds after a successful bat attack.
This month, we are testing how well these tags work with Trachops. Thanks to Rachel, we know a great deal about information use and decision-making by these bats at a small scale. Hopefully, Yossi’s tags will tell us more about foraging trajectories and strategies in the bat’s natural habitat at a much larger scale.
I am also still writing up and submitting my last manuscripts from my PhD vampire work (on use of contact calls) and hope to be finished with that this summer as well.