Today was lab meeting, which for our lab (Matt Tinsley’s group, along with Andy Dobson and me) consists of a quick update on what everybody has been doing, plus discussion of a paper that one of the attendees had previously sent around. This happens in a café on campus. For the non-scientists among you, this is all pretty standard stuff. As is fairly common, the proposer of the paper sent it round having just looked at the summary, i.e., before he’d read it through properly. We all then read the paper and prepared for discussion.
What followed was a team slaughter of this paper. The summary made the authors look like they had done interesting stuff. In reality, they had misunderstood the biology of the system, done some shoddy data collection and analysed the data badly. It was a car crash of a paper. Worse than that, though, the paper was written in a way that hid various issues – the old used-car salesman approach. The authors knew there were massive problems with their study but published it anyway.
We all had the rage. Onlookers in the café must have thought we were planning to fight the authors (which isn’t likely; I’m not particularly famed for my fighting prowess).
Then we got even more angry. It wasn’t just the authors who had failed (though I suspect they’re chalking this up as a win); the whole scientific process had failed. Again for the non-scientists: once a paper is written, it is sent out to be anonymously critiqued by two or three reviewers before an editor decides to accept it, reject it, or advise changes before publication. In this case, the reviewers and/or the editor didn’t pick up on the obvious problems. Don’t get me wrong, I’m mindful of the fact that editors face an ever-growing pile of submissions and increasing difficulty in finding reviewers. Nevertheless, this particular false interpretation of a natural process is out in the academic world, and has the power to influence.
Furthermore, this paper is in an otherwise respectable open access journal (you’ll have realised by now that I’m not going to name names). This isn’t a dig at OA journals at all; they’re important players in a changing scientific landscape. I have results that would not be in the literature without them. OA journals, however, face a higher burden of responsibility. Their whole raison d’être is that anyone can access them, so a dodgy OA paper has exposure to a much wider audience*. On the one hand, post-publication peer review may deal with this (the paper just won’t get cited), but that’s a very academic view. What about journalists and policymakers? They might take the paper at face value, which could have serious ramifications when the paper concerns something like climate change or disease management.
We all remember the massively flawed paper in 1998 that claimed the MMR vaccine was linked to autism. In the furore that followed, scared parents chose not to vaccinate their children, and now we see entirely preventable measles epidemics causing pain and death**. Scientific research will inevitably produce results that we later discover aren’t right, or at the very least aren’t the whole story (Newton’s theory of gravity wasn’t the whole story; Einstein added to it, and others will likely add to Einstein’s work). However, authors and reviewers have a duty to ensure that the science we do is as good as it can be.
Many papers are great, contributing considerable new knowledge. The worth of these good papers is increased by an understanding that published work has been rigorously reviewed. Peer review is a collective responsibility of the whole scientific community and something we all need to do conscientiously.
This post was written after discussion with Matt Tinsley and Andy Dobson.
*Please note, I am not making comparisons between the peer review process in OA and non-OA journals; I’m merely arguing that people have greater access to papers that have been accepted in error.
**Most bad science won’t lead to deaths – I was just looking for a compelling example. Nevertheless, a bad paper could potentially derail subsequent research programmes.