The impact of bad papers

Today was lab meeting, which for our lab (Matt Tinsley’s group, along with Andy Dobson and me) consists of a quick update on what everybody has been doing, followed by discussion of a paper that one of the attendees has previously sent around. This happens in a café on campus. For the non-scientists among you, this is all pretty standard stuff. As is fairly common, the proposer of the paper sent it round having only looked at the summary, i.e., before he’d read it through properly. We all then read the paper and prepared for discussion.

What followed was a team slaughter of this paper. The summary made the authors look like they had done interesting stuff. But in reality they misunderstood the biology of the system, did some shoddy data collection and analysed the data badly. It was a car crash of a paper. It was worse than that, though: the paper was written in a way that hid various issues – the old used-car-salesman approach. The authors knew there were massive problems with their study but published it anyway.

We all had the rage. Onlookers in the café must have thought we were planning to fight the authors (which isn’t likely; I’m not particularly famed for my fighting prowess).

Then we got even angrier. It wasn’t just the authors who had failed (though I suspect they’re chalking this up as a win); the whole scientific process had failed. Again, for the non-scientists: once a paper is written, it is sent out to be anonymously critiqued by two or three reviewers before an editor decides to accept it, reject it, or request changes before publication. In this case, the reviewers and/or the editor didn’t pick up on the obvious problems. Don’t get me wrong, I’m mindful of the fact that editors are facing an ever-growing pile of submissions and increased difficulties in finding reviewers. Nevertheless, this particular false interpretation of a natural process is out in the academic world, and has the power to influence.

Furthermore, this paper is in an otherwise respectable open access journal (you’ll have realised by now that I’m not going to name names). This isn’t a dig at OA journals at all; they’re important players in a changing scientific landscape. I have results that would not be in the literature without them. OA journals, however, face a higher burden of responsibility. Their whole raison d’être is that anyone can access them, so a dodgy OA paper has exposure to a much wider audience*. On the one hand, post-publication peer review may deal with this (the paper just won’t get cited), but that’s a very academic view. What about journalists and policymakers? They might take the paper at face value, which could have important ramifications when the paper concerns something like climate change or disease management.

We all remember the massively flawed paper in 1998 that claimed the MMR vaccine was linked to autism. In the furore that followed, scared parents chose not to vaccinate their children, and now we see entirely preventable measles epidemics causing pain and death**. Scientific research will inevitably throw up results that we later discover aren’t right, or at the very least aren’t the whole story (Newton’s theory of gravity wasn’t the whole story; Einstein added to it and others will likely add to Einstein’s work). However, authors and reviewers have a duty to ensure that the science we do is as good as it can be.

Many papers are great, contributing considerable new knowledge. The worth of these good papers is increased by an understanding that published work has been rigorously reviewed. Peer review is a collective responsibility of the whole scientific community and something we all need to do conscientiously.

This post was written after discussion with Matt Tinsley and Andy Dobson.

*Please note, I am not making comparisons between the peer review process in OA and non-OA journals; I’m merely arguing that people have greater access to papers that have been accepted in error.

**Most bad science won’t lead to deaths – I was just looking for a compelling example. Nevertheless, a bad paper could potentially derail subsequent research programmes.


4 thoughts on “The impact of bad papers”

  1. I wonder if this isn’t another argument in favour of non-anonymous reviews. I’ve certainly had this same rant myself, along with the more common “the reviewer of my paper didn’t properly read and understand it before concluding whether or not it should be published”. It seems to me that devoting a minimum level of care and thought to a review is a basic ethical responsibility for a scientist (it should go without saying that even greater care and honesty are required before submitting things for publication…). I suspect, however, that there just isn’t enough of an incentive to spend much more valuable time on reviewing, unless perhaps you’re aiming to impress an editor at that particular journal. Signing a review, though – given the pride most scientists take in their work – might be enough incentive to put the same caution, time and effort into it that they would for a comment paper or a talk.

  2. Stu, we’ve had several similar examples in recent journal clubs – ours tend to be a bit more formal, in a meeting-room setup, but the process is pretty much the same. A worrying trend in cell biology at the moment is doing a reasonable study in a simple organism, like yeast, then attempting to extend it to animal cell cultures without any of the proper controls, and all of this still getting through peer review. Then these poorly controlled experiments are taken as fact, or at least have to be cited in a non-negative way.

  3. Iain, I see your point and I agree with you in principle. However, there’s a lot to be lost through signing reviews, especially if you’re at an earlier stage in your career. Some very senior people can hold grudges, even subconsciously (which you don’t want when they may be on a grant panel). Likewise, if someone writes nice things about one’s own work, one might subconsciously feel more charitable towards them in their submissions.

    Further, I wonder how editors would feel. Getting people to review manuscripts is already a challenge; identifying reviewers to authors would amplify this problem and reduce the ratio of early-career reviewers to senior reviewers.

    What about double-blind review? Guessing the authors will be straightforward in many cases (from the study system or subject area), but one could never be entirely sure. That way, reviewers are less likely to be positively influenced by a big name on a paper.
