Correcting the literature

Mathias Brust in Chemistry World:

Ideally, science ought to be self-correcting. … In general, once a new phenomenon has been described in print, it is almost never challenged unless contradicting direct experimental evidence is produced. Thus, it is almost certain that a substantial body of less topical but equally false material remains archived in the scientific literature, some of it perhaps forever.

Philip Moriarty expresses similar concern in a post at Physics Focus. Openly criticising other scientists’ work is generally frowned upon—flaws in the literature are “someone else’s problem”. Erroneous papers sit in the scientific record, accumulating a few citations. Moriarty thinks this is a problem because bibliometrics are (unfortunately) used to assess the performance of scientists.

I think this is a problem too, although for a different reason. During my MRes I wasted a lot of time trying to replicate a nanoparticle synthesis that I’m now convinced is totally wrong. Published in June 2011, it now has five citations according to Web of Knowledge. I blogged about it and asked what I should do. The overall response was to email the authors, but in the end I didn’t bother. I wanted to cut my losses and move on. But it still really bugs me that other people could be wasting their limited time and money trying to repeat it when all along it’s (probably) total crap.

I did take my commenters’ advice and email an author about another reaction that has turned out to be a “bit of an art”. (Pro tip: if someone tells you a procedure is a bit of an art, find a different procedure.) I asked some questions about a particular procedure and quoted a couple of contradictions in their papers, asking for clarification/correction. His responses were unhelpful, and after a couple of exchanges he stopped replying. Unlike the first case, I don’t believe the results are flat-out wrong. Instead, I suspect a few experimental details are missing, or that they don’t really know what happens. I think I’ll get to the bottom of it eventually, but it’s frustrating.

What are your options if you can’t replicate something or think it’s wrong? I can think of four (excluding doing nothing):

  1. Email the corresponding author. They don’t have an incentive to take it seriously. You are ignored.

  2. Email the journal editor. Again, unless they’re receiving a lot of emails, what incentive does the journal have to take it seriously? I suspect you’d be referred to the authors.

  3. Try to publish a rebuttal. Can you imagine the amount of work this would entail? Last time I checked, research proposals don’t get funded to disprove papers. This is only really a viable option if it’s something huge, e.g. arsenic life.

  4. Take to the Internet. Scientists, being irritatingly conservative, think you’re crazy. Potentially career-damaging.

With these options, science is hardly self-correcting. I’d like to see a fifth: a proper mechanism for post-publication review. Somewhere it’s academically acceptable to ask questions and present counter-results. I think discussion should be public (otherwise authors have little incentive to be involved) and comments signed (to discourage people from writing total nonsense). Publishers could easily integrate such a system into their websites.

Do you think this would work? Would you use it? This does raise another question: should science try to be self-correcting at all?

Thanks to Adrian for bringing Mathias Brust’s article to my attention.

6 thoughts on “Correcting the literature”

  1. Andrew (@_byronmiller)

    Important questions – both for economic reasons and for the integrity of science.

    I think the idea of rebuttals is particularly crucial here. This can be done very well under the current model, but seems to be the exception rather than the rule. I’ll give a very specific example in order to illustrate a couple of points:

    In the 90s, Rebek had a series of papers about new self-replicating molecules. They got published in good journals, including Science. Between 1994 and 1996, the mechanistic basis for the work was strongly attacked by Menger et al., and a few papers went back and forth between the groups. A kinetic analysis by a third party in 1996 seemed to settle the discrepancies between the two groups’ work, essentially vindicating Rebek. This is perhaps self-correction at its best: rigorous work, careful attention to detail, and all published transparently.

    It raises three points about rebuttals:

    - The work has to be interesting and high-profile. This is simply because it takes time and effort to write a paper rebutting work; who is going to bother to rebut a random methodology paper that will only get 5 citations anyway? Who would read such a rebuttal? Rebuttals are always going to be a side project rather than the focus of a student’s work. As such, only important, controversial work that is close to the expertise of the researchers is likely to be responded to.

    - The rebuttal has to be a body of work in its own right. It’s not enough to simply say “I repeated this exactly and it didn’t work”. There are a million and one reasons why that could be the case; you have to make some attempt to pin down what’s actually going wrong. That’s not a trivial thing to do! If you eventually get it to work and can show why, great… but if it never works, it’s difficult to write a good paper.

    - How long are we willing to wait for rebuttals? The example I gave took the better part of a decade of work by three independent groups. If we care only about posterity, that’s fine – we can probably just wait 20 years for a new method to come along and for its authors to say “this old method is rubbish, as everyone knows, and we’ve got a better way”. If we care about saving the time and money of workers in the meantime, we need a much faster response – and hence a new model of review.

    It’s not clear to me that comments on journal sites will solve these problems. A comment saying “I did this and it didn’t work for me” is basically useless, and it’s hard to imagine journal site comments reaching the level of detail necessary to pin down what has gone wrong. By the time you have that much detail, you’re ready to write a short paper anyway!

    More promising would be active efforts by journals to solicit replication, or at least a pledge to publish brief replication studies and link them to the original paper. This would include positive replication – imagine if, when you pull up a methodology paper, next to the citations you have a list of papers that have used the work successfully or unsuccessfully (total synthesis papers, etc.), or short (1–2 page) communications by other researchers who have tried to repeat the work.

    1. Tom Phillips (post author)

      I mostly agree with your points, Andrew. I very much like your idea of positive replication and I’d like to see something like that happen. As Moriarty said in Physics Focus, the problem is that the number of citations simply doesn’t reflect whether a paper is good or bad. I can imagine a system where citations aren’t equal—perhaps rather than just citations, we could have some additional metadata to express that we’ve actually replicated some results, rather than just that we like the paper. Kind of adding some depth to an ordinary citation, sort of like what altmetrics do but still within the literature.
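
      To make that concrete, here’s a rough sketch of what a “typed” citation record could look like. This is purely illustrative (the field names, values and DOIs are made up, not any existing standard):

          # A hypothetical "typed" citation: the usual bibliographic link
          # plus a field recording *why* the paper is being cited.
          # All field names, values and DOIs are invented examples.
          citation = {
              "citing_doi": "10.1000/xyz123",
              "cited_doi": "10.1000/abc456",
              "relationship": "replicates",  # vs. "uses_method",
                                             # "disputes", "background"
              "note": "Repeated the synthesis as written; results matched.",
          }

      A database could then count “replicated by n groups” separately from the raw citation total.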

      I still think comments on journals would be useful though. A recent example I can think of is a particular nanoparticle synthesis. The authors wrote up a couple of papers on it in some ACS journals with experimental sections of the usual length and style. But then they published a Nature Protocols follow-up and I honestly couldn’t believe the number of notes saying “CAUTION” followed by “don’t do … or the synthesis will fail”. Even stuff like recommending particular vials from VWR to carry out the reaction in. If I had tried to repeat their first few papers, I probably wouldn’t have been able to get their results. If I could comment on articles—and I would be more than happy to with my name—I could ask what’s going on, hopefully get suggestions from the authors, and other people could see them too. Time saved for everyone.

  2. Andrew (@_byronmiller)

    Clearly you have a deeper understanding of how this would work than I do – but adding more metadata as you describe doesn’t sound too technically challenging (just a huge amount of data to retroactively add…). Maybe this is a crowdsourcing job… This is exactly what I had in mind, though: a kind of “we used this paper’s results” tag, to distinguish those papers from “we cited this in the introduction” type citations.

    Comments could be useful in the way you describe, if only to save the authors from responding to multiple emails asking the same thing (more transparency is always good in my book).

    Would be interesting to get the views of those in the publishing industry on ideas like this!

    1. Tom Phillips (post author)

      The hardest bit would be convincing people to actually do it. I’ve noticed ChemSpider has Synthetic Pages, a repository of procedures, but what I want to see is something that references the primary literature rather than a separate thing in its own right.

      I’d like to know what publishers think too. The cynic in me says that they’re not too bothered, but I can see places like Digital Science/Nature giving it a go. Maybe scholarly societies like the RSC too.

      Transparency, now that’s another blog post. (Lately it occurred to me: why do we keep secret what happens between “accepted”, “revised” and “published”?)

  3. stu

    FYI – a commenting system has been implemented on some Nature journals (not all yet and, alas, not Nature Chemistry – it should be coming, though) – see here for where it gets placed: http://www.nature.com/ncomms/2013/130604/ncomms2926/full/ncomms2926.html#comments

    As you will see, however, there are no comments there and very few elsewhere. I think this is where updates/changes/notes about a paper should go (it’s then all in one place – the same place as the paper), but very few people actually comment. You would think there would be an incentive for an author to reply if a questioning (or critical) comment is left on a paper, but I’m not even sure that applies; see: http://www.nature.com/nature/journal/v479/n7371/full/nature10542.html#comments (the comment from Dave Smith was polite and reasonable, but no response ever came – disappointingly). I might think some more on the general questions posed (when I have time), but just wanted to comment on commenting for now. Some other publishers do this too (such as PLOS) and I think they get very few comments as well, sadly.

    1. Tom Phillips (post author)

      Glad to hear that Nature Chemistry might be getting comments. Nat. Chem. seems quite engaged with its readers, at least on Twitter, so I wonder if you’ll get more discussion than other publishers have. Also pleased that you think all the updates to a paper should go in a single place (it must be citable to be useful, of course, i.e. have a DOI).

      Indeed, that’s a great comment from Smith, and I think it reflects very poorly on the authors to simply ignore it. I would never leave a comment unanswered like that on my own work. But, yeah, I can’t ignore that articles with comment facilities attract very few comments, if any. Could it be because (older?) scientists associate comments with the horrible world of blogs? And I think a lot of people simply don’t like to criticise other people’s work publicly…

