Correcting the literature

Mathias Brust in Chemistry World:

Ideally, science ought to be self-correcting. … In general, once a new phenomenon has been described in print, it is almost never challenged unless contradicting direct experimental evidence is produced. Thus, it is almost certain that a substantial body of less topical but equally false material remains archived in the scientific literature, some of it perhaps forever.

Philip Moriarty expresses similar concern in a post at Physics Focus. Openly criticising other scientists’ work is generally frowned upon—flaws in the literature are “someone else’s problem”. Erroneous papers sit in the scientific record, accumulating a few citations. Moriarty thinks this is a problem because bibliometrics are (unfortunately) used to assess the performance of scientists.

I think this is a problem too, although for a different reason. During my MRes I wasted a lot of time trying to replicate a nanoparticle synthesis that I’m now convinced is totally wrong. Published in June 2011, it now has five citations according to Web of Knowledge. I blogged about it and asked what I should do. The overall response was to email the authors but in the end I didn’t bother. I wanted to cut my losses and move on. But it still really bugs me that other people could be wasting their limited time and money trying to repeat it when all along it’s (probably) total crap.

I did take my commenters’ advice and email an author about another reaction that has turned out to be a “bit of an art”. (Pro tip: if someone tells you a procedure is a bit of an art, find a different procedure.) I asked some questions about a particular procedure and quoted a couple of contradictions in their papers, asking for clarification/correction. His responses were unhelpful and after a couple of exchanges he stopped replying. Unlike the first case, I don’t believe the results are flat out wrong. Instead I suspect a few experimental details are missing or they don’t really know what happens. I think I’ll get to the bottom of it eventually, but it’s frustrating.

What are your options if you can’t replicate something or think it’s wrong? I can think of four (excluding doing nothing):

  1. Email the corresponding author. They don’t have an incentive to take it seriously. You are ignored.

  2. Email the journal editor. Again, unless they’re receiving a lot of emails, what incentive does the journal have to take it seriously? I suspect you’d be referred to the authors.

  3. Try and publish a rebuttal. Can you imagine the amount of work this would entail? Last time I checked, research proposals don’t get funded to disprove papers. This is only really a viable option if it’s something huge, e.g. arsenic life.

  4. Take to the Internet. Scientists, being irritatingly conservative, think you’re crazy. Potentially career damaging.

With these options, science is hardly self-correcting. I’d like to see a fifth: a proper mechanism for post-publication review. Somewhere it’s academically acceptable to ask questions and present counter results. I think discussion should be public (otherwise authors have little incentive to be involved) and comments signed (to discourage people from writing total nonsense). Publishers could easily integrate such a system into their web sites.

Do you think this would work? Would you use it? This does raise another question: should science try and be self-correcting at all?

Thanks to Adrian for bringing Mathias Brust’s article to my attention.

On the value of journal editors (and why green open access won’t work)

Previously I argued that traditional journals should be abandoned and green open access repositories like arXiv are the way forward. More recently I praised the “DIY” open access journal The Journal of Machine Learning Research run by researchers, writing that chemists should do something similar.

But now I think I’m wrong because I’ve underestimated the value of journal editors.

On Stephen Curry’s blog a commenter said:

“The current system of peer-reviewed journals is altogether very flawed. … [A]t the end of the day, the journals make millions just formatting, laying them out and sending a few emails. This just cannot be right.”

6 months ago I would have probably agreed. Anonymous Publishing Employee replied (it’s worth reading in full) saying that they are wrong because they underestimate the work a journal really does. Editors have to decide whether a paper fits in with their journal and is worth sending for review, obviously requiring technical knowledge. If it is worth sending they have to decide who to send it to, requiring personal knowledge of the scientists. A lot of administrative time is spent chasing up reviewers, but once the reviews are in the editor has to make a decision or repeat the review process again. If accepted, subsequent copy editing and layout takes time (money) and there are other indirect costs too, e.g. IT and rent. The main expense, they believe, is salaries (not that surprising).

I’ve said before that peer review would work in green OA repos, but now I think I was wrong. Editors have a lot of specialist knowledge that ensures the right people review papers. That knowledge is also needed to make the final decision to accept or reject a paper. I now doubt that a comparable level of peer review would happen in a repository. There’s no incentive for scientists to review post-publication. With a journal, there’s a certain amount of flattery involved when a scientist is asked to review by an editor. In effect, editors drive the peer review process forwards, whereas it might never get started in a repo.

Furthermore, if we only had green OA repositories there would be another loss that I’ve never considered before: the commentaries, reviews, editorials and research highlights that complement the original research articles.

[Screenshot: the current issue web page of Nature Chemistry]

These are written or commissioned by editors. Recently I’ve really enjoyed the extra content in Nature Chemistry. An interview with Chief Editor Stuart Cantrill goes into more depth about the work behind the scenes. Lab on a Chip is another journal that I like to keep track of—obviously much more specialised than Nature Chemistry—and it has similar articles.

A complete shift to green OA would result in the loss of this valuable content. Websites or blogs might spring up to take its place, but I doubt they would be of the same calibre. It would be a real shame to lose it because it’s a great way to broaden one’s knowledge and stumble across interesting work.

Overall I think I was wrong about green OA repositories. Journal editors (rather than the “journal” itself) are a valuable asset to the peer review process and to scientific endeavour as a whole. More could still be done to enhance the transparency of peer review, but I now think that post-publication peer review in green repositories simply won’t succeed.

Questionable research practices, peer review and an open access future?

Blimey—it’s been five weeks since my last post and I’m now five weeks into my postgraduate studies. It’s gone quickly and I’ve been very busy.

As part of the doctoral training centre’s new/modern/[positive adjective] approach to a PhD we get (well, have) to take courses that ’round us out’ as modern researchers. A few weeks ago, we had a course on research ethics taught by Marianne Talbot. I did Philosophy A-level and especially enjoyed moral philosophy, so I was looking forward to it.

The course was attended not just by PE DTC students but also by the CQD and TMS DTCs. Rather unsurprisingly (but disappointingly) there was a bit of an unfriendly vibe between the different DTCs. “We get MacBook Pros!” said one, “we don’t have to do experiments!” said another, to which we all replied “we get £18,000 to spend and we like lab experiments!” The conversation never progressed any further…

Overall the course was excellent and very enjoyable. I loved how Marianne dealt efficiently and firmly with the few people who wanted to deny the existence of everything! One of the afternoon sessions was on open access publishing, a topic I already had an interest in. I’ve read about it before but have never been entirely convinced (I’m not sure why). Marianne made a strong case that open access is good. She referenced this website as a good overview. If you don’t know what open access is, it’s worth a quick read. There was unanimous support for the open access concept.

Marianne then introduced a distinction I had never heard of before: green and gold open access methods. In the green method, papers are deposited in a public online repository. Papers are not peer reviewed prior to being published and anyone can upload an article. The most famous example of this is probably arXiv. In the gold method, you submit a paper to a journal, it’s peer reviewed, and if accepted it’s published in a journal that is either entirely open access or permits some open access articles. An example of the former type is PLoS ONE.

The question Marianne asked us to discuss was “Do you think it is acceptable for scientists to self-archive pre-prints in repositories without peer review?” The answers from students were quite vague. But generally it seemed that peer review was held in extremely high, almost reverent, regard.

I found this odd considering we had just been discussing questionable research practices. One example of a questionable research practice that stuck out to me was:

leaving important information out of methodology section of a manuscript or refusing to give peers reasonable access to unique research materials or data that support published papers.

One would expect that if peer review functioned as well as my fellow students said, then readers would rarely come across this practice in the literature. Yet in my field of research, I encounter it all the time! Authors brag that they’ve found the way to make the biggest, smallest, longest or generally ‘best’ nanoparticle but then fail to tell you crucial information such as the number of moles of reagents, reaction times and temperatures that would allow you to repeat the work. I spent an unbelievable amount of time last year trying to figure out the required conditions to synthesise heterostructured quantum dots. If peer review did its job, then things like this wouldn’t get through.

Other students argued that because anyone can publish a paper in a green OA repository, there is no quality control. I disagree. I think a lot of students are assuming that readers are idiots and need peer review. If you uncritically read a paper or think that because it’s in a journal it must be true, then you’re at best naive or at worst incompetent. Decent researchers will spot questionable claims and results.

Is peer review even really that good a quality control method? Typically you only have two reviewers. Can you be sure they read the paper themselves instead of passing it to a PhD student or postdoc?

Imagine that rather than submitting papers to traditional peer reviewed journals researchers published their work in open access green repositories. No real scientist is going to post rubbish because their reputation is on the line. Rather than having only two reviewers as with traditional journals, you could have tens or even hundreds of reviewers. They could post their comments—the peer review—publicly on the repository article page (I’m thinking more along the lines of threaded discussions rather than linear blog-style comments).

I think it would be awesome. The authors could respond to readers’ questions, for example, asking for clarification of an experimental technique or reagent used, or post new versions of the article correcting mistakes or providing further information.

At present, reviewers’ comments are made privately and anonymously. These comments would be useful to the scientific community. There’s no reason why it should stay private. Science is all about debate, questioning and (a moderate dose of) scepticism. At conferences and in department presentations, researchers handle criticism and questions. There’s no reason why journal articles should be any different.

I do wonder whether I’m being overly optimistic or if I’ve missed out something crucial. What do you think? I’d like to know…

[^mywork]: I hope to blog about my work in less vague terms at some point but I’ll probably have to wait a while for various reasons.

*[DTC]: doctoral training centre

*[PE]: plastic electronics