No funding, no placement

Today the Social Mobility and Child Poverty Commission published Elitist Britain?, a report on social mobility in the UK. The conclusions aren’t surprising. Numerous outlets have covered it (e.g. BBC, Guardian).

Careers in the media, politics and law are often singled out as being tough to crack unless you’re from a privileged background. What about science? A search for “science” in the report returns zero hits. It’s interesting that it’s not mentioned.

Every summer, departments at Imperial host students on the Undergraduate Research Opportunities Programme, my department included. I’m sure other universities run similar things. There are bursaries for living costs but competition is tough. In roughly five years I’ve yet to come across a recipient. So really they are unpaid internships, no different to those that are criticised in industries like the press or fashion. Only the offspring of the rich can afford to work for free, especially in London.

In 2009 I applied for a college bursary but I was unsuccessful. I thought it was game over, as I worked full time every holiday to pay off debts that accumulated during term time (when I only worked part time). But the principal investigator generously paid me to do the project anyway, for which I’m still very grateful. I think this is quite rare.

If I had not completed the placement would I have been accepted onto my fully-funded PhD programme? I don’t know. Is it fair that only the wealthiest students can afford to undertake placements and gain valuable research experience? No. If I were a PI, I would not employ unpaid students in my lab, even though I’d be losing out on free labour. Considering the gulf between the richest and poorest and lack of social mobility in the UK, I think a policy of “no funding, no placement” is well overdue.

Christmas wishes for nanoparticle synthesis

Back in August The Baran Laboratory blog posted some thoughts on yields and the need for qualitative assessment of reactions. I agree with their points, particularly about only believing in “0%, 25%, 50%, 75%, or quantitative” yields.

Unfortunately I can’t recall ever reading a paper on nanoparticle synthesis where the authors report a yield. It is difficult to define the yield of a nanoparticle because, unlike molecules, nanoparticles have a distribution of sizes and shapes which makes the concept of molecular weight and consequently yield somewhat hazy. Still, it’d be useful if chemists reported the mass of dry nanoparticles obtained per batch. Practically, it’s quite straightforward and I think it would be a handy metric for assessing reactions.
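To make the idea concrete, here’s a minimal Python sketch of what a mass-based metric could look like. The precursor (a gold salt) and the batch masses are my own illustrative assumptions, not from any paper:

```python
# Illustrative example: a crude mass-based "yield" for a nanoparticle batch.
# The precursor and masses below are assumed values, purely for illustration.
M_PRECURSOR = 393.83  # g/mol, HAuCl4·3H2O
M_AU = 196.97         # g/mol, gold

def mass_yield_percent(precursor_g, dry_product_g):
    """Dry particle mass recovered versus the metal available in the
    precursor. Only an upper bound on the true metal yield, since
    surface ligands also contribute to the dry mass."""
    metal_available_g = precursor_g * M_AU / M_PRECURSOR
    return 100 * dry_product_g / metal_available_g

# e.g. 50 mg of precursor yielding 18 mg of dry particles:
print(f"{mass_yield_percent(0.050, 0.018):.0f}%")
```

Even a rough number like this, reported per batch, would tell readers far more than nothing.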

Their post inspired me to write some Christmas wishes for the field of nanoparticle synthesis. Here we go:

Reporting centrifugation speeds in relative centrifugal force rather than revolutions per minute

Centrifugation is the nanoparticle equivalent of column chromatography. Particles of different sizes and shapes sediment at different rates, so by centrifuging them you can achieve separation. Typically papers report revolutions per minute (rpm), but the relative centrifugal force (RCF) is a more useful number as it’s this force which causes the particles to sediment at different velocities and separate. RCF depends not just on the angular velocity \omega (which is proportional to rpm) but also on the radius r of the centrifuge rotor:

\textrm{RCF} = r \omega^2 / g

where g is the acceleration due to gravity. Our centrifuge has the option to set RCF instead of rpm. As an example, if some authors used a centrifuge at 2000 rpm and your centrifuge has a rotor with a radius n times larger, then at the same rpm the RCF will be n times higher and you’ll need to reduce either the rpm or the centrifuging time. But you probably won’t know what rotor they had, so you’ll have to guess and waste time working it out for yourself…
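The conversion is trivial to automate. A short Python sketch (the 10 cm and 15 cm rotor radii in the example are assumed values, since papers rarely tell you):

```python
import math

G = 9.81  # standard gravity, m s^-2

def rcf(rpm, rotor_radius_m):
    """Relative centrifugal force (in multiples of g) from rotor speed
    in rpm and rotor radius in metres: RCF = r * omega^2 / g."""
    omega = 2 * math.pi * rpm / 60  # rpm -> rad s^-1
    return rotor_radius_m * omega ** 2 / G

def rpm_for_rcf(target_rcf, rotor_radius_m):
    """The rpm needed on *your* rotor to reproduce a reported RCF."""
    omega = math.sqrt(target_rcf * G / rotor_radius_m)
    return omega * 60 / (2 * math.pi)

# A paper's 2000 rpm on an assumed 10 cm rotor, matched on a 15 cm rotor:
target = rcf(2000, 0.10)              # ~447 g
print(round(rpm_for_rcf(target, 0.15)))  # lower rpm needed on the bigger rotor
```

If authors simply reported RCF in the first place, none of this guesswork would be necessary.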

Characterisation of what’s been washed away during washing/purification procedures

This is obvious to me but no one does it. If you have to centrifuge, decant the supernatant off the sediment, then wash the product multiple times, what are you getting rid of? I want to know.

Representative electron microscopy

I want big, high resolution images with good contrast. Close ups of particles of interest are fine but I also want to see lower magnification shots showing representative samples of the product. A paper claiming to make nanoparticle X but only one image showing just a few of X? Then I won’t bother trying to reproduce it. Electron microscopy leads me on to…

Histograms!

I love histograms and I want to see them showing size distributions of your product. At least 50 particle measurements, preferably a hundred or more. Rather than simply stating that your product “is monodisperse”, actually give statistical data like the standard deviation to back up your claims. I also like seeing ratios of shape X to shape Y. If you used ImageJ to automatically measure the particles, then include what algorithm and parameters you used so that others can reproduce it.
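A minimal Python sketch of the kind of summary statistics I mean (the example diameters are made up; in practice they’d come from manual or ImageJ measurements):

```python
import statistics

def size_stats(diameters_nm):
    """Summary statistics for a list of measured particle diameters (nm):
    count, mean, sample standard deviation, and coefficient of variation,
    which is what a "monodisperse" claim should be backed by."""
    mean = statistics.mean(diameters_nm)
    sd = statistics.stdev(diameters_nm)
    return {
        "n": len(diameters_nm),
        "mean_nm": mean,
        "sd_nm": sd,
        "cv_percent": 100 * sd / mean,
    }

# Made-up example measurements:
print(size_stats([10, 12, 11, 9, 13]))
```

Reporting n, the mean, and the standard deviation alongside the histogram costs a single line in the paper.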

Papers that do what they claim

If you claim to make nanoparticle X, you should be making at least 75% X by mass. I don’t have a problem with other crud as long as it’s a minority product and you acknowledge that it’s there.

I’d be a very happy boy if I got all of these granted…

Nature Chemistry Blogroll: Exposing fraud

Every month Nature Chemistry’s Blogroll column features interesting posts from the chemistry blogosphere. I wrote the column for the November 2013 issue, titled Exposing Fraud. Despite having to submit the copy by 16th September for publication on 24th October, the theme turned out to be quite timely, coinciding with the publication of ACS Nano’s editorial Be Critical But Fair. “The best way to avoid potential academic fraud is through rigorous peer review”—it’s a way for sure, but the best way? I’m not convinced.

The size of the challenge for OPV

In a recent paper titled Green chemistry for organic solar cells, published (open access) in Energy and Environmental Science, the authors Burke and Lipomi ask an interesting question: how much organic semiconductor do we need in order to make a sizeable dent in global energy consumption with organic photovoltaics (OPV)?

[I]f 10 TW of the 30 TW of power demanded in the year 2050 is to be generated by photovoltaics, and if organics account for 500 GW to 5 TW, then 10–100 kilotonnes of organic semiconductors will be required… given an average solar flux of 200 W m⁻², a module efficiency of 5 %, a typical thickness of the active layer of 200 nm, and a density of 1000 kg m⁻³. This extremely rough estimate assumes 100 % yield of working modules, no waste in the coating processes, and infinite lifetime of devices.
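Their back-of-envelope arithmetic is easy to check with the figures quoted above. A quick Python sketch:

```python
def opv_semiconductor_mass_tonnes(target_power_w, solar_flux_w_m2=200,
                                  efficiency=0.05, thickness_m=200e-9,
                                  density_kg_m3=1000):
    """Back-of-envelope mass of active-layer material needed to generate
    a target electrical power, using the assumptions quoted above."""
    power_per_m2 = solar_flux_w_m2 * efficiency       # W per m^2 of module
    area_m2 = target_power_w / power_per_m2           # total module area
    mass_kg = area_m2 * thickness_m * density_kg_m3   # active-layer mass
    return mass_kg / 1000                             # tonnes

# 500 GW and 5 TW of installed OPV:
print(opv_semiconductor_mass_tonnes(500e9))  # 10,000 tonnes (10 kt)
print(opv_semiconductor_mass_tonnes(5e12))   # 100,000 tonnes (100 kt)
```

Reassuringly, the 500 GW and 5 TW cases reproduce the paper’s 10–100 kilotonne range exactly.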

To put 10,000–100,000 tonnes in perspective, polyethylene is the most common plastic with an annual production of 80 million tonnes (according to Wikipedia, but it sounds reasonable). Taking the higher end of their rough estimate, we’re talking 0.1 million tonnes over nearly 40 years—basically nothing compared to commodity polymers. But organic semiconductors have more complicated structures than commodity polymers and require more complicated chemistry. Burke and Lipomi also compare their estimate with pharmaceuticals, with their figure “2–3 orders of magnitude greater than those of top-selling small-molecule drugs of similar structural complexity”.

Making this amount of high quality polymer—with a specific molecular weight and acceptable PDI, high purity and low batch-to-batch variation—is a challenge. Material currently on the market is terrible, meeting none of these requirements.

Both conjugated polymers and structurally complex drugs require synthetic sequences of 5–10 steps to produce. The multi-tonne synthesis of conjugated polymers will be a challenge in process chemistry with few precedents, and will in consequence limit the materials that could be seriously considered for installations that cover many square kilometers.

The challenge is made even greater by the need for the material to be cheap: if OPV isn’t cheap, it’s commercially unviable. If OPV is to stand a chance, researchers must remember that they’re working on a technology that has to cost <$10 m⁻² to compete with fossil fuels.1 Burke and Lipomi point out that this is roughly the same price as carpet.

No matter how efficient your polymer, if its synthesis can’t be scaled up or is too expensive (e.g. PCDTBT), it will fail. It worries me a little that this is lost in the quest for high device efficiency (because that’s what makes for a HIGH IMPACT2 paper). Taking a “not my problem” approach is unacceptable. We must stop passing the responsibility off to process chemists and engineers and instead remember the scale of the problem that OPV is trying to solve, otherwise it’ll never make the slightest dent in our global energy consumption.


  1. Burke and Lipomi get this price from Lewis and Nocera

  2. Just to be clear, I’m being flippant. 

Correcting the literature

Mathias Brust in Chemistry World:

Ideally, science ought to be self-correcting. … In general, once a new phenomenon has been described in print, it is almost never challenged unless contradicting direct experimental evidence is produced. Thus, it is almost certain that a substantial body of less topical but equally false material remains archived in the scientific literature, some of it perhaps forever.

Philip Moriarty expresses similar concern in a post at Physics Focus. Openly criticising other scientists’ work is generally frowned upon—flaws in the literature are “someone else’s problem”. Erroneous papers sit in the scientific record, accumulating a few citations. Moriarty thinks this is a problem because bibliometrics are (unfortunately) used to assess the performance of scientists.

I think this is a problem too, although for a different reason. During my MRes I wasted a lot of time trying to replicate a nanoparticle synthesis that I’m now convinced is totally wrong. Published in June 2011, it now has five citations according to Web of Knowledge. I blogged about it and asked what I should do. The overall response was to email the authors but in the end I didn’t bother. I wanted to cut my losses and move on. But it still really bugs me that other people could be wasting their limited time and money trying to repeat it when all along it’s (probably) total crap.

I did take my commenters’ advice and email an author about another reaction that has turned out to be a “bit of an art”. (Pro tip: if someone tells you a procedure is a bit of an art, find a different procedure.) I asked some questions about a particular procedure and quoted a couple of contradictions in their papers, asking for clarification/correction. His responses were unhelpful and after a couple of exchanges he stopped replying. Unlike the first case, I don’t believe the results are flat out wrong. Instead I suspect a few experimental details are missing or they don’t really know what happens. I think I’ll get to the bottom of it eventually, but it’s frustrating.

What are your options if you can’t replicate something or think it’s wrong? I can think of four (excluding doing nothing):

  1. Email the corresponding author. They don’t have an incentive to take it seriously. You are ignored.

  2. Email the journal editor. Again, unless they’re receiving a lot of emails, what incentive does the journal have to take it seriously? I suspect you’d be referred to the authors.

  3. Try and publish a rebuttal. Can you imagine the amount of work this would entail? Last time I checked, research proposals don’t get funded to disprove papers. This is only really a viable option if it’s something huge, e.g. arsenic life.

  4. Take to the Internet. Scientists, being irritatingly conservative, think you’re crazy. Potentially career damaging.

With these options, science is hardly self-correcting. I’d like to see a fifth: a proper mechanism for post-publication review. Somewhere it’s academically acceptable to ask questions and present counter results. I think discussion should be public (otherwise authors have little incentive to be involved) and comments signed (to discourage people from writing total nonsense). Publishers could easily integrate such a system into their web sites.

Do you think this would work? Would you use it? This does raise another question: should science try and be self-correcting at all?

Thanks to Adrian for bringing Mathias Brust’s article to my attention.

Routine operations

On Friday I went to a talk by Steven Ley titled Going with the Flow: Enabling Technologies for Molecule Makers. His group at Cambridge have done a lot of impressive work on flow chemistry over many years, both developing the technology and using it to synthesise organic molecules.

He covered a lot of ground in the talk, but one of his main points was that it is “unsustainable to use people for routine operations”. Chemists train for 10 years to then stand in front of a fume hood running columns. Ley wants to develop tools that allow researchers to make better use of their time in the laboratory. Flow chemistry has many benefits over batch chemistry, one of them being that it is easy to automate.

His talk left me wondering where I’m particularly inefficient in the lab. Sample collection and recording absorption spectra are especially time consuming. Last year I started to build an (Arduino-powered) automatic sample collector, but made it far too complicated and never finished it. Now I’ve drastically simplified it (to the design my supervisor said I should use in the first place, as he often likes to remind me) and hope to have it working by the end of next week. I reckon it could save me anywhere between 5 and 10 hours a week of standing around swapping vials. I’m also going to make a start on recording absorption spectra inline. Again, this will save me a few hours a week, leaving me free to do something more valuable.

I completely agree with Ley about the benefits of flow chemistry, but you can’t ignore that all this equipment costs money. Ley’s group use a lot of commercially available equipment and it’s not cheap. In my group, we build a lot of apparatus ourselves because we can tailor it to our needs and it’s a lot more “hackable” (as well as cheaper).

Someone in the audience tried to make the point during questions that funding is tight, especially for those working in organic synthesis. How are they meant to afford equipment like £40,000 inline infrared spectrometers? Ley didn’t really answer this question (and I’m not sure he can). He’s obviously very well funded so he can build and develop the “lab of the future”.1 A lot of this technology might be out of the budget of the chemists who will benefit from it the most. Unfortunately they might be performing “routine operations” for some time to come.


  1. M.D. Hopkin, I.R. Baxendale, S.V. Ley, Chim. Oggi./Chemistry Today, 2011, 29, 28-32. 

Details matter

Blog Syn is a new chemistry blog where chemists post their attempts to reproduce reactions from the literature. Each post starts with the following disclaimer:

The following experiments do not constitute rigorous peer review, but rather illustrate typical yields obtained and observations gleaned by trained synthetic chemists attempting to reproduce literature procedures…

I disagree completely. What could be more rigorous than actually trying a reaction?

So far there are three posts. The first gave a lower yield than reported. The second was “moderately reproducible”. The paper omitted details essential to the reaction’s success. The third was “difficult to reproduce” and is well worth reading—there’s a great response from one of the authors, Prof. Phil Baran.

It’s unacceptable for anyone to publish a paper without all the information necessary to replicate the results. It wastes researchers’ time and money. I’ve written before about my difficulties trying to replicate results. It’s infuriating. How do papers like this slip through peer review?

I suspect some authors don’t really know why a reaction gives a particular product, especially in nanoparticle synthesis. They manage to pull something off a few times and publish their findings, but (unknowingly) neglect parameters crucial for other researchers to be able to reproduce it. It could be something seemingly trivial, like the method used to wash the glassware. The next researcher does it differently because it’s not mentioned in the paper and gets a different result.

The only way to deal with this is for reviewers to demand thorough experimental sections. (But to do so they must have a good understanding of typical experimental procedures. This is a problem if your reviewer hasn’t been in the lab for years.)

An alternative scenario could be that the researchers, in the early stages of the work, find that doing X doesn’t work. Later they find doing Y does work. Y gets published. X stays in the laboratory notebook.

X is a negative result. On its own, it’s not very useful. Loads of attempted reactions don’t work. But in the context of the positive result (i.e. the paper), the negative result is actually very valuable to anyone who wants to repeat the work. Serious consideration should be given to including negative results in the supplementary information.

Experimental methods are grossly oversimplified. We like things to be elegant and simple, but chemistry is complicated. There’s no excuse not to include more information because everything is published online and space constraints aren’t a problem.

Blog Syn shows that subtleties in chemistry are important. We should all acknowledge that in our own papers and demand that others do the same.

Tools and technologies for researchers

The Library at Imperial run a course called Blogs, Twitter, wikis and other web-based tools. They asked me (and also Jon Tennant) to give a quick talk to the attendees yesterday on the things I use to do my work.

Rather than give a slide-based presentation I decided the best thing to do was give a demo. I quite like mind mapping to help me structure ideas so I made one for this. I’ve included links to web sites where appropriate. You can download a PDF of the mind map here (PDF).

It’s split into two halves: the tools that I do use, categorised into “inputs” (e.g. Twitter and RSS) and “outputs” (e.g. Google Drive), and those that I don’t with some short reasons why. If you’re interested in trying some of this out, give one or two a go and see if you find them useful. If you use something that I haven’t mentioned, let me know in the comments.

Light- and power-making things

Inspired by xkcd’s Up Goer Five comic, Theo Sanderson created the Up Goer Five Text Editor. It challenges you to explain a hard idea using only the ten hundred most commonly used words in the English language. Lots of scientists on Twitter have been using it to try and describe their work. It’s a lot harder than it sounds! Here’s my attempt:

Many years ago a few people were doing some work and, to their surprise, they managed to make light come out of something that had never had light come out of it before. People were very excited about it and now lots of groups of people spend their time trying to answer questions like “how does it work?” and “how can we make it work better?”. Everyone was interested because they thought it could be used to make new things like better TVs, very small computers and different kinds of lighting. But perhaps the most important thing it could maybe do was give us all a new way to turn light from the sun into power for not very much money.

At the moment only a few people get to see them because they are hard to make. They are hard to make for lots of reasons, but perhaps the biggest reason is that the parts you need are themselves hard to make. Everyone struggles to make enough of them exactly as they need them to be. If the parts aren’t good enough, sometimes not very much light comes out, or for only a little while, or the ones that turn light from the sun into power don’t do it very well. No one wants any of those.

It doesn’t help that the normal ways of making the parts are often only good enough for making a little at a time. If you try to make more in the same way it stops working so well. I’m part of a group of people trying to make the parts in a new way that can make lots and lots and it still be good enough. In fact, our stuff is usually better than the best stuff you can buy.

I try lots of different ways to make things. I look in books to read how other people did things to get new ideas that no one else has had before. Sometimes they don’t work, but sometimes they do and when that happens it makes me very excited and happy. Sometimes we tell everyone but sometimes we only tell a few people. We can use my new way to make the light-making and power-making things work better and for less money than ever before so everyone can have them.

What do you think?

Microwave heating: still nothing special

For many years there has been debate over whether there is a specific microwave effect on chemical reactions or whether it’s just a thermal effect. A couple of years ago I took a lecture course on microwave and ultrasound chemistry. The course covered a few papers on the existence of a microwave effect and concluded that there isn’t anything special going on—microwaves just give very efficient and fast heating compared to normal convective heating in an oil bath or dry-syn block.

I found the course particularly interesting, so whenever I see a paper on the subject I at least read the abstract to see if anything has changed. Angewandte Chemie have recently published a paper titled Microwave Effects in Organic Synthesis—Myth or Reality? (DOI: 10.1002/anie.201204103) by C. Oliver Kappe, Bartholomäus Pieber, and Doris Dallinger.

They looked at two recently published papers that allegedly found a specific microwave effect. Both claimed microwave irradiation significantly enhanced the reaction rate or yield in a way that couldn’t be replicated by regular heating to the same temperature.

Summarising a few pages: Kappe et al. couldn’t replicate the findings and argue that the problem lies in poor temperature management. To test the existence of a specific (non-thermal) microwave effect you need to run the same reaction twice at the same temperature, one with microwaves and the other normally (e.g. with an oil bath).

However, the researchers who reported a microwave effect used external infrared temperature probes, which record a lower temperature than the bulk reaction mixture. Because microwaves heat more efficiently than normal heating, the microwave reaction mixture is actually hotter than the probe suggests: the two vessels are not in fact at the same temperature, which is why the microwave reaction gives a higher yield. Instead you must use fibre-optic temperature probes placed inside the reaction vessels. Doing this eliminates any microwave-specific effect. To quote:

Importantly, we firmly believe that the existence of genuine nonthermal microwave effects is a myth, as all our attempts to verify these often claimed “magical” microwave effects during the past decade have failed.

It’s a good read and, I think, a nice example of science at its best. I’m also glad I read it because a colleague and I had, for some reason, been looking at getting a microwave flow reactor—which would be completely pointless, as all the benefits of microwaves in batch chemistry (high pressures and homogeneous heating) can be readily achieved in flow using normal convective heating. If anyone could tell me why such an apparently pointless bit of kit exists, I’d like to know…

Reference: C.O. Kappe, B. Pieber and D. Dallinger, Microwave Effects in Organic Synthesis—Myth or Reality?, Angewandte Chemie International Edition, 2012. DOI: 10.1002/anie.201204103