Routine operations

On Friday I went to a talk by Steven Ley titled Going with the Flow: Enabling Technologies for Molecule Makers. His group at Cambridge have done a lot of impressive work on flow chemistry over many years, both developing the technology and using it to synthesise organic molecules.

He covered a lot of ground in the talk, but one of his main points was that it is “unsustainable to use people for routine operations”. Chemists train for 10 years to then stand in front of a fume hood running columns. Ley wants to develop tools that allow researchers to make better use of their time in the laboratory. Flow chemistry has many benefits over batch chemistry, one of them being that it is easy to automate.

His talk left me wondering where I’m particularly inefficient in the lab. Sample collection and recording absorption spectra are particularly time-consuming. Last year I started to build an (Arduino-powered) automatic sample collector, but made it far too complicated and never finished it. Now I’ve drastically simplified it (to the design my supervisor said I should use in the first place, as he often likes to remind me) and hope to have it working by the end of next week. I reckon it could save me between 5 and 10 hours a week of standing around swapping vials. I’m also going to make a start on recording absorption spectra inline. Again, this will save me a few hours a week, leaving me to do something more valuable.
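
The control logic for a simplified collector of this kind can be very small. This is only an illustrative Python sketch, not the actual Arduino code; the function name and parameters are hypothetical:

```python
def swap_times(n_vials, interval_s):
    """Times (in seconds from start) at which the collector should
    advance to the next vial, for a fixed collection interval."""
    return [i * interval_s for i in range(1, n_vials)]

# e.g. four vials, ten minutes each: advance at 600 s, 1200 s and 1800 s
schedule = swap_times(4, 600)
```

On the Arduino itself the same idea reduces to comparing millis() against the next scheduled swap time and stepping the vial carousel when it passes.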

I completely agree with Ley about the benefits of flow chemistry, but you can’t ignore that all this equipment costs money. Ley’s group use a lot of commercially available equipment and it’s not cheap. In my group, we build a lot of apparatus ourselves because we can tailor it to our needs and it’s a lot more “hackable” (as well as cheaper).

Someone in the audience tried to make the point during questions that funding is tight, especially for those working in organic synthesis. How are they meant to afford equipment like £40,000 inline infrared spectrometers? Ley didn’t really answer this question (and I’m not sure he can). He’s obviously very well funded so he can build and develop the “lab of the future” [1]. A lot of this technology might be out of the budget of the chemists who will benefit from it the most. Unfortunately they might be performing “routine operations” for some time to come.

[1]: M. D. Hopkin, I. R. Baxendale and S. V. Ley, Chim. Oggi/Chemistry Today, 2011, 29, 28–32.

Details matter

Blog Syn is a new chemistry blog where chemists post their attempts to reproduce reactions from the literature. Each post starts with the following disclaimer:

The following experiments do not constitute rigorous peer review, but rather illustrate typical yields obtained and observations gleaned by trained synthetic chemists attempting to reproduce literature procedures…

I disagree completely. What could be more rigorous than actually trying a reaction?

So far there are three posts. The first gave a lower yield than reported. The second was “moderately reproducible”. The paper omitted details essential to the reaction’s success. The third was “difficult to reproduce” and is well worth reading—there’s a great response from one of the authors, Prof. Phil Baran.

It’s unacceptable for anyone to publish a paper without all the information necessary to replicate the results. It wastes researchers’ time and money. I’ve written before about my difficulties trying to replicate results. It’s infuriating. How do papers like this slip through peer review?

I suspect some authors don’t really know why a reaction gives a particular product, especially in nanoparticle synthesis. They manage to pull something off a few times and publish their findings, but (unknowingly) neglect parameters crucial for other researchers to be able to reproduce it. It could be something seemingly trivial, like the method used to wash the glassware. The next researcher does it differently because it’s not mentioned in the paper and gets a different result.

The only way to deal with this is for reviewers to demand thorough experimental sections. (But to do so they must have a good understanding of typical experimental procedures. This is a problem if your reviewer hasn’t been in the lab for years.)

An alternative scenario could be that the researchers, in the early stages of the work, find that doing X doesn’t work. Later they find doing Y does work. Y gets published. X stays in the laboratory notebook.

X is a negative result. On its own, it’s not very useful. Loads of attempted reactions don’t work. But in the context of the positive result (i.e. the paper) the negative result is actually very valuable to anyone who wants to repeat the paper. Serious consideration should be given to including them in the supplementary information.

Experimental methods are grossly oversimplified. We like things to be elegant and simple, but chemistry is complicated. There’s no excuse not to include more information because everything is published online and space constraints aren’t a problem.

Blog Syn shows that subtleties in chemistry are important. We should all acknowledge that in our own papers and demand that others do the same.

Tools and technologies for researchers

The Library at Imperial run a course called Blogs, Twitter, wikis and other web-based tools. They asked me (and also Jon Tennant) to give a quick talk to the attendees yesterday on the things I use to do my work.

Rather than give a slide-based presentation I decided the best thing to do was give a demo. I quite like mind mapping to help me structure ideas so I made one for this, with links to websites where appropriate. You can download a PDF of the mind map here.

It’s split into two halves: the tools that I do use, categorised into “inputs” (e.g. Twitter and RSS) and “outputs” (e.g. Google Drive), and those that I don’t, with some short reasons why. If you’re interested in trying some of this out, give one or two a go and see if you find them useful. If you use something that I haven’t mentioned, let me know in the comments.

Microwave heating: still nothing special

For many years there has been debate over whether there is a specific microwave effect on chemical reactions or if it’s just a thermal effect. A couple of years ago I took a lecture course on microwave and ultrasound chemistry. The course covered a few papers on the existence of a microwave effect and concluded that there isn’t anything special going on—microwaves just give very efficient and fast heating compared to normal convective heating in an oil bath or DrySyn block.

I found the course particularly interesting, so whenever I see a paper on the subject I at least read the abstract to see if anything has changed. Angewandte Chemie have recently published a paper titled Microwave Effects in Organic Synthesis—Myth or Reality? (DOI: 10.1002/anie.201204103) by C. Oliver Kappe, Bartholomäus Pieber, and Doris Dallinger.

They looked at two recently published papers that allegedly found a specific microwave effect. Both claimed microwave irradiation significantly enhanced the reaction rate or yield in a way that couldn’t be replicated by regular heating to the same temperature.

Summarising a few pages: Kappe et al. couldn’t replicate the findings and argue that the problem lies in poor temperature measurement. To test the existence of a specific (non-thermal) microwave effect you need to run the same reaction twice at the same temperature, once with microwaves and once with conventional heating (e.g. an oil bath).

However, the researchers who report a microwave effect use external infrared temperature probes, which record a lower temperature than that of the bulk reaction mixture. Since microwaves heat more efficiently than conventional heating, the two vessels are not in fact at the same temperature, and the microwave reaction gives a higher yield. Instead you must use fibre optic temperature probes placed inside the reaction vessels. Doing this eliminates any microwave-specific effect. To quote:

Importantly, we firmly believe that the existence of genuine nonthermal microwave effects is a myth, as all our attempts to verify these often claimed “magical” microwave effects during the past decade have failed.

It’s a good read and, I think, a nice example of science at its best. I’m also glad I read it because a colleague and I had, for some reason, been looking at getting a microwave flow reactor—which would be completely pointless, as all the benefits of microwaves in batch chemistry (high pressures and homogeneous heating) can be readily achieved in flow using normal convective heating. If anyone could tell me why such an apparently pointless bit of kit exists, I’d like to know…

Reference: C. O. Kappe, B. Pieber and D. Dallinger, Microwave Effects in Organic Synthesis—Myth or Reality?, Angewandte Chemie International Edition, 2012. DOI: 10.1002/anie.201204103.

Conference talks: generally a bit rubbish?

Athene Donald recently wrote about what you don’t see at academic conferences. Academics may go to conferences in exotic places but they only see the inside of conference centres, hotels, airports and restaurants.

In the last year I’ve only been to two conferences. Unfortunately neither of them were in exotic places. The first was in York and I went with a few people from my group. As none of us are especially well-known in our field we, unlike Athene, had the freedom to explore York in the evenings. The second was held at Imperial and attendance was compulsory for DTC students. They were both small (no parallel talks) and lasted two days.

The speakers at both conferences, with the exception of one or two each day, were incredibly uninspiring and unenthusiastic. I remember trying to fall asleep one afternoon in York after nearly exhausting my iPhone battery reading papers. I was very disappointed as I had hoped to come back with fresh ideas but instead felt that it was a massive waste of time and money.

How can people talk so blandly about their own work? If the speaker isn’t excited by it then they most certainly can’t expect the audience to be interested. Many talks didn’t have any questions—the presentation equivalent of a death knell.

How have we ended up in this situation? I find it particularly baffling when I think about talks given by PhD students in my DTC. Recently we had a day with industry sponsors and visitors from other universities to listen to some third and final year PhD students present their work. The presentations were largely fantastic. Enthusiastic, confident, engaging, interesting… Really very good. Last month my cohort gave our MRes talks and the comments from markers were (nearly) all positive too. A world apart from the dreary, mind numbing talks I’ve sat through at my last two conferences.

Perhaps I’m overreacting, but I’ve really been put off going to anything other than something massive like the MRS conference where there will always be something related to my field and hence tolerable, even if the speaker is a bit tedious.

Does anyone else find most talks bad too? Are good talks unfortunately the exception? On the positive side, at least I’m at the beginning of my career so I can follow Athene’s advice, especially for my next trip to Italy in April:

Early career researchers, don’t kid yourself your professors enjoy themselves on such trips by seeing all the sights of the world you’ve always wanted to see yourself. Chances are, if you get to visit some far-flung place for a conference, you will enjoy your trip much more than your seniors because you live your life at a more leisurely pace. Make the most of it!

The way-it-should-be-ness

The BBC have published an audio slideshow called Chair Champions on Charles and Ray Eames, designers best known for their furniture. The Eames Lounge Chair is probably their most famous work.

I like well-designed things. Not in the sense that they look a particular way, but that they fulfil a specific function extremely well. The couple designed objects that were both functional and stylish. The following from the end of the slideshow has stuck in my mind:

“Charles and Ray had this idea that good designs had ‘way-it-should-be-ness’. If something was really well-designed, then the idea of it being designed shouldn’t come up at all.”

I love this idea of ‘the way-it-should-be-ness’. In my own work, I try to find solutions to problems that are elegant. I want my solutions to have ‘way-it-should-be-ness’. Writing my MRes report led me to reflect on the last year and I’ve noticed that this desire to get the perfect solution has actually been a bit of a hindrance.

I spent far too long sat at my desk searching the literature for the best solution. When I finally settled on a plan, it was a bit of a long shot. If it worked, it really would have been awesome. But it didn’t. The paper that I based my idea on was almost certainly suspect.

Around the time I was working on the dodgy reaction I read Tim Harford’s Adapt: Why Success Always Starts with Failure. It’s quite good. He’s like a better Malcolm Gladwell. Harford summarises the way Russian engineer Peter Palchinsky, who ended up being executed by the Soviet government for criticising them, solved problems as three ‘Palchinsky principles’:

  1. Seek out new ideas.

  2. When trying something new, do it on a scale where failure is survivable.

  3. Seek out feedback and learn from your mistakes as you go along.

I like them. I do the first, but my problem lies with the other two. Over the last year everything depended on this one reaction—a risky, naive strategy. There was little feedback and refinement. I ended up rushing another reaction towards the end so that my report ended on a high note. After all, no one likes a sad thesis.

I bet Charles and Ray Eames didn’t come up with their objects overnight. There must have been hundreds of drawings and prototypes of the Eames Lounge Chair, but it’s easy to forget them as you only think of the final product. They probably worked in a similar way to Palchinsky.

Now I’m making an effort to work more iteratively. I still think of rather grand ideas, but instead of going for it in one enormous optimistic leap, I’m working towards them bit by bit, in a process of steady refinement.

I’ve already had some success working in this way last week. It gives a much more positive mindset too. Hopefully I’ll soon have my own chemistry equivalent of the Eames Lounge Chair and after refining it down I’ll look at it and think “yep, that’s the way it should be.”

Sentences were written

Yesterday I was discussing a draft of my MRes thesis with my supervisor and one of my questions was whether, in a few particular cases, I should write in the active or passive voice, and if I do use the active, should I use the pronoun we or I?

Active or passive?

The sentence I had written was (key bit emphasised):

Despite Bloggs et al.’s description of the growth precursor as “extraordinarily stable”, I found that the growth precursor formed a fine red-brown precipitate within approximately 30 min of being loaded into a syringe, blocking the syringe outlet.

To begin with please ignore whether you think that’s a good sentence or not (I’ve read it so many times I’m beginning to think the word order is completely wrong).

It’s written in the active voice: I (the subject) found (the verb) that something was the case. I chose to use the active because I want to make it clear that I found that the precursor was unstable, in disagreement with what some other researchers found. I also tend to choose the active voice because it’s more direct; the passive can feel rather viscous and verbose. I’m often told to make it easy for the reader.

I could take out I:

Despite Bloggs et al.’s description of the growth precursor as “extraordinarily stable”, the growth precursor formed a fine red-brown precipitate within approximately 30 min of being loaded into a syringe, blocking the syringe outlet.

It’s readable, but I don’t like it because it’s slightly ambiguous.

In the passive voice (I think—this just sounds incredibly weird to me so I could be wrong):

Despite Bloggs et al.’s description of the growth precursor as “extraordinarily stable”, the growth precursor was found to form a fine red-brown precipitate within approximately 30 min of being loaded into a syringe, blocking the syringe outlet.

Both ambiguous and horrible. Hence I chose the first option: active and I.

I, we—no one?

But the problem now is the dreaded I. It does sound a bit schoolboyish. We is used in scientific writing all the time, but I—shudder—never, because science isn’t conducted by individuals, but by groups. In fact, no, not even groups, but by the whole scientific establishment. No one does science, science does itself! Hmm… But ignoring that, I does make me cringe a bit.

Putting we in place of I is significantly less cringeworthy but completely nonsensical. It always amazes me to see a single author paper start with “We [verb]…”. Is I really that repulsive? Is it meant to give an illusion of absolute truth? Perhaps it’s meant to say “this is not my opinion, it’s scientific fact”, but there’s always a personal element in science and to pretend it isn’t there is ludicrous and delusional.

So I’ve got to decide whether to stick with I or not. It’s only my MRes thesis not my PhD thesis, but I still care about the details. I’m definitely not writing we. My supervisor was told by his supervisor to do a find on “we” and replace with “I”. I’m leaning towards I because it’s concise and unambiguous about what was my work (apparently it’s good to show in an MRes that you’ve done a good amount of work) and, whilst it may be a bit cringeworthy, it is definitely the easiest to read. Hopefully the mysterious anonymous marker will agree.

Friday Night Experiments

Tonight I watched a BBC documentary about Nobel Laureate Andre Geim. Each episode of Beautiful Minds (am I alone in thinking the title is a little bit naff?) covers the story behind the success of a particular scientist. Geim is a really interesting character and I recommend watching it.

One part I found particularly inspiring. He attributes a lot of his success to “Friday Night Experiments”, during which he does some quick experiments to try out new, more adventurous ideas. It was during one of these that he discovered that you can use scotch tape to mechanically exfoliate graphene from graphite.

Obviously there were loads of unsuccessful Friday nights before the discovery of graphene. He went on to say that the most important thing to remember is to know when to cut your losses and try something else. By trying out lots of new ideas every week, seeing what doesn’t work and what is promising, he has made some great breakthroughs in a wide range of fields.

I can see how for a postgraduate it is easy to become obsessed with getting a particular experiment to work or become completely blinkered on a particular sub-sub-sub-area of a discipline. You are meant to work really hard on a particular area in a PhD after all. But rather than working solely on one approach to my research, Geim has inspired me to get in the lab and try out some of my slightly more adventurous ideas every now and then. Most probably won’t work, but one might.

Teach Children to Code

I read a fair few tweets last night on the subject of teaching children to program in school. A lot of the discussion appears to have been prompted by Ben Goldacre’s link to a post by programmer/author John Graham-Cumming supporting a petition entitled “Teach Our Kids to Code”. The petition argues that we should teach kids to program from Year 5 (9–10 years old). Definitely! Just as I was about to sign the petition this morning I saw a tweet by Mark Henderson, The Times Science Editor, saying that David Willetts MP had just announced a pilot programme to teach programming in schools! Great stuff.

David Willetts has just announced pilot programme to teach schoolchildren coding & to develop a programming GCSE.

11:23 AM Thu Sep 15, 2011

I was about 10 years old when my parents bought our first computer. They had saved up for a long time and I was so excited about it. I remember the day we got it very clearly. It was a Compaq Presario with a 2.2 GB hard drive, 64 MB of RAM and a 600 MHz Celeron CPU. The power! If I wasn’t out on my bike with friends you could find me endlessly fiddling with, breaking and then fixing the computer (all whilst trying to hide from Dad the fact that I had broken it—I had to fix it, otherwise they were going to be pretty angry/worried that I had broken their expensive new PC).

At a young age I was a logical thinker and quickly became computer literate, teaching myself HTML and then Javascript and PHP. If I, by no means a “child genius”, can work it out on my own from Internet resources then there is no reason why other children couldn’t learn to program in school with some good teaching. I think a 10 year old could easily cope with logical statements such as “if this then that” or “while this do that”—they naturally think like it in everyday life, they just need help to translate it into the formal instructions a computer understands. Programming is fun and intellectually satisfying, much more so than the ridiculous ICT “lessons” that I used to have: open a Word document, copy some text, print, make a change, print again. Completely pointless.
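
Those everyday conditional and loop statements map almost directly onto code. A minimal Python illustration (the scenario is invented):

```python
# "If this then that": choose an action based on a condition
raining = True
if raining:
    plan = "stay in and program"
else:
    plan = "go out on the bike"

# "While this do that": repeat an action until a condition changes
pages_read = 0
while pages_read < 5:
    pages_read += 1  # read one more page
```

Nothing here is beyond a 10 year old; the hard part is the formality, not the logic.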

It’s a useful skill too. Being able to program has been really handy for me at university. Last year I recorded probably around a hundred absorption and emission spectra (and maybe thousands using an automated system), which would have been impossible to analyse using Excel, the standard tool of choice amongst undergrads in my department. A bit of code in MATLAB and you can analyse as much data as your computer can cope with. For some reason my department didn’t teach a programming language like other departments such as Physics, who taught C++. Instead we had “maths lab” where we used Excel for numerical methods. Not very useful (and very dull). Solving Project Euler-style problems with something like MATLAB or, even better, a proper, open source, high level language like Python (with SciPy and matplotlib) would be much more useful. I’ve been learning it myself over the summer. It’s a fun language that I’d love to teach to a class of undergraduate chemists.
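
As a sketch of what that batch analysis looks like in Python, here the peak position is extracted from every spectrum in one pass. The data and sample names are made up; real spectra would be loaded from instrument files (e.g. with numpy.loadtxt) rather than written inline:

```python
def peak_wavelength(spectrum):
    """Wavelength (nm) at which absorbance is highest.
    spectrum: list of (wavelength_nm, absorbance) pairs."""
    wavelength, _ = max(spectrum, key=lambda point: point[1])
    return wavelength

# Hypothetical data: in practice each spectrum comes from a file.
spectra = {
    "sample_01": [(400, 0.10), (450, 0.82), (500, 0.35)],
    "sample_02": [(400, 0.05), (450, 0.40), (500, 0.91)],
}
peaks = {name: peak_wavelength(data) for name, data in spectra.items()}
```

The same loop scales to thousands of files, which is exactly the step that becomes impractical in Excel.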

I think it’s clear that school children would benefit from being able to program. Even if they never use code again, they would gain an understanding of how a computer functions and can then use this knowledge to work out how new software works. Rather than teach specific software applications, teach computing. It’d benefit industry too. Fingers crossed that the government doesn’t force schools to teach a horrible proprietary language outsourced as a “solution” on a ludicrously expensive contract and instead chooses something open source and useful. A recent report about open source in Whitehall doesn’t bode well…