Lots of tweets on the subject of great reads in the run-up to Christmas and, reflecting my propensity for following science, most have been science-flavoured. At the start of October this year I came across an article in the Guardian on a new translation of Herodotus’ Histories.
This is my Christmas read and I am extremely impressed. I knew of Herodotus, but had never read his work. Though not without controversy in the ancient and modern worlds, there is no doubt that he presents evidence together with its sources, and often weighs up the quality of that evidence. I find this refreshing, because in science we now seem to have drifted into territory where the quality of the data is often ignored and the conclusion, regardless of the quality of the data, is all. The truth is the opposite: data are everything, though truth remains awkward at the best of times.
A while back in a journal club we went through an excellent paper of Moosa Mohammadi’s in the Journal of Biological Chemistry on the structural basis of the promiscuity of fibroblast growth factor 1 (FGF1) binding to FGF receptors. The authors used improved protein crystals, which diffracted better, to show that their previous model was incorrect and to propose a new model involving the N-terminus of FGF1. The new model fits with other structural data, some of it from their own lab, for complexes of FGF1 with different FGF receptors.
This paper is interesting to far more than the FGF community. Why? Because improved data that challenge a previous model are used to develop a new model. This is how science should work and why understanding and acknowledging the quality of data is so important.
Julian Stirling’s paper on stripy nanoparticles (guest post on Raphaël’s blog) provides another insight into why data and data analysis are key. The stripy nanoparticle controversy has been rumbling for some time, and in full view since Raphaël’s paper “Stripy nanoparticles revisited” was published over a year ago in Small (post and link to paper here). In the past 13 months there has been a fair amount of commentary, and some further papers on the subject have been published by the Stellacci group and colleagues. What is interesting, in terms of the reproducibility of science, is that these latter papers are best described as “more of the same”. That is, there is a continued affirmation of the conclusion, but little critical evaluation of data, data quality and data analysis. This is the opposite of what Moosa Mohammadi does in his paper in the Journal of Biological Chemistry. It is also the opposite of what Julian Stirling and colleagues do in their paper.
So what happens next? Given that the Stirling paper is entirely open access, from data to code, one would hope that this would close the debate, or at least provide the tools to do so. However, I am less sanguine, because science does have a reproducibility problem and the stripy nanoparticle controversy provides an excellent case study to illustrate the point.
Journals are generally loath to take any sort of action; plenty of examples are documented on Retraction Watch, and this article in The Chronicle of Higher Education summarises some of the issues. Indeed, it took a lot of effort to get journals to acknowledge some duplicated images, which were sometimes presented as describing different experiments (see one example here). Moreover, as noted in various studies on retractions, and frequently discussed at Retraction Watch, even retracted papers seem to live on, being cited well after their retraction; the scientific community is also rather complacent.
So science does have a problem with reproducibility. I am not certain that a reproducibility initiative that sets out to test a small number of papers is of much use: it can only target a few papers, so, for a start, it faces a big problem with small numbers. Leaving matters in the hands of editors was suggested as the solution in an October editorial at ACS Nano. The issue with this “solution” is that the status quo is not working; this is discussed at length in an excellent post at Chembark.
To conclude, an important aspect of reproduction is that it is not necessarily actual reproduction, but a re-examination of observations with better or alternative methods and/or reagents. I would not call this progress (consider the approach of Herodotus, a historian of the 5th century BC), but simply a cornerstone of science. To do otherwise is not to engage in science, but in some other activity. We also need to remember that a paper is the start, not the end; peer review is continuous. In this light, the opening up of comments on PubMed to the great unwashed is a step in the right direction, though the lack of anonymity means that these comments will be limited. PubPeer performs an excellent service, and the stripy nanoparticle controversy has started to feature there too (here).
Time to return to Herodotus’ most interesting take on Croesus and Solon of Athens.