An interesting discussion has arisen on science and the humanities, sparked by Steven Pinker’s essay in New Republic. Personally, I side with Massimo Pigliucci.
Indeed, my initial reaction to Steven Pinker’s essay was that science has a long way to go before it can explain (if that is even possible; the problem may require more computation than is available in the universe) as much about the human condition as Jaques’ soliloquy in As You Like It.
A far more pertinent exercise is the podcast featuring See Arr Oh of the Just Like Cooking blog, ChemJobber of the eponymous blog, and Stuart Cantrill on plagiarism and how one journal (Stuart Cantrill is editor of Nature Chemistry) handles the editorial process.
Plagiarism here is taken as the unattributed reproduction of text, passing it off as one’s own.
The podcast is here, though it is useful to listen to an earlier podcast between See Arr Oh and ChemJobber here.
The focus on text plagiarism is interesting – it certainly does happen and Stuart Cantrill considers that Review articles are more prone to the problem. But focusing on text plagiarism is rather similar to a government investigation into a major politically charged issue: dig very deeply and thoroughly where you know there are just a few skeletons. Data are where the real problems lie.
Some of the problems with data are:
Re-use of data between papers, sometimes describing different experiments.
Manipulating data, e.g., splicing data from different experiments together in one figure panel. Image data appear to be very prone to this, but perhaps other data are equally prone, but the problem is more difficult to detect without the original numbers? How easy is it to make up a spectrum? It would certainly be easy to edit one.
We have few tools and these are not generally used. It would be easy to run all images through software to detect manipulation, but as far as I know, only the Journal of Cell Biology does this.
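To illustrate how cheap such screening can be, here is a purely illustrative sketch (not any journal’s actual tool) of a naive copy-move check in Python: it flags byte-identical blocks within a greyscale image, one crude signature of copy-paste manipulation. The function name and toy data are my own, and real forensic tools are far more sophisticated (they must handle compression noise, rotation and rescaling).

```python
# Naive copy-move detector: flags exact duplicate 8x8 blocks within a
# greyscale image, here represented as a list of lists of pixel values.
# Illustrative only -- real screening must cope with compression noise.

from collections import defaultdict

def find_duplicate_blocks(image, block=8):
    """Return groups of (row, col) positions whose block x block pixel
    regions are identical, a crude signature of copy-paste."""
    seen = defaultdict(list)
    rows, cols = len(image), len(image[0])
    for r in range(0, rows - block + 1, block):
        for c in range(0, cols - block + 1, block):
            key = tuple(tuple(image[r + i][c + j] for j in range(block))
                        for i in range(block))
            seen[key].append((r, c))
    return [positions for positions in seen.values() if len(positions) > 1]

# Toy example: a 16x16 "image" where one 8x8 patch has been pasted twice.
img = [[(r * 31 + c * 17) % 256 for c in range(16)] for r in range(16)]
for i in range(8):
    for j in range(8):
        img[8 + i][8 + j] = img[i][j]  # paste the top-left block bottom-right

print(find_duplicate_blocks(img))  # reports the duplicated pair of blocks
```

A check like this runs in milliseconds per image, which is rather the point: the obstacle to routine screening is will, not technology.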
Stuart Cantrill posited that everything gets corrected as evidence accumulates: “we should do our best to try and catch it [plagiarism] but I think we should accept the fact that no system is foolproof, but I don’t think we should really get ourselves worked up if something does slip through the system and does get missed because by-and-large the scientific literature is self-correcting for the most part”. A supporting assertion he used is that the vast majority of journals are concerned about their reputation.
The evidence points to the contrary and I have posted about this before.
A recent example from the literature is
“Chopstick Nanorods” Anumolu et al., Nano Letters DOI: 10.1021/nl400959z, where the TEM images have clearly been manipulated. This has been blogged about at length by others here and here.
We can add to this the comment in the SI from the PI that found its way, inadvertently, into the published article and has since gone viral. The note is suggestive of pressure on one of the authors to “make up” an elemental analysis.
So what happens next?
Contrary to Stuart Cantrill’s assertion, not much. There are many fine words, this list of editorials from Nature being but one example, but very little action.
The posting on “Chopstick Nanorods” was subjected to some legal pressure from the University of Utah. This has a familiar whiff: recall what happened to Science Fraud, and the false DMCA takedown that Retraction Watch was subjected to?
According to a tweet from Richard van Noorden (@Richvn), corrections remain steady at ~0.75% to 1%, though this includes “trivial” ones. Retractions, on the other hand, are up 10-fold and continue to rise.
Am I too cynical? No: read David Vaux’s excellent guest post on Retraction Watch, where he catalogues his efforts to get an article, for which he had written the News and Views, retracted. In the end all he could do was retract his own News and Views; the offending article still stands.
The extensive comments are worth reading too. Michaelhbriggs summarises one important aspect of the problem; I quote:
“…and the head of the institute and the head of the US group were appointed to the editorial boards of high impact factor journals. … and so it goes on”
The problem, put simply, is conflict of interest. Not surprisingly, science developed a system, peer review, to guard against this, but no defense is absolute. Add in a hefty dose of profit motive in publishing (journals, impact factors, “chasing the splash”) and ego in PIs (“chasing the splash” again, plus a notion of self-importance added to “lifestyle”), and the unavoidable result is a degree of corruption.
Various solutions have been discussed, including a “reproducibility index”.
I would argue that peer review is the only “regulator” required. However, the very notion of peer review is that it is a continuous process, so the peer review that occurs prior to publication is but the first step. What should follow is peer review by the wider community, and it is this step where we have a problem. Science is incredibly uncritical, perhaps because science students do not take courses in philosophy and logic (some intriguing thoughts on what are “useful” courses for a science student to take here), or because people are too anxious to climb some notional career ladder.
Continuous peer review is made really easy by the internet. A variety of sites have sprung up to meet this need.
At the very end of the process we have Retraction Watch, which catalogues retractions. At earlier stages we had sites with sharper elbows, born, perhaps, out of frustration at the mountain of misconduct: Abnormal Science and Science Fraud, both since closed. We now have a more mature forum, PubPeer, which can be considered their successor. There is a lot of very enlightening discussion between peers and authors on the site. Using Chrome, you can link through from PubMed, so that when you access a paper in a PubMed search you also see what is up on PubPeer.
PubPeer does work, see here.
The “faceless judges” are having an effect: internet-enabled post-publication peer review is most likely the reason for the rise in retractions.
The weakness of PubPeer?
There is no pressure on authors, journals or institutions to engage. This is clear from my recent reading (Chrome can link PubMed searches to PubPeer), where I came across a fascinating paper on nanoparticles with a fair number of comments from various peers but nothing back from the authors, though it is August so they may be on holiday.
I would suggest that, with a few more steps, we will come closer to the aspiration of self-righting science.
1. Readers – yes YOU – use PubPeer to ask questions, don’t be shy.
2. Authors have an obligation to respond if they are to continue to publish or obtain grants. A corollary is that grant agencies and journals should cross-check CVs against PubPeer (or its successor) comments.
3. Journals have to respond proactively, rather than defensively, as they have done hitherto. This would mean linking articles to PubPeer comments, and it ties into the Open Access and Open Data debates.
4. Institutions have to respond.
This way we would have a real debate and a sifting mechanism to understand, collectively, the messiness of science and grope our way to a better understanding. Science, as Stephen Curry noted in a post, is messy. I would agree.
Update on 3 Nov 2013