An interesting discussion has arisen on science and the humanities, sparked by Steven Pinker’s essay in New Republic. Personally, I side with Massimo Pigliucci.
Indeed, my initial reaction to Steven Pinker’s essay was that science has a long way to go before it can explain as much about the human condition as Jaques’ soliloquy in As You Like It (if such an explanation is even possible: the problem may require more computation than is available in the universe).
A far more pertinent exercise is the podcast featuring See Arr Oh of the Just Like Cooking blog, ChemJobber of the eponymous blog
and Stuart Cantrill, editor of Nature Chemistry, on plagiarism and how one journal deals with it in the editorial process.
Plagiarism here is taken as the unattributed reproduction of text, passing it off as one’s own.
The podcast is here, though it is useful to listen to an earlier podcast between See Arr Oh and ChemJobber here.
The focus on text plagiarism is interesting – it certainly does happen and Stuart Cantrill considers that Review articles are more prone to the problem. But focusing on text plagiarism is rather similar to a government investigation into a major politically charged issue: dig very deeply and thoroughly where you know there are just a few skeletons. Data are where the real problems lie.
Some of the problems with data are:
Re-use of data between papers, sometimes describing different experiments.
Manipulating data, e.g., splicing data from different experiments together in one figure panel. Image data appear to be very prone to this, but perhaps other data are equally prone, but the problem is more difficult to detect without the original numbers? How easy is it to make up a spectrum? It would certainly be easy to edit one.
We have few tools and these are not generally used. It would be easy to run all images through software to detect manipulation, but as far as I know, only the Journal of Cell Biology does this.
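By way of illustration only (no journal’s actual screening pipeline is assumed here, and real forensic tools are far more sophisticated), one simple automated check is to flag exactly repeated blocks within an image: detector noise should make every region of a genuine micrograph unique, so identical non-uniform patches are suspicious. A minimal sketch in Python:

```python
import hashlib
import numpy as np

def find_duplicate_tiles(img, tile=16):
    """Return coordinate pairs of exactly repeated tile x tile blocks.

    img: 2-D numpy array of greyscale pixel values.
    Flat (zero-variance) blocks are skipped, since uniform
    background legitimately repeats.
    """
    seen = {}    # hash of block bytes -> first (y, x) seen
    dupes = []
    h, w = img.shape
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            block = img[y:y + tile, x:x + tile]
            if block.std() == 0:          # skip flat background
                continue
            key = hashlib.sha1(block.tobytes()).hexdigest()
            if key in seen:
                dupes.append((seen[key], (y, x)))
            else:
                seen[key] = (y, x)
    return dupes
```

Exact hashing only catches verbatim copy-and-paste; a serious tool must also handle rescaled, rotated and recompressed copies, which is why this is a sketch of the idea rather than of any deployed system.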
Stuart Cantrill took a sanguine view of what slips through: “we should do our best to try and catch it [plagiarism] but I think we should accept the fact that no system is foolproof, but I don’t think we should really get ourselves worked up if something does slip through the system and does get missed because by-and-large the scientific literature is self-correcting for the most part”. A supporting assertion he used is that the vast majority of journals are concerned about their reputation.
The evidence points to the contrary and I have posted about this before.
A recent example from the literature is
“Chopstick Nanorods”, Anumolu et al., Nano Letters DOI: 10.1021/nl400959z, where the TEM images have clearly been manipulated. This has been blogged about at length by others here and here.
We can add to this the PI’s comment in the Supporting Information that found its way, inadvertently, into the published article and has since gone viral. The note suggests that one of the authors was under pressure to “make up” an elemental analysis.
So what happens next?
Contrary to Stuart Cantrill’s assertion, not much. There are many fine words (this list of editorials from Nature being but one example)
but very little action.
Consider:
The posting on “Chopstick Nanorods” was subjected to some legal pressure from the University of Utah. This has a familiar whiff: recall what happened to Science Fraud, and the false DMCA takedown notice served on Retraction Watch?
According to a tweet from Richard van Noorden (@Richvn), corrections remain steady at ~0.75% to 1%, though this includes “trivial” ones. Retractions on the other hand are up 10-fold and continue to rise.
Am I too cynical? No: read David Vaux’s excellent guest post on Retraction Watch, where he catalogues his efforts to have an article retracted for which he had written the accompanying News and Views. In the end all he could do was retract his own News and Views; the offending article still stands.
The extensive comments are worth reading too. Michaelhbriggs summarises one important aspect of the problem; I quote:
” …and the head of the institute and the head of the US group were appointed to the editorial boards of high impact factor journals.
… and so it go on”
The problem, put simply, is conflict of interest. Not surprisingly, science developed a system, peer review, to mitigate it, but no defence is absolute. Add in a hefty dose of profit motive in publishing (journals, impact factors, “chasing the splash”) and of ego in PIs (“chasing the splash”, a notion of self-importance, and “lifestyle”), and the unavoidable result is a degree of corruption.
Various solutions have been discussed, including a “reproducibility index”.
I would argue that peer review is the only “regulator” required. However, the very notion of peer review is that it is a continuous process, so the peer review that occurs prior to publication is but the first step. What should follow is peer review by the wider community, and it is at this step that we have a problem. Science is incredibly uncritical, perhaps because science students do not take courses in philosophy and logic (some intriguing thoughts on what are “useful” courses for a science student to take here), or because people are too anxious to climb some notional career ladder.
Continuous peer review is made really easy by the internet. A variety of sites have sprung up to meet this need.
At the very end of the process we have Retraction Watch, which catalogues retractions. At earlier stages we had sites with sharper elbows, born, perhaps, out of frustration at the mountain of misconduct: Abnormal Science and Science Fraud, both since closed. We now have a more mature forum, PubPeer, which can be considered their successor; there is a lot of very enlightening discussion between peers and authors on the site. Using Chrome, you can link through from PubMed, so that when you access a paper in a PubMed search you also see what is up on PubPeer.
PubPeer does work, see here.
The “faceless judges” are having an effect: internet-enabled post-publication peer review is most likely the reason for the rise in retractions.
The weakness of PubPeer?
There is no pressure on authors, journals or institutions to engage. This is clear from my recent reading (Chrome can link PubMed searches to PubPeer), where I came across a fascinating paper on nanoparticles with a fair number of comments from various peers but nothing back from the authors (though it is August, so they may be on holiday).
I would suggest that, with a few more steps, we would come closer to the aspiration of self-righting science.
1. Readers – yes YOU – use PubPeer to ask questions, don’t be shy.
2. Authors have an obligation to respond if they are to continue to publish or obtain grants. A corollary is that grant agencies and journals should cross-check CVs against PubPeer (or its successor) comments.
3. Journals have to respond proactively, rather than defensively, as they have hitherto. This would mean linking articles to PubPeer comments, which in turn links to the Open Access and Open Data debates.
4. Institutions have to respond.
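The cross-check suggested in point 2 need be no more elaborate than intersecting the DOIs on a CV with DOIs known to carry comment threads. A hypothetical sketch (no live PubPeer API is assumed; the comment list would come from whatever export a funder or journal could obtain):

```python
def flag_commented_papers(cv_dois, commented_dois):
    """Return the subset of an applicant's DOIs that have open
    post-publication comment threads, preserving CV order.

    cv_dois: list of DOI strings taken from the CV.
    commented_dois: DOI strings known to have comment threads.
    DOIs are matched case-insensitively, as DOI comparison
    is case-insensitive by convention.
    """
    commented = {d.lower() for d in commented_dois}
    return [d for d in cv_dois if d.lower() in commented]
```

A funder could then ask the applicant to respond to each flagged thread before the application proceeds, which supplies exactly the pressure to engage that is currently missing.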
This way we would have a real debate and a sifting mechanism with which to grasp, collectively, the messiness of science and grope our way to a better understanding. Science, as Stephen Curry noted in a post, is messy. I would agree.
Update on 3 Nov 2013
I don’t think I said that ‘everything gets corrected as evidence accumulates’… I’ve just had another quick listen to what I think are the relevant bits of the podcast, and unless I’ve missed the offending parts (and I don’t think I would have said that because I don’t agree with it), this is what I actually said:
“…we should do our best to try and catch it [plagiarism] but I think we should accept the fact that no system is foolproof, but I don’t think we should really get ourselves worked up if something does slip through the system and does get missed because by-and-large the scientific literature is self-correcting for the most part…”
“…I’m sure not every example [of plagiarism] is caught but with the world being what it is now there is actually what seems to be a fairly good mechanism in place to catch the naughty people and as long as those situations are dealt with satisfactorily after the case then the record gets corrected and we should all just carry on and not be too worried about it. I guess the problem is when things don’t necessarily get corrected the way that perhaps people think they should…”
“…the vast majority of journals are really quite worried about their reputation and if they were seen to be not taking action against someone who had published a plagiarising paper in their journal I think that would reflect very badly on that journal… …I imagine they [sanctions] are applied, I’m not going to say in every single case, but I imagine they are applied in the majority of cases…”
I may well have said stupid stuff elsewhere in the podcast, but I don’t think what is written in the blog post accurately reflects what I did say (your readers can go and listen for themselves and make up their own minds). As for focusing on plagiarism, that was kinda the point of the podcast – it was never our intention to get into fraud/misconduct/Photoshop… we were talking for over 2 hours as it was!
This is not to say that you don’t make some very good points in your post – you certainly do; I just disagree with how my comments have been interpreted.
Stuart, thanks – sloppiness on my part, though I have the (rather lame) excuse that it was rather noisy in the background here when I listened to the podcast. Post updated. I don’t think I noted anything stupid at all in the podcast – one of the best ones I have listened to!
The “Chopstick Nanorods” paper, Anumolu et al., Nano Letters DOI: 10.1021/nl400959z, has been retracted:
http://retractionwatch.wordpress.com/2013/08/16/nano-letters-retracts-chopstick-nanorod-paper-questioned-this-week-on-chemistry-blogs/