An issue that regularly raises its head at Retraction Watch is the balance between corrections and retractions. The discussions that follow the excellent posts on Retraction Watch often revolve around the different reaction of journals to data re-use and/or manipulation.
One thing is certain: the reactions of journals are… uncertain. The re-use of data and data manipulation (not to be confused with data analysis, please), whether within an article or between articles, draw an inconsistent response.
Here are some examples.
1. A recent case, described in a post on Retraction Watch, where data from fig 1a were re-used in fig 3 of the article in the Journal of Molecular Medicine, resulted in a retraction. Note that although the precise details are obscure, it was the authors who requested the retraction.
2. Serial image manipulation in the Journal of Biological Chemistry resulted in a substantial correction. In this instance the authors were subjected to an “investigation” (my quote marks) by their institution, instigated after peers publicised on the internet multiple instances of such image manipulation across several of the authors’ publications. The institution did not charge the authors with misconduct. They now appear to be trying to clean up their record retrospectively. Judging from comments at various blogs, including under the Retraction Watch post, not many of their peers agreed with the exoneration or with the retroactive clean-up.
3. A Journal of Biological Chemistry article was retracted for image manipulation. The difference with the case above? The U Wisconsin researcher responsible was investigated for misconduct and formally found guilty. Then the paper was retracted.
4. A recent case in PNAS, which I posted on earlier, involved the re-use of data from one experiment in a previous Nature Materials paper for a completely different experiment in the subsequent PNAS paper.
This earned the authors a correction: they substituted a new figure, which one assumes now actually has some bearing on the experiment described in the PNAS paper.
5. The aftershocks of the Melendez scandal are a gradual retraction of his papers, now standing at nine, with the latest retracted because “Data presented in this paper have been manipulated digitally. Figures shown in this article have been replicated in other papers depicting different experimental conditions”.
I could go on, trawling the pages of Retraction Watch. There do seem to be some themes emerging.
First, if the authors request a retraction they get it. As I noted in a post a while back, there are some outstanding scientists who, when they get it wrong (we all make mistakes), take the simple step of retracting the paper. In other cases, where some form of misconduct has taken place, the authors occasionally request a retraction. Perhaps in such instances the institution has slapped their wrists hard, or they failed to properly train an inexperienced member of the team in what is and is not appropriate.
Second, if a researcher is formally found guilty of misconduct, then the papers start to fall, but not all of them, one should note. More than 18 months after the Melendez fraud came to light, only nine papers have been retracted. Diederik Stapel’s count is up to 50. In part, the slow rate of retraction may stem from the time it takes to go through each paper and establish the truth beyond reasonable doubt.
Third, corrections are issued even when data are re-used for different experiments, and that appears to be the end of the matter. In such cases there has often been no publicly acknowledged investigation of misconduct, though I would note that peers are generally unimpressed and there are continued calls for action on other parts of the research team’s oeuvre. For example, see the comments at Retraction Watch under Case 2.
Fourth, the defensiveness of institutions, journals and authors when confronted with clear evidence of data re-use and data manipulation is quite astounding. Astounding, because we are engaged in science. One of the reasons we are employed is that we ask questions. That does not fit with defensiveness. Never did, never will.
With reference to this last point, we should note that once again science is becoming a small village. While the days when the world’s entire molecular biology community could meet at Cold Spring Harbor are long gone, the internet and social media bring back important elements of that time. We can all fit into “one room” albeit a virtual one, with many conversations happening simultaneously. This leads to far greater public scrutiny of scientific output (aka papers). While no one can read every paper, someone, somewhere has read your paper, very, very carefully. If there is something they are unhappy about, we will all know about it in due course.
This leads to self-evident conclusions. You publish BECAUSE people want to hear about your work and they will read your paper. Some will read it very carefully, so if you have got something wrong it will come to light. Defensiveness, silence, and summoning the self-righteous shield of peer review are more damaging than coming out into the open.