Two recent retractions on Retraction Watch merit more than a passing mention, because they demonstrate, yet again, the wildly different and completely contradictory reactions of individuals and journals to data that turn out to be problematic. In one sense this is an update post on “Chalk and Cheese”, “Re-use of ‘stripes’”, “Correct correction?” and “Data re-use warrants correction at PNAS”.
In my post “Chalk and Cheese” I highlighted an exemplary retraction by a physicist; it is worth a re-read. Two neuroscience papers have recently been retracted by their authors after they discovered problems in their data: in one case, a substance leaching from a filter used to sterilise a protein prior to adding it to cultured cells; in the other, a microtome cutting sections thinner than it was set to. In both instances, painstaking work by the authors identified the problem, and they then requested the retraction of their papers.
This is how science works. Scientists are trained to be careful, to catalogue and record obsessively, to check and re-check. Even with such care, occasional problems creep through – this is inevitable; accidents, by definition, will happen. We should take our hats off to the authors of these papers, while redoubling our own efforts to triple-check those controls and calculations.
The very fact that we are taking our hats off illustrates the problem facing science – it often doesn’t work this way. Most retractions on Retraction Watch are not due to the authors being fooled by the minutiae of a complex experiment, but due to a lack of integrity in their data. In my posts “Re-use of ‘stripes’”, “Data re-use warrants correction at PNAS” and “Correct correction?” I pointed out the gross inconsistencies of authors and journals challenged by data lacking integrity. Some of the more recent retractions on Retraction Watch show this lack of consistency is, well, consistent, for want of a better word.
Some examples:
Aggarwal, who laughably threatened to sue Retraction Watch, now has expressions of concern issued against two of his papers.
Curi has a corrected paper retracted.
A paper is retracted due to “misgrouping of figures”.
Authors are “unable to guarantee the accuracy of some of the figures” and retract two papers.
Another inconsistency that has been highlighted recently was the lapse at an NPG journal, Nature Materials, regarding a third party’s request for the original data behind published claims by Stellacci’s group of stripes on nanoparticles. The initial response was of the “Not our problem” variety. It was only after some time and external pressure that the response changed.
It is obvious that there is a lot of face-saving going on. Data re-used for different experiments are “corrected” – that is, substituted with other data. Given the initial “error”, what confidence can we have in the cataloguing of data in that lab?
Clearly, the Thesaurus of Euphemisms is a heavily thumbed volume in some quarters. This, in my opinion, is a BIG mistake. Issuing a correction in such instances, or using terms such as “misgrouping” to explain a retraction, is a smokescreen for what must be the result of either extremely poor record keeping or an intent to deceive. In either case, journals should want to distance themselves from the authors, and institutions should be taking a hard look at the staff concerned. Instead, coining yet another euphemism reduces, rather than enhances, their reputation.
An interesting exercise would be to trawl through every Retraction Watch post and catalogue (a rough tallying sketch follows the list):
(i) the number of retractions that were honest, like the three mentioned at the top of this post.
(ii) the number of instances where the process involved a euphemism or smokescreen for poor record keeping or deceit.
(iii) for a bit of fun, the euphemisms themselves, if only to tickle those of us with a sense of irony.
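For anyone tempted to try it, here is a minimal sketch of how such a tally might be kept, assuming a hand-curated CSV of posts. The file name, column names and category labels are my own invention for illustration, not anything Retraction Watch publishes, and the hard work – reading each post and assigning it a category – would still be manual.

```python
import csv
from collections import Counter

# Hypothetical hand-curated file: one row per Retraction Watch post,
# with a "category" column assigned by the reader ("honest" or
# "euphemism") and, where relevant, the euphemism used ("phrase").
with open("retraction_watch_catalogue.csv", newline="") as f:
    rows = list(csv.DictReader(f))

counts = Counter(row["category"] for row in rows)
print("(i)  honest retractions:", counts["honest"])
print("(ii) euphemism/smokescreen:", counts["euphemism"])

# (iii) the euphemisms themselves, for those with a sense of irony.
phrases = Counter(
    row["phrase"]
    for row in rows
    if row["category"] == "euphemism" and row["phrase"]
)
for phrase, n in phrases.most_common():
    print(f"{n:4d}  {phrase}")
```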