Scientific misconduct has fascinated me since I was a PhD student. A number of frauds were exposed around that time, including that of Mark Spector, a graduate student in Efraim Racker’s lab. Racker subsequently wrote a fascinating article in Nature dissecting what he believed to be the main characteristics of fraudsters.
It is extremely important to separate misconduct, or to use the colloquial term, fraud, from mistakes and artefacts. Mistakes happen, particularly at the cutting edge of science and in interdisciplinary research. Artefacts lie in wait equally for the unwary and the well prepared. Though it can be embarrassing, real scientists own up to their mistakes.
However, there is a class of people, I guess those with inflated egos and/or misplaced ambition, who defend artefacts well beyond reason and refuse to admit error. This is one end of the fraud spectrum, which then extends into data manipulation and outright data fabrication. In all these instances I would use the term “fraud”, not “scientific fraud”: there is nothing scientific about the fraud, it is just plain old simple fraud. It is also completely against what should be the normal process of science, the rigorous examination and debate of information and ideas. The temptation, however, is substantial: journal editors watch their impact factors nervously, and promotion and hiring committees look at the gloss (the impact factor of a candidate’s publications) rather than actually reading some of those publications and considering their real impact as pieces of scientific output. Governments and funding agencies contribute by promoting views of excellence that can be misguided. We see this in the UK through the stress placed on the REF/RAE, which can only ever be an imperfect exercise and, if applied too rigidly, risks disenfranchising a substantial amount of excellent research.
There have been a fair number of scientific frauds; in the USA, the result was the creation of the Office of Research Integrity. The ORI is pretty unique and a model that should be replicated elsewhere: it combines fraud investigation with education. Most fraud is preventable, but lab members often prefer to keep quiet. I guess it is difficult to speak up against someone one sees every day. In addition, pressures on PIs mean that they often lose track of what their actual job is: being a PI. Pressures on the young to get tenure don’t help either.
So we have, perhaps, created a system in which the most surprising thing is not that fraud happens, but that it isn’t more common. There is clearly plenty of good in science.
It should also be noted that most frauds go undetected, or, more correctly, no one bothers to right the ship because they feel the work will be forgotten soon enough. How many Photoshopped halftones do we see in papers, reacting simply by not reading any further or by ignoring those particular data? When fraud is publicised, it seems to be due either to the scale (the number of papers, the taxpayers’ funds spent, the perceived importance of the scientific findings) or to the stature of the scientist involved. However, even when data have clearly been tinkered with in an unacceptable manner, e.g., clear evidence of cutting and pasting bands in a Western blot, this is not the end: it is possible to wriggle out, though not without tarnishing one’s reputation.
Enough for now. I recommend the ORI links and the ORI video; these really should be “must watch” material for scientists in training.
In the meantime, if you want to know what not to read, I recommend these two WordPress blogs.
Retraction Watch does just that: it highlights papers that are retracted, a most excellent and useful service to the community.
Abnormal Science catalogued published data that should not have been published. It has not been active since February 2012, but it offers a very interesting insight into some of the major recent cases of fraud, such as that by Melendez.
Update, 3 November 2013: The links to Abnormal Science and Science Fraud are dead.