Leonid Schneider has a guest post on Retraction Watch, “What if universities had to agree to refund grants whenever there was a retraction?”, that has generated a lot of discussion. My own comment became so long that I am posting it below. For those who are not aware, in the USA the Office of Research Integrity (ORI) has the power to reclaim from institutions grant funding acquired through fraudulent means, e.g. manipulated or made-up data, though there is a time limit and this power is exercised in only a fraction of the cases ORI investigates. No other country has an analogous mechanism.
I like Leonid Schneider’s idea. At present, the only deterrents at the institutional level for manipulating data are the damage to the image of the institution and of the PI, and, in the US, the occasional financial loss through an ORI investigation. Given the increase in lawyering up by people with multiple suspect papers, an additional selection pressure would do no harm. It would also provide ammunition inside institutions to those who push for research scientists to be rewarded, rather than “research managers” who hold mega-millions in funding but may have little understanding of the techniques, and so no ability to engage in meaningful training of students and postdocs.
I don’t see that this idea would create further bureaucracy. Institutions will not need to invest anything to deal with such a clause in grants, because the PIs will already be doing things right. The occasional SNAFU that leads to a retraction for “doing the right thing” will not cost much, because at most one grant is likely to be built on the faulty data. These SNAFUs are often detected by the lab itself, and usually not long after the work is published.
We should remember that university budgets are large, an individual’s grants are small, and there is plenty of elasticity in a university budget to find a million if needed. In addition, there are relatively few retractions per year, though far more concerns are raised each year, for example on PubPeer, that remain unanswered. So for an individual institution the risk is low, and there would be no need to create an additional layer of administrative control.
The risk is higher if an institution hires a serial fraudster. So, for the serial fraudster, it is a different matter, because each and every grant is tied to faulty work, sometimes over many years. This in itself does not mean that institutions will create additional layers of bureaucracy; they will simply spend more time scrutinising the papers on CVs, aka actually reading the papers rather than scanning the names of the journals they are published in. This can only result in a higher calibre of hired staff.
However, Leonid Schneider’s idea will work only if we have “open data”, that is, if the original data behind a publication are freely available. Then the problem of PIs and institutions fighting to prevent retractions will evaporate, because the data are there for all to see and a court case will simply land them with a large legal bill. So, with the open-data caveat, the idea is well worth considering.
The benefits will be substantial, because we will see a culture shift away from PIs disengaging from research and sitting behind a desk (or on a plane) 24/7. Instead, all that is good will be under positive selection pressure: graduate student and postdoc training will be substantially enhanced, and collaboration within institutions will likely increase, because group sizes will drop, giving PIs the time to talk to each other and generate genuinely new ideas. The greatest benefit will come from having open data. This already occurs in some fields: consider the amount of DNA sequence data, protein structure data and, now, mass spectrometry data in public databases. In the molecular life sciences we use these daily without even thinking of them as open data. If all other data were similarly accessible, research would really change.