This post has been stimulated by a post on PubPeer entitled “A crisis of trust”.
That post should be required reading for everyone engaged in research and in the management of the institutions involved in research, including funders and journal editors. I made a brief comment on a sentence that appears some way down the post:
“This could be done if together we invert the burden of proof. It should be your responsibility as a researcher to convince your peers, not theirs to prove you wrong”.
My comment was that it is tragic that anyone should have to write that sentence. Science can only progress through discussion of data and of the models that try to capture the meaning of those data. So if there is disagreement, the proposers of the model have to go back to the data. This will often require designing and running a “destructive” experiment, rather than a “me too” experiment, to test the validity of the critiques. That should be obvious.
Another useful read is Paul Brookes’ analysis of papers he received while at the helm of the late, lamented Science Fraud website, which he has published at PeerJ, entitled “Internet publicity of data problems in the bioscience literature correlates with enhanced corrective action”.
Paul Brookes’ analysis indicates that it is likely that public exposure increases the amount of corrective action. Though this gives a positive impression, all is not well. Looking over papers that have featured on PubPeer on subjects where I have some competence, we clearly have evidence of trolling by or on behalf of the authors (see, for example, the Gish gallop and trolling on this preprint that criticizes an entire oeuvre on nanoparticles).
Readers can look for themselves under the “featured” articles on PubPeer: choose any article with more than 10 comments and you will see a good number of Gish gallops and trolls, while the authors themselves are often conspicuously absent.
The corrective mechanism proposed in the PubPeer blog is hardly revolutionary: it simply states the obvious about the framework within which research should be conducted. I already run every paper I think of reading through PubPeer – I use it as a filter to remove the chaff on which I should not waste my time. In the future, I will doubtless use the availability of data as a filter too. If there are no publicly available data associated with a paper, I will not read it.
It will take a few years, though, before I put such a filter into effect – notionally, a couple of years after I manage to implement the recommendation fully for my own papers. This is simply to allow the community time to work out how to put their data in the public domain. It is not always easy to do this in a meaningful way; there will, for example, be a fair amount of wrestling with proprietary instrument data formats.
For all of us genuinely engaged in research, sharing our data is in our best interests. Putting our data out there gives the reader the assurance that our papers actually mean what they say. In a world where metrics pollute advancement, it can only increase our citations – a win-win – and the cheats may become increasingly irrelevant. At that point the statement in the PubPeer blog post could be amended to:
“The burden of proof to convince your peers is your responsibility as a researcher, not theirs to prove you wrong” and we can enjoy a beer together (virtual or otherwise) discussing data.