
The burden of proof


This post was stimulated by a post on the PubPeer blog entitled “A crisis of trust”. It should be required reading for all engaged in research and in the management of the institutions involved in research, including funders and journal editors. I made a brief comment, relating to a sentence some way down the post:

“This could be done if together we invert the burden of proof. It should be your responsibility as a researcher to convince your peers, not theirs to prove you wrong”.

My comment was that it is tragic that anyone should have to write that sentence. Science can only progress through discussion of data and models that try to capture the meaning of those data. So if there is disagreement, the proposers of the model have to go back to the data. This will often need a “destructive” experiment, rather than a “me too” experiment, to be designed and done to test the validity of the critiques. That should be obvious.
Another useful read is Paul Brookes’ analysis of papers he received while at the helm of the late, lamented Science Fraud website, which he has published at PeerJ, entitled “Internet publicity of data problems in the bioscience literature correlates with enhanced corrective action“.

Paul Brookes’ analysis indicates that public exposure likely increases the amount of corrective action. Though this gives a positive impression, all is not well. Looking over papers that have featured on PubPeer on subjects where I have some competence, we clearly have evidence of trolling by, or on behalf of, the authors (see, for example, the Gish gallop and trolling on this preprint that criticizes an entire oeuvre on nanoparticles).

The reader can look for themselves under the “featured” articles on PubPeer, choosing any article with >10 comments, to see a good number of Gish gallops and trolls, while the authors are often conspicuously absent.

The corrective mechanism proposed in the PubPeer blog is hardly revolutionary: it simply states the obvious regarding the framework within which research should be conducted. I already run every paper I think of reading through PubPeer – I use it as a filter, to remove the chaff on which I should not waste my time. In the future, I will doubtless use the availability of data as a filter too. If there are no publicly available data associated with a paper, I will not read it.

It will take a few years though before I put such a filter into effect – the notional time is a couple of years after I manage to fully implement the recommendation for my own papers. This is simply to allow the community to figure out how to put their data in the public domain. It isn’t always easy to do this in a meaningful way and there will, for example, be a fair amount of wrestling with proprietary instrument data formats, etc.
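To illustrate the sort of wrestling involved, here is a minimal sketch of converting a hypothetical binary instrument dump into plain CSV for public archiving. The record layout (a float64 timestamp followed by a float32 reading) is an assumption for illustration only; real proprietary formats vary and are often undocumented.

```python
# A minimal sketch (hypothetical record layout): re-export a fixed-width
# binary instrument dump as plain CSV, an open format suitable for archiving.
import csv
import struct


def binary_dump_to_csv(src_path, dst_path, record_format="<df"):
    """Convert records of (timestamp: float64, reading: float32) to CSV."""
    record_size = struct.calcsize(record_format)
    with open(src_path, "rb") as src, open(dst_path, "w", newline="") as dst:
        writer = csv.writer(dst)
        writer.writerow(["timestamp", "reading"])
        while chunk := src.read(record_size):
            if len(chunk) < record_size:
                break  # ignore a truncated trailing record
            writer.writerow(struct.unpack(record_format, chunk))
```

The point is not this particular script but the principle: once the layout is known, a few lines suffice to free the data from the instrument vendor’s format.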

For all who really are engaged in research, it is in our best interests to do so. Putting our data out there gives the reader the assurance that our papers actually mean what they say. In a world where metrics pollute advancement, this will only increase your citations: a win-win, and the cheats may become increasingly irrelevant. At that point the statement in the PubPeer blog post could be amended to:

“The burden of proof to convince your peers is your responsibility as a researcher, not theirs to prove you wrong” and we can enjoy a beer together (virtual or otherwise) discussing data.

What is that ship?


Living in a port, there is always movement on the river. What is that ship, where did it come from, where might it be going? The answer is here, a website I stumbled across courtesy of @livuniwx, the Liverpool University weather station Twitter feed.

Real-time update of shipping

This is a great companion to Flight Radar: a real-time map of planes in the air across the world.


Inspiration for this post comes from various sources, including Arjun Raj’s posts on the STAP papers (here and here), one by The Spectroscope (here), and my previous posts on the question of whether science self-rights.

I take issue with the trivialisation of data fabrication.


Kat’s paper on the interactions of neuropilin-1 with a heparan sulfate mimetic library of modified heparins is now published in PeerJ.


An excellent “discussion” today, using the Guardian’s comments section, on future investment in science, organised by Stephen Curry. David Willetts found the time to be there for the first 45 minutes, which is unprecedented and, certainly from my perspective, much appreciated.

A birthday question


A rather one-sided debate on stripy nanoparticles is taking place over on PubPeer and on Raphaël’s blog.

An individual (“unregistered”) is engaging in a good old Gish gallop, having a hard squint in the dark and seeing patterns. It happens.

I have suggested that “unregistered” should turn their efforts to something more mundane: explaining the re-use of data across a number of papers from the Stellacci group.


Gradually, the structural problems in science are making their way to the surface. There have been articles on the subject in newspapers, The Economist and other magazines around the world. These are stimulated by the constant drip of information and studies that sit awkwardly with the received notion of how science functions.

The high profile controversies tend to catch our attention, simply because of a sense of outrage amongst the wider community that nothing has been done to fix the problem, or that the fixes have been inadequate. Despite the outrage, it remains the case that only a very few are willing to put their head above the parapet and say something. There has been an interesting discussion of this on Athene Donald’s blog here.

Not surprisingly, the “reproducibility question” has gained quite a lot of traction (e.g., here and here). This leads to a simple question: what qualifies as a reproduction?

I argue that an important aspect of reproduction is that it is not necessarily actual reproduction, but a re-examination of observations made with better methods, including better analytical tools. I have two examples of how scientists deal with the changing landscape of data and their interpretation in these circumstances. The first example is an instance of good practice and is common (or should be). The second seems to ignore the past and the clear message provided by the new data.

Example 1
This is from an excellent 2012 paper in Journal of Biological Chemistry that we discussed (again) in a recent lab meeting. It deals with the molecular basis for one member of the fibroblast growth factor family, FGF-1, being a universal ligand. That is, FGF-1 can bind all FGF receptor isoforms, whereas other FGFs show clear restriction in their specificity. These differences must lie in the structural basis of the recognition of the FGF ligand, the FGF receptor and the heparan sulfate co-receptor. The first model put forward by Moosa Mohammadi was superseded in his 2012 paper, when he and his group obtained higher resolution structures of the complexes. This is a great step forward, as FGFs are not just important to basic biology, but they also impact on a wide range of diseases, as well as tissue homeostasis and regeneration. I highlight the following from the paper:
To quote (page 3073, top right column)
“Based on our new FGF1-FGFR2b and FGF1-FGFR1c structures, we can conclude that the promiscuity of FGF1 toward FGFR isoforms cannot be attributed to the fact that FGF1 does not rely on the alternatively spliced betaC’-betaE loop of FGFR for binding as we initially proposed (31).”

This paper provides a great example of how science progresses and is how we should all deal with the normal refinement of data and the implications of such refinements.

Example 2
This is from the continued discussions on whether the ligands on the surface of gold nanoparticles can phase separate into stripes. This has been the subject of a good many posts on Raphael Lévy’s blog (from here to here), following his publication a year ago of his paper entitled “Stripy nanoparticles revisited“, as well as commentary here and elsewhere.

Some more papers from Stellacci and collaborators have been published in 2013. The entire oeuvre has been examined in detail by others, with guest posts on Raphael Lévy’s blog (most recent here) and comments on PubPeer relating to a paper on ArXiv that takes apart the entire body of evidence for stripes.

What is quite clear, even to a non-specialist, is that the basics of experimental science were not followed in the Stellacci papers on the organisation of ligands on nanoparticles published from 2004 to 2012. These basics include the importance of signal being greater than noise, and of sampling experimental data at sufficient depth to avoid interpolation; note that in no case did instrumentation limitations require interpolation. This might happen to any of us; we are, after all, “enthusiasts”.

To conclude, I refer to my quote from Seneca: “Errare humanum est, sed perseverare diabolicum” (to err is human, but to persist is diabolical).

This excellent advice is clearly being followed by one FGF lab. It would be good if this advice were adopted more generally across science. When we see real data and analysis (the hard stuff) that challenge our previous data and interpretations, we should all be happy to change them. This is how science moves (or should move) forward. If everyone did this, there would be no discussion regarding reproducibility. When we see more of the same stuff, without a clear hypothesis-testing experiment, we are veering towards metaphysics.

Metaphysics is not science. I note that when data are hidden, so that analysis is restricted, we again enter the realm of metaphysics – hence, for example, the call for open access to clinical trials data.

Links with some relevance to Seneca’s advice, reproducibility and so on:
There is an excellent post at The Curious Wavefunction’s Sci Am blog
PubPeer: here and here
Neuroskeptic’s post at Discover
Chembark’s post in response to an ACS Nano editorial on reporting misconduct.
