
What is that ship?


Living in a port city, I see constant movement on the river. What is that ship, where did it come from, where might it be going? The answer is here, a website I stumbled across courtesy of @livuniwx, the Liverpool University weather station Twitter feed.

Real-time updates of shipping

This is a great companion to Flight Radar: a real-time map of planes in the air across the world.


Inspiration for this post comes from various sources, including Arjun Raj’s posts on the STAP papers (here and here), one by The Spectroscope (here),
and my previous posts on the question of whether science does self-right.

I take issue with the trivialisation of data fabrication. Duplication of lanes on a gel, which Arjun Raj suggests might simply be due to sloppiness, cannot be so – it has to be deliberate, if only because of simple geometry. The data are the image of the gel, so to duplicate a lane means to cut and paste. The common practice of selecting the narrow band of interest for presentation is a horizontal selection on the image, whereas gel lanes run vertically. Cutting and pasting lanes is a very different operation, and there is no innocent reason to cut and paste vertically. The only reason I can think of is that the person is in the business of making up data intended as a simulacrum of an experiment that was never done.
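
To make the geometry argument concrete: duplicated lanes are detectable precisely because they are pixel-for-pixel copies, which sloppiness does not produce. A minimal sketch in Python (numpy assumed; the lane coordinates and the 0.99 threshold are illustrative assumptions, not values from any of the papers discussed):

```python
import numpy as np

def lane_profiles(gel: np.ndarray, lane_edges: list) -> list:
    """Collapse each vertical lane of a grayscale gel image
    (2D array, rows = migration axis) into a 1D intensity profile."""
    return [gel[:, a:b].mean(axis=1) for a, b in lane_edges]

def flag_duplicates(profiles: list, threshold: float = 0.99) -> list:
    """Return lane pairs whose profiles correlate suspiciously well.
    Honest replicate lanes are merely similar; pasted lanes are identical."""
    flagged = []
    for i in range(len(profiles)):
        for j in range(i + 1, len(profiles)):
            r = np.corrcoef(profiles[i], profiles[j])[0, 1]
            if r > threshold:
                flagged.append((i, j, r))
    return flagged
```

On honest data, even replicate lanes differ in their noise; a pair correlating at essentially 1.0 is hard to explain as a slip of the hand.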

So the problem with the STAP papers isn’t that they are wrong because all papers are wrong, in the sense that they are approximations representative of the understanding of their time, or because the technique is tricky. They are wrong because data have been invented. The conclusions of the papers may be correct, and the method may work in someone’s hands, but the data do not support the conclusions. If I had a penny for every great idea generated down the pub, I would own the universe. Made-up evidence cannot be deemed to constitute evidence for a phenomenon, any more than ideas sketched out on a soggy beer mat.

The physics mistakes alluded to in Arjun Raj’s second post were just that, as far as I understand – I don’t think there was any evidence of fabrication, just the usual confounding of signal and noise. One might reasonably take issue with the publicity surrounding the “discoveries”, but there is no evidence of fabrication. So these papers are wrong for the “right” reasons, even if the mechanism of dissemination is deemed deplorable.

The question of why Obokata has taken most of the flak is interesting. I would agree that this may reflect more on her gender than on who is to “blame” for the fabrication of data. This in itself is a good reason to give the institution a thorough going-over, but checking all the data and placing the blame on the underlings will solve nothing. What really needs to be done is to look at structures and ask a simple question: why do we have a research structure in which actual power is vested in those at the “top”, and why are those at the top virtually all male?

This leads neatly to The Spectroscope’s post, where a simple question is posed: why is Nature, which supposedly peer-reviewed the papers, not the subject of an investigation? As the publisher of papers clearly flawed from the point of view of data fabrication, why are Obokata and her section of RIKEN under the spotlight, but not Nature?
The answer is obvious. Nature and NPG do not engage in research; they make money (a lot of it) by selling the work of scientists to scientists. They are answerable to no one and will happily take your cash and do this again and again and again – it fits the business model perfectly. Since they are answerable to no one, the only way forward is for scientists and their libraries to vote with their wallets. Pay up, suckers.

Simulacra of experiments occur sufficiently regularly in science to generate a reaction from the community. PubPeer is one of the first stops for people who have concerns about data in a paper, because journals are happy to receive the publicity from research but, as we see in the STAP case, are not answerable when they fail to perform the most basic tasks we would expect of pre-publication peer review. The most recent case brought to my attention is a paper describing the use of TrisNTA nanoparticles to label hexahistidine-tagged proteins in electron microscopy experiments. The problems have, according to one of the peers, been raised with the journal editors, as well as being put up on PubPeer (here).

As a user and fan of TrisNTA, I have read the paper and the comments on PubPeer. Although I am not an electron microscopist, I have to agree that the suspicions of the peers seem well placed. I would add that, in our experience, labelling proteins quantitatively in this way only works in vitro with purified protein and monovalently functionalised nanoparticles, followed by purification of the protein–nanoparticle conjugate. As a bit of weird lab lore, we have observed in some cases that a percentage of protein is refractory to labelling with nanoparticles monovalently functionalised with TrisNTA (even with the TrisNTA nanoparticles in good molar excess). If this unlabelled protein is re-purified, it will then conjugate to the nanoparticles. If that wasn’t weird enough, this second conjugation occurs at pretty much the same efficiency as the first, so not 100%. If we go through the process a third time with the re-purified unlabelled protein from the second conjugation, we end up with the same labelling efficiency. We are baffled, puzzled and so on, as people who work with proteins generally are. For want of a satisfactory explanation and an experiment that would test one, we have called this the “protein siesta problem” and content ourselves with purifying the conjugate! So it would be unusual to be able to label all the proteins in a complex in situ, as done in this paper.
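
Some back-of-envelope arithmetic shows why this observation is so odd (a minimal sketch; the 70% per-round efficiency is a hypothetical number for illustration, not our measured value):

```python
# If a fixed subpopulation of protein were intrinsically refractory,
# re-purified unlabelled protein should label at ~0% in round 2.
# Instead we see the same efficiency every round, which is what a
# fresh random draw per round would give.
p = 0.7  # hypothetical per-round labelling efficiency

unlabelled = 1.0
for round_no in (1, 2, 3):
    unlabelled *= (1 - p)  # fraction never labelled after this round
    print(f"round {round_no}: efficiency {p:.0%}, "
          f"cumulative unlabelled {unlabelled:.1%}")
# round 1: efficiency 70%, cumulative unlabelled 30.0%
# round 2: efficiency 70%, cumulative unlabelled 9.0%
# round 3: efficiency 70%, cumulative unlabelled 2.7%
```

The arithmetic explains nothing, but it does rule out the simplest model of a fixed refractory subpopulation: that model predicts a collapse in efficiency on re-labelling, which is not what we see.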

Regardless of the fate of this particular paper, the bottom line is that TrisNTA remains one of the best ways to label proteins with nanoparticles, since it combines the power of molecular biology (hexahistidine tags being coded for in DNA) with that of chemistry. But proteins being proteins, you have to do the work, and cutting corners will lead you nowhere.

To finish on a more positive note: while we have a continuous stream of critical comments on PubPeer, often relating to data duplication of some sort (a further example here, perhaps), good constructive comments are also appearing (example here). We may be (slowly) turning a corner, but we still have a long way to go.


Kat’s paper on the interactions of neuropilin-1 with a heparan sulfate mimetic library of modified heparins is now published in PeerJ.


An excellent “discussion” today using the Guardian’s comments section on future investment in science, organised by Stephen Curry. David Willetts found the time to be there for the first 45 minutes, which is unprecedented and, certainly from my perspective, much appreciated.

A birthday question


A rather one-sided debate on stripy nanoparticles is taking place over on PubPeer and on Raphaël’s blog.

An individual (“unregistered”) is engaging in a good old Gish gallop, having a hard squint in the dark and seeing patterns. It happens.

I have suggested that “unregistered” should turn their efforts to something more mundane: explaining the re-use of data across a number of papers from the Stellacci group.


Gradually, the structural problems in science are making their way to the surface. There have been articles on the subject in newspapers, The Economist and other magazines around the world. These are stimulated by the constant drip of information and studies that sit awkwardly with the received notion of how science functions.

The high-profile controversies tend to catch our attention, simply because of a sense of outrage amongst the wider community that nothing has been done to fix the problem, or that the fixes have been inadequate. Despite the outrage, it remains the case that only a very few are willing to put their heads above the parapet and say something. There has been an interesting discussion of this on Athene Donald’s blog here.

Not surprisingly, the “reproducibility question” has gained quite a lot of traction (e.g., here and here). This leads to a simple question: what qualifies as a reproduction?

I argue that an important aspect of reproduction is that it need not be literal repetition, but rather a re-examination of observations with better methods, including analytical tools. I have two examples of how scientists deal with the changing landscape of data and their interpretation in these circumstances. The first example is an instance of good practice and is common (or should be). The second seems to ignore the past and the clear message provided by the new data.

Example 1
This is from an excellent 2012 paper in the Journal of Biological Chemistry that we discussed (again) in a recent lab meeting. It deals with the molecular basis for one member of the fibroblast growth factor family, FGF-1, being a universal ligand. That is, FGF-1 can bind all FGF receptor isoforms, whereas other FGFs show clear restrictions in their specificity. These differences must lie in the structural basis of the recognition between the FGF ligand, the FGF receptor and the heparan sulfate co-receptor. The first model put forward by Moosa Mohammadi was superseded in his 2012 paper, when he and his group obtained higher-resolution structures of the complexes. This is a great step forward, as FGFs are not just important to basic biology; they also impact a wide range of diseases, as well as tissue homeostasis and regeneration. I highlight the following from the paper:
To quote (page 3073, top right column):
“Based on our new FGF1-FGFR2b and FGF1-FGFR1c structures, we can conclude that the promiscuity of FGF1 toward FGFR isoforms cannot be attributed to the fact that FGF1 does not rely on the alternatively spliced βC′-βE loop of FGFR for binding as we initially proposed (31).”

This paper provides a great example of how science progresses and is how we should all deal with the normal refinement of data and the implications of such refinements.

Example 2
This is from the continued discussions on whether the ligands on the surface of gold nanoparticles can phase-separate into stripes. This has been the subject of a good many posts on Raphaël Lévy’s blog (from here to here), following the publication a year ago of his paper entitled “Stripy nanoparticles revisited”, as well as commentary here and elsewhere.

Some more papers from Stellacci and collaborators were published in 2013. The entire oeuvre has been examined in detail by others, with guest posts on Raphaël Lévy’s blog (most recent here) and comments on PubPeer relating to a paper on arXiv that takes apart the entire body of evidence for stripes.

What is quite clear, even to a non-specialist, is that the basics of experimental science were not followed in the Stellacci papers on the organisation of ligands on nanoparticles published from 2004 to 2012. These basics include ensuring that signal exceeds noise and that experimental data are sampled at sufficient depth to avoid interpolation; note that in no case did instrumental limitations require interpolation. This might happen to any of us – we are, after all, “enthusiasts”.
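
The interpolation point is easy to demonstrate for yourself. A minimal sketch (Python/numpy; the array sizes are arbitrary assumptions): interpolating undersampled pure noise along one axis manufactures smooth, apparently periodic features whose spacing is set by the sampling interval, not by anything in the sample.

```python
import numpy as np

rng = np.random.default_rng(0)

# A "scan" containing no real structure: pure noise, but sampled
# coarsely along the fast-scan axis (8 points per 64-point line).
coarse = rng.normal(size=(64, 8))

# Interpolate each line up to display resolution, as rendering
# software does when a sparse dataset is drawn as a smooth image.
x_coarse = np.linspace(0.0, 1.0, 8)
x_fine = np.linspace(0.0, 1.0, 64)
smooth = np.vstack([np.interp(x_fine, x_coarse, line) for line in coarse])

# Each row of `smooth` now undulates with a period of roughly one
# sampling interval: the eye reads the result as ripples or
# "stripes", even though the input was featureless noise.
```

This is why signal-to-noise and sampling depth come before interpretation: once interpolation has been allowed to invent structure, the stripes are in the rendering, not in the data.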

To conclude, I refer to my quote from Seneca: “Errare humanum est sed perseverare diabolicum” – to err is human, but to persist in error is diabolical.

This excellent advice is clearly being followed by one FGF lab. It would be good if it were adopted more generally across science. When we see real data and analysis (the hard stuff) that challenge our previous data and interpretations, we should all be happy to change them. This is how science moves (or should move) forward. If everyone did this, there would be no discussion of reproducibility. When we see more of the same stuff, without a clear hypothesis-testing experiment, we are veering towards metaphysics.

Metaphysics is not science. I note that when data are hidden, so that analysis is restricted, we again enter the realm of metaphysics – hence, for example, the call for open access to clinical trials data.

Links with some relevance to Seneca’s advice, reproducibility and so on:
There is an excellent post at The Curious Wavefunction’s Sci Am blog
PubPeer: here and here
Neuroskeptic’s post at Discover
ChemBark’s post in response to an ACS Nano editorial on reporting misconduct.


Last post of the year, perhaps – I have a couple of others brewing, but they need some thought. This has been an unusual year in some ways. First, a big thanks to all my readers – I know a few of you and I hope that my occasional posts are of some interest to you.

To start at the end, I enjoyed this year’s Royal Institution Christmas Lectures, though I have a whinge. Why was hyaluronic acid called a protein? It isn’t. It is a glycosaminoglycan, a polysaccharide, a sugar, a carbohydrate, a polymer, but NOT a protein. Crucially, it is a secondary gene product. Polysaccharide synthesis is the consequence of the activity of enzymes, the primary gene products, but the regulation of polysaccharide synthesis is at the whim of cell and organism physiology and only indirectly at that of the genome. This, I think, makes the entire business of the naked mole rat living longer than equivalent rodents and being cancer-free even more interesting. If one can have “p53” named in the lecture, why not define hyaluronic acid correctly? Or is there a fear amongst those working with the central dogma of the messiness of things beyond? Generally, where things are messy in science is where the most interesting stuff is. Glycobiology is certainly messy, sticky and most interesting, and I recommend it strongly to all – it is likely also to contribute to getting us out of the mess of global warming.
