
Archive for the ‘Research integrity’ Category


A discussion today with a student who asked about the use of the royal “we” in a report about his work. I agree, this is wrong. My suggestions were the first person singular and the passive. The passive gets a bad press in places, but it does work; the repetition of “we” or “I” grates, the latter particularly so because it can convey a strong sense of ego, though, as I pointed out, this depends on how it is used. It was once common in single-author papers for the author to use “I”; the practice has disappeared due to multi-authorship and the urge to make scientific observations look objective. We finished by joking about the feudalism implicit in the phrase ‘my laboratory’, as if this were some sort of sentient being, and then I wondered out loud whether one might not, in a multi-author paper, state:

“In experiment X (Fig. X), Bloggs demonstrated that….” And then later: “In experiment Y (Fig. Y), Doe indicated….”

Tonight a tweet from @UtopianCynic

UtopianCynic tweet

reminded me of my earlier conversation. Indeed, why bother with all the rubbish associated with authorship position? Why not have a list of authors and in the paper report who did what and who thought what?

It would then be clear (i) who pulled together the original hypothesis; (ii) who did the experiments; (iii) who thought up the interpretations of the data.

I think I might try this out.

This also solves the long-standing problem of blaming whoever is at the bottom of the pile when a paper is found to contain manipulated data. Someone will be explicitly on watch, and someone else will have done a particular measurement under that person’s watch.

It will be obvious who should walk the plank, and reaching for lawyers will only result in keelhauling, because it will all be written down and signed off.



Read Full Post »


A recent article on bioRxiv, “Amending published articles: time to rethink retractions and corrections?”, puts forward ideas on how we might change the way we deal with retractions and corrections. (more…)

Read Full Post »


I am a fan of PubPeer, as it provides a forum for discussion between authors and the wider community, something I have discussed in a number of posts (two examples being here and here). Two days ago, my colleague Mike Cross came by my office, having just delivered a pile of exam scripts for second marking (it’s exam and marking season), asking if I had seen a comment on our paper on PubPeer. I had not – too many e-mails, and too busy to look at incoming!
So I looked at the question, which relates to panels in two figures being identical in our paper on neuropilin-1 and vascular endothelial growth factor A (VEGFA) – indeed they are labelled as being identical.
(more…)

Read Full Post »


Discussion surrounding post-publication peer review (previous post here) seems to be growing, and one issue that is frequently raised is anonymity. In a PLOS Medicine editorial, Hilda Bastian argues that current post-publication peer review is over-focussed on what is apparently wrong in papers, and that anonymity is a threat to effective post-publication peer review.
A PubPeer thread takes issue with these and some other points, and I have also joined in (I am Peer2). We should remember that any notion of power has nothing to do with scientific capability – indeed, there may even be an inverse relation. So providing those with the least power (and so the most disenfranchised) with a means to participate in post-publication peer review is essential. Though we have no data on PubPeer, PubMed Commons is a venue for the established. There are some critiques there, but also a fair amount of hagiography. I would hazard a guess that PubPeer is far more diverse in terms of the career stage of participants and in terms of their gender/social group. Certainly my anecdotal evidence suggests as much, and that is all I have to go on. (more…)

Read Full Post »


Leonid Schneider has a guest post on Retraction Watch, “What if universities had to agree to refund grants whenever there was a retraction?”, that has generated a lot of discussion. My own comment became so long that I am posting it below. For those who are not aware, in the USA the Office of Research Integrity (ORI) has the power to reclaim from institutions grant funding acquired through fraudulent means, e.g., manipulated or made-up data, though there is a time limit, and this power is exercised in only a fraction of the cases investigated by the ORI. No other country has an analogous mechanism.

I like Leonid Schneider’s idea. (more…)

Read Full Post »


Gel splicing (see the footnote at the end for a brief description of gels, aimed at non-biologists) is a term describing the cutting and pasting of images of lanes (where 1 lane = 1 sample) and placing the images of the lanes in a different order, or even combining lanes from different gels. A more extreme form is simply to shift the subsection of a lane, corresponding to the probed molecule, from one lane to the next.
This is wrong, and it always has been. However, in post-publication peer review on PubPeer it is often defended, particularly for “older” papers from a decade or more ago. This then raises arguments about what was acceptable then, and whether we are shifting the goalposts of scientific integrity. The matter has even been a “Topic” on PubPeer. (more…)

Read Full Post »


The question relates to what Langmuir termed “Pathological Science”: simply put, “people are tricked into false results … by subjective effects, wishful thinking or threshold interactions”. There is a lot of pathological science, and I only use the examples below because I am most familiar with them; for nanoparticles, I have a personal interest in understanding these materials, since I use them to try to make biological measurements, e.g., here.
(more…)

Read Full Post »

Older Posts »