Archive for the ‘Post publication peer review’ Category

A recent article on bioRxiv, “Amending published articles: time to rethink retractions and corrections?”, puts forward ideas on how we might change the way we deal with retractions and corrections.

I would like to thank the authors for a most useful article. There is a lot I do not agree with, and that is surely the point of writing the article. Stimulating discussion is how we arrive at some consensus, though such consensus can only be temporary, as continued change is inevitable.

Note that the article represents the views of the authors, not necessarily those of COPE. Moreover, COPE has guidelines, but bringing breaches of these to a journal’s attention generally results in a shrug of the shoulders (e.g., here). So while the COPE guidelines are useful, they have no teeth. Perhaps this is a good thing, because science communication continues to change and it is more than likely that very soon we will have a world of science operating under two very different regimes.

Regime 1: the content of papers is key, the place of publication unimportant. This is the current direction of much of the English-speaking world.

Regime 2: the JIF and glamour journals rule the roost, which is the status quo in some European countries and much of Asia.

However, over time, budgets and financial pressure from large funders, some with international reach (e.g., the Wellcome Trust and the Bill and Melinda Gates Foundation), mean that Regime 2 is likely to falter.


Now onto the article itself.

At the centre of the article is the statement “A lack of willingness to engage in proper post publication correction and amendment…” (page 3, start of the section “A fundamental underlying problem”). The paragraph then states that retraction is tainted, claiming that retractions for honest reasons (we got something wrong) are confounded by dishonest ones (we made up the data). The authors do not cite any evidence for this, yet evidence exists, documented, for example, on Retraction Watch.

The evidence I am aware of does not agree.

Honest retractions, which on Retraction Watch are tagged “Doing the right thing”, do not tar authors with the brush of fraud. There is an upwelling of sympathy from the community, because we are all too aware of how easy it is to get something wrong. On occasion the authors publish how they tracked down the problem; when authors are alerted to a problem by a reader, the reader is more often than not thanked. This only enhances the authors’ reputation in terms of the community’s understanding of the rigour of their research, and pushes their future papers higher up a lab’s reading list.

Dishonest retractions are often marked by a long period of obfuscation by the journal AND the authors. For example, look at the delay in The Lancet retracting Wakefield’s fraudulent claims of a link between the MMR vaccine and autism, the still-standing paper on arsenate DNA, and how Nature dealt with the STAP stem cell papers (e.g., here). We often see corrections (sometimes colloquially termed “mega-corrections”, because of the number of figures involved) of data that are fraudulent, allowing authors to find/produce the ‘right’ figure. This I do not agree with.

Retraction means what it says. That it can be a pejorative is down to the fact that most retractions are due to dishonesty, though as I note above, the honest are not tarred with the same brush by the community. If we substitute a neutral word, this too will gain a pejorative flavour, because the underlying problem will remain: most retractions are due to dishonesty. So using a different word will not solve the problem perceived by the authors. Alerting readers that there is ‘concern’ and an investigation under way is fine, and Pubpeer allows this in a most transparent manner: one can read the concerns of readers, look at the evidence and make up one’s own mind. Critical evaluation of the evidence is our job; putting in some sort of filter isn’t going to do science any good. Note that obfuscation by journals and authors is the reason for Pubpeer’s popularity.

The solution to the problem is simple and lies in a different direction: open data. It is still possible to be dishonest in the context of open data, but more difficult and also much easier to spot.

The argument is also made that correcting the literature and investigating fraud should be separate. How can they be? The paper is, after all, integral both to the evidence that fraud has or has not occurred and to the prime motive (paper = promotion/grant). Open data means that individuals can use their critical and analytical faculties to make up their own minds, and a platform for communicating one’s analyses, such as Pubpeer, provides the means to access more brains, which is always beneficial. An investigatory committee will likely need the analyses performed by the community.

Of course we all ‘make up data’ every day in the sense of model building, hypothesis generation and generally shooting the bull. We don’t publish this.  So publishing is the key step in scientific fraud, since one is communicating fiction as factual observation. I think the argument made in the paper relies on the idea that the ‘literature’ is somehow distinct from the rest of the process of science. It isn’t, never was and never will be. Communication is at the heart of science.

So what sort of amendments should be allowed? Errata and corrigenda (both make sense as production can result in errors), but no more. The more categories we have, the more game playing will occur by the dishonest journals and authors and we will be none the wiser.

That leaves version control. Should we embrace this? To me the answer is not entirely clear.

Preprint to print.  Yes, it is interesting to readers to see the genesis of the work.

Data: these will have accession numbers/DOIs. In curated databases there is clear version control and a trail, though the investment in the curation of databases is lamentable and we could do much, much better. For some reason this is not regarded as ‘cutting edge’, ‘innovative’, etc. This is a problem for the community to resolve. In the ‘wild’ (other open data), full version control may be less likely and patchy.

There was a time when a researcher working on “Problem A” would on occasion provide a simple title for a succession of papers:

Problem A: paper I

Problem A: paper II

and so on.

The papers in the series do not replace each other; each provides new evidence and likely a modified interpretation of “Problem A”. However, many journals decided that such practice was not good, I guess in part because the title was not sufficiently tabloid-like. Maybe this is a way forward?

As for a ‘living article’, that is the job of encyclopedias. As my generation of scientists retire, rather than write a book summarizing our field, many of us are likely to spend our dotage editing Wikipedia. This will change many aspects of science.

Read Full Post »

A little late this year, but then there are many calendars, so it is surely the start of the New Year for someone, somewhere, today.

Three years ago I made a simple resolution for the New Year: not to review for commercial closed access journals. I developed this in 2015 (and here) when I decided to change my publishing priorities and avoid commercial closed access journals. This was pretty much already happening, so it was painless. My two caveats relating to publication are important if you collaborate extensively, simply because many colleagues live in countries where the Impact Factor rules their lives. Thus, when I am not the PI and in editorial control of the work, but merely a contributor, I suggest alternatives, but I do not dig my heels in. For my students and postdocs who originate from these many countries, the Learned Society and Open Access alternatives have pretty much solved the problem, in that they have decent impact factors, so their career progression will not be impeded.

I have also been experimenting with preprints for some time, and now with Open Data too. So the 2016 resolution adds preprints and Open Data. All papers where I am sole PI and therefore have full decision-making power on publication (and also full responsibility for the paper) will first be submitted as preprints, and the data will be fully accessible.

What is interesting is how the publication culture is changing. There are still many wedded to the notion that the “Top” journals are those with the highest impact factor, despite the fact that there is no evidence to support this conclusion. Witness the article in Nature reporting the excellent decision by the Gates Foundation, which stipulates that work funded by the Gates Foundation cannot be published in journals that are not properly open access and open data compliant. To paraphrase the Nature headline:

“Shock Horror, Gates stops researchers publishing in Top journals aka ours”.

The implication that a paper in Nature is worth more than one in The Biochemical Journal or PLOS ONE, to name but two of many other good journals, is ludicrous. Only when a paper is read can one decide whether it is excellent, good or poor, and then it takes time (= years) for the full scientific impact to be recognised. There are plenty of papers in ALL journals that are worse than poor; ample evidence is provided by a quick scan of Pubpeer, and Nature for one has a lot to do to put its house in order.

So preprints and Open Data it is. I would encourage all my colleagues to follow suit.

Read Full Post »

I made my first New Year’s resolution on December 31, 2013: to only undertake reviews for open access and learned society journals. This I have stuck to well, as I noted a year later, for the simple reasons that it makes sense and it frees up my time.

Today I had a request to review a manuscript for Nature Publishing Group’s Scientific Reports, and I realised that I need to clarify my position.

I am on strike. (more…)

Read Full Post »

During a quick scan this morning of the “recent” comments on Pubpeer, an activity I pursue regularly as part of my reading, there seemed to be a lot more author responses. So I counted:

70 articles featured with comments.

10 of these had an author response.

This is progress. I have no hard data, but my impression is that a year ago author comments were far rarer, maybe 1% or thereabouts. Now we are at 14% (10/70). Let’s hope this is not an anomaly but a trend, and that maybe in a few years papers without author responses will be in the minority.
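For anyone who wants to check the figure, it is just the simple ratio of the two counts above; a minimal sketch of the arithmetic:

```python
# Share of PubPeer-commented articles with an author response,
# using the counts from this morning's scan.
commented_articles = 70
author_responses = 10

share = author_responses / commented_articles
print(f"{share:.0%}")  # prints "14%"
```

The same calculation against the remembered baseline of roughly 1% is what suggests a roughly fourteen-fold rise, though with a single snapshot that can only be an impression, not a trend.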

Regardless of arguments about anonymity, etc., post publication peer review is growing, which is a sign of health in the scientific enterprise.

Read Full Post »

I went to a most useful talk this morning by Stephen Carlton (@LivUniOA) on the University repository. I had whinged about this as being nearly unusable, but then I had jumped in on an early version.

The repository is now usable, though it is quirky. A few lessons from my efforts to update my entries.

Read Full Post »

I am a fan of PubPeer, as it provides a forum for discussion between authors and the wider community, something I have discussed in a number of posts (two examples being here and here). Two days ago, my colleague Mike Cross came by my office, having just delivered a pile of exam scripts for second marking (it’s exam and marking season), asking if I had seen a comment on our paper on PubPeer. I had not – too many e-mails and too busy to look at incoming!
So I looked at the question, which relates to panels in two figures of our paper on neuropilin-1 and vascular endothelial growth factor A (VEGFA) being identical – indeed, they are labelled as being identical.

Read Full Post »

Discussion surrounding post publication peer review (previous post here) seems to be growing, and one issue that is frequently raised is anonymity. In a PLOS Medicine editorial, Hilda Bastian argues that current post publication peer review is over-focussed on what is apparently wrong in papers and that anonymity is a threat to effective post publication peer review.
A PubPeer thread takes issue with these and some other points, and I have also joined in (I am Peer2). We should remember that any notion of power has nothing to do with scientific capability – indeed, there may even be an inverse relation. So providing those with the least power (and so the most disenfranchised) with a means to participate in post publication peer review is essential. Though we have no data on PubPeer, PubMed Commons is a venue for the established. There are some critiques there, but a fair amount of hagiography too. I would hazard a guess that PubPeer is far more diverse in terms of the career stage of participants and in terms of their gender/social group. Certainly my anecdotal evidence suggests as much, and that is all I have to go on. (more…)

Read Full Post »

Older Posts »