Posts Tagged ‘science’

A recent article on bioRxiv, “Amending published articles: time to rethink retractions and corrections?”, puts forward ideas on how we might change the way we deal with retractions and corrections.

I would like to thank the authors for a most useful article. There is a lot I do not agree with, and that is surely the point of writing the article. Stimulating discussion is how we arrive at some consensus, though such consensus can only be temporary, as continued change is inevitable.

Note that the article represents the views of the authors, not necessarily those of COPE. Moreover, COPE has guidelines, but bringing breaches of these to a journal’s attention generally results in a shrug of the shoulders (e.g., here). So while the COPE guidelines are useful, they have no teeth. Perhaps this is a good thing, because science communication continues to change and it is more than likely that very soon science will be operating in two very different regimes.

Regime 1: the content of papers is key, the place of publication unimportant. This is the current direction of much of the English-speaking world.

Regime 2: the JIF and glamour journals rule the roost, which is the status quo in some European countries and much of Asia.

However, over time, budgets and financial pressure from large funders, some with international reach (e.g., the Wellcome Trust and the Bill and Melinda Gates Foundation), mean that Regime 2 is likely to falter.


Now onto the article itself.

At the centre of the article is the statement “A lack of willingness to engage in proper post-publication correction and amendment…” (page 3, start of the section “A fundamental underlying problem”). The paragraph then states that retraction is tainted, claiming that retractions for honest reasons (we got something wrong) are confounded by dishonest ones (we made up the data). The authors cite no evidence for this, yet evidence does exist, documented, for example, on Retraction Watch.

The evidence I am aware of does not agree.

Honest retractions, which Retraction Watch tags “Doing the right thing”, do not tar authors with the brush of fraud. There is an upwelling of sympathy from the community, because we are all too aware of how easy it is to get something wrong. On occasion the authors publish how they tracked down the problem; when authors are alerted to a problem by a reader, the reader is more often than not thanked. This only enhances the authors’ reputation in terms of the community’s understanding of the rigour of their research, and pushes their future papers higher up a lab’s reading list.

Dishonest retractions are often marked by a long period of obfuscation by the journal AND the authors. For example, look at the delay in The Lancet retracting Wakefield’s fraudulent claims of a link between the MMR vaccine and autism, the still-standing paper on arsenate DNA, and how Nature dealt with the STAP stem cell papers (e.g., here). We often see corrections of fraudulent data (sometimes colloquially termed “mega-corrections”, because of the number of figures involved), which allow the authors to find or produce the ‘right’ figure. This I do not agree with.

Retraction means what it says. That it can be pejorative is down to the fact that most retractions are due to dishonesty, though, as I note above, the honest are not tarred with the same brush by the community. If we substitute a neutral word, this too will gain a pejorative flavour, because the underlying problem will remain: most retractions are due to dishonesty. So using a different word will not solve the problem perceived by the authors. Alerting readers that there is ‘concern’ and an investigation is fine, and PubPeer allows this in a most transparent manner: one can read the concerns of readers, look at the evidence and make up one’s own mind. Critical evaluation of the evidence is our job; putting in some sort of filter isn’t going to do science any good. Note that obfuscation by journals and authors is the reason for PubPeer’s popularity.

The solution to the problem is simple and lies in a different direction: open data. It is still possible to be dishonest in the context of open data, but more difficult and also much easier to spot.

The argument is also made that correcting the literature and investigating fraud should be separate. How can they be? The paper is, after all, integral both to the evidence that fraud has or has not occurred and to the prime motive (paper = promotion/grant). Open data means that individuals can use their critical and analytical faculties to make up their own minds, and a platform for communicating one’s analyses, such as PubPeer, provides the means to access more brains, which is always beneficial. An investigatory committee will likely need the analyses performed by the community.

Of course we all ‘make up data’ every day in the sense of model building, hypothesis generation and generally shooting the bull. We don’t publish this.  So publishing is the key step in scientific fraud, since one is communicating fiction as factual observation. I think the argument made in the paper relies on the idea that the ‘literature’ is somehow distinct from the rest of the process of science. It isn’t, never was and never will be. Communication is at the heart of science.

So what sort of amendments should be allowed? Errata and corrigenda (both make sense, as production can result in errors), but no more. The more categories we have, the more game-playing there will be by dishonest journals and authors, and we will be none the wiser.

That leaves version control. Should we embrace this? To me the answer is not entirely clear.

Preprint to print.  Yes, it is interesting to readers to see the genesis of the work.

Data: these will have accession numbers/DOIs. In curated databases there is clear version control and a trail, though the investment in the curation of databases is lamentable and we could do much, much better. For some reason this is not regarded as ‘cutting edge’, ‘innovative’, etc. This is a problem for the community to resolve. In the ‘wild’ (other open data), full version control may be less likely and patchy.

There was a time when a researcher working on “Problem A” would on occasion provide a simple title for a succession of papers:

Problem A: paper I

Problem A: paper II

and so on.

The papers in the series do not replace each other; each provides new evidence and likely a modified interpretation of “Problem A”. However, many journals decided that such practice was not good, I guess in part because the title was not sufficiently tabloid-like. Maybe this is a way forward?

As for a ‘living article’, that is the job of encyclopedias. As my generation of scientists retire, rather than write a book summarizing our field, many of us are likely to spend our dotage editing Wikipedia. This will change many aspects of science.

Read Full Post »

Two postdoc positions are available in my lab.

Both are part of the larger, European Commission-funded FET-Open programme, ArrestAD, which has recently been funded.

Position 1 aims to characterise heparin-binding proteins in Alzheimer’s disease.

Position 2 aims to develop inhibitors to Golgi sulfotransferases.

For both positions, feel free to contact me directly by e-mail for informal discussions.

Read Full Post »


The ArrestAD team had its kick-off meeting on 5 January 2017 in Paris, the base of our coordinator, Dulcé Papy-Garcia; it was hosted by the APHP in the Espace Scipion. Team members from outside Paris stayed at the nearby Hotel La Demeure, which, though on the Boulevard St Marcel, is nice and quiet.

The kick-off meeting started with a presentation from our coordinator, which provided the backdrop for the day. Science presentations from the participants then followed. These provided an overview of each participant’s field and then summarised their research plans. In a multidisciplinary project one cannot be fully up to speed with the other fields, so we all learned a lot. The more technical parts of these presentations gave us an opportunity to discuss the nuts and bolts of our research plans and how these fitted together. It…

View original post 115 more words

Read Full Post »

A little late this year, but then there are many calendars, so it is surely the start of the New Year for someone, somewhere, today.

Three years ago I made a simple resolution for the New Year, which was not to review for commercial closed-access journals. I developed this in 2015 (and here), when I decided to change my publishing priorities and avoid commercial closed-access journals. This was pretty much already happening, so the change was painless. My two caveats relating to publication are important if you collaborate extensively, simply because many colleagues live in countries where the Impact Factor rules their lives. Thus, when I am not the PI in editorial control of the work, but merely a contributor, I suggest alternatives, but I do not dig my heels in. For my students and postdocs who originate from these many countries, the Learned Society and Open Access alternatives have pretty much solved the problem, in that they have decent impact factors, so career progression will not be impeded.

I have also been experimenting for some time with preprints and, more recently, with Open Data. So the 2016 resolution adds preprints and Open Data: all papers where I am sole PI and therefore have full decision-making power on publication (and also full responsibility for the paper) will first be submitted as preprints, and the data will be fully accessible.

What is interesting is how the publication culture is changing. There are still many wedded to the notion that the “top” journals are those with the highest impact factor, despite the fact that there is no evidence to support this conclusion. Witness the article in Nature reporting the excellent decision by the Gates Foundation, which stipulates that work funded by the Gates Foundation cannot be published in journals that are not properly open access and open data compliant. To paraphrase the Nature headline:

“Shock Horror, Gates stops researchers publishing in Top journals aka ours”.

The implication that a paper in Nature is worth more than one in The Biochemical Journal or PLOS ONE, to name but two of many other good journals, is ludicrous. Only when a paper is read can one decide whether it is excellent, good or poor, and it then takes time (= years) for the full scientific impact to be recognised. There are plenty of papers in ALL journals that are worse than poor; ample evidence is provided by a quick scan of PubPeer. Nature, for one, has a lot to do to put its house in order.

So preprints and Open Data it is. I would encourage all my colleagues to follow suit.

Read Full Post »

I made my first New Year’s resolution on December 31, 2013: to review only for open access and learned society journals. This I have stuck to well, as I noted a year later, for the simple reasons that it makes sense and it frees up my time.

Today I had a request to review a manuscript for Nature Publishing Group’s Scientific Reports, and I realised that I need to clarify my position.

I am on strike. (more…)

Read Full Post »

A question raised at the end of the excellent article by @Amy_Harmon regarding Open Access and preprints is: can biomedical scientists evaluate each other without journals?

The short answer is a resounding yes.  Physical scientists and mathematicians have been posting much of their research as preprints on arXiv for a few decades, with no prejudice to their ability to evaluate the quality of work or of individuals.

The counter-argument raised by many in the biomedical sciences, from scientists to some journal editors, can be boiled down quite simply: we are special and cannot possibly do this.

Various arguments are put forward, from competition (=fear of scooping) to intellectual property. These arguments are heard in many biomedical/biology departments, sometimes leading to quite heated discussions. It is also interesting to note that the defenders of the status quo are not necessarily the older members of the community.

There is a simple answer. Yes you are special, but not in the good sense of the word. (more…)

Read Full Post »

A tweet brought me to a PeerJ blog post on the uptake of open peer review. The post is worth reading. At PeerJ, open review is an option: authors and reviewers can opt in or out, and only if both opt in is the reviewing history of a paper published. One thing that caught my eye was that while 80% of authors opt in, the proportion of papers with open reviews is just 40%, which indicates that reviewers are more reticent. (more…)

Read Full Post »
