Posts Tagged ‘Open Access’

A recent article on bioRxiv, “Amending published articles: time to rethink retractions and corrections?”, puts forward ideas on how we might change the way we deal with retractions and corrections.

I would like to thank the authors for a most useful article. There is a lot I do not agree with, and that is surely the point of writing the article. Stimulating discussion is how we arrive at some consensus, though such consensus can only be temporary, as continued change is inevitable.

Note that the article represents the views of the authors, not necessarily those of COPE. Moreover, COPE has guidelines, but bringing breaches of these to a journal’s attention generally results in a shrug of the shoulders (e.g., here). So while the COPE guidelines are useful, they have no teeth. Perhaps this is a good thing, because science communication continues to change, and it is more than likely that very soon we will have a world of science operating under two very different regimes.

Regime 1: the content of papers is key; the place of publication is unimportant. This is the current direction of much of the English-speaking world.

Regime 2: JIF and glamour journals rule the roost, which is the status quo in some European countries and much of Asia.

However, over time, budget constraints and financial pressure from large funders, some with international reach (e.g., the Wellcome Trust and the Bill and Melinda Gates Foundation), mean that Regime 2 is likely to falter.


Now onto the article itself.

At the centre of the article is the statement “A lack of willingness to engage in proper post-publication correction and amendment…” (page 3, start of the section “A fundamental underlying problem”). This paragraph then states that retraction is tainted, claiming that retractions for honest reasons (we got something wrong) are confounded by dishonest ones (we made up the data). The authors cite no evidence for this, though relevant evidence is documented, for example, on Retraction Watch.

The evidence I am aware of does not agree.

Honest retractions, tagged “Doing the right thing” on Retraction Watch, do not tar authors with the brush of fraud. There is an upwelling of sympathy from the community, because we are all too aware how easy it is to get something wrong. On occasion the authors publish how they tracked down the problem; when a reader alerts the authors to a problem, the reader is more often than not thanked. This only enhances the authors’ reputation in the community for the rigour of their research, and pushes their future papers higher up a lab’s reading list.

Dishonest retractions are often marked by a long period of obfuscation by the journal AND the authors. For example, look at the delay in The Lancet retracting Wakefield’s fraudulent claims of a link between the MMR vaccine and autism, the still-standing paper on arsenate DNA, and how Nature dealt with the STAP stem cell papers (e.g., here). We often see corrections (sometimes colloquially termed “mega-corrections”, because of the number of figures involved) of data that are fraudulent, which allow authors to find or produce the ‘right’ figure. This I do not agree with.

Retraction means what it says. That it can be a pejorative is down to the fact that most retractions are due to dishonesty, though as I note above, the honest are not tarred with the same brush by the community. If we substitute a neutral word, it too will gain a pejorative flavour, because the underlying problem will remain: most retractions are due to dishonesty. So using a different word will not solve the problem perceived by the authors. Alerting readers that there is ‘concern’ and an investigation under way is fine, and PubPeer allows this in a most transparent manner: one can read the concerns of readers, look at the evidence and make up one’s mind. Critical evaluation of the evidence is our job; putting in some sort of filter is not going to do science any good. Note that obfuscation by journals and authors is the reason for PubPeer’s popularity.

The solution to the problem is simple and lies in a different direction: open data. It is still possible to be dishonest in the context of open data, but it is more difficult, and much easier to spot.

The argument is also made that correcting the literature and investigating fraud should be separate. How can they be? The paper is, after all, integral both to the evidence that fraud has or has not occurred and to the prime motive (paper = promotion/grant). Open data means that individuals can use their critical and analytical faculties to make up their own minds, and a platform for communicating one’s analyses, such as PubPeer, provides the means to access more brains, which is always beneficial. An investigatory committee will likely need the analyses performed by the community.

Of course we all ‘make up data’ every day in the sense of model building, hypothesis generation and generally shooting the bull. We don’t publish this. So publishing is the key step in scientific fraud, since one is communicating fiction as factual observation. I think the argument made in the paper relies on the idea that the ‘literature’ is somehow distinct from the rest of the process of science. It isn’t, never was and never will be. Communication is at the heart of science.

So what sort of amendments should be allowed? Errata and corrigenda (both make sense, as production can result in errors), but no more. The more categories we have, the more game-playing will occur by dishonest journals and authors, and we will be none the wiser.

That leaves version control. Should we embrace this? To me the answer is not entirely clear.

Preprint to print: yes, it is interesting for readers to see the genesis of the work.

Data: these will have accession numbers/DOIs. In curated databases there is clear version control and a trail, though the low investment in database curation is lamentable and we could do much, much better. For some reason such curation is not regarded as ‘cutting edge’, ‘innovative’, etc. This is a problem for the community to resolve. In the ‘wild’ (other open data), full version control is less likely and will be patchy.

There was a time when a researcher working on “Problem A” would on occasion provide a simple title for a succession of papers:

Problem A: paper I

Problem A: paper II

and so on.

The papers in such a series do not replace each other; each provides new evidence and likely a modified interpretation of “Problem A”. However, many journals decided that this practice was not good, I guess in part because the titles were not sufficiently tabloid-like. Maybe this is a way forward?

As for a ‘living article’, that is the job of encyclopedias. As my generation of scientists retire, rather than write a book summarizing our field, many of us are likely to spend our dotage editing Wikipedia. This will change many aspects of science.

Read Full Post »

A little late this year, but then there are many calendars, so it is surely the start of the New Year for someone, somewhere, today.

Three years ago I made a simple resolution for the New Year, which was not to review for commercial closed-access journals. I developed this in 2015 (and here) when I decided to change my publishing priorities and avoid commercial closed-access journals. This was pretty much already happening, so the change was painless. My two caveats relating to publication are important if you collaborate extensively, simply because many colleagues live in countries where the Impact Factor rules their lives. Thus, when I am not the PI in editorial control of the work, but merely a contributor, I suggest alternatives, but I do not dig my heels in. For my students and postdocs who originate from these countries, the learned society and Open Access alternatives have pretty much solved the problem, in that they have decent impact factors, so their career progression will not be impeded.

I have been experimenting with preprints for some time, and now with Open Data as well. So the 2016 resolution adds preprints and Open Data: all papers where I am sole PI and therefore have full decision-making power on publication (and also full responsibility for the paper) will first be submitted as preprints, and their data will be fully accessible.

What is interesting is the developing change in publication culture. There are still many wedded to the notion that the “top” journals are those with the highest impact factor, despite the fact that there is no evidence to support this conclusion. Witness the article in Nature reporting the excellent decision by the Gates Foundation, which stipulates that work funded by the Gates Foundation cannot be published in journals that are not properly open access and open data compliant. To paraphrase the Nature headline:

“Shock Horror, Gates stops researchers publishing in Top journals aka ours”.

The implication that a paper in Nature is worth more than one in The Biochemical Journal or PLOS ONE, to name but two of many other good journals, is ludicrous. Only when the paper is read can one decide whether it is excellent, good or poor, and then it takes time (= years) for the full scientific impact to be recognised. There are plenty of papers in ALL journals that are worse than poor; ample evidence is provided by a quick scan of PubPeer. Nature, for one, has a lot to do to put its house in order.

So preprints and Open Data it is. I would encourage all my colleagues to follow suit.

Read Full Post »

I made my first New Year’s resolution on December 31, 2013: to undertake reviews only for open access and learned society journals. This I have stuck to well, as I noted a year later, for the simple reasons that it makes sense and it frees up my time.

Today I had a request to review a manuscript for Nature Publishing Group’s Scientific Reports, and I realised that I need to clarify my position.

I am on strike.

Read Full Post »

This post assembles various comments I have posted and other thoughts on sci-hub and access to the scientific literature. It finishes with some ideas about what we should consider keeping and some of my better experiences, as a consumer and producer of the scientific literature.

Some time between clay tablet and the PDF

Once upon a time, manuscripts were hand-written, double-spaced (the fountain pen as ever outperforming all other tools), graphs were transferred to tracing paper using a Rotring pen, and Letraset (also alive and well) was used for symbols.

Read Full Post »

Much has been written about the peer review process and its flaws. Richard Smith, a former editor of the British Medical Journal, has stated that since peer review doesn’t work, we shouldn’t do it.

I have recently come across another example of the flaws in peer review. I reviewed a manuscript last year and identified what I believed to be technical problems and suggested at least major revision. The other two reviewers agreed; the three of us had homed in independently on the same technical issues.

Move forward a year, and the paper is published in another (equally “prestigious”) journal, with no changes.

So I will now amend my New Year resolution (still holding firm) from 2014 and 2015.

In addition to reviewing only for open access journals, I will from now on review only for journals where the review is open and published, or where I am free to publish the review. That, at least, will avoid the ethical tension between participating in anonymous peer review and then wanting to publish the critique when nothing in the paper has changed.

Why Groundhog Day? Because this is not the first time I have had this experience.

Read Full Post »

I went to a most useful talk this morning by Stephen Carlton (@LivUniOA) on the University repository. I had whinged about this as being nearly unusable, but then I had jumped in on an early version.

The repository is now usable, though quirky. Here are a few lessons from my efforts to update my entries.

Read Full Post »

Bearing in mind that there are lies, damned lies and publication metrics (apologies to Benjamin Disraeli and Mark Twain), publishing in Elsevier journals may not be good for the health of your future citations.

Read Full Post »