The so-called “sting” by Science on Open Access journals has attracted a lot of criticism, some of it here and here. For me the best response has been Michael Eisen’s post, which uses satire to show that Science was well wide of the mark.
As the discussion on the subject dies down, I have had a few thoughts. First, what might have been Science’s motive? Was it simply to tickle their readers, to undermine Open Access, or to distinguish Science from other journals? Put bluntly, the “sting” was completely lacking in rigour, in both its method and its reporting of the data. There have always been vanity publishers, and Jeffrey Beall keeps a list of these here.
It is obvious that you can get anything published through a vanity publisher, so any test of the peer review system should not include such journals. My conclusion is that Science was trying, in part, to shore up the argument for maintaining journals and the status quo.
Note that there is a trend here: great rhetoric and bad practice. If you look at a series of editorials in Nature on allied subjects (peer review, reproducibility and so on; see image), it isn’t difficult to spot the chasm between rhetoric and practice. For another view on this chasm, see David Vaux’s excellent guest post at Retraction Watch, where he describes the frustrations with Nature that led him to retract his News and Views article because the journal would not take action over a paper that was clearly wrong.
So how fares peer review? The same as ever: very unevenly. This summer saw yet another paper on so-called “stripy nanoparticles” published in ACS Nano. Those following this saga will look at the data and sigh.
Peer review prior to publication is only the start. What really interests us is how the new knowledge communicated in a paper alters our view of the world. This means that, like education, peer review is a continuous process. PubPeer, which I have mentioned before (here and here), provides a route for post-publication peer review. Activity at PubPeer is increasing, which is good to see. In my own field of fibroblast growth factors, six papers have attracted comments.
These papers are in Endocrinology 2011; two in Am J Pathol, 2003 and 2010; Arterioscler Thromb Vasc Biol 2003; J Clin Invest 2002; and J Biol Chem 2002.
I cannot find any papers with comments relating to heparan sulfate or proteoglycans. I hope this stays that way; the field is extremely collegial, open and scientifically functional. My other area of activity, materials, does not fare so well. Indeed, the two papers attracting the highest number of comments from the greatest number of peers are the now infamous Cell paper on stem cells (42 comments), which was accepted within 3 days of submission and then subject to a “correction” due to “image problems”, and a materials paper on a gold nanoparticle-based plasmonic sensor (26 comments). The latter has had no input from the authors or from the journal.
Such a lack of response is not exceptional: the six FGF papers have yet to elicit a response from authors or journals. However, I for one would definitely like to know. These papers are exciting, but is the work right, or are there in fact major underlying problems? More and more people are using PubPeer as a “filter” for papers. These and other papers are on my “at risk” list; I would not spend time reading them or incorporating the new knowledge they communicate until there is a response.
So while PubPeer seems to work well at the level of the scientific community engaging in post-publication peer review, those responsible for publishing, the authors and journals, often do not want to engage (this is not always the case, see here). It would be nice to see some degree of follow-through from journals and editors, rather than their sheltering in the bunker to protect commercial interests. In this light, I would offer a quiet word of advice to editors at all journals, whether professional, as at Science and Nature, or amateur, as at many specialist journals where the editor has a day job too: an editor has a major responsibility to the readership and so to the community. At present the community is extremely cynical regarding the quality of journals. Metrics from impact factors to eigenvalues are a source of mirth (sometimes of the gallows flavour), not earnest discussion. So do exactly what it says on the box marked “Editor”. You will earn respect, and the community’s perception of the quality of your journal will rise.
That would be far more interesting than rhetoric that is a long way from practice and poorly conceived attempts to investigate flaws in peer review.