
Peer Review @ OAI9

Last week I attended the OAI9 conference in Geneva. Great weather, great discussions and a great programme. The Thursday afternoon session on “Quality Assurance” (read: peer review) was of particular interest to me. Since I actually took notes (something I usually don't), I thought I'd give them a bit more form and put them to use here. Enjoy.

PLOS

First up, Damian Pattinson from PLOS discussed “Managing Peer Review at Scale” with PLOS One. Scale is certainly the word – PLOS One publishes over 600 new research articles every week and holds contributions from 450,000 authors that have passed through the hands of 80,000 reviewers. Damian's talk offered a look at what that scale reveals about peer review: among some well-rehearsed arguments about bias, negativity, ad hominem attacks and varied levels of expertise/ignorance, one statistic made clear just how much the level of detail varies between reviewers – PLOS One peer reviews range from 10 words to 10 pages. The problem is that science is becoming too complicated for its reviewers – research is increasingly cross-disciplinary, but reviewers are not. Allied to this is the reliance on the wrong-headed, print-culture question “is this paper suitable to be published in this journal?”, when a better question would be “is this paper of value to any particular reader?” Such observations underpinned PLOS's then-innovative strategy of asking reviewers to assess submissions for their scientific rigour rather than the novelty or scope of their topic and results. PLOS advocates a pretty sensible four-point plan for peer review:
  1. Open it up
  2. Share early
  3. Channel it
  4. Give credit
Finally, and perhaps most interestingly, PLOS is committed to implementing open peer review at PLOS One this year – “hopefully”.


Peerage of Science

Next, Janne-Tuomas Seppänen introduced us to Peerage of Science (PoS), a much smaller operation than PLOS (but what isn't?) which focusses firmly on peer review. PoS tackles two problems: the highly variable quality of reviews, and the duplication of effort as rejected papers “slide down the journal prestige ladder” until they find a home. PoS addresses these by inviting authors to submit directly to it. Any vetted, qualified and non-affiliated peer reviewer may then review the submission. These reviews are themselves open to review, and the whole process is visible to subscribing journals, which can then make an offer to publish. A journal pays PoS for this service once the author accepts its offer. Janne then unveiled the site's new feature, “My Peerage of Science”, which collects and displays metrics on an individual reviewer's performance. Janne admitted that PoS is still small, but it has shown that its model works within specific subject areas and is now ready to grow. This was a first introduction to PoS for many at the conference, and it's fair to say it was a hit, with many impressed by its innovative model.

Publons

Finally, Andrew Preston introduced Publons, whose mission is to speed up science by improving peer review, in particular by addressing the problem of incentives. For authors and publishers the incentives are clear: authors must publish, and publishers want profit (hence the growth in submissions, along with a monetary reason to accept more of them). But where is the incentive for reviewers, outside of altruism or professional pride? Publons' answer is to turn peer review into a measurable research output. It collects information about peer reviews from reviewers and publishers to produce reviewer profiles detailing verified peer review contributions that researchers can add to their CVs. It's a great idea – and already pretty successful, with more than 38,000 reviewers, over 100,000 reviews and more than 6,000 journals involved. This is helped by partnerships with publishers to encourage reviewers to share their reviews. A good stat from Publons' analysis: open and hidden reviews differ, with hidden reviews tending to be about 15% longer but also more emotive (more positive/negative).


Questions

This was a great overview of some of the more innovative efforts to reform peer review. In the end, a few questions remained for me. Firstly, PLOS's argument – that once papers meet a sufficient standard of methodology and execution, their relative merits of originality and importance should not be decided by editors; that if we can answer yes to the question “is this paper of value to any particular reader?”, the paper should become part of the literature – troubles me a little. Not to say I disagree, but I have reservations, especially since I cannot think of a paper to which we could possibly (with certainty) answer “no”. Yet at a time of exponentially increasing research output, we need more filters, not fewer. Gold-standard journals – as brands connoting quality and importance – are of course not infallible (just as Decca said no to the Beatles, Nature rejected graphene), no-one can say today what people tomorrow will find important, and many negative results or reruns of experiments still go unpublished. Yet we can take steps to address such problems without ditching editorial selection wholesale. I find the PLOS bucket method interesting, but I think it remains an open question whether it really aids the discoverability of key scientific results or hinders it by further diluting the record. A research project, perhaps? (If work has been done to study this, I'd be grateful to be pointed to it!)

Next – although the talks were all predicated on how broken the peer review system is, the evidence for this was largely anecdotal. There is definitely scope for large research projects to work with organisations like PLOS and Publons to interrogate their peer review data more systematically, using their scale to diagnose the problems more scientifically. This leads back to what I think is the central irony of peer review: it is the doorman of science, expected to police the experimental method and keep the scientific record pure, but is itself a woefully under-researched blunt instrument of almost (double?) blind faith.

Finally, given all the talk of a crisis of incentives, the session brought home just how wonderful a thing academia is. For although incentives of self-interest are lacking in peer review, the system nonetheless functions (sub-optimally perhaps, but still!), built on people donating their time to improve the work of their peers and to foster progress in their disciplines. Reflecting on that fact gives hope even to a cynic like me.

As I say, a great session with lots to think about…
Video and slides are available on the CERN website (click through the timetable).
