Has classical peer review really been proven to work? In the first monthly Crastina Column, Joshua Nicholson shares his doubts with us. He is one of the creators of the open access journal The Winnower – “DIY Scientific Publishing” – which is “founded on the principle that all ideas in science should be openly discussed and debated”.
Science is said to be evidence-based: one needs evidence to support or refute a hypothesis. It has been proposed and defended that classical peer review is a suitable check on the quality of scientific research. I ask here, as a scientist, where is the evidence?
Classical peer review is the process in which two to four peers (colleagues and competitors) review a paper, looking for clarity, errors, fraud, and often even “potential impact.” The process occurs away from the public eye, is single-blind (i.e. the reviewers are anonymous to the authors), and can take weeks, even years, to complete (Bornmann and Daniel, 2010).
Overall, it is a slow process that is not generally open for evaluation by the scientific community. Is it effective? When editors at various journals artificially introduced errors into studies and sent them for review, the majority went undetected.
- Editors at the British Medical Journal found on average that only 2 out of 9 major errors were detected (Schroter et al., 2008).
- At JAMA only 2 out of 8 errors were noticed (Godlee et al., 1998).
- At the Annals of Emergency Medicine, similarly, 68% of reviewers did not realize the conclusions were not supported by the evidence (Baxt et al., 1998).
Thus, the majority of errors go undetected by classical peer review, supporting the conclusion that it is not effective.
What’s the solution? Open post-publication peer review (i.e. open commentary after publication, with the ability to revise and update publications). Disagree? Then write a review! But be careful, because that’s open post-publication peer review working.
- Baxt, W.G., Waeckerle, J.F., Berlin, J.A., Callaham, M.L., 1998. Who Reviews the Reviewers? Feasibility of Using a Fictitious Manuscript to Evaluate Peer Reviewer Performance. Annals of Emergency Medicine 32, 310-317.
- Bornmann, L., Daniel, H.-D., 2010. How long is the peer review process for journal manuscripts? A case study on Angewandte Chemie International Edition. Chimia (Aarau) 64, 72-77.
- Godlee, F., Gale, C.R., Martyn, C.N., 1998. Effect on the quality of peer review of blinding reviewers and asking them to sign their reports: A randomized controlled trial. JAMA 280, 237-240.
- Schroter, S., Black, N., Evans, S., Godlee, F., Osorio, L., Smith, R., 2008. What errors do peer reviewers detect, and does training improve their ability to detect them? Journal of the Royal Society of Medicine 101, 507-514.