Has classical peer review really been proven to work? In the first monthly Crastina Column, Joshua Nicholson shares his doubts with us. He is one of the creators of the open access journal The Winnower – "DIY Scientific Publishing" – which is "founded on the principle that all ideas in science should be openly discussed and debated".
Science is said to be evidence-based—that is, one needs evidence to support or refute a hypothesis. It has been proposed and defended that classical peer review is a suitable check on the quality of scientific research. I ask here, as a scientist: where is the evidence?
Classical peer review is the process in which two to four peers (colleagues and competitors) review a paper, looking for clarity, errors, fraud, and often even "potential impact." The process occurs away from the public eye, is single-blinded (i.e., the reviewers are anonymous to the authors), and can take weeks, even years, to complete (Bornmann and Daniel, 2010).
Overall, it is a slow process that is not generally open for evaluation by the scientific community. Is it effective? When editors at various journals artificially introduced errors into studies and sent them for review, the majority went undetected.
- Editors at the British Medical Journal found on average that only 2 out of 9 major errors were detected (Schroter et al., 2008).
- At JAMA only 2 out of 8 errors were noticed (Godlee et al., 1998).
- At the Annals of Emergency Medicine, similarly, 68% of reviewers did not realize the conclusions were not supported by the evidence (Baxt et al., 1998).
Thus, the majority of errors go undetected by classical peer review, supporting the conclusion that it is not effective.
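To make the claim concrete, the detection rates reported in the studies above can be tallied directly (a minimal sketch; the figures are the per-study averages quoted in the text, not the raw data):

```python
# Average error-detection figures reported in the cited studies:
# reviewers caught only a fraction of deliberately introduced errors.
studies = {
    "BMJ (Schroter et al., 2008)": (2, 9),   # ~2 of 9 major errors detected
    "JAMA (Godlee et al., 1998)": (2, 8),    # ~2 of 8 errors detected
}

for name, (detected, introduced) in studies.items():
    rate = detected / introduced
    print(f"{name}: {detected}/{introduced} detected ({rate:.0%})")

# Baxt et al. (1998) report the inverse figure: 68% of reviewers missed
# that the conclusions were unsupported, i.e. roughly a 32% catch rate.
miss_rate = 0.68
print(f"Annals of Emergency Medicine: ~{1 - miss_rate:.0%} caught the flaw")
```

In every study, the detection rate stays well below 50%, which is the basis for the "majority of errors go undetected" conclusion.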
What’s the solution? Open post-publication peer review (i.e., open commentary after publication, with the ability to revise and update publications). Disagree? Then write a review! But be careful: that’s open post-publication peer review working.
- Baxt, W.G., Waeckerle, J.F., Berlin, J.A., Callaham, M.L., 1998. Who Reviews the Reviewers? Feasibility of Using a Fictitious Manuscript to Evaluate Peer Reviewer Performance. Annals of Emergency Medicine 32, 310-317.
- Bornmann, L., Daniel, H.-D., 2010. How long is the peer review process for journal manuscripts? A case study on Angewandte Chemie International Edition. Chimia (Aarau) 64, 72-77.
- Godlee, F., Gale, C.R., Martyn, C.N., 1998. Effect on the quality of peer review of blinding reviewers and asking them to sign their reports: A randomized controlled trial. JAMA 280, 237-240.
- Schroter, S., Black, N., Evans, S., Godlee, F., Osorio, L., Smith, R., 2008. What errors do peer reviewers detect, and does training improve their ability to detect them? Journal of the Royal Society of Medicine 101, 507-514.