Edited by Susan T. Fiske, Princeton University, Princeton, NJ, and approved November 18, 2014 (received for review September 21, 2014)
Significance
Peer review is an institution of enormous importance for the careers of scientists and the content of published science. The decisions of gatekeepers—editors and peer reviewers—legitimize scientific findings, distribute professional rewards, and influence future research. However, appropriate data to gauge the quality of gatekeeper decision-making in science have rarely been made publicly available. Our research tracks the popularity of rejected and accepted manuscripts at three elite medical journals. We found that editors and reviewers generally made good decisions regarding which manuscripts to promote and reject. Surprisingly, however, many highly cited articles were rejected. Our research suggests that evaluative strategies that increase the mean quality of published science may also increase the risk of rejecting unconventional or outstanding work.
Abstract
Peer review is the main institution responsible for the evaluation and gestation of scientific research. Although peer review is widely seen as vital to scientific evaluation, anecdotal evidence abounds of gatekeeping mistakes in leading journals, such as rejecting seminal contributions or accepting mediocre submissions. Systematic evidence regarding the effectiveness—or lack thereof—of scientific gatekeeping is scant, largely because access to journals' rejected manuscripts is rarely available. Using a dataset of 1,008 manuscripts submitted to three elite medical journals, we show differences in citation outcomes for articles that received different appraisals from editors and peer reviewers. Among rejected articles, desk-rejected manuscripts, deemed unworthy of peer review by editors, received fewer citations than those sent out for review. Among both rejected and accepted articles, manuscripts with lower scores from peer reviewers received relatively fewer citations when they were eventually published. However, hindsight reveals numerous questionable gatekeeping decisions. Our three focal journals rejected many highly cited manuscripts, including the 14 most popular of the 808 articles in our dataset that were eventually published (roughly the top 2 percent). Of those 14 articles, 12 were desk-rejected. This finding raises the concern that peer review may be ill-suited to recognizing and gestating the most impactful ideas and research. Despite this, our results show that, in these case studies, peer review on the whole added value: editors and peer reviewers generally—but not always—made good decisions regarding the identification and promotion of quality in scientific manuscripts.
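As a quick check, the "roughly the top 2 percent" figure follows directly from the two counts reported above (the 14 most-cited manuscripts among the 808 eventually published articles):

\[
\frac{14}{808} \approx 0.0173 \approx 1.7\%
\]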
Footnotes
- To whom correspondence should be addressed. Email: ksiler@gmail.com.
- Author contributions: K.S., K.L., and L.B. designed research; K.S., K.L., and L.B. performed research; K.S. analyzed data; and K.S. wrote the paper.
- The authors declare no conflict of interest.
- This article is a PNAS Direct Submission.
- Data deposition: Our data are confidential and securely stored at the University of California, San Francisco.
- This article contains supporting information online at www.pnas.org/lookup/suppl/doi:10.1073/pnas.1418218112/-/DCSupplemental.