Ivan Oransky doesn’t sugar-coat his answer when asked about the state of academic peer review: “Things are pretty bad.”
As a distinguished journalist in residence at New York University and co-founder of Retraction Watch – a site that chronicles the growing number of papers being retracted from academic journals – Oransky is better positioned than just about anyone to make such a blunt assessment.
He cites a range of factors contributing to the current state of affairs: the publish-or-perish mentality, chatbot ghostwriting, predatory journals, plagiarism, an overload of papers, a shortage of reviewers, and weak incentives to attract and retain them.
"Things are pretty bad and they have been bad for some time because the incentives are completely misaligned,” Oranksy told FirstPrinciples in a call from his NYU office.
Things are so bad that a new world record was set in 2023: more than 10,000 research papers were retracted from academic journals. In a troubling development, 19 journals closed after being inundated by a barrage of fake research from so-called “paper mills” that churn out the scientific equivalent of clickbait, and one scientist holds the current individual record with 213 retractions to his name.
“The numbers don’t lie: Scientific publishing has a problem, and it’s getting worse,” Oransky and Retraction Watch co-founder Adam Marcus wrote in a recent opinion piece for The Washington Post. “Vigilance against fraudulent or defective research has always been necessary, but in recent years the sheer amount of suspect material has threatened to overwhelm publishers.”
At its best, peer review ensures scientific ideas are scrutinized and challenged by the most discerning experts in a field; at its worst, critics say, it is a broken system that needs its own honest review.
Inside Higher Ed detailed the perils of peer review in a 2022 article, calling the crisis “worse than ever” and examining potential solutions proposed by experts. These included paying reviewers for their time and expertise, making peer review duties a requirement of employment, streamlining the “revise and resubmit” process, and requiring that authors who submit papers also review other authors’ work.
“As a process, peer review theoretically works,” writes JT Torres of Quinnipiac University in an essay for The Conversation. “The question is whether the peer will get the support needed to effectively conduct the review.”
Oransky identifies a basic supply-and-demand problem: too many papers and too few willing and capable reviewers.
“The scale is impossible,” he says. “Let’s do the math: 3 million peer-reviewed papers published every year, multiply those by two-to-three reviewers, times four-to-eight hours per reviewer… that math doesn’t work. Yet we persist in this dreamlike fantasy that somehow peer review is going to ensure quality.”
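Taken at face value, his arithmetic is stark. Using the midpoints of the ranges he cites (an illustrative assumption, not Oransky’s own figure): 3,000,000 papers × 2.5 reviewers × 6 hours comes to roughly 45 million hours of unpaid expert labor every year, the equivalent of more than 20,000 reviewers doing nothing else full time.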
The fragility of the peer review system was spotlighted in the 2013 “Who’s Afraid of Peer Review?” affair, in which science journalist John Bohannon created “a scientific version of Mad Libs”: a nonsense paper that was accepted by 157 of 255 pay-to-publish open-access journals, an acceptance rate of roughly 62 percent.
Retraction Watch is also tracking a troubling new trend: the rise of papers at least partially written, and even reviewed, by AI tools like ChatGPT. Many published papers have inadvertently included telltale AI-generated phrases such as “regenerate response,” which authors and reviewers neglected to delete from copied-and-pasted chatbot text.
But these trends are only symptoms of a deeper disease plaguing peer review, Oransky says, and it feeds on rankings.
“Everybody wants to be ranked number one. That goes for journals, it goes for universities, it goes for researchers, and it goes for governments. To be ranked higher, what you really need is citations. To be cited, you have to be published.”
“Publish or perish” is no longer just the mantra of young tenure-seeking academics; it has evolved into the reality of an increasingly cutthroat, profit-driven industry.
“Citations are gamed in increasingly cunning ways,” Oransky writes with co-authors in the medical journal The BMJ. “Authors and editors create citation rings and cartels. Companies pounce on expired domains to hijack indexed journals and take their names, fooling unsuspecting researchers. Or researchers who are well aware of the game use this vulnerability to publish papers that cite their work.”
There are even brokerages through which buyers can pay their way onto the author list of a scientific paper they had nothing to do with.
Despite these damning revelations, Oransky still believes in the process of peer review and can envision ways to make it better. These include granting regulatory bodies “more teeth” to prevent and punish abuse, eliminating the pay-to-publish model, encouraging journals to publish peer reviews to increase transparency, and decoupling academic rankings from citation counts.
The entire peer review process needs the same impartial scrutiny and constructive criticism it aims to provide.
“Science is still the very best way to develop and grow knowledge and get closer to the truth,” Oransky concludes. “The scientific method is still one of humankind’s greatest inventions. But we need to remember that scientists are human too. Let’s call it what it is: a highly porous but useful filter.”
Further reading:
“What’s wrong with peer review?” by Nidhi Subbaraman in The Wall Street Journal
“Peer review isn’t perfect − I know because I teach others how to do it and I’ve seen firsthand how it comes up short” by JT Torres in The Conversation
“The Peer-Review Crisis” by Colleen Flaherty in Inside Higher Ed