How toxic incentives are fueling an ‘epidemic’ of cheating in scholarly research

Flawed incentive structures for publishing scholarly work may be eroding faculty and research integrity across the sector, contributing to an “epidemic” of dishonesty and even cheating, says Phillip Magness, senior fellow at the Independent Institute, a think tank, and co-author of “Cracks in the Ivory Tower: The Moral Mess of Higher Education.”

“Academia is an industry,” he says. “Even though we have private universities, it’s really kind of a private-public entity with hundreds of billions of dollars tied up in it. People—whether from the left or right—are going to exploit those bad incentives on their own behalf.”

The higher education sector accounted for $91.4 billion of U.S. research and development spending in 2022, according to the National Science Foundation. Revenue related to academic publishing is estimated to top $25 billion annually, according to PublishingState.com. But scientists usually go home empty-handed if their research produces null or inconclusive results.

“We call this the ‘file drawer effect,’” says Julia Strand, professor and chair of Carleton College’s psychology department. “It’s the idea that findings that are not statistically significant get put in the file drawer and nobody ever sees them.”

As a result, scientists may be tempted to manipulate their data until it produces attractive results. “P-hacking” and other notorious forms of data manipulation are frowned upon, but scientists can still obscure whether their findings were genuine or a stroke of luck. For example, a researcher might “massage” the data by excluding participants who seemed not to be taking the study seriously and whose results happened to contradict the expected outcome.

“Although we like to think of science as a purely objective and unbiased system,” Strand says, “the fact is that scientists have a lot of flexibility in how we implement our studies and analyze our results.”

Aside from potential material success, the “publish or perish” cycle is directly tied to an academic’s reputation, Magness says. “Rewards in academia—whether getting hired, promoted or published—come from output and productivity.”

A litany of integrity issues arises when researchers focus on sensational results, reputation and high output rather than rigor and quality. The number of retracted scientific papers hit a record high last year as paper mills churned out subpar or fake research littered with medical misinformation.

The incentive structure has also fueled an over-reliance on the “Journal Impact Factor,” which ranks journals by how often their papers are cited rather than by the quality of the work, Strand says. “It is an unfortunate and imprecise measure,” she adds. “Papers cited a lot don’t necessarily mean that they are rigorously done or likely to be replicated. It’s just a measure of it being talked about.”

Shoddy work may gain traction when incentive structures fail to promote academic rigor. Lucina Uddin, lead plaintiff and neuroscience professor at the University of California, Los Angeles, has sued six academic publishers partly for not paying scholars for peer review. Elsevier, John Wiley & Sons, Sage Publications, Springer Nature, Taylor & Francis and Wolters Kluwer are estimated to control 50% of the market, according to the Norwegian peer-reviewed journal Tidsskriftet.

“[I]t has become increasingly difficult to coerce busy scholars into providing their valuable labor for nothing,” Uddin wrote in her class-action antitrust lawsuit. “Submitted manuscripts sit awaiting peer review for many months or even years.”

Wiley, one of the lawsuit’s defendants, said the claims “are without merit,” while other publishers declined to comment or did not respond to requests for comment, Reuters reports.

However, sloppy peer review may also stem in part from the growing “echo chamber” of higher education, Magness says. “If people are seeing results that they expect and agree with, they have no incentive to dig any deeper and check them against data sets,” he adds. “To put it another way, cheating happens because people can get away with it.”

Tune back in Thursday to read how concerns surrounding academic integrity are hitting the highest rungs of university leadership, fueling higher education’s political divisiveness, and how the sector can strengthen its research guardrails.

Alcino Donadel
Alcino Donadel is a UB staff writer and first-generation journalism graduate from the University of Florida. He has triple citizenship from the U.S., Ecuador and Brazil.
