Another front in the 'replication crisis': replicated papers less likely to be cited

May 21, 2021

Citations are becoming a sticky wicket in the world of scholarly writing. (Unsplash/Bernd Klutsch)

Academic papers whose findings fail to replicate are cited more often than papers whose findings do replicate in top psychology, economics and general-science journals, according to a new study.

The research, published Friday in Science Advances, shows that academic papers that failed to replicate in influential replication projects were cited, on average, 153 more times than papers that replicated successfully. These findings highlight not only the "replication crisis" in academia — the failure to replicate a large fraction of published experiments — but also suggest a second crisis in the field: Papers less likely to be true are being cited more than papers likely to be true.

Citations matter in academia because the number of citations a work receives is a basic measure of its scholarly impact; academic institutions may use the metric to evaluate researchers in promotion decisions, according to the authors of the Science Advances study. Citations themselves have been a frequent topic of research: One paper showed that too much jargon in an abstract leads to fewer citations, and another demonstrated that academic review articles reduce citations to the underlying research.

But the number of citations papers receive doesn't necessarily say anything about whether the paper's findings are likely to be true or untrue. 

Between 2015 and 2018, three influential replication projects — published in 2015, 2016 and 2018 — tried to replicate the findings of papers in top psychology, economics and general-science journals, and the results were bleak.

These projects found that, in economics, 61% of 18 studies were replicable; in the publications Nature and Science, 62% of 21 studies were replicable; and in psychology, only 39% of the experiments yielded significant findings in the replication study, in contrast to 97% of the original experiments.

Marta Serra-Garcia, a co-author of the paper and an assistant professor at the University of California, San Diego, told The Academic Times that these influential replication projects sparked questions about the implications of nonreplicability over the long term in academia. 

A failure to replicate a paper's findings indicates that the paper is less likely to be true, according to Serra-Garcia. "How does it impact the field?" she said. "Do we keep citing it?"

In light of these questions, Serra-Garcia and her colleague Uri Gneezy set out to determine how the nonreplicated papers fared in terms of citations, compared with papers that did in fact replicate in the replication projects.

The researchers used the findings from the three replication projects, which covered 80 papers in total, to correlate replicability with citations, drawing on Google Scholar citation counts for each paper from its publication date through the end of 2019.

The study found that papers that were less likely to be true — because they failed to replicate — were cited much more than papers that were more likely to be true, even after the replication studies were published. The largest gap appeared in the publications Nature and Science, where nonreplicable papers outpaced replicable papers by roughly 300 citations on average.

This citation gap has persisted over time, and it isn't explained by "negative citations" — those that cite a paper in order to note the failed replication. In fact, after the replication projects were published, only 12% of citations to the nonreplicable papers acknowledged the replication failure.

Perhaps the reason for these results is that nonreplicable studies seem more "interesting" than replicable studies, according to the researchers, meaning they "attract more attention and follow-up work."

"The less-likely-to-be-true papers, in our mind, tended to be the ones that are more interesting and striking in the sense that the findings were surprising from a reader perspective," Serra-Garcia said. "That kind of makes them more cited, because they're more interesting — they're maybe opening a new topic, or a new hypothesis has kind of been highlighted and found some evidence for." 

The researchers also said experts seem very capable of predicting which findings in academic papers will replicate, based on estimates from prediction markets where experts in the field bet on the replication results before a replication study is published.

Given that experts can confidently make these assessments, the researchers questioned why nonreplicable papers make the editorial cut. They suggest that less reliable papers are accepted for publication because review teams face a trade-off: interest versus reliability. Editorial teams may be willing to accept lower reliability of results for papers that are more interesting, thus applying lower standards regarding reproducibility. 

Examining this trade-off helps to partially explain the source of the replication crisis, and the authors hope that drawing attention to the issue will lead to improved practices and fewer failed replications. The study also highlights the tension in academia between interesting and reliable results.

"One of the issues we face, to some extent, is that science is complex," Serra-Garcia said. "But if we want to kind of captivate audiences, we need to make it simple and interesting. And I think we're all battling this trade-off. … When academics want to talk to the broader audiences, we have to simplify our messages."

Looking to the future, the authors of the study call for more research to examine how long this effect will hold and to investigate whether academic disciplines will eventually "internalize a paper's failure to replicate" and whether the impact of nonreplicable papers will be reduced. 

The study "Nonreplicable publications are cited more than replicable ones," published May 21 in Science Advances, was co-authored by Marta Serra-Garcia and Uri Gneezy, University of California, San Diego.
