This model could help researchers pursue 'good science' while avoiding misleading results

May 17, 2021

Greater attention to methodology could help scientists move their fields forward. (Unsplash/ThisisEngineering RAEng)

New findings suggest that scientists who put more effort toward exploring the best methodologies for their work may be more likely to test hypotheses that push their field forward. 

The findings were detailed in a Nature Human Behaviour study, published Monday, which used complex computer simulations to identify how the scientific community can more effectively encourage studies that are both groundbreaking and accurate. The simulation suggested that the veracity of scientific results improved when researchers spent more time and money on preliminary stages — analyzing the theoretical frameworks that underpin their field of interest before conducting empirical research of their own. Those results align with previous work from Brian Nosek and colleagues showing the benefits of preregistration, the practice of choosing and publicly disseminating one's methods and research questions before gathering data in order to avoid bias and open science to greater scrutiny.

Coherent methodological frameworks help scientists build powerful, unifying theories: When a group of physicists said they'd observed neutrinos exceeding the speed of light, for example, other teams knew to view those results with caution and pursue further testing, because they did not align with the long-established theory of special relativity.

The new study supports the value of preliminary work focused on theory and methodology, indicating that it might help scientists choose hypotheses that are more likely to yield accurate data and push their fields forward. Robust pre-research practices could also make scientists less susceptible to publishing false or sensationalistic results. In other words, when scientists explore and prioritize well-established methodologies, "'good science' can be maintained, meaning a world in which you don't just come up with scientific clickbait," Alexander Stewart, an applied mathematician at the University of St. Andrews who studies cultural evolution and, in particular, the way ideas flow through a society, told The Academic Times. "Instead, you put a lot of effort into carefully constructing your hypotheses and testing them."

Stewart and his co-author, Joshua Plotkin, a professor of natural sciences at the University of Pennsylvania, explored the veracity of scientific studies under various constraints with the help of a computer simulation. The pair began with a deliberately simplified mathematical model of the research process that they could analyze exactly. "Then what we do is build a more realistic model that is too complicated for us to analyze using just math, but that captures more of the reality of the world," Stewart said. "It captures things like noise and uncertainty and different people having different incentives."
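The article does not reproduce the authors' actual model, but a toy agent-based sketch in the same spirit might look like the Python below. Every name and parameter here (the effort fraction, the base false-positive rate, the 50-50 odds that a hypothesis is true) is invented for illustration, not taken from the study:

```python
import random

def run_simulation(n_scientists=100, n_rounds=500,
                   base_false_positive=0.3, seed=0):
    """Toy model of effort vs. output: each scientist spends some
    fraction of their time on preliminary methodological work, which
    costs testing opportunities but lowers their false-positive rate."""
    rng = random.Random(seed)
    # Each agent's strategy: fraction of time devoted to methodology.
    efforts = [rng.random() for _ in range(n_scientists)]
    true_results, false_results = 0, 0

    for _ in range(n_rounds):
        for effort in efforts:
            if rng.random() < effort:
                continue  # round spent on preliminary work, no test run
            hypothesis_is_true = rng.random() < 0.5
            # More preliminary effort shrinks the false-positive rate.
            fp_rate = base_false_positive * (1 - effort)
            if hypothesis_is_true or rng.random() < fp_rate:
                # Only "positive" findings get published in this toy.
                if hypothesis_is_true:
                    true_results += 1
                else:
                    false_results += 1

    total = true_results + false_results
    return true_results / total if total else 0.0

print(f"Share of published findings that are true: {run_simulation():.2f}")
```

This sketch omits the evolutionary dynamics the paper's title alludes to, in which research strategies that succeed spread through the population; it only illustrates the trade-off between testing volume and accuracy.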

Academics, especially those who are still early in their careers, can face intense pressure to receive professional recognition or acclaim as they seek long-term employment. "People perceive that it depends on you getting a paper into a high-profile journal," Stewart said. "And so that provides a pressure to publish — it provides a desire to generate results that will get you into those journals."

Some institutions and countries have even awarded cash bonuses to those who land papers in highly prestigious journals. In 2020, China banned cash rewards to researchers for publishing in journals, amid worries that articles may have been rushed through to earn the authors those bonuses. Citations by Chinese researchers had also soared nearly fourfold over the preceding decade, a pattern suggesting that scientists might have needlessly cited their own or their colleagues' work to boost citation counts and increase their chances of promotion.

The researchers found that the pressure to publish articles can, under different constraints, lead to opposing results: On the one hand, a healthy dose of competition can motivate scientists to exhibit rigor and caution while pursuing meaningful, groundbreaking research. But if that pressure comes without proper skepticism and editorial oversight from academic journals, the scientific community runs the risk of publishing a higher number of questionable studies with faulty findings.

The model offers a new contribution to the field of metascience, the application of scientific practices to understand and improve science as a whole. Some metascience researchers have gone so far as to say that the bulk of published scientific work is misleading or inaccurate. Such was the claim of one explosive and oft-cited 2005 paper titled "Why Most Published Research Findings Are False," by Stanford physician-scientist John P.A. Ioannidis, who most recently received media attention for suggesting that the coronavirus pandemic would be far less deadly than epidemiologists had predicted.
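Ioannidis's argument rests on a short exercise in conditional probability. In his notation, with R the pre-study odds that a tested relationship is true, α the false-positive rate and β the false-negative rate, the positive predictive value of a claimed positive finding is:

```latex
% Positive predictive value of a claimed finding (Ioannidis, 2005)
% R     : pre-study odds that a tested relationship is true
% alpha : type I error rate (false positives)
% beta  : type II error rate (false negatives)
\mathrm{PPV} = \frac{(1 - \beta)\,R}{R - \beta R + \alpha}
```

With a conventional threshold of α = 0.05, modest statistical power and a field in which few tested hypotheses are true (small R), the PPV can fall below one half, which is the precise sense in which most published positive findings would be false.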

Others have pointed to well-known cases of alleged or proven data fraud — for example, the work of Jonathan Pruitt, a behavioral ecologist whose body of research into animal sociality was called into question following concerns that he may have fabricated data, although he has denied those claims. Similarly, the data behind one 2014 study, which claimed that a short conversation with an LGBTQ person could change someone's mind about gay marriage, turned out to have been entirely invented. These high-profile cases suggest that scientific fraud has the potential to skew policy. But Stewart cautioned that blatant attempts to mislead readers are rare and that the majority of cases of "bad science" are unintentional, stemming from poor experimental conditions.

Another subfield of metascience concerns the highly publicized "replication crisis," a fear among some scientists that many or even most experimental findings, especially those that involve humans, cannot be reproduced, even under the same conditions. Because replication is a key piece of the scientific process, many researchers worry that a failure to reproduce findings could erode the public's trust in science.

Stewart acknowledged that there may be some truth to these concerns, while noting that the extent of the issue varies by field and may be most prevalent in the social sciences. His and Plotkin's model suggests that replication can be helpful if it is accompanied by a critical exploration of the theories and hypotheses that undergird a particular result. At best, replication can "interact synergistically with theory to stabilize good science across fields," the researchers noted in their paper. "Replication, at least on its own, is a very hit-and-miss tool for correcting these problems," Stewart said.

Stewart offered an analogy for how science progresses over time: It's like moving through an abstract space of all possible ideas or hypotheses in search of the truth. In a scenario where bad science propagates, researchers bounce randomly from one idea to the next with no clue about where to head next. But Stewart thinks that researchers could gain a stronger sense of direction by spending more time on methodology.

"And so, what's very interesting to me just as a purely intellectual question is, is there a better or worse way to move through the space of ideas that helps you find the best ideas or minimize the bad ideas?" Stewart asked. "If you have some mechanism to say left is better than right, that helps you. But of course, it's not just the choice of left versus right. It's much more complicated."

The study "The natural selection of good science," published May 17 in Nature Human Behaviour, was authored by Alexander J. Stewart, University of St. Andrews; and Joshua B. Plotkin, University of Pennsylvania.
