Scientific 'facts' could be false
Researchers model how "publication bias" does — or doesn't — affect "canonization" of facts in science.
December 20, 2016

John Adams famously pronounced in a courtroom in 1770 that "facts are stubborn things" and should not be altered by "our wishes, our inclinations or the dictates of our passion." Facts, however stubborn, must pass through the trials of human perception before being acknowledged — or "canonized" — as facts.
Carl Bergstrom, a professor of biology at the University of Washington, has used mathematical modeling to investigate the practice of science and how science could be shaped by the biases and incentives inherent to human institutions.
"Science is a process of revealing facts through experimentation, but science is also a human endeavor built on human institutions," Bergstrom said. "Scientists seek status and respond to incentives just like anyone else does. So, it is worth asking — with precise, answerable questions — if, when and how these incentives affect the practice of science."
In an article published Dec. 20 in the journal eLife, Bergstrom and co-authors present a mathematical model that explores whether "publication bias" — the tendency of journals to publish mostly positive experimental results — influences how scientists canonize facts. Their results offer a warning that sharing positive results comes with the risk that a false claim could be canonized as fact. On the other hand, the findings also offer hope by suggesting that simple changes to publication practices can minimize the risk of false canonization.
These issues have become particularly relevant over the past decade, as prominent articles have questioned the reproducibility of scientific experiments — a hallmark of validity for discoveries made using the scientific method.
"We're modeling the chances of 'false canonization' of facts on lower levels of the scientific method," said Bergstrom. "Evolution happens and explains the diversity of life. Climate change is real. We wanted to model if publication bias increases the risk of false canonization at the lowest levels of fact acquisition."
Bergstrom cited a historical example of false canonization in science: Biologists once postulated that bacteria caused stomach ulcers, but in the 1950s, gastroenterologist E.D. Palmer reported evidence that bacteria could not survive in the human gut.
"These findings, supported by the efficacy of antacids, supported the alternative 'chemical theory of ulcer development,' which was subsequently canonized," Bergstrom said. "The problem was that Palmer was using experimental protocols that would not have detected Helicobacter pylori, the bacteria that we know today causes ulcers. It took about a half-century to correct this falsehood."
While the idea of false canonization itself may cause dyspepsia, Bergstrom and his team — lead author Silas Nissen of the Niels Bohr Institute in Denmark and co-authors Kevin Gross of North Carolina State University and University of Washington undergraduate student Tali Magidson — set out to model the risks of false canonization given that scientists have incentives to publish only their best, positive results. So-called "negative results," which reach no clear conclusion or simply fail to affirm a hypothesis, are much less likely to be published in peer-reviewed journals.
"The net effect of publication bias is that negative results are less likely to be seen, read and processed by scientific peers," Bergstrom said. "Is this misleading the canonization process?"
Nissen explained, "To study the effect of this publication bias, we used computer simulations to model how a hypothesis comes to be perceived as true or false after repeated experiments, based on the scientific articles previously published on that hypothesis."
The probability that a hypothesis is true or false moves up and down a "ladder" as more and more experiments relating to the hypothesis are published.
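This belief "ladder" can be sketched as a simple simulation. The sketch below is not the authors' exact model: the test sensitivity, false-positive rate, canonization threshold and publication probabilities are illustrative assumptions, and the community is assumed to update its belief naively by Bayes' rule, without correcting for the fact that negative results are filtered out before publication.

```python
import random

def simulate(hypothesis_true, beta, rng, s_t=0.8, s_f=0.2, tau=0.999):
    """Climb the belief ladder: update the community's probability q that
    a hypothesis is true as experimental results are published one by one.
    Positive results are always published; negative ones only with
    probability beta. Returns True if the claim is canonized (q >= tau),
    False if it is rejected (q <= 1 - tau). All parameters are assumed."""
    q = 0.5  # prior belief that the hypothesis is true
    while 1 - tau < q < tau:
        positive = rng.random() < (s_t if hypothesis_true else s_f)
        if not positive and rng.random() >= beta:
            continue  # negative result stays in the lab notebook
        # Naive Bayesian update on the published result (no bias correction)
        if positive:
            q = q * s_t / (q * s_t + (1 - q) * s_f)
        else:
            q = q * (1 - s_t) / (q * (1 - s_t) + (1 - q) * (1 - s_f))
    return q >= tau

def false_canonization_rate(beta, trials=2000, seed=1):
    """Fraction of FALSE hypotheses that nonetheless get canonized as fact
    when negative results are published with probability beta."""
    rng = random.Random(seed)
    return sum(simulate(False, beta, rng) for _ in range(trials)) / trials
```

Under these assumed parameters, the sketch reproduces the qualitative result described below: when almost no negative results are published, the published record for a false hypothesis is dominated by chance positives and belief climbs to canonization, whereas publishing negatives at a substantially higher rate lets the ladder descend and the false claim be rejected.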
Necessary negative results
"It turns out that requiring more evidence before canonizing a claim as fact did not help," Bergstrom said. "Instead, our model showed that you need to publish more negative results — at least more than we probably are now."
Since most negative results live out their obscurity in the pages of laboratory notebooks, it is difficult to quantify what fraction of them are ever published.
"Our model shows that if not enough negative results are published, then hypotheses that are actually false could be perceived as true facts. If, however, you increase the rate at which negative results are published, to on the order of 20 to 30 percent, it becomes easier to distinguish false hypotheses from true facts," Nissen said.
The researchers concluded that if scientific journals largely prefer positive results and reject negative ones, each published positive result strengthens the belief that a hypothesis is true, until the hypothesis is eventually declared a fact. Their calculations show that journals must publish negative results at a sufficient rate to ensure that false hypotheses do not end up being regarded as true facts.
"Negative results are probably published at different rates in other fields of science," Bergstrom said. "New options today, such as self-publishing papers online and the rise of journals that accept some negative results, may affect this. In general, we need to share negative results more than we are doing today."
Their model also indicated that negative results had the biggest impact as a claim approached the point of canonization. That finding may offer scientists an easy way to prevent false canonization.
"By more closely scrutinizing claims as they achieve broader acceptance, we could identify false claims and keep them from being canonized," Bergstrom said.
To Bergstrom, the model raises valid questions about how scientists choose to publish and share their findings — both positive and negative. He hopes these findings pave the way for more detailed exploration of bias in scientific institutions, including the effects of funding sources and incentives on different fields of science.
"As a community, we tend to say, 'Damn it, this didn't work, and I'm not going to write it up,'" Bergstrom said. "I'd like scientists to reconsider that tendency, because science is only efficient if we publish a reasonable fraction of our negative findings."