In late 2013, Adam Mann awaited an important announcement from a South Dakota mineshaft. As Wired’s space science and physics reporter, Mann was following the Large Underground Xenon (LUX) experiment—at the time, the world’s most sensitive dark matter particle detector—housed in that mine. LUX had been collecting data for three months, hoping to detect signals from scientists’ leading theoretical candidate for what might make up dark matter.
Finally, the results were out: LUX found … nothing. No signals that looked like dark matter particles. It was a null result: The researchers found no evidence in support of their theoretical expectations. “It’s always a little disappointing,” recalls Mann, now a freelance writer. “It’s hard to convince an editor to cover null results.” And most of the time, he adds, they may not be important to cover.
But this wasn’t one of those times. Mann’s coverage explained that the results—though not what scientists hoped for—could refine theoretical expectations about what these subatomic particles should look like. The absence of the predicted signals in exceptionally clean data boosted confidence in the detector’s sensitivity. And the outcome challenged previous experiments that had claimed to detect low-mass dark matter particles.
While null results can feel like non-news, they sometimes reveal valuable stories that show the non-linear progress of science or that carry important implications for readers, such as when a treatment that was thought to be helpful isn’t, or when a local government is investing money in an intervention that has no effect. And though finding null results and gauging their newsworthiness can involve more legwork than positive results do, the effort can help journalists step beyond the traditional study news story, bring new ideas and facets into their reporting, and learn more about the inner workings of the fields they cover. Ultimately, covering null results can help reporters contribute to a more accurate view of science and the world.
But First, What Is a Null Result?
To understand null results, the first step is understanding what’s called the null hypothesis: the default assumption that there is no real effect or association between the variables under study. To support a hypothesis that X has an effect on Y, for example, researchers need strong evidence that the null hypothesis—X has no effect on Y—isn’t true. For the LUX experiment, a simplified version of the null hypothesis would be that all signals detected were consistent with the expected background from known sources, with no interactions from dark matter particles.
Or consider a clinical trial testing whether a new drug relieves migraines. In that case, the null hypothesis would be that the drug has no effect. The researchers would find a null result if, after treatment, the average difference in headache scores between the placebo group and the treatment group is statistically indistinguishable from zero—in other words, if there’s a difference in the sample, it’s too small to conclude that a real difference exists in the population.
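To make the migraine example concrete, here is a minimal sketch in Python of that kind of comparison; the drug, the scores, and the group sizes are all made up for illustration:

```python
# Hypothetical migraine trial: compare average headache scores in a placebo
# group and a treatment group with a two-sample t-test. A large p-value means
# the observed difference is statistically indistinguishable from zero.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
placebo = rng.normal(loc=6.0, scale=2.0, size=50)    # scores on a 0-10 scale
treatment = rng.normal(loc=5.8, scale=2.0, size=50)  # true effect is tiny

t_stat, p_value = stats.ttest_ind(treatment, placebo)
print(f"difference in means: {treatment.mean() - placebo.mean():.2f}")
print(f"p = {p_value:.2f}")  # p above 0.05: the null hypothesis can't be ruled out
```

A result like this doesn’t prove the drug does nothing; it only means the data can’t distinguish its effect from zero.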
Null results can occur for different reasons, which are important to understand in order to write about them accurately. Sometimes a null result may indicate the null hypothesis is correct. Other times, it can mean there’s just not enough evidence to confidently rule out the null hypothesis, regardless of whether it’s actually correct. The latter case often arises due to methodological limitations. The study might have lacked sufficient statistical power—the probability of detecting a real effect if one exists. If an effect is weaker than expected or data are affected by factors like random variability or measurement errors, for example, a bigger sample size may be necessary to spot a real effect.
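To show how sample size and statistical power interact, here is a brief sketch using statsmodels’ power calculator for a two-sample t-test; the effect sizes and the conventional 80% power target are illustrative, not drawn from any study mentioned here:

```python
# How many participants per group are needed to detect an effect with 80% power
# at a 0.05 significance level? The smaller the true effect, the larger the
# sample a study needs -- otherwise a real effect can yield a null result.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for effect_size in (0.5, 0.2, 0.1):  # Cohen's d: medium, small, very small
    n = analysis.solve_power(effect_size=effect_size, power=0.8, alpha=0.05)
    print(f"d = {effect_size}: ~{n:.0f} participants per group")
# Prints roughly 64, 393, and 1571 per group; in practice, round up.
```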
Still, both types of null results can help advance science. They not only help refine hypotheses and methods, but also prevent other researchers from spending time and resources on paths that have already been explored. “The best experiments are the ones where no matter what answer you get, it helps you move forward to the next experiment,” says freelance journalist and biostatistician Viviane Callier.
How to Find Null Results
While the occasional null-result study may make it into a top-tier journal, these studies are more likely to find a home in niche or lower-tier journals. Freelance science journalist Laura Dattaro says that her process for spotting stories for her newsletter Null and Noteworthy, published until July 2025 by The Transmitter, involved “trolling through lots and lots of research.” It also included checking a newsroom-wide Airtable used to track studies about neuroscience, looking for null results in those that didn’t make it into stories. She often turned to preprints as well. She had Google Scholar alerts for combinations of the terms “neuroscience,” “no evidence,” “null,” and “replication.”
Most null findings, though, won’t make it into any article or preprint. In a survey published July 22 by Springer Nature involving more than 11,000 researchers, 98% acknowledged the value of sharing negative results, and 53% said they had obtained such results at some point. However, only 30% had submitted them to journals. Among the reasons cited were “concerns about negative bias leading to reputational harm” and a “low likelihood of journal acceptance.”
This leads to what’s known as the file-drawer problem. “Someone does a study, it doesn’t work out, and they just file it away,” says Jesse Gubb, a senior research manager at J-PAL, a research center that focuses on impact evaluations, who has written about null results. And there are a lot of null results along the way to success, he adds. “So when we look across the published research as a whole, we’re getting a very skewed picture of what works. We’re not seeing what didn’t work.”
Unpublished work can be another source of potentially newsworthy null results—if you can find it. One way to peek into these drawers, Mann says, is listening to the Q&A sessions at conferences, “where other [scientists] stand up and push back, and you can get a little bit of a sense of whatever debate is going on within the field.” When interviewing researchers for a story, he also asks what else they are working on. “It might be an offhand comment: ‘Oh, yeah, we’re waiting for X, Y, and Z,’” he says. “Hopefully, you file that away in the back of your mind or [in] a folder somewhere with story ideas.” Then, follow up with researchers to see how studies turned out, and regularly check preprint servers like arXiv.
Journalists can also peruse planned studies and ongoing clinical trials. Databases such as the World Health Organization’s international registry and the National Institutes of Health’s clinical trials website are fertile ground for null results. Some studies, including many clinical trials, produce documents called preregistrations. These study plans specify not only researchers’ original questions and hypotheses but also their pre-analysis plans, which define what an effect would look like and the sample size needed to detect it at the desired statistical power and confidence level. Preregistrations can be found on some open science platforms, such as OSF, or in field-specific databases. “Emailing authors on preregistrations and just letting them know you’re interested in their findings whenever they have them can be useful,” says Dattaro.
Evaluating and Contextualizing Null Results
Once you’ve found a null result, how do you decide whether the non-finding is worth writing about—especially when dealing with preprints or unpublished work, which haven’t yet been peer-reviewed? This part of the reporting process will look a lot like covering positive results. Assess whether the study’s design, implementation, and methodology are robust by talking to researchers in the same field who weren’t involved in the study.
Figuring out which of the two kinds of null results you’re dealing with—one that confidently indicates no real effect or association, or one where there simply wasn’t enough data to draw a conclusion—may also require dipping a toe in methodological waters: Was the sample large enough to detect an effect if one existed? Did the study have enough statistical power? Since there are no general rules for what makes a good sample size (it will vary from field to field and from study to study), these are all good questions for your expert sources. Journalists can request help parsing statistics through a program from Sense about Science USA and the American Statistical Association.
To gauge how meaningful null results are, Mann recommends asking researchers direct questions, such as, “Why should we care about this?” Callier also suggests asking about the practical implications of the outcome for areas like policy and public health. For example, a study on the effect of free contraception in Burkina Faso found that, contrary to expectations, reducing the economic barrier to accessing contraception had no detectable effect on fertility rates. The researchers urge that the results not be interpreted as an argument for defunding family planning but rather as a call for more research into what is driving high fertility—work that could, in turn, lead to policies better adapted to that specific cultural setting. The complex implications of these different interpretations could yield in-depth reporting opportunities and important public health stories.
The broader story that led researchers to a null finding can also help reporters evaluate if it’s newsworthy. “Null results can get really interesting when they challenge something that people thought they knew, or when it’s a result that’s really different from what you would expect based on other past evidence,” says Dattaro. Paper abstracts and introductions often offer clues to those stories. “Sometimes, they have these long tales, like, ‘Well, so-and-so back in 1997 found this, then another person [did that],’” she says. It shows that “this result is actually the latest in a long line of scientific inquiry, and that makes it really interesting.”
From an editor’s perspective, laying these elements out in a pitch increases the chances that it will be accepted for publication. Clarice Cudischevitch, editor of the column Ciência Fundamental (Fundamental Science), published by the Brazilian newspaper Folha de S.Paulo, says that when considering a pitch about a null result, she looks for “[whether] the story is a good opportunity to explore the scientific process, … to show the behind-the-scenes.” “I would also see if the story brings the unexpected and has some novelty.”
Find the “Why” to Build Your Story
Once you’ve found a null result worth writing about, you need to strike the right balance to show why the non-finding is relevant without overselling it. A strong narrative arc can illustrate how the results help advance science or impact people’s lives, supported by precise wording to clarify what was or wasn’t found.
Paying attention to scientists’ initial motivations—why were they looking for an association between X and Y in the first place?—can help identify the narrative core of a story. For The Transmitter, Dattaro wrote about a flurry of studies that examined whether having epidural analgesia during labor was associated with an increased chance of having a child with autism (spoiler: no, there is no such association whatsoever). She wondered why so many researchers were suddenly exploring this question, only to find null results again and again. The spark behind this chain reaction was a single paper that reported a positive—but spurious—result suggesting such a link existed.
Dattaro’s story was about null results, but it was also about much more: autism and misinformation, self-correction in science, and the risk that comes with the overrepresentation of positive results in both journals and the media. One study found that publication bias favoring positive results can cause false claims to be “canonized” and accepted as fact, and that publishing more null results would help reduce this effect.
Many times, writing about null results is about highlighting how they challenged what was thought to be true. But the autism-and-epidural studies Dattaro wrote about fall into a category many null results occupy: “yep, we already knew that, nothing new to see here.” Still, by looking at the broader story, she compellingly showed how scientific consensus is built, challenged, and reinforced as evidence accumulates.
As an editor, Cudischevitch works with her writers to make sure their articles show why these seemingly boring results are important steps in knowledge production. “Trust in science should not come from showing only the revolutionary things that science does. … [The revolutionary] is very rare, and it takes time to happen,” she says. “Null or negative results are an opportunity to show the scientific process.”
To reflect that incremental aspect of science, reporters should describe results with precision and avoid the temptation to oversell them. Callier recommends phrasing like “no difference was detected” rather than “there was no effect.” It’s also important that the wording clearly reflect the distinction between a null result that truly shows no effect and one rooted in methodological limitations. In the latter case, phrasing like “the evidence was inconclusive” might work better.
Even if that kind of prudent yet accurate wording doesn’t feel exhilarating, journalists can bring in the excitement through other facets of the story. It’s important to keep in mind that a null-result story will be about more than statistics. It will be about knowledge and how it guides decisions. For his 2013 story about the LUX experiment, Mann recalls how important it was to focus on what he calls “the tension of the story.” This included the disagreements within the field, the previous experiments that had shown positive results, and the widely accepted ideas about dark matter that the null result challenged.
“The process of science is not just: ‘Experiment reveals truth.’ It’s [more] like: ‘An experiment happens and then everybody looks at it from their particular angles and argues about the exact interpretation and meaning,’” he says. “Our job as journalists is to give a reader a clearer understanding of the fact that this is actually what happens within science.”

Lucila Pinto is a freelance science and tech journalist. Her work has appeared in Science, Rest of World, and La Nación, among other publications. She is a Pulitzer Center reporting fellow and a graduate of Columbia University’s science journalism program. She is currently an early-career fellow at The Open Notebook supported by the Burroughs Wellcome Fund. Follow her on X @luchipaint and on LinkedIn.