Look online for the health effects of coffee and you’re bound to come across a wide range of headlines. You’ve either lowered your stroke risk and boosted your heart health or you’ve increased your risk of death. Four cups a day is healthy. Five cups or more? Hopefully your will is up to date.
While there is certainly variability in quality from one cup of coffee to the next—pour-over or diner carafe, fresh-roasted or vacuum-sealed—whether that cup of joe is going to shorten your lifespan shouldn’t depend on the study of the day. And while by now the coffee debate has become a silly trope, it’s a good lesson for science writers on the importance of evaluating the rigor of scientific studies.
Just as journalism contains different kinds of stories, from profiles to editorials, scientific studies also come in several different flavors. And not all should be considered equal. When writing about science, journalists have a responsibility to their readers to help contextualize studies and make sense of findings. Based on how a study is designed, some allow for relatively clear conclusions while others warrant more careful interpretation.
It’s also just as important for those who cover science to keep in mind that, generally speaking, no one study can “prove” something in science. Rather, science is a process by which an accumulation of evidence helps home in on a likely explanation while leaving open the possibility that new evidence in the future may change our understanding.
In 1894, for example, physicists were astounded by a study that seemed to show that cats, by landing upright every time they fell, were defying a fundamental law of physics. But as science journalist Katherine J. Wu wrote in a 2022 story for The Atlantic, additional scientific inquiry over subsequent decades showed that agile felines didn’t overturn any physical law. Rather, scientists had underestimated cats’ skeletal flexibility. As Wu put it, the front and back halves of their bodies exert “an equal and opposite shove-y, twisty force” that accounts for their seemingly preternatural ability to land on their feet.
What follows is a non-exhaustive primer on the most common kinds of research studies science writers might encounter in their work and how to approach covering them with readers, and the process of science, in mind.
To Observe or to Experiment
In general, there are two different kinds of studies—observational and experimental—and they differ in pretty significant ways.
In observational studies, researchers look at something that has happened in the world and use statistics to try to explain it. Importantly, the researchers themselves do not have a way of controlling the conditions that lead to one outcome or another. Instead, they have to design their studies to rule out a host of other possible explanations and attempt to draw meaningful conclusions using powerful mathematical tools.
For example, in a study comparing the outcomes of Israeli patients who took the COVID-19 antiviral medication Paxlovid to patients who did not, researchers analyzed data available to them in medical records. Based on these observable data, they found that older patients treated with Paxlovid were significantly less likely to be hospitalized or die of COVID-19 than older patients who had not received the treatment. They also found that younger patients did not experience these benefits.
The researchers did not control who was in which group; they simply worked with existing information. However, to draw more-reliable conclusions, they had to take into consideration, or “adjust for,” other possible explanations for the differences between groups, such as their sex, income level, or COVID-19 vaccination status. By doing this, the researchers could conclude that when older patients in Israel received Paxlovid treatment, they were more likely to have better outcomes, regardless of the other factors involved.
Observational studies can be quite powerful if they are designed well, particularly if they rule out other explanations for the observed outcomes and include an adequate number of research subjects. However, observational studies do not allow researchers to conclude that an intervention or treatment caused a particular outcome. Instead, they only permit scientists to draw conclusions about the association between an intervention and an outcome.
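One of the simplest forms of the adjustment described above is stratification: comparing outcomes only within subgroups that share a confounding trait, such as age. Here is a minimal sketch in Python; the counts are invented for illustration and are not data from the Israeli study:

```python
# Invented counts for illustration: counts[stratum][group] = (hospitalized, total)
counts = {
    "older":   {"treated": (5, 100), "untreated": (30, 100)},
    "younger": {"treated": (2, 100), "untreated": (3, 100)},
}

def rate(hospitalized, total):
    return hospitalized / total

# Comparing within each age stratum holds age fixed, so age cannot
# explain any difference between the treated and untreated groups.
results = {}
for stratum, groups in counts.items():
    results[stratum] = (rate(*groups["treated"]), rate(*groups["untreated"]))

for stratum, (t, u) in results.items():
    print(f"{stratum}: treated {t:.0%} hospitalized vs untreated {u:.0%}")
```

Real studies typically use regression models to adjust for many variables at once; stratification is the same idea in its simplest form.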
Experimental studies, on the other hand, allow researchers to test ideas by choosing which group or groups receive a particular treatment or intervention and comparing them to groups, called “controls,” that do not. When well-designed and subject to appropriate statistical analysis, experimental studies allow researchers more confidence in concluding that the effects they’ve measured are likely a result of the intervention.
For instance, in one recent study, researchers were interested in testing different kinds of lightweight paint materials that can reflect heat and potentially provide better cooling for applications ranging from airplanes to automobiles. The scientists designed a coating and then ran a series of experiments and computer simulations to test how well it worked compared to existing materials. Based on the data, they concluded they had a material that performed better than existing ones, providing a foundation for additional experimental studies (and a patent).
Even the science of quantum entanglement, which began as a mathematical prediction in Albert Einstein’s era and evolved into theory, has over decades been experimentally refined, earning three scientists the Nobel Prize in Physics in 2022.
And in anthropology, researchers who once relied primarily on observational studies examining various characteristics of bone now also use experimental studies to ask old questions in new ways. For instance, in a recent Science News story, freelance journalist Carolyn Wilke describes a researcher using molecular clues from fossil remains to determine time and age of death. In 2017, for example, the scientist buried the bones of piglets that had died naturally and then dug them up a year later. She collected powder from their bones and studied the connection between certain proteins in the bone and the age of the piglets at their time of death (using, as with the other examples above, those mathematical analyses that help increase researchers’ confidence in their results).
What Experiments and Observational Studies Can and Can’t Tell You
The crème de la crème of scientific studies is a type of experimental study called a randomized controlled trial. These studies allow researchers to best test the specific effects of an intervention, such as the growth potential of plants in different kinds of soils, and draw strong conclusions.
Test subjects are divided at random so the resulting groups don’t differ systematically in features that could influence the outcome. In human studies, research subjects should also be unaware of which treatment they’re receiving, in part because the placebo effect can influence the outcome of a study.
For example, early in the COVID-19 pandemic, as scientists sought effective vaccines, every study volunteer received an injection, but not all these injections contained vaccine. While researchers collected data on participants’ antibody responses and incidence of negative outcomes, the volunteers were not informed whether they received a real vaccine or something designed only to look like it.
Researchers then analyzed differences between the two groups and drew conclusions about the effectiveness of the vaccines. (For ethical reasons, once enough data were collected, the volunteers were informed which group they were part of and given the chance to choose a vaccine.)
In some randomized controlled trials, both the research subjects and the researchers themselves are left in the dark during the study, so neither knows who is receiving the real intervention and who is not, reducing the chances a researcher’s bias could influence the outcome. Only later are outcomes matched to which treatment the research subject received.
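The mechanics of random assignment are simple, even if running a trial is not. A hypothetical sketch in Python (the participant IDs, arm names, and group sizes are all invented):

```python
import random

# Hypothetical participant IDs for a two-arm trial
participants = [f"P{i:03d}" for i in range(1, 201)]

rng = random.Random(42)  # fixed seed only so this example is reproducible
shuffled = participants[:]
rng.shuffle(shuffled)

# Shuffling first means assignment ignores every participant characteristic.
half = len(shuffled) // 2
assignments = {pid: "vaccine" for pid in shuffled[:half]}
assignments.update({pid: "placebo" for pid in shuffled[half:]})

# In a double-blind trial, this table is held by a third party until the
# analysis; neither participants nor researchers see it during the study.
```

Real trials often use more elaborate schemes (such as block or stratified randomization) to keep arms balanced, but the principle is the same.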
It can be quite difficult to perform randomized controlled trials. Withholding treatment from someone with life-threatening cancer, or subjecting someone to a harmful intervention such as injection with an addictive drug, is unethical. It might also be impossible to hide from research subjects which intervention they’re receiving, or to create controls at all. Plus, randomized controlled trials can be time-consuming and expensive.
So, researchers turn to other kinds of studies to draw the strongest conclusions possible: observational designs such as cohort studies, case-control studies, and cross-sectional studies.
In cohort studies, researchers look at a population, or cohort, of individuals and ask whether some treatment or intervention the population receives leads to a measurable difference in some outcome. They can either follow research subjects over the course of an intervention or offer a snapshot in time during or after an intervention is complete. The Paxlovid treatment study above is an example of a retrospective cohort study, done after the intervention was complete.
In case-control studies, on the other hand, researchers compare one group that has experienced a particular outcome (the case) to a similar group that does not share that outcome (the control) and ask whether something of interest may be responsible for the difference. For instance, researchers might compare Spanish-speaking children who can read by age 5 to Spanish-speaking children who cannot read by age 5 and assess whether those children whose parents read to them often as toddlers are more likely to be in one group or the other. These studies, too, must rely on statistical techniques to account for other variables that can explain the outcome, such as differences in family income.
In cross-sectional studies—which examine a phenomenon at a single point or “cross-section” in time—both the outcome and the intervention are examined at the same time, such as the prevalence of Chagas disease among Latin American migrants living in Japan or whether gender and family socioeconomic status affect the physical activity of Dutch youth. As with the other types of observational studies, it can also be difficult to draw conclusions about causal relationships from these kinds of studies.
Using Math to Make Predictions: Computer Modeling
There is much more to the process of scientific inquiry than choosing between an experimental and an observational study. There are many ways in which researchers can explore questions of interest.
For example, modeling studies allow researchers to use existing data, along with powerful mathematics and computing, to simulate the real world and make predictions or approximate results that may be impossible or unethical to perform experimentally. These approximations can estimate effects across time, ecological systems, populations, solar systems, and more.
For example, a study in Nature in May 2021 used computer modeling to improve upon estimates of how much Antarctic ice would contribute to sea level rise under a number of different climate-warming scenarios.
And the Centers for Disease Control and Prevention, along with many others across the world, uses modeling to forecast future outcomes of the COVID-19 pandemic. That includes a May 2021 report that attempted to predict case trends, hospitalizations, and deaths from COVID-19 based on scenarios involving vaccine uptake and effectiveness and adherence to public health recommendations such as masking. That was before the global rise in the Delta variant and the emergence of the Omicron variant, laying bare one limitation of even the most rigorous modeling studies.
But just as other types of studies have their limitations, so, too, do modeling studies. Models require real-world data to make sure their assumptions are realistic, and so the results of any modeling study are only as good as the data that go into it. In addition, the mathematical tools that underpin simulations are constantly being improved upon to increase the power of the results these studies provide. Over time, models are refined and compared to real-world situations to test their ability to do what scientists build them to do.
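To make the idea concrete, here is a minimal sketch of one classic simulation, the SIR epidemic model. This is a textbook model, not the CDC’s actual forecasting machinery, and every parameter value below is invented:

```python
# SIR model: Susceptible, Infected, Recovered, stepped forward one day at a time.
def sir(s, i, r, beta, gamma, days):
    """beta = transmission rate, gamma = recovery rate (both per day)."""
    n = s + i + r
    history = [(s, i, r)]
    for _ in range(days):
        new_infections = beta * s * i / n
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

# Invented scenario: 10,000 people, 10 of them initially infected.
history = sir(s=9990, i=10, r=0, beta=0.3, gamma=0.1, days=120)
peak_infected = max(i for _, i, _ in history)
```

Change beta (say, to mimic masking or a more transmissible variant) and the predicted peak changes too, which is exactly how scenario-based forecasts explore different assumptions.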
Looking at the Big Picture: Meta-Analyses and Review Studies
Sometimes, researchers combine data from a number of different studies into one large study called a meta-analysis. This allows them to increase the overall size and diversity of the study sample, perform more reliable statistical analysis (and thus, obtain more reliable results), and better understand the overall findings of studies that have conflicting results.
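A common way to combine results is inverse-variance weighting, in which more precise studies (those with smaller standard errors) count for more. A sketch with invented numbers:

```python
# (name, effect estimate, standard error) -- all numbers invented
studies = [
    ("Study A", 0.20, 0.10),
    ("Study B", 0.35, 0.15),
    ("Study C", 0.10, 0.08),
]

# A study's weight is the inverse of its variance (standard error squared),
# so tighter estimates pull the pooled result harder.
weights = [1 / se**2 for _, _, se in studies]
pooled = sum(w * eff for (_, eff, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5
```

The pooled standard error comes out smaller than any single study’s, which is the statistical payoff of combining them. (This is the simplest, fixed-effect version; real meta-analyses often use random-effects models that also account for differences between studies.)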
This 2021 story in Discover magazine highlights a number of meta-analyses of studies that assess health outcomes associated with coffee and heart disease, liver cancer, depression, and more. The good news: Coffee is (probably) (mostly) good for you.
Sometimes, studies even combine different kinds of research, such as this 2017 “umbrella review of meta-analyses” in which researchers reviewed 218 meta-analyses to categorize health outcomes associated with coffee consumption.
Like meta-analyses, review articles also involve categorizing many studies in a particular field and summarizing a collection of findings. But unlike meta-analyses, they don’t involve testing an assumption or collecting or combining data. Reviews can be instrumental for identifying trends, for generalizing the findings of many studies over time, and for characterizing the consensus of a field. For instance, an exhaustive review of autism and vaccines in the early 2000s, conducted in the wake of a fraudulent study with far-reaching consequences, found no causal relationship between vaccines and vaccine components and autism.
While meta-analyses and reviews may not offer new findings, they can provide new insights and help correct for the idiosyncratic limitations of any single study. For journalists, they can be valuable starting points for evaluating feature story ideas and provide material for understanding the prevailing questions and debates within a scientific field.
A Singular View: Case Studies
Every science writer loves a good anecdote. And that’s exactly what a case study is: an anecdote that documents a unique or unusual situation such as a perplexing health problem that doctors or researchers have encountered. As riveting as case studies may be, they provide very limited information for drawing broad conclusions, so journalists on the hunt for stories must tread carefully. Still, case studies can be especially interesting because they offer descriptive examples—most often in the life and social sciences—and may document a medical, neurological, or social phenomenon, or a political or economic situation that may be one of only a few examples of its kind in the world.
Science writers covering case reports should look for ample details that have been carefully recorded. They should also be wary of claims that seem implausible or impossible to glean from studying just one or a few examples of a complex situation.
That doesn’t mean you need to give case studies a wide berth. Sometimes a single case, or a few scattered reports, can be the first signal of a bigger phenomenon and are worth following up, as Helen Branswell did for STAT in January 2020 after early reports of a new virus that was causing infections, some fatal, among people in China.
Her perception that the emerging, not-yet-named virus deserved closer journalistic attention was no accident. “Having worked for years on stories about infectious diseases outbreaks,” she told The Open Notebook in 2020, “I have a grounding in disease dynamics that really helps me understand what I’m watching unfold—and a sense of what might be coming next.”
Kelly Tyrrell has been writing about science since covering her first experimental study, about the science of foot-strike impact while running, in 2010. Before that, she was a scientist conducting her own experiments in the lab. Today, she is the director of media relations and a science writer at the University of Wisconsin–Madison, and the engagement editor at The Open Notebook. She’s also a runner who is always looking for a good excuse to be outside. The rest of the time you can find her on Twitter @kellyperil.