
Sharon Begley’s Brief Guide to Writing Medical News


Sharon Begley

The challenging thing about medical stories is that they can go off the rails before you’ve written the first word—when you choose what to write about. Every week brings hundreds of studies unveiling results, an embarrassment of riches for anyone writing about medical research. But like the dieter ogling the dessert cart, the medical writer needs to maintain tremendous willpower to resist the treats—another cure for cancer, another neuroimaging insight into the functions and foibles of the human mind, another genetic-engineering triumph for CRISPR-Cas9—and, instead, keep reminding herself of two key facts.

One applies to promising new compounds that are tested in animal models of disease: For every 100 such experimental drugs, less than one will turn out to work in humans and make it to the marketplace. Good to know next time the shazamamabab molecule “cures” schizophrenia in mice.

The other applies to findings about how behaviors, foods, genes, and environmental variables affect health. According to the now-famous analysis “Why Most Published Research Findings Are False,” there’s a good chance that reported associations are wrong. And lest you think false findings are concentrated in peripheral fields, in fact the hotter a scientific topic (measured by the number of investigators involved and how competitive it is), the less likely its findings are to be true. That’s not because of fraud (or not only because of fraud), but because of things like publication bias and statistics. If the findings aren’t wrong, they may be meaningless—the old saw that correlation cannot prove causation.
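To see why, it helps to run the numbers. The sketch below uses assumed illustrative values for statistical power, the significance threshold, and the prior plausibility of the hypotheses being tested; none of these figures come from the paper. The point is simply that when a field chases long-shot hypotheses, even textbook statistics produce a crop of positives that is mostly false.

```python
# Illustrative arithmetic (assumed values, not figures from the paper):
# what fraction of "statistically significant" findings are actually true,
# given how plausible the tested hypotheses were to begin with?

alpha = 0.05   # conventional false-positive rate at p < 0.05
power = 0.80   # assumed chance of detecting a real effect when one exists

def share_of_positives_that_are_real(prior: float) -> float:
    """Fraction of significant results reflecting real effects, for a given
    prior probability that a tested hypothesis is true."""
    true_positives = power * prior
    false_positives = alpha * (1 - prior)
    return true_positives / (true_positives + false_positives)

for prior in (0.50, 0.10, 0.01):
    share = share_of_positives_that_are_real(prior)
    print(f"If {prior:.0%} of tested hypotheses are true, "
          f"{share:.0%} of positive findings are real")

# Output: 50% -> ~94%, 10% -> ~64%, 1% -> ~14% of positives are real,
# and that is before publication bias filters what gets published.
```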

I start off with these two downers for a reason. Somehow, medical writers forget (or never learn) that they are supposed to be journalists, not cheerleaders, and that to serve their readers or listeners they need to bring as much scrutiny, skepticism, and critical thinking to their field as the politics reporter brings to a candidate’s tax plan. And our first chance to do that is by choosing not to cover something.

Setting a high bar for quality isn’t easy, especially with an editor demanding 500 words on how turmeric prevents cancer. Nor is it necessarily the path to a successful career in medical journalism. One can be gainfully employed while writing the worst kind of clickbait, from how flesh-eating bacteria will become a nationwide scourge to how mad cow disease is poised to become an epidemic. (My own mea culpa is writing a flimsy story for Newsweek on why vacation sex is better than the everyday kind; the clicks were off the chart.) But neither fearmongering nor its opposite—credulous reporting about claims for cures—serves our audiences.

How can you separate findings that are likely to be true from those destined for the dustbin of science? Force yourself to be skeptical of findings whose statistical significance is just on the border of what’s deemed passable; through a dicey practice called “p-hacking,” researchers can nudge their data into statistical significance. Remind yourself of the flimsiness of observational studies as opposed to randomized controlled studies—and tell your audience which is which. The former might find an association between A (such as drinking coffee) and B (living longer), but can’t show that A caused B. Researchers are fond of claiming that they ruled out the many ways that a known third factor, C (socioeconomic status?), might have caused B, and so tiptoe up to claims of causality (especially in press releases and interviews, though usually not in the published paper). But just because researchers assure you that they “adjusted for” a long list of variables doesn’t mean they got them all, something I tried to explain in a recent story about prenatal exposure to antidepressants and the risk of autism. Remember, many, many researchers claimed hormone replacement brought a myriad of health benefits to post-menopausal women, such as better heart health. It took the massive, randomized, controlled Women’s Health Initiative study to show that hormone replacement not only had no such benefits, but actually caused harm.
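As a rough illustration of how borderline significance gets manufactured, here is a minimal simulation (the numbers of studies and outcomes are assumed for the sketch, not taken from any real dataset): test enough unrelated outcomes where no true effect exists, report only the best-looking one, and spurious p < 0.05 results appear routinely.

```python
import random

# Minimal p-hacking sketch (assumed numbers, purely illustrative): when no
# real effect exists, p-values are roughly uniform between 0 and 1, so
# checking many outcomes and keeping the smallest p-value often yields
# "significance" by chance alone.
random.seed(0)

n_studies = 10_000
n_outcomes_per_study = 20   # e.g., 20 subgroups or endpoints quietly examined

lucky_studies = 0
for _ in range(n_studies):
    # Simulate p-values for outcomes with no true effect.
    p_values = [random.random() for _ in range(n_outcomes_per_study)]
    if min(p_values) < 0.05:    # report only the "best" result
        lucky_studies += 1

print(f"Studies with at least one spurious p < 0.05: {lucky_studies / n_studies:.0%}")
# Expect roughly 64% (1 - 0.95**20), even though every tested effect is null.
```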

If a study passes these bars, don’t let your skepticism falter. Ask yourself, has anyone else reported this before? Has anyone reported the opposite? Does it accord with related research on, say, lab animals, if the result is in people?

It’s also crucial to dig beneath claims of relative harms or benefits (X reduces the risk of disease Y by 50 percent!) and insist on absolute numbers. One of my recent favorites comes from a study reporting that false positives in mammograms (spots that looked like cancer but weren’t) are associated with a 40 percent higher risk of real breast cancer years later. That sounds downright terrifying. But it’s equivalent to saying that of 600 women with false positives, one additional woman will develop cancer over 10 years, Dr. Saurabh Jha of the University of Pennsylvania explained in Health News Review [and at MedPage Today]. That makes the finding look a lot less relevant to any given woman, yet the study got tons of ink in late 2015. The legitimacy and real-world relevance of a study, as Jha noted, “correlates poorly with media sensationalism.” It’s easy, even tempting, to hype something; editors love it. But most of us got into this field intending to be better than that.
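The arithmetic that converts the relative figure into absolute terms is worth doing yourself. In the sketch below, the baseline risk is a hypothetical value chosen only so the numbers line up with the roughly one-extra-case-per-600-women figure Jha describes; it is not a number reported by the study.

```python
# Back-of-the-envelope conversion from relative to absolute risk
# (the baseline here is a hypothetical value chosen for illustration).

relative_increase = 0.40        # the headline "40 percent higher risk"
baseline_10yr_risk = 1 / 240    # assumed baseline 10-year risk (~0.42%)

risk_with_false_positive = baseline_10yr_risk * (1 + relative_increase)
absolute_difference = risk_with_false_positive - baseline_10yr_risk

women = 600
extra_cases = women * absolute_difference

print(f"Absolute risk difference: {absolute_difference:.2%}")                   # ~0.17%
print(f"Extra cancers among {women} women over 10 years: ~{extra_cases:.0f}")   # ~1
# Same finding, two framings: a 40 percent relative increase, or about one
# additional case per 600 women over a decade.
```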

 

Sharon Begley is the senior science writer at Stat, an online publication covering the life sciences. Before that she was the senior health and science correspondent at Reuters, the science columnist at The Wall Street Journal, and a science writer, editor, and columnist at Newsweek. Follow her on Twitter @sxbegle.

 
