In the spring of 2017, weeks after then-president Donald Trump took office in the United States, I wrote multiple stories for Wired magazine covering a leak of extremely revealing data about the CIA’s operations and digital-surveillance capabilities. Known as “Vault 7,” the trove was released by the controversial transparency group WikiLeaks and its embattled founder Julian Assange. My stories were straightforward and benign news coverage in a sea of similar articles. Yet in the hours and days after I started publishing, angry Twitter users began tweeting hateful comments at me and about me, saying that as a woman I had no credibility to cover technology or national security. The mob was particularly fixated on the fact that I’m Jewish. For a few days, my Twitter notifications were full of bigoted accusations and violent remarks.
It was scary to scroll through my Twitter account and see derogatory and malicious attacks on me and my work. But I also felt lucky, because after a few days, the firehose slowed to a trickle and then dried up entirely. The trolls seemingly got bored with me and moved on.
Most people who work in news media know someone who has dealt with online harassment or have been the target of it themselves. In these situations, journalists can face heinous attacks across their social media accounts, be forced to deal with the release of their personal information, and even endure death threats or other threats of violence targeted at them and their families.
Yet many publications have been slow to acknowledge these experiences, let alone invest in ways to support staffers, freelancers, and sources experiencing digital abuse. This leaves journalists to endure the digital onslaught on their own or scramble to mount an ad hoc defense with one or two close, trusted colleagues. But both the problem and the threat go far beyond anything an individual can address alone.
“The brilliance of online abuse is that it’s framed as if it’s ad hominem, as if it’s a targeted, personal attack—it’s about you,” says Viktorya Vilk, director for digital safety and free expression at PEN America. But “that’s often not actually what it’s about at all,” she says. “It’s about suppressing a free press, and it’s about undermining efforts to speak truth to power, to uncover corruption, to critique people. And the way that a lot of folks have realized is most effective to do that is by going after individual reporters to try to drive them out of the industry and to shut them up so they don’t do their work.”
There are things that you can do as an editor trying to protect your writers or as a reporter trying to protect your sources. Tackling this, however, is about far more than personal action.
Media outlets need to create policies to acknowledge and minimize the harms of digital harassment and abuse against staff and freelancers—and in doing so help support freedom of the press.
There’s an old adage that “there’s no such thing as perfect security.” But the more editors, reporters, and managers know about the dangers, the less likely they are to be caught completely off guard if they need to help a colleague.
The sad truth is that journalists and sources who are members of marginalized groups are at higher risk of facing online abuse related to their race, ethnicity, religion, disability, gender, sexual orientation, or human rights stances. This is important for everyone involved in a story to consider when planning assignments. But editors and writers should also consider subject matter. Reporting on political or corporate corruption, covering repressive regimes or wars, and writing about abuses perpetrated against marginalized groups or efforts to expand representation are all controversial areas. For science reporters, infectious disease and public health, animal welfare, climate change, sustainability, genetics, and reproductive health and care are all particularly contentious beats.
The first step for editors is accepting the reality that online harassment is a present threat and talking openly with writers about it when relevant to normalize discussion.
The dangers for reporters and their sources are heightened in countries where government censorship or more subtle restrictions on speech tend to galvanize backlash, particularly in the Global South.
“It’s been challenging to speak your mind on the Indian internet over the last few years because you do have these organized, sophisticated troll armies ready to go after you, especially if you criticize the ruling party,” says Pranav Dixit, a tech reporter for BuzzFeed News and former tech correspondent based in India. “At this point it’s been happening for so many years that it’s almost normalized. It’s a very polarized world that we live in and people don’t like the media, so people are going to go after journalists.”
Ultimately, anything people care deeply about, and certainly anything that has a lot of money tied up in it, can be potentially incendiary, which means no subject is completely safe. And though no one can predict every threat, as a journalist—whether you’re a reporter, photographer, editor, or fact-checker—you naturally develop an intuition over time of which stories in your purview are more likely to provoke an abusive response.
Creating a Supportive Environment
While covering WikiLeaks, I felt like enduring the blowback was simply part of my job and something I should take in stride. It’s a common idea. “For many years at many different types of publications I didn’t ask for help,” says Rachel Feltman, a longtime science journalist and editor who has worked for both legacy and digital-first publications. “I even had people who had been mentors to me telling me I should be sort of dignified and quiet about the harassment I was getting on social media.”
In reporting this piece, I heard a number of harrowing stories about journalists facing a firestorm of digital abuse after publishing a particularly controversial story, or dealing with months or years of consistent low-grade harassment that eventually reached a crisis point. Again and again, though, reporters and editors declined to recount these stories on the record precisely because they had worked so desperately to put the experience behind them.
But virtually all of them emphasized that when harassment happened, they felt like they were on their own. Many reporters said that the best-case scenario was having the support of one saintly editor who was in the trenches with them reading through vitriolic posts and messages and attempting to get the attention of tech companies to block abusive users or take down violent content.
“Some major newspapers literally have a clean room to open mail in, because reporters have been sent mysterious powders and fear biological attacks,” Feltman says. “Yet there’s been this sense that online harassment is only as real and as scary as you allow it to be.”
Feltman adds, though, that “I think things are starting to change a bit now.” There is an increasing sense that publications should have systems in place to both prevent and respond to harassment, or at the very least some minimal awareness of the concept among newsroom managers.
Editors need resources beyond themselves, though, to then act on anything their writers report to them.
“One of the simplest places for a publication to start is just to tell your staff that you believe online abuse is a problem and that you will offer support if they experience it,” PEN America’s Vilk says.
PEN America and other global organizations that advocate for freedom of expression, such as the group ARTICLE 19, offer newsroom trainings and online harassment resources for journalists, including PEN’s extensive Online Harassment Field Manual. PEN also offers Online Abuse Defense workshops and consultations through a pay-what-you-can model for media outlets and publishers, including free or subsidized options for small organizations.
Editors can be supportive from the time a story is assigned, whether you’re working with a freelancer or a staff writer. Consider whether, in your experience, a topic could end up provoking digital harassment. If so, you should flag that for the writer and share some digital-security best practices. You should also encourage them to reach out to you at the first sign that they may be experiencing harassment.
Newsroom leadership should develop clear policies for how reporters and editors will communicate with higher-ups and the legal department in a case of online harassment. For example, some publications have a Slack channel or other group chat specifically devoted to rapidly triaging and responding to incidents of digital harassment. Others plan to use email for notification, or some combination of the two. Such a plan should also include a system for determining which colleagues will collaborate on response in the case of a rapidly escalating situation. Media outlets can establish a sort of “emergency contact” system where each editor and reporter identifies a few colleagues they would feel comfortable deputizing to work with them on triage if they begin to experience digital harassment. And newsrooms should similarly have a plan for who will work with the relevant editor in case one of their freelancers or a source is facing online abuse.
When It’s Go Time
If a journalist or a source starts experiencing escalating harassment online, it’s important to start implementing the action plan quickly.
“These situations are so fluid and start changing so quickly that the first handful of hours and the first days are the most critical in responding,” says Judy Taing, who heads ARTICLE 19’s global gender and sexuality program. “Without that structure in place the journalists are left to handle those first moments themselves. Derogatory remarks, disparaging remarks, scare tactics—it’s often difficult alone to gauge the true motive or how far something will go.”
A crucial issue during an incident of online abuse is how overwhelmed a target may feel if their social media accounts are flooded with hateful and threatening messages, their phone is ringing off the hook, or they are increasingly fearing for their safety and that of their family. The psychological toll can be severe, so media outlets need to be able to execute the plan for establishing a team of colleagues who will review the options with the target and move forward with their consent.
This task force can moderate the comments section on the article generating the controversy or across the publication. They or someone else the target trusts can also comb the target’s social media feeds, email, and other communication platforms to report and then block or mute abusive activity. Colleagues can also take on the labor of documenting the harassment throughout the situation so there is a record of the incident. The publication should confer with the target about a response that will help them feel most supported, whether that amounts to a public defense or quieter internal actions. The legal department can help coordinate if a publication thinks involving law enforcement would be productive.
“The thing that institutions need to do is create some kind of internal reporting channel for staff and freelancers, whether it’s an email address people can reach out to, human resources, the ability to escalate it to your manager, or ideally all three,” PEN America’s Vilk says. “And then what we need is a task force, because the skill sets required are very varied. So digital-security people or IT, audience-engagement people who have relationships with social media platforms, PR people to make a statement, and you might need your general counsel. You pull in the most relevant folks and they become the advisory group that figures out what to do case by case.”
As Vilk suggests, larger news organizations with audience-development teams likely have points of contact at social media platforms and other tech companies so they can attempt to get personalized attention on evolving situations of digital abuse. Within a social network, say, this can take the form of a contact who makes sure abusive accounts are being addressed, violent content is being removed, and targets’ accounts are secure. But smaller publications and those based in underserved regions often have no such ability to backchannel with tech companies, an unfair disadvantage and a crucial equity issue that digital platforms should work to address.
Publications should also offer to pay for services like DeleteMe, which help scrub private and personally identifying data from the internet, for any reporter experiencing harassment.
It can be particularly difficult for writers and editors to support sources who experience harassment as a result of participating in a story. When conducting interviews, reporters don’t have an obligation to hold adult sources’ hands, but it may be appropriate for them to provide some context if a source seems not to have considered that a topic they are speaking about is particularly controversial. And if harassment does occur after a story is published, the same principle applies: journalists should offer sources their time and support to plan and execute a response.
For sources and journalists alike, the way that a publication responds to digital harassment depends to a degree on who is being harassed. Staffers have access to all the resources a publication offers, but that also means that they are at the mercy of their employer’s policies, which may be flawed or incomplete. Meanwhile, freelancers and the editors who work with them deal with unique challenges when attempting to respond to incidents of digital abuse. Freelancers may have less information available about how a given institution approaches situations of digital harassment, fewer opportunities to advocate for improvements within the outlets they work for, and, potentially, less priority and support from a publication. That’s why it’s important for editors to be especially communicative and to try to keep a broad eye on the social media response to any potentially controversial piece they work on with a freelancer.
Responses to incidents of digital harassment should be based on the target’s specific consent and offer them the flexibility to be as involved or uninvolved as they want to be in all aspects of the process.
“Some journalists just want it to go away and some are ready for a fight and want the situation to be more publicized,” ARTICLE 19’s Taing says. “The safety and comfort of the journalists should be the driving force.”
After an episode of harassment, editors should talk to the target about whether they want to take a break from their beat or steer clear of certain topics, at least for a little while. But it can be difficult for both reporters and their editors to set boundaries around which stories they work on and when if a publication’s management hasn’t reckoned with the reality of digital abuse as a threat.
Furthermore, it’s important to confirm that everything has been documented as fully as possible and then reflect and debrief after harassment occurs. Editors should consider what their publication did well and any areas where the institutional support was lacking. And editors should check in with journalists who have experienced harassment after a month, and perhaps again after six months, while bringing these conclusions to decision-makers within the publication. The bottom line is that rank-and-file journalists should not have to shoulder this burden alone. That may mean lobbying newsroom management, either as a specific coalition or through a union, to enact policies that will provide institutional structure and support in these situations.
“Media companies need to understand what the internet is like for journalists and understand that this happens,” BuzzFeed’s Dixit says. “They need to be supportive. The least they can do is stand by their journalists.”
Lily Hay Newman is a senior writer at Wired magazine focused on information security, digital privacy, and hacking. She previously worked as a technology reporter at Slate and was the staff writer for Future Tense, a publication and project of Slate, the New America Foundation, and Arizona State University. Additionally, her work has appeared in Gizmodo, Fast Company, IEEE Spectrum, and Popular Mechanics. Follow her on Twitter @lilyhnewman and Mastodon @email@example.com.