Since ChatGPT debuted in late 2022, it and other generative artificial intelligence (AI) tools have increasingly crept into conversations about journalism. Unlike assistive AI—which journalists have long used to create interview transcripts or run spell-checks—generative AI can tackle tasks that previously only a human could do, in a fraction of the time. These tools can brainstorm story ideas, parse complicated data sets, write copy, and suggest interview questions, among other editorial feats.
Some media leaders are “leaning into AI,” exploring its ability to create content and interact with readers. And a few prominent journalism organizations are creating working groups to explore how the tech can be useful in practice. But other experts warn that generative AI steals writers’ work and raises a raft of thorny ethical questions. The companies behind this technology, such as ChatGPT’s developer OpenAI, are alarmingly opaque, says Tim Requarth, a contributing writer at Slate. OpenAI’s Chinese competitor, DeepSeek, which claims to be open source, has also been criticized for its lack of transparency.
The proliferation of AI-generated content has the potential to degrade the social and economic conditions that allow human writers to thrive, according to Requarth. The writing process is often how journalists think through ideas, he says; if AI cuts through that process, “you want to be careful about what you outsource your thinking to.”
The debate rages on, but 45 percent of 3,016 journalists surveyed by the public relations and media software company Cision in May 2024 say they have already started using generative AI tools at least “a little.” And according to an April 2024 Associated Press survey, the fraction of journalists who said they or their organization have used generative AI is even higher—at almost 74 percent.
Educating themselves about AI tools can empower journalists to make better decisions for their work and their audience, says Meghan Murphy, the director of programs at the Online News Association (ONA), who runs the organization’s AI in Journalism Initiative. So how does a science journalist begin to separate the promise of generative AI from the pitfalls? This tip sheet, based on conversations with media professionals and journalists who have carefully considered the use of AI in our industry and, for some, in their own work, lays out key questions to consider as you weigh giving AI a try.
What Can Generative AI Do?
Generative AI can create sophisticated texts. Legitimate journalists aren’t using AI to write their articles for them with zero human oversight, but an AI tool such as ChatGPT or Claude can easily come up with a workable outline or even take a stab at drafting some copy—though the latter might be considered ethically murky by some editors or fellow reporters. It can also generate possible transitions between paragraphs, draft social media posts, and propose headlines.
It can summarize and synthesize material. Journalists can use AI tools to help them digest long swaths of research text or massive documents. Data journalists at The Marshall Project, for example, used ChatGPT to transform dense policy documents into readable summaries of the prison book-banning policies in over 30 U.S. states. Humans on the team checked the output twice, according to a post from data project lead Andrew Rodriguez Calderón in the newsletter Generative AI in the Newsroom.
It can help brainstorm ideas. Murphy doesn’t use ChatGPT to write for her, but she does bounce ideas off the chatbot to help her think through ONA blog posts. “It’s an interesting thought starter,” she says. When Murphy prompts the tool, “a lot of the time it spits back to me what I already know or what I wrote anyway,” she says, but sometimes it will present an angle she wants to explore further. And when a snappy headline has eluded her, Murphy says she’s explored the AI tool built into the productivity and note-taking app Notion to spark some ideas. Even if all you get from the chatbot’s ideas is a good laugh, sometimes that can be enough to shake off writer’s block, Murphy says.
It can support audience engagement. Only humans can cultivate relationships with audiences and sources, says José Nieves, editor-in-chief at the Cuban publication elTOQUE and an International Center for Journalists Knight Fellow. But journalists without the resources to synthesize large amounts of text can use AI to monitor social media trends or even reactions to their work. Nieves and his team have used AI to summarize posts on social media, thereby helping the small newsroom decide where to direct their energy and “generate meaningful relations with our community,” he says.
It can make information gathering more accessible. Generative AI tools tailor their outputs to users’ stated needs or preferences. For example, when MIT Technology Review reporter Caiwei Chen was tracking unusual data-related jobs in China for a project with the City University of New York’s AI Journalism Labs, she and her collaborator asked ChatGPT to help them code a crawler for job postings. “I constantly asked ChatGPT, ‘Explain to me why it works like this as if I’m a college freshman,’” she says. And on the audience side, generative AI could help readers better grasp your stories. Parents have told Chen, for example, that the tool can help children with dyslexia by turning an article into a conversation. “They [might not] read an article, but when they can talk to the article, it’s easier for [them] to learn,” she says.
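For readers who want a concrete picture of what that kind of helper script looks like, here is a minimal sketch of a job-postings crawler in Python, along the lines of what a chatbot might draft for a reporter. The URL, CSS selectors, and field names are placeholders for illustration, not the actual site or code from Chen’s project.

```python
# Illustrative sketch of a small job-posting crawler, the kind of script a
# reporter might build with a chatbot's help. The URL and CSS selectors below
# are placeholders, not the actual site or code from Chen's project.
import csv

import requests
from bs4 import BeautifulSoup

LISTING_URL = "https://example.com/jobs?page={page}"  # placeholder URL


def scrape_page(page: int) -> list[dict]:
    """Fetch one page of listings and pull out each posting's title and location."""
    html = requests.get(LISTING_URL.format(page=page), timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    rows = []
    for card in soup.select(".job-card"):  # placeholder selector
        rows.append({
            "title": card.select_one(".job-title").get_text(strip=True),
            "location": card.select_one(".job-location").get_text(strip=True),
        })
    return rows


if __name__ == "__main__":
    all_rows = []
    for page in range(1, 4):  # just the first few pages
        all_rows.extend(scrape_page(page))
    with open("job_postings.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["title", "location"])
        writer.writeheader()
        writer.writerows(all_rows)
```

The value of asking a chatbot to explain a script like this line by line, as Chen did, is that the reporter still understands, and can verify, every step the code takes.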
What Can Generative AI Not Do?
Its outputs can’t be trusted as facts. Generative AI is powered by large language models, or LLMs. These models use probability to predict likely responses to a prompt based on what they learn from training data. As a result, AI tools sometimes generate nonsensical or false answers called “hallucinations.” These tools aren’t designed to retrieve accurate information, Requarth says. They compose responses based on word associations. So, in a way, “LLMs only hallucinate,” he says. (Don’t rely on ChatGPT for references—it hallucinates those, too.) Carefully fact-check any outputs you receive, and remember that you’re on the hook for any mistakes in your work, “even if you did it by accident, because ChatGPT did it for you,” says Nicola Jones, a freelance science reporter.
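To see why “LLMs only hallucinate” in Requarth’s sense, it can help to picture the mechanism at its smallest scale. The toy sketch below is not a real language model; it only illustrates the core move of choosing the next word by probability, with no check on whether the result is true. The example phrase and probabilities are invented for illustration.

```python
# Toy illustration (not a real LLM): pick the next word by probability alone.
# The point: the model chooses what is statistically plausible, not what is true.
import random

# Hypothetical learned probabilities for the word after "The capital of Australia is"
next_word_probs = {
    "Canberra": 0.55,   # correct, and often the most likely choice
    "Sydney": 0.35,     # wrong, but plausible given training text
    "Melbourne": 0.10,  # also wrong, also plausible
}

words, weights = zip(*next_word_probs.items())
# Sometimes this prints a confidently wrong answer; nothing in the process checks facts.
print(random.choices(words, weights=weights, k=1)[0])
```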
It can’t come up with truly original ideas. Generative AI tools produce responses to prompts based on writing that already exists—anything that’s been fed to it as training data. Because of the technology’s derivative nature, some writers and academics are concerned that if AI-generated writing is normalized, it will homogenize our written culture in style and perhaps even content. Generative AI might also curtail creativity through anchoring bias, in which users rely too heavily on AI-generated responses rather than pushing past them. AI’s inability to produce fresh ideas is one of several reasons Jones doesn’t use the technology for her reporting, she says. “As a journalist, I need to be right and new and it cannot be right or new.”
It can’t replace a human writer or editor. It should go without saying that newsrooms can’t run solely on generative AI. “Creative tasks should be reserved for humans; AI still makes serious mistakes,” Alexandre Orrico, editorial director of Brazilian outlet Núcleo Jornalismo, wrote in an email. “This is a risk that, in a journalism outlet, we cannot take.” (Núcleo Jornalismo was the first media outlet in Brazil to release an AI usage policy.) Vera Chan, a media consultant and former journalist, says it would be a mistake to use AI instead of working with real journalists. “An instinct with all technology is to go towards an area of replacement rather than augmentation,” she says. Instead, Chan hopes AI can be a constructive part of the industry, one that helps writers do their jobs instead of replacing them.
What Ethical Issues Does Generative AI Raise?
Using generative AI adds to your carbon footprint. A single ChatGPT query uses about 10 times as much electricity as a Google search and, as a result, data centers stand to consume 4.5 percent of the world’s energy production by 2030. AI technology also uses up large amounts of water to cool equipment and generate electricity for data centers. Journalists should think carefully about the climate costs of using these tools, Requarth says.
Generative AI applications may be trained on stolen writing. Data that trains LLMs comes from free internet sources, such as Wikipedia, and some not-so-free sources, such as The New York Times. The jury is still out on whether training data can be considered a “fair use” of those materials under copyright law, but some writers and publishers are disgruntled that companies such as OpenAI have used their work without credit or compensation.
OpenAI operates without much transparency or oversight. Several safety researchers have left OpenAI since early 2024, and many of them were subject to a strict non-disparagement agreement as part of their offboarding, raising eyebrows in the tech world. Whistleblowers say there are serious risks if OpenAI’s allegedly “game changing” tech is used irresponsibly. And few governmental guardrails exist. The European Union adopted one of the first comprehensive regulations on AI in July 2024. And while U.S. policymakers are considering more than a hundred bills related to AI, there aren’t yet any overarching federal laws regulating the development or use of the technology.
Using generative AI involves accepting tradeoffs. Deploying AI may present unacceptable risks to accuracy and quality. Orrico advises journalists to ask themselves upfront: Do I really need AI to perform this task? And if so, can I leverage the agility offered by the AI tool without compromising too much? News organizations should ask the same questions, too. Nieves, who characterizes elTOQUE’s mission as serving communities living under censorship, says his team relies on generative AI to brainstorm headlines and write funding proposals faster. That allows them to shift resources to “deep and valuable work that the AI couldn’t do, [such as] reporting in the field,” he says.
What Does Thoughtful Experimentation with Generative AI Look Like?
Learn how it works! Don’t assume you already know everything there is to know about generative AI. Learning the basics can inform your decision-making, help you understand how AI is developing, and allow you to weigh its strengths and limitations. The Global Investigative Journalism Network has published a resource compiling a wealth of useful information on AI tools and how they work, including a visual primer on how LLMs process full sentences at once. You can also listen to data scientist and journalist Nikita Roy’s podcast Newsroom Robots, where she covers AI use in journalism, or try one of ONA’s AI trainings for journalists.
Play around. Before using the technology in your work, try messing around with it on your own. For example, Jones tested out ChatGPT by asking it to write rhyming scavenger hunts for her kids. You can explore prompt engineering, and watch how using different ChatGPT prompts affects its output. (For example, saying, “Explain the scientific method,” and “You are a professional neurologist with decades of experience. Explain the scientific method as it pertains to neurological research,” will result in very different answers.) Requarth advises journalists who are curious about AI to test out its capabilities in “low-stakes, low-sensitive-information scenarios,” such as drafting social media posts to promote an article.
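If you want to see the same contrast programmatically rather than in the chat window, here is a minimal sketch using OpenAI’s Python SDK. It assumes the openai package is installed, an OPENAI_API_KEY environment variable is set, and an API account with credit; the model name is an illustrative choice, not one recommended in this piece.

```python
# Minimal sketch: send the same question with and without a role-setting prompt
# and compare the answers. Assumes the openai Python SDK and an OPENAI_API_KEY
# environment variable; "gpt-4o-mini" is an illustrative model name.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompts = [
    "Explain the scientific method.",
    (
        "You are a professional neurologist with decades of experience. "
        "Explain the scientific method as it pertains to neurological research."
    ),
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print("PROMPT:", prompt)
    print("RESPONSE:", response.choices[0].message.content)
    print("-" * 40)
```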
You can use it to organize your research. Chen says she sometimes uses ChatGPT to orient herself while swimming in research for a feature, leaning on the tool to break down jargony terms or to jog her memory about lengthy reporting material, such as books. But remember, its outputs aren’t authoritative, she says. The advent of generative AI has led Jones to be similarly wary: “Be ever more cautious with everything you read online,” she says. “Always go to primary sources. Talk to people.”
Talk through AI use openly with your colleagues and editors. During a 2024 gathering of state newsroom leaders within the National Trust for Local News, the group had a two-hour-long conversation about their views on experimenting with generative AI, recalls Rodney Gibbs, head of product and audience at the nonprofit organization. The discussion helped build trust across newsrooms in the face of the AI paradigm shift, he says. “It helped us put it in perspective.” It’s also smart to check in with your editors about their publications’ AI policies.
How Should Journalists Think About AI and Privacy?
Be highly selective about what you share with generative AI tools. Anything you submit could be used as training material and reappear without your knowledge as some part of a response to another person’s query. You could be wading into the murky waters of copyright infringement, for example, by feeding a story into an LLM without getting permission from the publication that “owns” it. “You want to be super careful about making sure that you are up to date on those privacy policies [for each AI tool] and where that information is going,” Murphy says. To avoid ethical snafus, consider using paid subscriptions, such as ChatGPT Team, which allows users to prevent the company from using their exchanges to train its products. There are also private platforms, such as the open-source tool AnythingLLM, which you can download and use locally on your own device.
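For the locally run option, the general pattern is to send your prompt to a model server on your own machine rather than to a company’s cloud. The sketch below assumes a local server such as Ollama (used here as a stand-in for locally run tools like AnythingLLM, whose own interface may differ) listening on its default port with a model already downloaded; the model name and prompt are placeholders.

```python
# Sketch of querying a model running entirely on your own machine, so the
# prompt never leaves your device. Assumes a local server such as Ollama
# (a stand-in for other local tools) on its default port, with a model
# such as "llama3" already pulled; model name and prompt are placeholders.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",  # assumed local model name
        "prompt": "Summarize this paragraph in two sentences: ...",
        "stream": False,    # return one JSON object instead of a stream
    },
    timeout=120,
)
print(resp.json()["response"])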
Don’t give any personal or identifying information to a chatbot. Uploading a sensitive conversation to an AI tool could put you or your sources at risk. For investigative journalists or journalists operating in hostile zones, like Nieves, privacy can be a matter of personal safety. For those reasons, Nieves and his team are careful not to feed any protected sources’ information, or sometimes even writers’ bylines, into chatbots.
If you choose to use generative AI, be transparent with your sources. After a source expressed their discomfort with her use of Otter.AI to transcribe interviews, Jones changed her email signature to be more transparent upfront. She lists the specific recording tools she uses to conduct interviews and transcribe them, so sources can make informed decisions about participating in interviews.
AI outputs deserve disclosure. The majority of readers—94 percent—want journalists to disclose their use of AI, according to a survey by ONA and the journalism training and research organization Trusting News, which also offers a Trust Kit on the subject for journalists to walk through. “Unfortunately, we know that many places use AI platforms indiscriminately without informing readers,” says Orrico, whose outlet has a practice of disclosing whether AI was used to generate summaries or translations, for example. Transparency about AI use can go a long way to build reader trust, especially at a time when that trust is increasingly fragmented. “We need to lift the curtain as an industry even more and show people how we work,” Murphy says. “I think that this is a good opportunity to do that.”

Emma Gometz is a journalist, illustrator, and performance artist based in New York City. When she’s not writing or thinking about writing, she’s probably waiting in a rush line for a Broadway show, doing yoga, or trying to learn about space by reading books meant for children. She’s currently a digital producer for WNYC’s Science Friday, but you can also find her words in Teen Vogue, The Open Notebook (where she’s an early-career fellow supported by the Burroughs Wellcome Fund), and The Columbia Spectator. Find her on Bluesky @emmalgometz.bsky.social.