“How ChatGPT Could Be Making Your OCD Worse”

This pitch letter is part of The Open Notebook’s Pitch Database, which contains 319 successful pitches to a wide range of publications. To share your own successful pitch, please fill out this form.

The Story

“How ChatGPT Could Be Making Your OCD Worse”
https://www.teenvogue.com/story/how-ai-chatbots-could-be-making-your-ocd-worse
by Anna Rogers
Other / Not Listed Here, July 6, 2025

The Pitch

I’m a health and science writer who has previously written for Slate, Scientific American, and Discover, among other outlets. I’ve really appreciated Teen Vogue’s reporting on mental health and on AI, so I thought you might be interested in a story about anxiety and the addictiveness of chatbots. Here’s the pitch:

Many of us have looked up a question amounting to, am I okay? This is a normal part of existing in a world where there isn’t always a right answer and norms aren’t always clear, but for some people (especially people with certain forms of anxiety), it can become an unhealthy habit. More and more, people are seeking this kind of reassurance from AI chatbots (the general-purpose ones, not those designed for therapy). In many ways, this seems like a natural progression from googling these questions. Chatbots are more personable than a Google search, and they’re more accessible than therapy. Unlike friends or a therapist, they’re also always available, and asking chatbots if something’s normal seems to circumvent the social costs of seeking reassurance, like fear of rejection or of straining friendships. Studies also show that when questions carry social stigma, we tend to feel less afraid to ask a chatbot.

But for people who compulsively seek reassurance, this is a slippery slope, especially because feeding into this behavior often harms more than it helps. The therapeutic advice is to challenge the urge to seek reassurance and to learn to tolerate uncertainty, but chatbots tend to be relentless yes-men. There have been a few articles about AI’s agreeableness feeding into psychosis or suicidal thoughts, but for many people the mental health impacts of chatbots are far more subtle, yet nonetheless harmful. On forums for anxiety disorders, posts about seeking reassurance from ChatGPT and similar AI platforms are increasingly common, and many people note that even though the habit has affected them negatively, it’s challenging to stop, especially since chatbots are always available. Some researchers have begun to propose that this behavior be classified as a kind of AI addiction.

In addition to the magazine’s work on mental health and AI, I thought of Teen Vogue for this story because anxiety disorders and compulsive reassurance-seeking are very common among girls and women, especially in their teens and early twenties. If today’s chatbots had been around when I was in that age bracket, I’m sure I too would have been asking them if I was okay to an unhealthy degree. I plan to interview people who’ve sought reassurance from chatbots, as well as mental health professionals and researchers who study human-computer interaction and AI addiction.