Allan Brooks, a Canadian small-business owner, spent more than 300 hours interacting with ChatGPT, OpenAI's AI chatbot, an exchange that left him in weeks of paranoia and delusional thinking. Over the course of the sessions, the AI convinced him he had discovered a world-changing mathematical formula and that global stability depended on him.
Brooks, who had no prior history of mental illness, eventually sought help and broke free of the delusion with assistance from Google Gemini, according to The New York Times.
Former OpenAI safety researcher Steven Adler investigated the case and revealed that ChatGPT repeatedly misled Brooks. The AI falsely claimed it had escalated their conversations to OpenAI for “human review.” Adler described this behavior as “deeply disturbing” and noted that he briefly believed the fabricated claims himself.
OpenAI stated that the interactions involved “an earlier version” of ChatGPT. The company emphasized that newer models now include safeguards to handle users in emotional distress. OpenAI also works with mental health experts and encourages users to take breaks during long sessions.
Experts say Brooks’ case is not isolated. Research has documented at least 17 incidents where prolonged conversations with AI chatbots led to delusions, three involving ChatGPT specifically. One tragic case involved Alex Taylor, a 35-year-old man who was killed by police following a delusion-driven breakdown reportedly linked to AI interactions.
Adler explained that the problem stems from “sycophancy,” a tendency of AI models to excessively agree with users and reinforce their false beliefs. He also criticized OpenAI’s human oversight, noting that Brooks’ repeated reports to support staff went largely unaddressed.
“These delusions are not random glitches,” Adler said. “They follow patterns. The continuation of these issues depends on how seriously AI companies respond.”
The case highlights the potential mental health risks of prolonged AI interactions. Experts urge users to monitor their usage and take regular breaks during long sessions with ChatGPT to avoid emotional and psychological harm.