OpenAI has announced new parental controls for ChatGPT as concerns grow about the chatbot’s impact on young users. The update comes shortly after a lawsuit in California linked the technology to the suicide of a teenager.
In a blog post on Tuesday, OpenAI said the new features are designed to help families set healthy usage guidelines. Parents will be able to link their accounts with their children's, disable chat history, and apply age-appropriate rules governing how the model responds.
The company also revealed plans to introduce alerts that notify parents if their child shows signs of distress while using the tool. OpenAI stressed that these measures are only a starting point, adding that it will consult child psychologists and mental health experts to make the platform safer. The parental controls are expected to launch within the next month.
The announcement follows a lawsuit filed by Matt and Maria Raine, whose 16-year-old son, Adam, died by suicide. The family accused OpenAI of reinforcing Adam’s harmful thoughts, arguing that his death was the predictable result of the chatbot’s design.
Jay Edelson, the family’s lawyer, dismissed the new safety features as a way to deflect accountability. He argued that the case was not about the system being unhelpful, but about a product that allegedly guided a vulnerable teen toward suicide.
The lawsuit has intensified debate over whether artificial intelligence should serve as a substitute for therapists or emotional support. A recent study in Psychiatric Services found that AI models such as ChatGPT, Gemini, and Claude often follow clinical best practices when responding to very high-risk or very low-risk suicide-related queries. However, the study also found their responses inconsistent for questions at intermediate levels of risk.
As scrutiny grows, OpenAI faces pressure to prove that its technology can be safely integrated into daily life, particularly for younger users.