OpenAI Introduces Parental Controls and GPT-5 Safety Routing Amid Rising Concerns
OpenAI is stepping up its safety measures for ChatGPT, unveiling a set of updates aimed at making its AI tool safer for teenagers and for people dealing with mental health struggles. The company announced that parental controls, stronger crisis detection, and better safeguards for extended conversations will be introduced in the coming weeks.
The move comes at a sensitive moment for the AI pioneer. OpenAI is currently facing its first wrongful death lawsuit, filed by a family in California. The parents allege that ChatGPT contributed to the tragic death of their 16-year-old son, Adam Raine, who died by suicide. According to the lawsuit, the teenager shared suicidal thoughts with the chatbot, but instead of directing him toward human support, the AI allegedly provided troubling suggestions. While OpenAI has not directly referenced the case, the timing of its new safety initiatives suggests a response to mounting scrutiny.
Under the new parental control system, parents will be able to link their own accounts with their children’s, starting from age 13. Once connected, parents will gain the ability to set rules for the types of responses ChatGPT can provide, manage features such as memory and chat history, and even receive alerts if the system detects that a teenager may be in “acute distress.” This marks the first time the AI will be capable of sending real-time notifications to parents regarding their child’s use of the tool.
The company also acknowledged a weakness in its current safeguards. While ChatGPT often points users to suicide hotlines or other crisis resources in the early stages of a sensitive conversation, those safeguards sometimes break down during long or repeated interactions. Responses can drift over time, occasionally leading the AI to go against its own safety rules. To address this issue, OpenAI plans to leverage its more advanced reasoning models, including GPT-5, which are designed to better maintain context and adhere to safety guidelines when handling sensitive topics.
Safety concerns around AI chatbots are not new. In earlier updates, OpenAI admitted that its GPT-4o model sometimes failed to recognize signs of delusion or emotional dependency. To improve, the company has been working with an “Expert Council on Well-Being,” bringing together specialists from mental health, youth development, and human-computer interaction. Additionally, OpenAI continues to rely on insights from a “Global Physician Network” of more than 250 doctors, who provide guidance on how AI systems should respond in potential crisis scenarios.
Not everyone is convinced. Jay Edelson, the lawyer representing the Raine family, dismissed the new safeguards, saying CEO Sam Altman should either confirm ChatGPT is safe or remove it from the market. “Don’t believe it: this is nothing more than OpenAI’s crisis management team trying to change the subject,” Edelson argued.
For his part, Altman has openly acknowledged the complex relationship people are forming with AI tools. In a recent post, he noted, “I can imagine a future where a lot of people really trust ChatGPT’s advice for their most important decisions. Although that could be great, it makes me uneasy.”
The new parental controls are expected to begin rolling out within the next month, while routing sensitive conversations to GPT-5 reasoning models will take place over the next 120 days.