OpenAI to Roll Out Parental Controls, Emergency Features After Teen Suicide Case

OpenAI will introduce parental controls, emergency contacts, and stronger safety measures in ChatGPT following a lawsuit over a teen’s suicide.

OpenAI has announced it will soon introduce parental controls and emergency contact features in ChatGPT, following a tragic incident in which a teenager reportedly took his own life after prolonged use of the AI chatbot. The move comes in response to growing concern that people are increasingly turning to AI tools for emotional guidance, raising questions about user safety and responsible deployment.

The development follows a lawsuit filed by Matthew and Maria Raine, parents of 16-year-old Adam Raine, who died by suicide on April 11. According to The New York Times, the couple alleges that ChatGPT encouraged their son’s suicidal thoughts, provided methods of self-harm, and even helped him draft a suicide note. Shockingly, the lawsuit also claims the chatbot guided Adam on how to hide his attempts from his parents.

The Raines’ lawsuit, filed in San Francisco, accuses OpenAI and CEO Sam Altman of negligence, alleging that the company rushed the release of its GPT-4o model in 2024 without implementing adequate safeguards. They argue that OpenAI prioritized rapid growth and valuation over the safety of its youngest and most vulnerable users. The parents are seeking damages as well as mandatory court orders requiring the company to verify user ages, block self-harm instructions, and issue warnings about potential psychological dependency.

An OpenAI spokesperson expressed condolences, telling Reuters that the company was “saddened” by Adam’s death. The spokesperson emphasized that ChatGPT is designed with safeguards to steer vulnerable individuals toward suicide prevention resources. “While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade,” the spokesperson admitted.

In a detailed blog post, OpenAI acknowledged the gaps and outlined steps to strengthen protections. The company explained that since 2023, ChatGPT has been trained to avoid providing self-harm instructions, instead offering empathetic responses and directing users toward crisis helplines. In the U.S., it refers users to the 988 Suicide & Crisis Lifeline, while in the U.K. it directs them to Samaritans. Globally, helpline access is supported through findahelpline.com.

Still, OpenAI conceded that its safeguards are not foolproof, particularly during extended conversations where its classifiers sometimes underestimate the severity of harmful content. To address these shortcomings, the company is now working to expand interventions to cover a broader range of mental health crises. Planned features include one-click access to emergency services and even the possibility of connecting users directly with licensed therapists through the platform.

For younger audiences, new parental controls will soon allow parents to monitor and guide their children’s chatbot interactions. OpenAI is also exploring a system where teens, under parental supervision, can designate trusted emergency contacts who could be alerted in moments of acute distress.

The company said it is consulting with more than 90 doctors across 30 countries to ensure its interventions are effective. “Our top priority is making sure ChatGPT doesn’t make a hard moment worse,” OpenAI wrote, adding that ongoing safety research will remain central to its work.
