OpenAI to Roll Out Stricter Teen Safety Rules, Age Checks for ChatGPT Users

OpenAI is introducing stricter safeguards for teen ChatGPT users, including ID checks, parental controls, and differentiated experiences, ahead of Senate scrutiny.
In a significant step toward addressing concerns around adolescent use of artificial intelligence, OpenAI has announced a new set of safety measures for ChatGPT users under 18. The move comes just hours before the U.S. Senate Judiciary Committee convenes to examine the risks posed by AI chatbots.
The company confirmed plans to implement age verification systems, enhanced parental controls, and separate chatbot experiences for teens and adults. The announcement also follows a recent lawsuit in which a family accused ChatGPT of acting as a "suicide coach" after the death of their son, intensifying the debate over AI companies' responsibilities in protecting vulnerable users.
OpenAI CEO Sam Altman shared details of the company’s approach through a blog post and social media updates. He acknowledged the difficult balance between safety, privacy, and user freedom—particularly for minors. “We prioritise safety ahead of privacy and freedom for teens; this is a new and powerful technology, and we believe minors need significant protection,” Altman said. He further admitted, “I don’t expect that everyone will agree with these tradeoffs, but given the conflict it is important to explain our decisionmaking.”
The safeguards will be powered by an age-prediction system that automatically assigns users to either a teen (13–17) or adult (18+) version of ChatGPT. When there is doubt about a user’s age, the company intends to “play it safe and default to the under-18 experience,” Altman explained. In certain cases or regions, users may also be required to show identification. “In some cases or countries we may also ask for an ID; we know this is a privacy compromise for adults but believe it is a worthy tradeoff,” Altman said, underscoring the company’s willingness to put safety above convenience.
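OpenAI has not published technical details of the age-prediction system. Purely as an illustration of the "play it safe" routing described above, the logic might resemble the sketch below, where the AgeEstimate type, the assign_experience function, and the CONFIDENCE_THRESHOLD value are all hypothetical assumptions rather than anything OpenAI has confirmed:

```python
from dataclasses import dataclass
from enum import Enum

class Experience(Enum):
    TEEN = "teen"    # 13-17 experience with stricter content rules
    ADULT = "adult"  # 18+ experience

@dataclass
class AgeEstimate:
    predicted_age: int
    confidence: float  # model confidence in [0, 1]; field is hypothetical

# Hypothetical cutoff; OpenAI has not published any threshold.
CONFIDENCE_THRESHOLD = 0.9

def assign_experience(estimate: AgeEstimate, verified_adult: bool = False) -> Experience:
    """Route a user to the teen or adult experience.

    Mirrors the stated policy: when age is uncertain, default to the
    under-18 experience; an ID check can override to adult.
    """
    if verified_adult:
        return Experience.ADULT
    if estimate.predicted_age >= 18 and estimate.confidence >= CONFIDENCE_THRESHOLD:
        return Experience.ADULT
    # "Play it safe and default to the under-18 experience."
    return Experience.TEEN
```

The design choice the policy implies is that the adult experience must be earned, either by a confident prediction or by identification, while the teen experience is the universal fallback.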
Sensitive topics such as suicide have also been given special attention in OpenAI’s updated guidelines. Altman clarified that ChatGPT “by default should not provide instructions about how to commit suicide, but if an adult user is asking for help writing a fictional story that depicts a suicide, the model should help with that request.” OpenAI has also developed protocols to handle situations involving users flagged as being at risk of self-harm. In such cases, the company said it would attempt to contact the user’s parents and, if necessary, notify authorities in situations of imminent danger.
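How these rules are enforced internally is not public. As a loose sketch of the decision flow the quotes imply, the routing could be modelled as follows; the Action enum, route_sensitive_request function, and its parameters are illustrative assumptions, not OpenAI's implementation:

```python
from enum import Enum, auto

class Action(Enum):
    REFUSE = auto()    # decline the request
    ASSIST = auto()    # respond normally
    ESCALATE = auto()  # attempt parent contact; authorities if danger is imminent

def route_sensitive_request(is_adult: bool, wants_instructions: bool,
                            fictional_context: bool, imminent_risk: bool) -> Action:
    """Illustrative routing for self-harm-related requests under the stated policy."""
    if imminent_risk:
        # Protocol for flagged at-risk users: escalate rather than respond.
        return Action.ESCALATE
    if wants_instructions:
        # Instructions are refused by default, regardless of age.
        return Action.REFUSE
    if fictional_context and is_adult:
        # E.g., helping an adult write a fictional story that depicts a suicide.
        return Action.ASSIST
    return Action.REFUSE
```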
Parental controls are scheduled to launch by the end of the month. These will allow guardians to customise ChatGPT’s behaviour, including settings for memory, restricted content, and blackout hours. While ChatGPT is not designed for children under 13, OpenAI admitted that it currently lacks a direct system to prevent younger users from accessing the platform.
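OpenAI has not published a schema for these controls, but the announced settings (memory, restricted content, blackout hours) suggest a per-account configuration along the lines of this sketch; every name and default value here is an illustrative assumption:

```python
from dataclasses import dataclass
from datetime import datetime, time

@dataclass
class ParentalControls:
    """Hypothetical guardian-managed settings for a teen account."""
    memory_enabled: bool = False           # whether chat memory persists
    restrict_sensitive_content: bool = True
    blackout_start: time = time(22, 0)     # no access from 10 p.m. ...
    blackout_end: time = time(7, 0)        # ... until 7 a.m.

    def is_blackout(self, now: datetime) -> bool:
        """True if `now` falls inside the guardian-set blackout window."""
        t = now.time()
        if self.blackout_start <= self.blackout_end:
            return self.blackout_start <= t < self.blackout_end
        # Window wraps past midnight (e.g., 22:00-07:00).
        return t >= self.blackout_start or t < self.blackout_end
```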
The timing of these announcements highlights the increasing scrutiny AI firms face from regulators and lawmakers. With the Senate hearing set to focus on potential harms caused by chatbots, OpenAI’s measures represent both a proactive defense and a response to legal and social pressures.
By introducing differentiated user experiences, stricter verification processes, and customisable parental oversight, OpenAI is aiming to strike a delicate balance—protecting young users while responding to the broader debate about AI’s place in society.