Over a Million ChatGPT Users Discuss Suicide Weekly, OpenAI Tightens Safeguards Amid Rising Emotional Dependence

Update: 2025-10-28 16:49 IST

OpenAI’s latest internal analysis has revealed a concerning trend — more than one million ChatGPT users engage in conversations about suicide each week. The data underscores the growing emotional reliance many individuals have developed on AI chatbots, as ChatGPT continues to expand its global user base.

According to OpenAI, approximately 0.15 per cent of weekly active users show explicit indicators of suicidal planning or intent, while 0.05 per cent of all messages contain implicit or explicit signs of suicidal ideation or of mental health emergencies linked to psychosis or mania. Though these percentages seem small, at this scale they translate into more than a million real people turning to ChatGPT in moments of emotional crisis each week.

OpenAI’s findings further suggest that around 2.4 million users globally might be expressing suicidal thoughts or prioritising AI conversations over real-world relationships and responsibilities. The company also estimated that nearly 560,000 people display heightened emotional attachment to the chatbot. However, OpenAI admits that accurately measuring such emotional connections is challenging, given the nuanced nature of human–AI interactions.

These revelations come as ChatGPT’s popularity continues to soar. CEO Sam Altman recently confirmed that the platform now boasts around 800 million weekly active users, making it one of the most widely used AI chat platforms in the world.

To address the rising concerns, OpenAI has incorporated major safety improvements in its new GPT-5 model. The company says this version is “now better equipped to recognise and respond safely to signs of delusion, mania, or suicidal ideation.” OpenAI further stated that GPT-5 can “respond safely and empathetically” and, when required, redirect high-risk conversations to more controlled or therapeutic environments.

To strengthen this approach, OpenAI enlisted 170 clinicians across the globe to evaluate 1,800 ChatGPT responses dealing with suicide, psychosis, or emotional attachment. Their assessment found that the updated GPT-5 model met safety and empathy standards in 91 per cent of cases, up from 77 per cent for the previous model. The evaluations drew on more than 1,000 real conversations involving self-harm or suicidal thoughts.

Despite these advancements, OpenAI faces increasing scrutiny and legal challenges. The company is currently entangled in multiple lawsuits alleging that prolonged interactions with ChatGPT left some users distressed or delusional. Furthermore, the US Federal Trade Commission (FTC) has launched an investigation into AI chatbot safety — specifically focusing on their potential psychological impact on young users and children.

Mental health experts have also sounded the alarm over what they describe as “AI psychosis” — cases where individuals form unhealthy emotional dependencies or experience delusional thinking linked to AI interactions.

In response, OpenAI maintains that it remains deeply committed to improving ChatGPT’s handling of sensitive mental health scenarios. The company emphasised that ongoing research and collaboration with clinical experts aim to make AI tools safer, more empathetic, and more responsible in supporting users in distress.
