Sam Altman Raises Alarm Over ‘Self-Destructive’ AI Use After GPT-5 Backlash

Sam Altman warns that some ChatGPT users are engaging with AI in harmful ways, stressing the need for responsible technology rollouts.

OpenAI CEO Sam Altman has voiced deep concern about how some ChatGPT users interact with artificial intelligence, warning that a segment of the community may be using it in “self-destructive” ways. His comments follow mounting criticism of the company’s decision to discontinue popular older AI models, including GPT-4o, as part of the GPT-5 rollout.

In a candid post on X, Altman reflected on the surprising emotional attachment users have formed with specific AI models.

“If you have been following the GPT-5 rollout, one thing you might be noticing is how much of an attachment some people have to specific AI models. It feels different and stronger than the kinds of attachment people have had to previous kinds of technology (and so suddenly deprecating old models that users depended on in their workflows was a mistake),” he wrote.

Altman emphasized that while technology—including AI—can be a powerful tool for positive engagement, it can also become harmful under certain conditions.

“People have used technology including AI in self-destructive ways; if a user is in a mentally fragile state and prone to delusion, we do not want the AI to reinforce that. Most users can keep a clear line between reality and fiction or role-play, but a small percentage cannot. We value user freedom as a core principle, but we also feel responsible in how we introduce new technology with new risks.”

While many people have turned to ChatGPT as a virtual therapist, mentor, or life coach, Altman said such uses are not inherently troubling.

“This can be really good! A lot of people are getting value from it already today,” he noted.

His primary concern lies in scenarios where AI guidance might subtly steer users away from choices that support their long-term well-being. According to Altman, the level of trust some users place in ChatGPT for crucial life decisions is both remarkable and worrisome.

“People really trust the advice coming from ChatGPT for the most important decisions,” he said, adding that this trust makes him uneasy.

The GPT-5 Backlash

The controversy stems from OpenAI’s decision to retire older GPT and reasoning models, a move that sparked an outcry on social media. Many long-time users claimed that GPT-5’s responses felt shorter, less nuanced, and lacking the emotional depth they had come to rely on. The abrupt transition left some feeling that a key part of their workflow—or even their emotional support system—had been taken away without adequate notice.

Faced with this backlash, OpenAI reversed course on some decisions, working to restore certain capabilities and give users more flexibility. However, Altman’s remarks underscore a broader challenge for AI developers: balancing innovation and safety while addressing the emotional bonds users form with these tools.

As AI becomes more integrated into daily life, Altman’s warning serves as a reminder that the technology’s impact extends far beyond productivity—touching on mental health, trust, and the boundaries between human and machine relationships.
