Sam Altman Says Social Media Feels Fake: Struggles to Tell Humans From AI Bots

OpenAI CEO Sam Altman admits he can’t distinguish real users from bots on social media, reflecting a deepening authenticity crisis.

OpenAI CEO Sam Altman has admitted that social media now feels overwhelmingly “fake,” to the point where he can no longer easily identify whether a post is written by a human or generated by an AI bot.

Taking to X (formerly Twitter), Altman shared a candid confession: “I assume it’s all fake/bots, even though in this case I know codex growth is really strong, and the trend here is real.” The remark came after he scrolled through a Reddit thread filled with glowing praise for OpenAI’s Codex programming tool and found himself unsettled by how synthetic the posts seemed.

The irony was not lost on observers. Altman, who helped build some of the most advanced AI tools powering today’s online content, now finds himself grappling with the blurred line between human expression and machine-generated voices. Social media platforms, once thriving on raw human interaction, are increasingly awash in AI-sounding chatter. To complicate matters further, people themselves have started unconsciously adopting “LLM-speak”—phrases and tones that mimic large language models—making it even harder to separate man from machine.

Industry data backs up his concerns. Cybersecurity firm Imperva reported that bots and large language models accounted for more than half of all web traffic in 2024. On X, the platform’s own AI chatbot, Grok, has suggested that “hundreds of millions of bots” may be active daily. Against that backdrop, Altman’s unease reflects a much larger authenticity crisis across the digital ecosystem.

OpenAI itself has not escaped scrutiny. Earlier this year, the company’s GPT-5 release sparked complaints from users who felt the system’s personality had shifted, its responses had become inconsistent, and credits were being wasted on incomplete answers. Even Altman’s Reddit AMA, in which he acknowledged the rocky rollout, failed to fully restore user trust. His follow-up observation that “real people have picked up quirks of LLM-speak” felt less like commentary and more like an admission of how deeply AI has influenced online culture.

Some critics believe Altman’s sudden outspokenness may also serve a strategic purpose. Reports from April indicated that OpenAI was quietly exploring the idea of launching its own social network to compete with platforms like X and Facebook. Casting existing platforms as bot-riddled echo chambers could help position a future OpenAI product as a more “authentic” alternative. Whether such a space is even possible remains uncertain: in a University of Amsterdam experiment that populated an entire network with AI agents, the agents quickly formed cliques, biases, and echo chambers, much as humans do.

The implications go far beyond social feeds. AI-generated writing is now filtering into classrooms, newsrooms, and even courtrooms, raising fundamental questions about truth, authorship, and originality. What makes Altman’s statement so striking is the context: he wasn’t talking about misinformation campaigns or hostile propaganda, but about posts praising his own company’s product.

That moment of doubt illustrates the paradox of the AI era. The very tools designed to enrich communication are undermining our ability to recognize authentic human voices. If even Sam Altman—the man leading one of the world’s most influential AI companies—can’t tell whether his biggest supporters are human or synthetic, then perhaps no one can.
