Sam Altman Urges Caution: Don’t Blindly Trust ChatGPT, Verify Its Answers

OpenAI CEO Sam Altman warns users not to rely blindly on ChatGPT, citing hallucinations and the need for critical verification.
Sam Altman, the CEO of OpenAI, has issued a clear warning to users of ChatGPT—do not trust the AI chatbot without question. Speaking on the debut episode of OpenAI’s official podcast, Altman acknowledged the surprising level of trust people place in the tool, despite its known limitations.
“People have a very high degree of trust in ChatGPT, which is interesting, because AI hallucinates,” Altman noted. “It should be the tech that you don’t trust that much.”
His candid remarks have sparked fresh discussions in the tech world and among regular users, many of whom depend on ChatGPT for everything from writing and research to personal advice. Altman emphasized that, while powerful, the chatbot is prone to generating inaccurate or fabricated responses—a phenomenon widely referred to in the AI field as “hallucination.”
ChatGPT functions by predicting the next word in a sequence based on patterns learned during training. However, it lacks real-world understanding and occasionally outputs misleading or incorrect information.
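The idea of "predicting the next word from learned patterns" can be sketched with a toy bigram model. This is a deliberately simplified illustration, not OpenAI's actual architecture (ChatGPT uses a transformer trained on vastly more data); the corpus and function names here are made up for the example:

```python
from collections import Counter, defaultdict

# A tiny "training" text. The model learns only which word tends to
# follow which -- it has no concept of cats, mats, or truth.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, which words follow it and how often.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` seen in training."""
    if word not in following:
        return None  # no pattern learned, nothing to predict
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" -- it followed "the" most often
print(predict_next("sat"))  # "on" -- pattern matching, not understanding
```

The sketch makes the limitation concrete: the model's answer is whatever was statistically common in its training data, which is exactly why a fluent-sounding output can still be wrong.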
“It’s not super reliable,” Altman said. “We need to be honest about that.”
Despite these flaws, ChatGPT continues to be a go-to tool for millions daily. Altman acknowledged its popularity but warned of the potential risks of overreliance, especially when users accept its answers without scrutiny.
The conversation also touched on upcoming features like persistent memory and potential ad-supported models—innovations aimed at personalization and monetization but accompanied by renewed concerns about privacy and data usage.
Altman's cautionary stance echoes that of Geoffrey Hinton, often called the “godfather of AI.” In a recent interview with CBS, Hinton confessed, “I tend to believe what it says, even though I should probably be suspicious.”
To illustrate the model's shortcomings, Hinton tested GPT-4 with a basic riddle: "Sally has three brothers. Each of her brothers has two sisters. How many sisters does Sally have?" GPT-4 got it wrong. The correct answer is one: each brother's two sisters are Sally and one other girl, so Sally has a single sister. "It surprises me it still screws up on that," Hinton commented, adding that future models like GPT-5 may offer improvements.
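The riddle's logic can be checked mechanically. In this hypothetical sketch, every name except Sally is invented for illustration; the family is Sally, her three brothers, and a second girl, since each brother must have exactly two sisters:

```python
# Invented family members satisfying the riddle's constraints.
family = [("Sally", "girl"), ("Anna", "girl"),
          ("Tom", "boy"), ("Ben", "boy"), ("Sam", "boy")]

def sisters_of(name):
    """Everyone else in the family who is a girl."""
    return [n for n, g in family if g == "girl" and n != name]

# Every brother indeed has two sisters: Sally and Anna.
assert all(len(sisters_of(n)) == 2 for n, g in family if g == "boy")

# But Sally is not her own sister, so she has just one.
print(len(sisters_of("Sally")))  # prints 1
```

Three lines of set logic settle what the chatbot fumbled, which is Hinton's point: verification is often cheap, so do it.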
Both Altman and Hinton agree on the tremendous utility of AI tools—but they also urge users to approach them with critical thinking. Their message is simple yet crucial: Use AI wisely—trust, but always verify.