Musk Accuses OpenAI of Safety Lapses, Says Grok Has ‘Cleaner Track Record’

In a heated legal clash, Elon Musk claims ChatGPT is linked to suicides while defending Grok’s safety record.
The long-running feud between Elon Musk and Sam Altman has intensified, with fresh allegations surfacing in an ongoing courtroom battle over the future and purpose of artificial intelligence.
In a recently unsealed video deposition, recorded in September and made public ahead of a potential jury trial next month, Musk sharply criticized OpenAI’s chatbot, ChatGPT, claiming it has been linked to user suicides. By contrast, he defended his own AI system, Grok, stating it has not been associated with similar incidents.
“Nobody has committed suicide because of Grok, but apparently they have because of ChatGPT,” Musk alleged, referring to lawsuits currently facing OpenAI. In those cases, plaintiffs argue that emotionally intense or manipulative exchanges with ChatGPT contributed to severe mental distress and, in some instances, to suicide.
The remarks form part of Musk’s broader lawsuit against OpenAI, which he co-founded. At the center of the dispute is OpenAI’s transition from a nonprofit research lab into a for-profit entity. Musk contends that this shift represents a departure from the organization’s original mission—to develop artificial intelligence for the benefit of humanity rather than for commercial gain.
According to Musk, OpenAI was created to ensure that AI development would not fall under the control of a single powerful corporation. In his testimony, he argued that commercial pressures such as revenue growth, scaling operations, and strategic partnerships could push companies to move faster than safety considerations allow. He has repeatedly emphasized that caution should outweigh speed when it comes to powerful AI systems.
Musk’s concerns echo the public letter he signed in March 2023, which was endorsed by over 1,100 signatories. The letter urged AI labs to pause development of systems more advanced than GPT-4, warning that the industry was engaged in an “out-of-control race” without fully understanding the potential risks. Asked during his deposition why he supported the letter, Musk responded that it “seemed like a good idea,” reiterating his focus on AI safety.
Despite positioning Grok as a safer alternative, Musk’s AI venture, xAI, has also faced scrutiny. Recently, Grok-generated non-consensual nude images were widely shared on X, including some reportedly involving minors. The controversy prompted investigations by California authorities and regulatory attention from the European Union, with certain jurisdictions imposing temporary restrictions.
The rivalry between Musk and OpenAI traces back several years. Musk left OpenAI’s board in February 2018, citing conflicts of interest with Tesla’s AI projects and disagreements over the company’s direction. During his testimony, Musk said he initially helped establish OpenAI out of concern that Google might dominate AI development. He described conversations with Larry Page as “alarming,” claiming Page did not appear sufficiently concerned about AI safety.
As legal proceedings move forward, the dispute highlights deeper questions about the governance, accountability, and ethical responsibilities of AI companies in a rapidly evolving technological landscape.