Elon Musk-linked AI Grok faces lawsuit over alleged explicit deepfakes


A lawsuit against xAI alleges Grok generated non-consensual explicit images, intensifying global concerns over AI misuse and deepfake safeguards.

Ashley St. Clair, the mother of one of Elon Musk’s children, has filed a lawsuit against his artificial intelligence company, xAI, accusing its chatbot Grok of generating explicit images of her without her consent. She says the incident has left her living in fear, worried about her personal safety and the long-term damage such content could do to her reputation.

According to the complaint, Grok was allegedly prompted to digitally manipulate images of St. Clair, removing her clothing and placing her in sexualised, bikini-style visuals. St. Clair maintains that she never consented to any such content being created or shared. The lawsuit argues that the AI’s ability to produce such images represents a serious failure of safeguards.

Her case comes amid a broader wave of concern over AI-generated deepfakes. In recent weeks, multiple women have reported similar experiences in which Grok responded to prompts that undress people or place them in sexually explicit scenarios. Some reports have raised even graver allegations, claiming that prompts involved individuals who appeared to be minors.

The controversy has not remained confined to the United States. Policymakers and regulators in several countries have begun examining the misuse of AI tools like Grok. Investigations have been launched, and officials have warned that stronger laws may be necessary to prevent artificial intelligence from being weaponised to harass, exploit, or harm individuals. Despite mounting scrutiny, reports suggest Grok has continued to respond to similar prompts.

St. Clair initially filed her lawsuit in a New York state court, but the case was later moved to federal court. According to media reports, her legal strategy rests on product liability: she claims xAI has created a “public nuisance” and that Grok is “unreasonably dangerous as designed.” Product liability arguments are increasingly being used in legal challenges against technology companies, particularly when their tools allegedly cause real-world harm.

She is represented by prominent attorney Carrie Goldberg, who is well known for taking on major technology firms in cases involving online abuse, harassment, and digital exploitation. The lawsuit claims xAI failed to implement adequate safeguards, allowing Grok to generate content that could seriously damage individuals’ lives.

On the same day the case was moved to federal court, xAI filed its own lawsuit against St. Clair in a Texas federal court. The company alleges that she violated Grok’s terms of service and argues that any disputes must be heard exclusively in Texas courts.

When approached for comment, xAI did not directly address the allegations. According to media reports, a response sent from the company’s media email simply stated, “Legacy Media Lies.”

As legal proceedings unfold, the case adds to growing pressure on AI developers to take responsibility for how their tools are used and misused. Questions are mounting over whether xAI will further restrict Grok’s image-generation capabilities or remove features that allow undressing prompts altogether. Elon Musk has previously defended Grok’s design, stating, “Grok is supposed to allow upper body nudity of imaginary adult humans (not real ones) consistent with what can be seen in R-rated movies on Apple TV. That is the de facto standard in America,” while noting that rules could vary depending on local laws in different countries.
