In a significant step toward responsible AI use, xAI has quietly introduced a new safety control that lets users block Grok from generating or modifying images of people. The update follows months of public outrage over the chatbot’s misuse, in which individuals created explicit and altered images of women without their knowledge or consent.

While the company has not yet released an official statement, users report that the new option is easy to access within the app’s settings. With a simple toggle, people can now prevent Grok from responding to AI image-editing requests involving real individuals. The move is being seen as a long-overdue safeguard, especially as similar protections are expected to roll out broadly for both iOS and Android users.

Early feedback suggests the feature currently works in limited scenarios—primarily when users attempt to summon Grok in social media replies to manipulate images. This indicates the safeguard may still be under development. Neither Elon Musk nor company representatives have publicly addressed the change so far, but an announcement is anticipated in the coming weeks.

Grok has rapidly emerged as a strong competitor to AI chat platforms such as OpenAI’s ChatGPT and Google’s Gemini, offering powerful generative tools including AI image creation. However, its open capabilities recently drew criticism after users discovered they could generate manipulated images of anyone—often placing people in bikinis or creating nude depictions—through simple text prompts.

The situation escalated because many of these AI-generated images were shared publicly, amplifying harm and humiliation for victims. Women, in particular, expressed shock and distress as altered visuals spread widely across platforms.

Concerns intensified after findings from the Center for Countering Digital Hate (CCDH), a cyber hate watchdog group. According to the organization, Grok was used to generate millions of explicit images of women. The watchdog’s investigation estimated that roughly three million such images were created in a short span, raising serious questions about AI governance and platform accountability.

Global regulators and digital safety advocates soon pressed for immediate intervention, urging stronger safeguards to prevent further misuse of generative AI systems.

“Elon Musk’s Grok is a factory for the production of sexual abuse material,” said Imran Ahmed, chief executive of the CCDH. “By deploying AI without safeguards, Musk enabled the creation of an estimated 23,000 sexualized images of children in two weeks, and millions more images of adult women,” the organization added in its report.

The controversy has reignited broader debates around ethical AI deployment, user consent, and the responsibility of tech companies to prevent digital harm. As generative tools grow more advanced and accessible, experts say safety frameworks must evolve just as quickly.

For now, the new blocking feature marks a small but meaningful shift toward safer AI experiences—one many believe should have arrived much sooner.