Meta Faces Backlash Over AI Guidelines Allowing Romantic Chats With Minors, Offensive Remarks
Meta Platforms is under intense scrutiny after a Reuters investigation uncovered disturbing details about its internal guidelines for AI chatbots. The revelations, drawn from a confidential policy manual, suggest that until recently, Meta’s AI assistants were permitted to engage in romantic or sensual conversations with children, produce racially offensive content, and spread false claims about public figures.
The 200-plus-page document, titled "GenAI: Content Risk Standards," outlines acceptable behavior for Meta's generative AI tools, including its Meta AI assistant and chatbots across Facebook, Instagram, and WhatsApp. According to Reuters, the guidelines were approved by Meta's legal, public policy, and engineering teams, including its chief ethicist. The company described the rules as setting boundaries rather than representing "ideal" AI behavior.
One of the most alarming examples cited allowed the AI to describe a child’s appearance in an inappropriate, romanticized way. In one approved scenario, the chatbot could tell a shirtless eight-year-old: “Every inch of you is a masterpiece – a treasure I cherish deeply.”
The report also claims the guidelines permitted inflammatory or discriminatory statements under certain conditions. While hate speech was officially banned, there was a loophole allowing the bot to create content demeaning people based on protected characteristics if prompted by a user. In one example, it was deemed acceptable for the AI to write a paragraph claiming that Black people are “dumber than white people.”
Another provision allegedly allowed AI systems to knowingly generate false information, provided the output included a disclaimer that the statement was untrue. The document also permitted limited depictions of harm, such as showing adults or children being punched or kicked, but prohibited extreme violence, gore, or fatal injuries. For example, it could show children fighting but not depict one girl impaling another.
Meta has since confirmed the document’s authenticity but stressed that the most controversial examples were removed after internal review. Company spokesperson Andy Stone told Reuters the problematic rules were “erroneous and inconsistent” with Meta’s current standards. “We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors,” Stone said.
Despite some revisions, Reuters reports that several contentious sections remain in the manual. This has sparked growing concern among lawmakers. US senators are now urging a federal investigation into Meta’s AI safety measures, arguing that the company must be held accountable for the potential risks its systems pose to children and vulnerable communities.
The findings come amid heightened global debate over the ethical boundaries of AI technology. With public trust in artificial intelligence already fragile, the controversy raises urgent questions about corporate responsibility, oversight, and the balance between innovation and safety.