Instagram Tightens Child Safety Measures After Predator Algorithm Claims

Instagram is changing how its algorithm works to better protect children on its platform, following serious accusations that its recommendations were helping steer predators toward minors. In a new blog post, Meta announced a series of expanded child safety measures aimed at accounts that post images of children but are managed by adults — like parents or talent managers.
From now on, Instagram will no longer recommend these adult-managed accounts to “potentially suspicious adults.” The move follows a damning 2023 lawsuit that described Facebook and Instagram as a “marketplace for predators in search of children,” alleging the platforms made it easy to search for, share, and even sell large amounts of child sexual abuse material. The same year, a Wall Street Journal investigation found that Instagram’s recommendation algorithm was promoting networks of pedophiles.
In response, Meta has rolled out a range of child protection tools for Facebook and Instagram users under 18. Now, these protections are being expanded to cover adult-run accounts that prominently feature children’s images. Instagram says it will “avoid recommending” these accounts to potentially suspicious adults, such as those who have been blocked by teens. It will also hide those adults’ comments on these accounts’ posts and make it harder for suspicious accounts and child-focused accounts to find each other through search.
While Meta insists that adult-managed accounts featuring child images are “overwhelmingly used in benign ways,” the company has also faced criticism for allegedly allowing parents who exploit their own children for profit to keep using its platforms. Last year, Meta updated its policies to stop accounts that heavily feature kids from offering paid subscriptions or receiving gifts, and this new move builds on that policy.
More features are on the way, too. Instagram plans to apply its strictest messaging settings to adult-run accounts featuring kids and to automatically filter offensive and inappropriate comments on their posts. In Instagram DMs, teens will soon have a combined report-and-block tool for added safety. They’ll also see when the account they’re chatting with was created, including the month and year, giving them another way to spot suspicious or fake profiles.
Meta says it will continue refining these safeguards as it tries to repair trust and prove its platforms can be a safer space for young people.