Meta Replaces Privacy and Risk Teams with AI, Sparking Oversight Concerns

Update: 2025-10-24 11:24 IST

Meta’s push toward a more AI-driven future has entered a new and controversial phase. Less than two days after news surfaced about 600 layoffs from its AI division, the company has confirmed another wave of job cuts—this time targeting more than 100 employees from its privacy and risk review teams. According to a popular publication, these human roles are being replaced by automated systems designed to handle compliance and risk assessments.

This shift marks a major step in CEO Mark Zuckerberg’s ongoing transformation of Meta into a leaner, faster, and heavily AI-focused enterprise. Since declaring 2023 the company’s “Year of Efficiency,” Zuckerberg has been streamlining teams and pushing automation across business functions—not only in product development but also in the internal checks meant to ensure regulatory compliance.

In a recent internal memo, Alexandr Wang, Meta’s newly appointed Chief Artificial Intelligence Officer, emphasised that a smaller workforce would allow for faster innovation. “By reducing the size of our team, fewer conversations will be required to make a decision,” he wrote, as reported by a popular publication. His message underscores Meta’s current philosophy: fewer meetings, more machine learning.

Shortly after, Meta’s Chief Privacy Officer, Michel Protti, announced another change. In a company-wide memo, he revealed the transition “from bespoke, manual reviews to a more consistent and automated process,” claiming that automation will deliver “more accurate and reliable compliance outcomes.”

The risk review division, created after Facebook’s $5 billion settlement with the U.S. Federal Trade Commission (FTC) in 2019, was designed to act as Meta’s internal conscience—a final checkpoint ensuring responsible data handling. Now, much of that responsibility has been turned over to algorithms. Meta’s new system will classify potential issues as “low-risk” or “high-risk.” Routine, low-risk updates will be automatically reviewed by AI, with only high-risk changes requiring human input.
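The public description stops at that low-risk/high-risk split, so the sketch below is purely illustrative: a minimal Python triage flow, assuming hypothetical signals such as `touches_sensitive_data` and `changes_data_sharing`, that auto-clears low-risk changes and escalates high-risk ones to a human reviewer. Meta has not disclosed how its actual classifier or review pipeline works.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical risk tiers mirroring the two categories described in the memo.
class RiskTier(Enum):
    LOW = "low-risk"
    HIGH = "high-risk"

@dataclass
class ProductChange:
    description: str
    touches_sensitive_data: bool   # assumed signal, e.g. location or biometrics
    changes_data_sharing: bool     # assumed signal, e.g. new third-party sharing
    affects_minors: bool           # assumed signal

def classify(change: ProductChange) -> RiskTier:
    """Toy heuristic: escalate anything touching a sensitive category."""
    if (change.touches_sensitive_data
            or change.changes_data_sharing
            or change.affects_minors):
        return RiskTier.HIGH
    return RiskTier.LOW

def route(change: ProductChange) -> str:
    """Auto-review low-risk changes; send high-risk ones to a human."""
    if classify(change) is RiskTier.LOW:
        return f"AUTO-REVIEWED: {change.description}"
    return f"ESCALATED TO HUMAN REVIEW: {change.description}"

if __name__ == "__main__":
    changes = [
        ProductChange("Tweak feed ranking weights", False, False, False),
        ProductChange("Share ad-interaction data with a new partner", False, True, False),
    ]
    for c in changes:
        print(route(c))
```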

However, not everyone is convinced this is a step forward. Several employees told a popular publication that the layoffs represent a “gutting” of the department that once safeguarded user privacy and corporate accountability. Critics worry that automation could overlook nuanced ethical and regulatory considerations that require human interpretation.

A Meta spokesperson downplayed the concerns, stating that the company “routinely makes organisational changes” and that the shift reflects the “maturity of our programme” while maintaining “high compliance standards.” Still, many inside Meta view the move as a clear sign of Zuckerberg’s growing preference for efficiency and product velocity over human oversight.

The latest cuts have hit Meta’s London office particularly hard, with reports suggesting over 100 roles across its global risk division have been eliminated. As Meta competes with OpenAI, Google, and Anthropic in the race for AI dominance, it’s increasingly relying on artificial intelligence not only to power its products but also to police its own operations.

Whether these AI systems can match the discernment and accountability of human reviewers remains to be seen. For now, Meta’s bold bet on automation could either cement its technological lead or expose it to new risks in the ever-evolving world of digital regulation.
