Pentagon Flags Anthropic as Supply Chain Risk, Citing Policy Conflicts Over Military AI Use

Policy clashes over military AI use trigger Pentagon restrictions on Anthropic, sparking legal battle and major defense technology transition.

The Pentagon has publicly explained why it designated AI startup Anthropic as a supply chain risk, a move that has now escalated into a legal confrontation. According to US Defense Undersecretary Emil Michael, the company’s flagship AI model, Claude, embeds policy positions that conflict with US military requirements.

Speaking to CNBC, Michael said the core issue lies in what he described as a mismatch of institutional priorities. Anthropic, led by CEO Dario Amodei, declined to grant the military unrestricted usage rights for its AI systems. The Department of Defense had requested access for “all lawful purposes,” but Anthropic imposed firm limits, particularly opposing the use of its models for mass domestic surveillance and autonomous weapons development.

Michael argued that such restrictions create operational vulnerabilities. “That’s really where the supply chain risk designation came from,” he said. “We can’t have a company that has a different policy preference that is baked into the model through its constitution, its soul, its policy preferences, and pollutes the supply chain.” He added, “So our warfighters are getting ineffective weapons, ineffective body armor, ineffective protection.”

Typically, the “supply chain risk” label is reserved for foreign adversaries, including companies such as Huawei. Applying the same classification to a US-based AI firm is highly unusual, prompting Anthropic to file a lawsuit against the US Department of Defense.

Despite the directive, removing Anthropic’s technology from military systems is not straightforward. Claude is currently the only custom AI platform deeply integrated into classified US defense networks. As a result, the Pentagon has outlined a six-month transition plan as it shifts toward systems developed by OpenAI.

Anthropic’s models are reportedly embedded in sensitive operations infrastructure, including use during US military strikes on Iran. Claude has been integrated into the Maven Smart System, a battlefield intelligence platform developed by data analytics firm Palantir. The system plays a critical role in real-time target identification and operational decision-making.

Under the new directive, defense contractors — including Palantir — will be required to phase out partnerships involving Anthropic technology. Michael acknowledged the complexity of the process, noting, “This is not just Outlook where you could delete it from your desktop.”

Anthropic has challenged the designation in court, calling the action by the Trump administration “unprecedented and unlawful.” In legal filings, the company warned it could lose contracts worth hundreds of millions of dollars.

Public reaction, meanwhile, has leaned in Anthropic’s favor. Claude recently surged to the top of the US charts on the Apple App Store, reflecting a wave of consumer support. Some users have even shifted from ChatGPT to Claude following OpenAI’s defense partnership.

Meanwhile, major technology providers — including Microsoft, Google, and Amazon — say they will continue offering Anthropic’s tools to commercial clients, provided deployments remain outside Pentagon operations.
