US Broadens Pentagon AI Supplier List as Tensions with Anthropic Escalate

The United States government is dramatically expanding its roster of approved artificial intelligence vendors for military and intelligence operations. Four new companies have been added to the list, with the Pentagon signing agreements that allow Microsoft, Reflection AI, Amazon, and Nvidia to provide their technologies for classified missions. These firms join OpenAI, xAI, and Google on the Department of Defense’s preferred supplier list, which now covers products that can be used for what the department calls ‘any lawful use.’

That phrase has become a flashpoint. It sits at the center of a growing rift between the administration and Anthropic, the AI company behind the Claude model family. Anthropic CEO Dario Amodei publicly objected to the open-ended language, arguing it would permit the government to deploy the company’s technology for domestic surveillance and autonomous weapons systems. Amodei wanted those applications explicitly barred. The Pentagon responded by canceling a $200 million contract with Anthropic. The company quickly sued, claiming the decision cost it millions in lost revenue and damaged its standing with other potential clients.

An Unprecedented Designation for a US-Based AI Firm

The Trump administration then labeled Anthropic a ‘supply chain risk.’ This marked the first time a company headquartered inside the United States had ever received that classification. Government sources subsequently described Anthropic as a ‘woke’ company. The moves sent a clear signal: the administration is willing to take aggressive action against AI firms that push back on military applications.

Anthropic’s legal challenge argues the government’s actions were retaliatory and financially devastating. The company contends that the canceled contract and the supply chain risk label have scared off other potential partners. For now, the dispute remains unresolved in court, but the broader implications for the AI industry are already clear.

Building an AI-First Fighting Force

The Pentagon’s official statement on the new agreements emphasizes long-term flexibility. ‘The Department will continue to build an architecture that prevents AI vendor lock-in and ensures long-term flexibility for the Joint force,’ the statement reads. The technologies will give the ‘warfighter the tools they need to act with confidence and safeguard the nation against any threat.’

These AI systems will operate at Impact Levels six and seven, which correspond to secret data and the most highly classified materials. The stated goal is to create what the Pentagon has described as an ‘AI-first fighting force.’ But current usage of generative AI across defense departments remains largely focused on non-classified tasks like document drafting, summarization, and internal research.

The new suppliers are expected to help ‘streamline data synthesis’ and ‘elevate situational understanding’ while also ‘augmenting warfighter decision-making in complex operational environments.’ Noticeably absent from that language: any clarification on whether these capabilities might be deployed inside US borders for domestic operations. The Pentagon has not addressed that question directly.

What This Means for Vendor Dynamics

By broadening its AI supplier base, the military is insulating itself from sudden changes of heart by individual vendors. A single company’s internal politics or leadership decisions become far less consequential when the Pentagon has multiple alternatives ready to step in. Google and Amazon have both faced employee protests over their involvement in military AI projects. Those internal tensions matter less when the customer can simply turn to another supplier.

Anthropic’s Claude AI had previously been used on classified material as part of Palantir’s Maven toolset. The newcomers may now fill that role. But Anthropic is not entirely out of the picture. According to reports, the company’s Mythos model is currently in use by the National Security Agency, particularly for what sources describe as cyber warfare and defense capabilities.

Anthropic’s Mythos Model and Global Intelligence Interest

Mythos is drawing attention well beyond US borders. The model is currently under evaluation by 40 organizations worldwide, though only 12 have been publicly named. Among the unnamed 28, analysts believe the UK’s MI5 and the US NSA are likely participants. The model’s reported abilities in cyber operations make it a natural fit for intelligence agencies seeking advanced defensive and offensive tools.

The irony is that while the administration publicly distances itself from Anthropic, the company’s technology appears to remain embedded in critical government systems. This creates a complicated dynamic: official policy says one thing, but operational reality says another.

A Possible Reversal in the Works?

According to Axios, the White House may be quietly walking back its hardline stance. The outlet cited an unnamed source inside the administration stating that officials are trying to find ways to ‘save face and bring ’em back in.’ If true, it suggests the government recognizes the value Anthropic’s models bring and may be looking for an off-ramp from the current confrontation.

Throughout these events, Anthropic’s Claude coding model has reportedly remained in use by US government security organizations. The White House has offered a somewhat conciliatory statement: ‘The US government continues to proactively engage across government and industry to protect our country and the American people, including by working with frontier AI labs.’ That phrasing leaves the door open for reconciliation, even as the legal battle continues.

What comes next is unclear. The Pentagon’s expanded supplier list gives it more flexibility, but it also signals that the era of cozy, exclusive AI partnerships with the military is over. Companies that want government contracts must now accept terms that include ‘any lawful use,’ a phrase that grants the state enormous discretion. Those that resist may find themselves locked out, at least temporarily.

The broader lesson for the AI industry is straightforward: when you sell tools to the world’s most powerful military, you don’t get to control how they are used. And if you try, the military may simply find someone else who won’t ask questions.
