On Friday, the Trump administration handed down back-to-back directives that threaten to curb growth for one of the country’s most successful artificial intelligence firms. First, President Donald Trump ordered federal agencies to stop using the company’s products.
Shortly after, the Pentagon declared the AI developer a supply-chain risk — a designation typically reserved for companies from countries the US views as adversaries.
The moves, which followed a tense showdown between the San Francisco-based startup and the Pentagon over AI safeguards, aim not only to cut off Anthropic’s sales to the US government, but also its sales to numerous other firms. “No contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic,” Defense Secretary Pete Hegseth wrote in a post.
The full impact on the company — and the AI ecosystem — remains to be seen. At minimum, rivals such as OpenAI, Alphabet Inc.’s Google and Elon Musk’s xAI now have an opportunity to take on government work that had previously gone to Anthropic. But some legal and policy experts warned the fallout could be far more dire if the Pentagon follows through on its declaration.
Barring Anthropic from working with corporate customers that do business with the Defense Department would be “a death blow” to Anthropic’s business, said Charlie Bullock, a lawyer and senior research fellow at the Institute for Law & AI, a Boston-based think tank. As laid out in Hegseth’s post, the Pentagon’s policy would effectively prevent Anthropic from working with some of its biggest partners, such as Amazon.com Inc., Bullock said.
The uncertainty hits Anthropic at a pivotal moment. The company, which was founded in 2021 by several former employees of OpenAI, is widely expected to be preparing for an initial public offering as soon as this year. With its popular Claude chatbot, Anthropic has been racing to persuade more businesses to pay for its software to help offset the immense cost of developing AI and justify its lofty $380 billion valuation.
In a statement Friday, Anthropic called the move “legally unsound” and “a dangerous precedent.” The company also set the stage for a legal battle over the designation. “No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons,” it wrote. “We will challenge any supply chain risk designation in court.”
Some backers are concerned Anthropic’s refusal to concede to the Trump administration’s demands could harm the company’s brand, making the startup seem hostile and anti-American, according to an Anthropic investor who spoke to Bloomberg on the condition of anonymity.
Still, some investors are wary of complaining publicly or even pushing back in private. Anthropic is the crown jewel in many of their portfolios, and Chief Executive Officer Dario Amodei’s resolute control over the company’s direction has led many VCs to bite their tongues even when they disagree with his choices, multiple Anthropic investors said.
Other investors supported Anthropic whether it decided to work with the Pentagon or not, with some noting that the government provides minimal revenue for the model maker. The move has also generated considerable support for Anthropic within the tech community, with some CEOs lauding its stance.
Hegseth had given the company until 5:01 p.m. on Friday to allow the Pentagon to use Claude for any purpose within legal limits — but without any usage restrictions from Anthropic. The startup has insisted that the chatbot not be used for mass surveillance against Americans or in fully autonomous weapons operations.
Trump’s decision to order agencies to ditch Anthropic posed some initial risk to the firm, though one that’s limited in scope for a company with a revenue run rate of $14 billion. Anthropic inked an agreement in July with the Defense Department worth up to $200 million, but Bloomberg Government contracting records show the Pentagon paid only $2 million to Anthropic last year.
This month, Anthropic signed its first deal for the State Department to use Claude, valued at just $19,000. The company also struck a broad deal with the General Services Administration last year for federal government agencies to use Claude for a nominal $1 fee. Hegseth has set a six-month maximum for Anthropic’s services to be handed over to another AI provider.
The goals of the Defense Department’s actions are ultimately much broader, treating Anthropic similarly to Chinese firms that the US perceives to be a security threat. However, Bullock said the legal authority Hegseth is relying on is actually quite narrow, allowing the agency to prohibit its contractors from tapping Anthropic’s products for procurements related to defense contracting — but not necessarily for things like using Claude in their businesses.
Hegseth is likely to lean on the Federal Acquisitions Security Council, established in Trump’s first term, to enact the policy, Harrell said.
If the Pentagon tried to bar companies that contract with it from having other, unrelated business with Anthropic, “the courts will throw that out quite quickly,” but “I can’t say they won’t try it,” Harrell said. “They seem to be willing to try things that are overturned in the courts.”
The Pentagon did not immediately respond to a request for comment.
If Anthropic launches a court challenge, that could ultimately buy it time, Bullock said. For example, a court could grant the company a temporary restraining order or preliminary injunction, he said. “But we’ll have to wait and see.”
The Pentagon’s decision has quickly sent shockwaves throughout the AI community, both for its implications in the wider battle over how to deploy a powerful technology safely and because of the broad popularity of Claude Code for software development.
Virtually all companies that build software do business with Anthropic, according to one AI startup executive who spoke on condition of anonymity. Not being able to use Claude Code would be disastrous for both the industry and US competitiveness, the executive said.
© 2026 Bloomberg L.P. All rights reserved. Used with permission.