The U.S. Department of War has formally designated AI company Anthropic as a “supply chain risk” to America’s national security, prompting the company to say it will challenge the action in court.
“Yesterday (March 4), Anthropic received a letter from the Department of War confirming that we have been designated as a supply chain risk to America’s national security,” the company said in a statement. “As we wrote on Friday, we do not believe this action is legally sound, and we see no choice but to challenge it in court.”
The company said the scope of the designation is limited. “The language used by the Department of War in the letter (even supposing it was legally sound) matches our statement on Friday that the vast majority of our customers are unaffected by a supply chain risk designation,” it said. “With respect to our customers, it plainly applies only to the use of Claude by customers as a direct part of contracts with the Department of War, not all use of Claude by customers who have such contracts.”
What did Anthropic's CEO say?
Dario Amodei, Anthropic’s CEO, also issued an apology over a leaked internal post that had criticised the President. “I also want to apologise directly for a post internal to the company that was leaked to the press yesterday,” the CEO said. “Anthropic did not leak this post nor direct anyone else to do so—it is not in our interest to escalate this situation.” The statement added, “It was a difficult day for the company, and I apologise for the tone of the post. It does not reflect my careful or considered views. It was also written six days ago, and is an out-of-date assessment of the current situation.”
The company said the designation is grounded in a narrow statutory provision. “The Department’s letter has a narrow scope, and this is because the relevant statute (10 USC 3252) is narrow, too,” it said. “It exists to protect the government rather than to punish a supplier; in fact, the law requires the Secretary of War to use the least restrictive means necessary to accomplish the goal of protecting the supply chain.” It added: “Even for Department of War contractors, the supply chain risk designation doesn’t (and can’t) limit uses of Claude or business relationships with Anthropic if those are unrelated to their specific Department of War contracts.”
Will Anthropic continue to provide support to the Pentagon?
Anthropic also said that it has been in talks with the Department. “We had been having productive conversations with the Department of War over the last several days, both about ways we could serve the Department that adhere to our two narrow exceptions, and ways for us to ensure a smooth transition if that is not possible,” the company said. “We are very proud of the work we have done together with the Department, supporting frontline warfighters with applications such as intelligence analysis, modelling and simulation, operational planning, cyber operations, and more.”
Reiterating its position on military use of AI, the company said, “We do not believe, and have never believed, that it is the role of Anthropic or any private company to be involved in operational decision-making—that is the role of the military.” It added that its concerns are limited to “our exceptions on fully autonomous weapons and mass domestic surveillance, which relate to high-level usage areas, and not operational decision-making.”
Despite the legal challenge, Anthropic said it would continue to support U.S. national security operations during any transition. “Our most important priority right now is making sure that our warfighters and national security experts are not deprived of important tools in the middle of major combat operations,” the company said. “Anthropic will provide our models to the Department of War and national security community, at nominal cost and with continuing support from our engineers, for as long as is necessary to make that transition, and for as long as we are permitted to do so.”
“Anthropic has much more in common with the Department of War than we have differences. We both are committed to advancing US national security and defending the American people, and agree on the urgency of applying AI across the government. All our future decisions will flow from that shared premise,” the blog noted.