Anthropic sues Trump admin over ‘supply chain risk’ designation, claims ‘unlawful campaign of retaliation’
Trump said Anthropic was trying to “strong-arm” the War Department.
Anthropic, an artificial intelligence (AI) company, has filed a federal lawsuit against the Trump administration after the Pentagon designated it a “supply chain risk.” The complaint, filed Monday in the US District Court for the Central District of California, argues that President Donald Trump’s order directing all federal agencies to “immediately cease” using the company’s technology was an “unlawful campaign of retaliation.”
The company asked the court to reverse the order, citing “enormous consequences,” including economic damages. The supply chain risk designation was issued after Anthropic refused to allow the Department of War (DoW) to use its technology to expand its supply of lethal autonomous weapons.
“These actions are unprecedented and unlawful. The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech,” the lawsuit states. “No federal statute authorizes the actions taken here. Anthropic turns to the judiciary as a last resort to vindicate its rights and halt the Executive’s unlawful campaign of retaliation.”
Defense contractors are prohibited from using companies deemed a supply chain risk, a designation typically reserved for foreign adversaries. Secretary of War Pete Hegseth told Anthropic that its refusal was a “national security risk.”
Anthropic, founded with a focus on safety and basic guardrails, was awarded a $200 million DoW contract in 2025. Negotiations fell through last month after Anthropic CEO Dario Amodei informed Hegseth that the company’s products could not be used for autonomous weapons or for mass surveillance. The Pentagon rejected that position, saying all of Anthropic’s technology should be available for any “lawful purposes.”
Anthropic is best known as the creator of the AI large language model Claude, which the US military recently used in the operation that captured Venezuelan dictator Nicolas Maduro in Caracas. The firm asked for assurances that its products would not be used to spy on Americans “en masse” or to develop weapons designed to kill without a human pulling the trigger.
White House spokesperson Liz Huston told The Hill in a statement that the president refuses to “allow a radical left, woke company to jeopardize” the nation’s security. “The President and Secretary of War are ensuring America’s courageous warfighters have the appropriate tools they need to be successful and will guarantee that they are never held hostage by the ideological whims of any Big Tech leaders,” she said.