A United States judge has issued a temporary injunction against the Pentagon’s blacklisting of Anthropic, marking a new development in the company’s ongoing dispute with the military concerning the safety of artificial intelligence (AI) in combat situations.
The lawsuit, filed by Anthropic in a California federal court, contends that U.S. Secretary of War Pete Hegseth exceeded his authority by designating Anthropic a national security supply-chain risk without allowing the company to challenge the label. The government can apply that designation to firms whose products may expose military systems to infiltration or sabotage by adversaries. Anthropic argued that the action violated its First Amendment right to free speech and its Fifth Amendment right to due process.
In a 43-page ruling, U.S. District Judge Rita Lin, appointed by former President Joe Biden, sided with Anthropic but delayed the enforcement of the decision for seven days to provide the administration an opportunity to appeal.
Hegseth’s controversial decision, made after Anthropic objected to the military’s use of its AI chatbot Claude for surveillance or in autonomous weapons, barred the company from certain military contracts. Anthropic executives said the move could cause significant financial losses and damage the company’s reputation.
Anthropic asserts that AI models are not yet reliable enough for use in autonomous weapons, and it opposes domestic surveillance as an infringement of rights. The Pentagon, while arguing that private companies should not be able to restrict military operations, says it intends to use the technology only in lawful ways and not for the purposes Anthropic opposes.
Judge Lin wrote that the government’s actions appeared more punitive toward Anthropic than aligned with its stated national security objectives. “The record supports an inference that Anthropic is being punished for criticizing the government’s contracting position in the press,” she wrote, finding a violation of the First Amendment.
Anthropic’s spokesperson, Danielle Cohen, welcomed the ruling, saying the company remains committed to working constructively with the government to ensure AI is used safely and effectively for all Americans.
This marks the first instance where a U.S. company has been publicly identified as a supply-chain risk under a government-procurement statute designed to safeguard military systems from potential foreign sabotage. Anthropic’s lawsuit contests the legality and rationale behind the decision and highlights inconsistencies in the military’s previous commendations of Claude.
The Justice Department countered that Anthropic’s refusal to comply with contractual terms could create uncertainty within the Pentagon over whether Claude could be used, potentially jeopardizing military operations. The government maintained that the designation stemmed from Anthropic’s non-compliance with contractual obligations, not from its stance on AI safety.
Additionally, Anthropic is pursuing a separate lawsuit in Washington challenging another Pentagon supply-chain risk designation that could lead to its exclusion from civilian government contracts.
