Tuesday, May 12, 2026

Court Blocks Pentagon’s Blacklisting of AI Firm

A judge in the United States has temporarily halted the Pentagon’s blacklisting of Anthropic, marking a new development in the company’s battle with the military over AI safety in combat situations. Anthropic, in its lawsuit filed in a federal court in California, claims that U.S. Secretary of War Pete Hegseth exceeded his authority by labeling Anthropic a national security supply-chain risk without due process. The company argues that this designation violated its First Amendment right to free speech and its Fifth Amendment right to due process.

U.S. District Judge Rita Lin, appointed by former President Joe Biden, ruled in favor of Anthropic in a 43-page decision. However, the ruling will not take immediate effect, allowing the administration time to appeal. The Pentagon’s blacklisting of Anthropic came after the company opposed the military’s use of its AI chatbot Claude for U.S. surveillance or autonomous weapons, resulting in Anthropic being excluded from certain military contracts. Anthropic executives fear significant financial losses and reputational damage as a result of the action.

Anthropic contends that AI models are not yet reliable enough for use in autonomous weapons and opposes domestic surveillance on the grounds that it violates individual rights. The Pentagon argues that private companies should not be able to restrict military operations, though it has also stated that it has no interest in such uses and would deploy the technology only within legal boundaries.

Judge Lin’s ruling found that the government’s action was punitive toward Anthropic rather than driven by genuine national security concerns. Anthropic’s spokesperson, Danielle Cohen, expressed satisfaction with the decision and reiterated the company’s commitment to working with the government, for the benefit of all Americans, through safe and dependable AI technology.

This marked the first time a U.S. company was publicly labeled a supply-chain risk under a government-procurement statute aimed at safeguarding military systems from foreign sabotage. Anthropic’s lawsuit challenges the legality and factual basis of the decision, which it claims contradicts previous commendations of Claude by the military.

The Justice Department argued that Anthropic’s refusal to comply with contractual terms could create uncertainty in the Pentagon regarding the use of Claude and potentially disrupt military operations. The government maintains that the designation was a result of contractual disagreements, not Anthropic’s stance on AI safety. Additionally, Anthropic faces another lawsuit in Washington concerning a separate Pentagon supply-chain risk designation that could impact its eligibility for civilian government contracts.
