US Court Temporarily Halts Pentagon’s Blacklisting of Anthropic amid AI Usage Dispute

A federal judge has issued an order temporarily stopping the U.S. Defense Department from blacklisting Anthropic, the company behind the Claude chatbot, as part of an escalating legal conflict over the use of artificial intelligence in military operations. The ruling also suspends enforcement of President Donald Trump’s directive barring federal agencies from using Anthropic’s products.

Judge Rita Lin of the U.S. District Court described the administration’s actions against Anthropic—including Defense Secretary Pete Hegseth’s invocation of a rarely used military authority—as excessively punitive and lacking clear justification. Judge Lin noted that such measures had previously been applied only to foreign adversaries and warned they could severely harm the AI company’s operations.

Anthropic’s legal challenge, filed in federal court in California, claims Defense Secretary Hegseth exceeded his authority by identifying the company as a national security supply-chain risk—a designation meant for firms suspected of exposing U.S. military systems to possible intrusion or sabotage.

According to Anthropic, the government’s move was retaliation for its positions on the responsible use of AI in warfare and denied the company any opportunity to contest the risk designation, allegedly violating both the First and Fifth Amendments of the U.S. Constitution.

During a recent hearing in San Francisco, Judge Lin questioned the administration’s rationale for imposing strict penalties on Anthropic after contract talks broke down. At issue was Anthropic’s reluctance to allow its AI models to be integrated into fully autonomous weapons or used for domestic surveillance, positions that Pentagon officials have described as attempts to impose private restrictions on military operations.

Anthropic maintains its technology is currently unsuitable for use in autonomous weapons and opposes surveillance of Americans. The Pentagon contends that decisions about the use of technology in defense applications should remain with the military.

Judge Lin, in her ruling, emphasized that the measures taken by the administration appeared to be retaliatory rather than focused on the stated goal of protecting national security interests. The court’s order will remain in effect for one week, and it does not obligate the Pentagon to retain Anthropic’s services or prevent a transition to other AI suppliers.

This case marks the first time a U.S. firm has been officially labeled a supply-chain risk under a seldom-used procurement statute designed to shield American military assets from foreign interference. Separately, Anthropic is also challenging another related Pentagon rule in an appeals court in Washington, D.C.

The court documents indicate substantial public interest in the case, with legal briefs submitted in support of Anthropic by Microsoft, industry associations, technology employees, retired military officials, and religious scholars.