Federal Judge Halts Pentagon Blacklist of Anthropic

A federal judge in the United States has temporarily halted the Pentagon’s blacklisting of Anthropic, a significant development in the company’s ongoing dispute with the military over AI safety in combat settings. In a lawsuit filed in a California federal court, Anthropic contends that U.S. Secretary of War Pete Hegseth exceeded his authority by designating the company a national security supply-chain risk without giving it an opportunity to challenge the label. The government can assign that designation to firms that could expose military systems to infiltration or sabotage by adversaries.

The ruling, a 43-page decision by U.S. District Judge Rita Lin, an appointee of former President Joe Biden, sided with Anthropic. The injunction will not take effect immediately, however, giving the administration time to appeal. The dispute arose after Anthropic opposed the military’s use of its AI chatbot Claude for domestic surveillance or autonomous weapons, prompting Hegseth’s unprecedented move to bar Anthropic from certain military contracts, a step that could cost the company significant revenue and damage its reputation.

Anthropic argues that AI models are not yet reliable enough for use in autonomous weapons and opposes domestic surveillance as a violation of civil rights. The Pentagon counters that private companies should not dictate military actions, while maintaining that it has no intention of using the technology in unauthorized ways. The judge found that the government’s actions appeared punitive toward Anthropic rather than driven by the national security interests it cited.

In response to the ruling, Anthropic spokesperson Danielle Cohen expressed satisfaction with the outcome, emphasizing the company’s commitment to working with the government to ensure AI technology is used safely and beneficially for all Americans. Notably, Anthropic’s designation as a supply-chain risk under a government-procurement statute is unprecedented for a U.S. company and has drawn legal challenges calling the decision unlawful and unsupported.

The Justice Department pushed back on Anthropic’s position, arguing that keeping restrictions on the use of Claude could disrupt military operations. The government maintained that the designation resulted from Anthropic’s refusal to comply with contractual terms, not from its stance on AI safety. Anthropic is also pursuing a separate lawsuit in Washington over a Pentagon supply-chain risk designation that could affect its eligibility for civilian government contracts.
