
Judge blocks Pentagon’s effort to ‘punish’ Anthropic by labeling it a supply chain risk

Pages from the Anthropic website and the company's logo on February 26. (Patrick Sison/AP via CNN Newsource)

By Hadas Gold, Devan Cole, CNN

(CNN) — A federal judge in California has indefinitely blocked the Pentagon’s effort to “punish” Anthropic by labeling it a supply chain risk and attempting to sever government ties with the AI company, ruling that those measures ran roughshod over its constitutional rights.

“Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government,” US District Judge Rita Lin wrote in a stinging 43-page ruling.

Lin, an appointee of former President Joe Biden, said she would delay implementation of her ruling for one week to allow the government to appeal.

But in her ruling, she made it clear she disapproved of the government’s actions, which she said violated the company’s First Amendment and due process rights.

The ruling is the latest judicial broadside against Defense Secretary Pete Hegseth as he’s sought to use powerful tools at his disposal to push back against companies and individuals he’s tussled with in recent months.

Earlier this month, a federal judge in DC ruled that the secretary violated the First Amendment rights of several reporters when he implemented a restrictive new press policy. And in February, a different judge in DC said Hegseth infringed on the free speech rights of a Democratic senator over the lawmaker’s urging of US service members to refuse illegal orders.

Anthropic applauded Lin’s ruling on Thursday.

“We’re grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits,” an Anthropic spokesperson said. “While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI.”

The supply chain risk designation meant any company that works with the military would need to show it didn’t use an Anthropic product. The label, leveled by the Pentagon last month, had previously been used only for companies seen as connected to foreign adversaries.

Anthropic said the designation violated its First Amendment rights, tarnished its reputation and jeopardized hundreds of millions of dollars’ worth of contracts.

The Department of Defense’s quarrel with Anthropic began after the company refused to back down over contractual guardrails around the use of its Claude AI model in autonomous weapons and mass surveillance.

Hegseth took the dramatic, unprecedented step of labeling it a supply chain risk in February, and Hegseth and President Donald Trump ordered federal agencies to cease using the product and sever ties with companies that do business with Anthropic.

But Lin said that was all in retaliation for the company sticking with its guardrails.

“These broad measures do not appear to be directed at the government’s stated national security interests,” she wrote. “The Department of War’s records show that it designated Anthropic as a supply chain risk because of its ‘hostile manner through the press.’”

“Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation,” she added.

The Department of Defense wanted unfettered access to Claude for “all lawful purposes.” The department said it needed complete freedom to use the system, especially in wartime.

“We can’t have a company that has a different policy preference that is baked into the model… pollute the supply chain so our warfighters are getting ineffective weapons, ineffective body armor, ineffective protection,” the Defense Department’s chief technology officer, Emil Michael, told CNBC earlier this month.

But Anthropic had two red lines: It did not want its AI systems used in autonomous weapons or domestic mass surveillance. Anthropic argued in its suit that the Pentagon was aware of its position on the Claude limitations and that its stance is protected speech.

A separate challenge by the company to other authorities Hegseth invoked to make the supply chain risk designation is still pending before a federal court in Washington, DC.

CNN has reached out to the Pentagon for comment.

The-CNN-Wire
™ & © 2026 Cable News Network, Inc., a Warner Bros. Discovery Company. All rights reserved.
