
When the Pentagon gave Anthropic the boot, OpenAI swooped in. Some staff are frustrated with how it unfolded

By Hadas Gold, CNN

(CNN) — Messages written in chalk covered the sidewalk outside OpenAI’s San Francisco offices Monday morning: “Where are your redlines?” “You must speak up.” “What are the safeguards?”

The messages, according to social media and news reports, were written by activists. But some of those feelings are shared by many within the building, after OpenAI struck a deal with the Pentagon on Friday to use its AI models in classified systems.

Anthropic had already rejected an update to its contract with the Pentagon because it felt the language didn’t adhere to the company’s redlines around the use of AI in mass surveillance and autonomous weapons. The Pentagon blacklisted the company as a result, designating it a supply chain risk.

The contracts are steeped in legal and technical complexity. But in public forums and in private conversations, OpenAI employees are venting about how company leadership handled the Pentagon negotiations. Many employees “really respect” Anthropic for standing up to the Pentagon and are frustrated with OpenAI’s handling of its own contract, one current employee told CNN on the condition of anonymity to speak freely.

As the hours ticked down to the Pentagon’s Friday deadline for Anthropic to agree to its contract, OpenAI CEO Sam Altman surprised many when he said he agreed with his rival, Anthropic CEO Dario Amodei, and shared the same redlines.

But it turned out Altman had been negotiating a deal of OpenAI’s own. Criticism erupted hours later, when OpenAI announced its Pentagon contract, seemingly swooping in to take Anthropic’s place. After OpenAI published some of the contract’s terms on Saturday, many outside observers immediately questioned how the redlines on autonomous weapons and mass surveillance would actually be upheld, with some saying the language would still allow the safeguards to be disregarded.

In response, Altman fielded questions publicly over X on Saturday evening and announced on Monday that OpenAI had adjusted its Pentagon contract to more clearly establish guardrails that would prevent OpenAI services from being used in surveillance programs. (Autonomous weapons were not mentioned in the added language he posted online.)

Many employees recognize the need to support the government as the US competes with China in AI, according to the current employee. But they also felt a contract of such importance and magnitude was rushed through.

“It’s partly how it was perceived, how it was communicated, and what the narrative has become,” the employee said.

Some employees publicly expressed their frustrations. Research scientist Aidan McLaughlin posted on X Monday morning before Altman’s contract update: “i personally don’t think this deal was worth it.” He later called the internal discussion about the subject “overwhelming” but said he felt “incredibly proud to work somewhere where people can speak their mind.”

Jasmine Wang, who works on AI safety issues at OpenAI, posted that she wanted “independent legal counsel” to analyze the new contract language Altman announced on Monday. She later reposted both legal analyses that supported OpenAI’s claim that the new language solidified its redlines and others that criticized it as “weasel language.”

Altman acknowledged the communications breakdown.

“The issues are super complex, and demand clear communication,” Altman wrote on X on Monday. “We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy.”

During an all-hands gathering at OpenAI on Tuesday, Altman reiterated that rushing the deal out was a “mistake,” according to a source familiar with the meeting. But OpenAI can’t weigh in on individual use cases for its technology, Altman said, such as distinguishing which specific military operations might be considered good or bad.

An OpenAI spokesperson pointed CNN to Altman’s public statements.

Some employees on Tuesday also felt frustrated that some observers are portraying Anthropic as heroic despite its previous years of work with the Pentagon and major defense contractor Palantir, which drew little scrutiny.

Altman told employees on Tuesday he believes that governments should work with labs like OpenAI that enforce safety standards, rather than companies with fewer protections. He said he is urging the government to drop Anthropic’s supply chain risk designation.

“I believe we will hopefully have the best models that will encourage the government to be willing to work with us, even if our safety stack annoys them, or put some limits or something else,” he said at the meeting.

The-CNN-Wire
™ & © 2026 Cable News Network, Inc., a Warner Bros. Discovery Company. All rights reserved.
