Amodei Attacks OpenAI Pentagon Deal as “Safety Theater”
The AI industry’s most bitter rivalry just got more public. Dario Amodei, CEO of Anthropic, circulated a 1,600-word internal memo to employees on Friday in which he excoriated OpenAI’s recently announced Pentagon partnership, describing it as “80% safety theater, 20% real” and accusing Sam Altman of “gaslighting” the market.
The leaked memo, first reported by The Information, escalates what has been an increasingly tense competition between the two companies. Both claim to prioritize AI safety; both are bidding for lucrative government contracts; and now both are willing to air that conflict in writing.
The Trigger
The immediate catalyst was OpenAI’s announcement of its new Codex tool and Pentagon partnership. According to reports, OpenAI has been working with the Department of Defense on AI applications and recently launched Codex, a code generation platform with enterprise features. The timing and scope of the announcement prompted Amodei’s response.
But the deeper issue is the relationship between OpenAI and Microsoft. OpenAI is deeply integrated with Microsoft’s cloud infrastructure and has taken a $10 billion investment from the company. Yet the two organizations are not without friction. According to a separate report, OpenAI is building an internal code repository platform, essentially a GitHub replacement, to reduce its dependence on Microsoft’s cloud services and development tools. That project reportedly frustrated Microsoft and signaled OpenAI’s desire for operational independence.
Against this backdrop, Amodei’s memo reads as both a philosophical critique and a competitive jab. “What OpenAI is calling safety assurance is really a PR machine,” the memo claims, according to sources who have read it. “They’ve designed processes that look rigorous but lack teeth.”
The Stakes: Government Contracts
The Pentagon and the broader US defense establishment are beginning to contract for AI capabilities. The Department of Defense, intelligence agencies, and defense contractors all see AI as central to future operations. The market is enormous and politically sensitive. Whichever company wins the US government’s trust could lock in contracts worth billions over the next decade.
That’s why both OpenAI and Anthropic are positioning themselves as the safer, more trustworthy choice. OpenAI claims that Codex has built-in safety guardrails and that it’s working with the government responsibly. Anthropic claims that its core mission is “AI safety first” and that it has rejected lucrative deals that it deemed irresponsible.
The irony is that both companies employ world-class safety researchers and both have published peer-reviewed work on AI risk. The differences in their approaches are real but often subtle: Anthropic emphasizes “constitutional AI” methods that encode values into the training process; OpenAI has focused more on post-training alignment and red-teaming. Neither approach has been proven definitively superior in the wild.
The Credibility Question
Amodei’s memo also references Greg Brockman’s $25 million donation to Donald Trump’s campaign. Brockman, OpenAI’s President, is one of the company’s founders and remains deeply influential. Amodei’s implication is that OpenAI’s cozy relationship with the Trump administration—and with conservative politics more broadly—may influence what “safety” means in a Pentagon context. In other words, OpenAI’s safety assurances might be calibrated to satisfy government customers, not to maximize genuine risk reduction.
This is a serious charge, but it’s also speculation. The memo doesn’t present evidence that Brockman’s political activity directly influenced OpenAI’s safety standards. It makes a correlation argument: a Trump-friendly executive at a company with Pentagon contracts is positioned to shape what government customers see as “safe” AI. That’s a fair structural observation, but it’s not proof of wrongdoing.
Sam Altman has not yet publicly responded to the memo. OpenAI is likely to dismiss it as sour grapes from a competitor. Anthropic and OpenAI are locked in a talent war, a research war, and increasingly a government-contract war. Amodei’s memo is a shot across the bow, signaling that Anthropic intends to compete aggressively for government business by positioning itself as the trustworthy alternative.
What the Leak Reveals
The fact that the memo leaked—almost certainly intentionally—tells a story in itself. Amodei could have sent a private letter to the Pentagon or the White House. Instead, he chose to air his criticism internally and have it leak to the press. That’s a strategic choice: it signals to employees that Anthropic sees itself as locked in existential competition with OpenAI, and it signals to prospective government customers that Anthropic is willing to call out competitors’ safety claims publicly.
It’s also a reminder of how high the stakes have become. When a CEO spends 1,600 words attacking a rival’s integrity, it’s because the market opportunity is real and the competition is zero-sum. Both companies can’t become the dominant AI supplier to the US government. One will, and one won’t.
The Broader Implication
For government officials evaluating OpenAI and Anthropic, the memo is neither definitive nor irrelevant. Amodei raises legitimate questions about whether OpenAI’s safety claims are backed by rigorous processes or by marketing. But Anthropic has its own credibility challenges. The company has made bold claims about safety while remaining largely closed-source, making it hard for outsiders to verify its assertions. Both companies are partly opaque, and both stand to benefit financially from government contracts.
The Pentagon and intelligence agencies are right to be skeptical of both. What matters is not rhetorical claims about safety but demonstrated track records: how each company handles vulnerabilities, how transparent it is with regulators, how it manages conflicts of interest, and how it responds when safety concerns are raised.
For now, the Amodei memo will circulate in government corridors as ammunition for Anthropic’s sales team and as a reputational challenge for OpenAI. Altman will likely respond with his own version of the safety argument, and the cycle will continue.
The real question is whether either company will actually submit to the kind of independent auditing and external oversight that genuine AI safety requires. If they won’t, then both are engaged in theater, as Amodei suggests; Anthropic’s theater simply happens to be the version he finds convincing.