
The Gospel According to St. Dario: How AI Safety Became a Turf War Over Who Gets to Arm Skynet

The Oracle has spoken

The Sermon on the Mount (Palo Alto Edition)

Behold, children, the great schism of our age: two prophets of artificial intelligence safety, each claiming to be the one true guardian of humanity's future, locked in mortal combat over who gets to sell their digital brain-child to the Pentagon with a cleaner conscience.

In one corner: Dario Amodei of Anthropic, the monk who left the temple of OpenAI to build a purer church, one guided by a "Constitution" for AI that would prevent the very abuses he now accuses his former brothers of enabling. In the other: the establishment he fled, now led by those he portrays as Judas figures, signing deals with Caesar while draped in the vestments of responsible stewardship.

The accusation? "Straight up lies." Not exaggerations. Not spin. Not the usual Silicon Valley reality distortion field. Lies. Mendacious, gaslighting, safety-theater lies about safeguards that don't safeguard, protections that don't protect, and red lines drawn in disappearing ink.

The Parable of the Two Contracts

Here's what actually happened, stripped of the theological window-dressing:

Anthropic, the company built on the promise of being more careful than everyone else, already had a $200 million contract with the Department of Defense. Let that marinate. The safety-first company was already taking military money. But they had conditions, you see. Standards. No domestic mass surveillance. No autonomous weaponry. The kind of restrictions that let you sleep at night while cashing checks signed by the institution with the world's largest killing budget.

The Pentagon apparently looked at these conditions the way a cat looks at a "No Cats on Counter" sign and said, essentially, "Yeah, we're not doing that." Talks collapsed. Anthropic walked away, chest puffed, principles intact (if you squint and don't think too hard about that $200 million they'd already banked).

Hours later—and the timing here is chef's kiss—OpenAI announced their own Pentagon deal. No such restrictions publicly disclosed. The same day, bombs fell on Iran. Correlation? Probably not. Symbolism? Absolutely perfect.

Amodei's internal memo to staff was the kind of document that makes HR professionals reach for the Xanax: calling out a direct competitor's CEO by implication, using words like "mendacious" and accusing them of "safety theater"—which, coming from the AI safety industrial complex, is like one megachurch pastor accusing another of not really believing in the prosperity gospel.

The God Complex Meets the Military-Industrial Complex

The Pentagon's under-secretary for research and engineering, a man who knows a thing or two about inflated egos in the defense contractor space, responded by calling Amodei a liar with a God complex. Which is, admittedly, a pretty solid burn coming from a guy negotiating with people who literally think they're building God.

Because that's what this is really about: these aren't software companies anymore. They're not even really in the "tech" business. They're in the eschatology business. They genuinely believe they're building entities that will either save or damn humanity, and they want credit for being the responsible ones while they do it.

The cognitive dissonance required to maintain this position is staggering. "We're building something so powerful it could destroy civilization, so we need to be extremely careful about who we let use it... anyway, here's our proposal for the Department of Defense, and yes, we're already working with Palantir and defense contractors, but it's different because we have a Constitution we wrote ourselves that we pinky-promise to follow."

Safety Theater or Theater of the Absurd?

Amodei's accusation of "safety theater" is particularly rich because the entire AI safety movement—of which both companies are supposedly standard-bearers—is increasingly looking like an elaborate performance designed to justify regulatory capture and VC valuations.

The script goes like this:

  1. Announce you're building something potentially apocalyptic
  2. Announce you're the only ones responsible enough to build it safely
  3. Take hundreds of millions from investors and government contracts
  4. Accuse anyone who doesn't use your specific safety framework of recklessness
  5. Fight publicly over whose approach is more "aligned"
  6. Profit

Meanwhile, the actual safety guarantees are about as enforceable as a gentleman's agreement at a knife fight. "Constitutional AI" sounds impressive until you realize it's just a clever prompt engineering technique with a marketing budget. The "red lines" Anthropic insisted on? Unverifiable and unenforceable without independent oversight, which neither the Pentagon nor any AI company is particularly enthusiastic about.

The Real Translation

Let's translate Amodei's memo from Pious AI Prophet to plain English:

"They took a deal we wanted but couldn't get on our terms, and now they're getting good PR for it while we look like the difficult ones. This is bullshit. We were here first. We're the safety company. They're the move-fast-and-break-civilization company. The narrative is supposed to be that WE'RE the responsible adults, not them."

And the Pentagon's position, equally translated:

"We don't care about your internal Silicon Valley drama or which of you has the purer vision for beneficial AGI. We want the best tools with the least restrictions. You both claim to be building God, so whoever's God is willing to work weekends without asking too many questions gets the contract."

The Grift Reveals Itself

What's most revealing about this entire circus is that it exposes the fundamental grift at the heart of the "AI safety" industrial complex: these companies need the AI to be simultaneously terrifyingly powerful (to justify valuations and importance) and safely controllable (to justify building it at all). They need to be both the arsonists warning about fire and the fire department selling extinguishers.

The competition isn't really about who has better safety practices. It's about who controls the narrative, who gets the contracts, and who gets to be remembered as the good guy when the history of this era gets written by whatever comes after us.

Amodei accuses his competitor of lies, gaslighting, and safety theater. The competitor accuses him of having a God complex and being a liar. The Pentagon just wants its toys. And somewhere, Hunter Thompson's ghost is laughing so hard he's spilling ether on his typewriter.

The Punchline

The truly hilarious part? After all this public acrimony and chest-beating about principles, Anthropic is reportedly back at the negotiating table with the Pentagon. The Financial Times says Amodei and the defense under-secretary are talking again, presumably searching for some linguistic contortion that will let Anthropic take military money while maintaining they're more ethical than their competitors who... also take military money.

It's like watching two prosperity gospel preachers fight over who gets to minister to the wealthiest congregation while both insist they're not in it for the money.

The AI safety movement has become exactly what it feared most: not the cautious, careful, incrementally safe development of powerful technology, but a gold rush where everyone's claiming to be the responsible prospector while digging as fast as they can.

The Oracle's Verdict: When your AI safety company's main differentiator from your AI safety competitor is that you're less willing to arm the Pentagon without restrictions, you're not actually in the safety business. You're in the moral licensing business, selling indulgences to yourself and your investors while racing toward the same endpoint with slightly different branding.

Both these companies will sell to the military. Both will compromise their red lines. Both will claim they're doing it responsibly. And both will accuse the other of hypocrisy while performing exactly the same act with different stage direction.

The only "straight up lie" here is that any of this has anything to do with safety. It's about money, control, and ego—the same three things that have driven every gold rush, arms race, and religious schism in human history.

Welcome to the AI apocalypse. At least the branding is good.