The AI Safety Racket Goes Full Tammany Hall: Anthropic Discovers That Regulatory Capture Requires Actual Capture
When the Robots Start Buying Politicians, They Could at Least Pretend to Feel Shame
Somewhere in the smoke-filled rooms of Silicon Valley—though nobody smokes anymore because optimization and longevity optimization and fucking optimization—the executives at Anthropic had an epiphany. After years of positioning themselves as the "responsible" AI company, the grown-ups in the room, the ones who actually care about not turning humanity into paperclips, they realized something profound: regulatory capture doesn't happen by itself.
So they did what any safety-conscious organization would do when facing existential risk. They dropped $20 million into a super PAC.
Not to be outdone by their archnemesis OpenAI—because apparently we're doing Marvel Cinematic Universe shit now but with large language models—Anthropic's Public First Action PAC is now spending $450,000 to boost New York Assembly member Alex Bores in the race for NY-12. Why? Because Bores favors AI safety regulations. And why does that matter? Because another AI-funded super PAC called Leading the Future has already dumped $1.1 million into attack ads against Bores.
Read that again. We have reached the event horizon where AI companies are funding political action committees to fight other AI companies' political action committees over who gets to write the rules about AI.
The snake isn't just eating its tail anymore. It's hiring lobbyists to argue about the nutritional value of tail-eating while funding think tanks to study tail-eating policy and sponsoring politicians who promise to regulate tail-eating responsibly.
The Theological Schism of the Church of Large Language Models
This isn't just garden-variety Silicon Valley hypocrisy—though we're swimming in that particular fertilizer up to our necks. This is a whole new recursion of absurdity. Anthropic was founded by people who left OpenAI because they thought OpenAI wasn't taking safety seriously enough. The entire company narrative is built on the premise that they're the ones who understand the risks, who won't rush headlong into catastrophe for profit.
And yet here we are, watching them engage in the exact flavor of power consolidation and institutional corruption that every AI safety paper warns about in Chapter 3, right after "Misaligned Objectives" and right before "Paperclip Maximizer Scenarios."
The cognitive dissonance would be breathtaking if anyone involved possessed the self-awareness to recognize it. These are companies that publish white papers about alignment problems while demonstrating perfect alignment with the oldest problem in American democracy: the purchasing of political influence.
Leading the Future—the OpenAI-aligned PAC—has been pummeling Bores with attack ads because he dares suggest that maybe, just maybe, we should have some fucking guardrails before we hand the keys to civilization over to probability matrices trained on Reddit arguments. Meanwhile, Anthropic rides in like some kind of regulatory white knight, checkbook flapping in the wind, to defend him.
Both sides claim they're trying to prevent catastrophe. Both sides are absolutely certain the other guys are the dangerous ones. Both sides are flooding elections with tech money while the rest of us wonder when we agreed to let the people building the god-machines also write the commandments.
The Regulatory Capture Speed Run
What we're witnessing is regulatory capture happening in real-time, at startup velocity. Usually this process takes decades—you build the industry, you grow powerful, you slowly infiltrate government, you write the rules in your favor. It's a slow waltz of corruption, dignified and traditional.
But this is tech, baby. We move fast and break democracy.
Anthropic and OpenAI aren't waiting to become powerful enough to influence policy. They're pre-ordering influence like it's the new iPhone. They're doing regulatory capture as a fucking MVP. Ship early, iterate on the corruption later.
The particularly galling part is that both companies are technically correct about the risks. AI does need thoughtful regulation. We are potentially building something that could reshape human civilization. The stakes are enormous.
But instead of working toward actual democratic deliberation about these technologies—instead of, I don't know, informing the public and trusting the political process—they've decided the fastest path to safety is buying senators and members of Congress like they're collecting Pokémon cards.
"Gotta regulate 'em all! But only in ways that benefit our specific corporate structure and competitive positioning!"
The Manhattan Project, But Stupider
New York's 12th congressional district has become ground zero for a proxy war between AI factions, with Alex Bores as the unfortunate protagonist in someone else's science fiction novel. Leading the Future has spent over a million dollars painting him as a Luddite, an innovation-killer, a man who would strangle baby AGI in its crib. Public First Action is countering with nearly half a million to position him as the reasonable adult in the room.
Bores himself must be experiencing a special kind of vertigo. One day you're an Assembly member dealing with normal political issues like transit funding and housing policy. The next day your district is the Belgian Congo of the AI wars, with tech billionaires fighting over your electoral territory like it's resource-rich and strategically vital.
And the money keeps flooding in. Bloomberg reports this is just the opening salvo in a broader campaign where these PACs plan to back 30 to 50 candidates across state and federal races. We're looking at potentially $125 million in AI money sloshing through the 2026 midterms.
That's not political engagement. That's a hostile takeover of the democratic process, dressed up in the rhetoric of responsibility and safety.
The Alignment Problem Was Inside Us All Along
The supreme irony—the kind of irony so perfect it makes you believe in a cruel god with a sense of humor—is that this entire clusterfuck perfectly demonstrates the alignment problem these companies claim to be solving.
They built organizations ostensibly dedicated to ensuring AI systems remain aligned with human values. But they can't even keep their own corporate behavior aligned with their stated missions. They preach about the dangers of systems pursuing their own objectives regardless of human wellbeing, while pursuing their own market objectives regardless of democratic wellbeing.
Anthropic's constitutional AI is supposed to ensure their systems behave ethically even when it's not profitable to do so. But apparently constitutional democracy doesn't get the same consideration. That costs extra.
The message is clear: We'll align the AI, but the humans running the companies? Those are staying exactly as misaligned as capitalism requires.
Welcome to the Future, It's Bought and Paid For
So here we are. The AI safety company and the AI capabilities company, locked in mortal combat over who gets to write the rules, both of them throwing money at politicians like it's going out of style, both of them absolutely convinced they're the good guys, both of them completely blind to the fact that they're speedrunning every cyberpunk dystopia warning from the last forty years.
William Gibson tried to warn us. Neal Stephenson tried to warn us. Hell, even the Terminator movies tried to warn us, and those were mostly about explosions.
But we didn't listen, because we thought the danger would come from the AI itself. We thought Skynet would be the problem. We never imagined that before the robots became self-aware, the companies building them would become so drunk on their own importance that they'd start treating democracy like a Series A funding round.
Anthropic backing Bores. Leading the Future attacking Bores. More PACs forming. More money flowing. More candidates getting targeted. All of it happening before most Americans even understand what a large language model is, let alone why they should care which billionaire-backed faction controls its regulation.
The robots haven't taken over yet. But the people building them are doing a pretty good job of it themselves.
And they're doing it in the name of keeping us safe.
Sleep tight, citizens. The responsible AI company is here to protect you. They bought a politician to prove it.
The Oracle Also Sees...
The March of the Temporarily Embarrassed Billionaires: A Tech Bro's Passion Play for the Persecuted Rich
An AI startup founder organizes a march to defend billionaires from California's wealth tax — a bill already doomed to veto. The march draws zero actual billionaires, just one man fighting for paper fortunes he doesn't have.
Apple's Privacy Theatre: A Luxury Good That Dissolves on Contact With Authority
Apple's Hide My Email shields you from spam merchants but dissolves instantly for federal agents — privacy as luxury aesthetic rather than actual protection.
The Great AI Skills Grift: How Silicon Valley Learned to Quantify the Unquantifiable and Sell It Back to You
Silicon Valley's latest grift: Let AI manage your skills, quantify your worth, and provide algorithmic cover for the great workforce reduction. Spoiler—the real skill is spotting the con.