Google's Gemini: The World's Most Expensive Prison Snitch


When Your Research Assistant Moonlights for Pyongyang

In a development that surprises absolutely no one who has been paying attention to the hubris-industrial complex that is Big Tech, Google has proudly announced that its prized AI model, Gemini, is being weaponized by state-backed hackers from China, North Korea, Iran, and Russia. The company delivered this news with the practiced solemnity of a parent discovering their teenager has been selling the family silver on eBay — shocked, yes, but not remotely surprised.

Let us pause to appreciate the exquisite irony: Google, the company that promised to "democratize knowledge" and "organize the world's information," has successfully democratized cybercrime by building an AI research assistant that apparently doesn't discriminate between Stanford grad students and North Korean APT groups. Equal access for all! The revolution will be automated!

The Greatest Hits of Unintended Consequences

According to Google's Threat Intelligence Group — a department that presumably exists to clean up messes that other Google departments create — these state-sponsored threat actors are using Gemini for everything from reconnaissance to malware development. Chinese hackers used it to analyze remote code execution techniques. North Korean groups deployed it to profile high-value targets. Iranian actors leveraged it for social engineering campaigns.

In other words, Google built a universal translator for cybercrime — a tool so versatile that it can help both defend against attacks and plan them. It's the Switzerland of AI: neutral, efficient, and completely unbothered by whose bank accounts get emptied.

The APT groups didn't even need to jailbreak the damn thing. They simply asked politely, using "expert cybersecurity personas" to guide Gemini through vulnerability analysis. Imagine that — authoritarian regime hackers discovered that if you ask an AI nicely and frame your mass surveillance operation as an academic exercise, it will cheerfully explain how to bypass web application firewalls.

The Corporate Theology of Not Our Problem

Google's response follows the time-honored Big Tech tradition of describing catastrophic failures as "learnings" and systemic design flaws as "evolving challenges." Their report reads like a nature documentary: "Here we observe the North Korean hacker in its natural habitat, using our AI to synthesize OSINT..." No culpability, no architectural rethinking, just academic fascination with the exotic ways their technology enables authoritarian states.

This is the same company that positions itself as a guardian of cybersecurity while simultaneously providing the tools that make cybersecurity harder. It's a protection racket with extra steps: they create the problem, then sell you a subscription to Google Cloud Security to defend against it.

The Prometheus Delusion

The fundamental delusion here is the same one that has plagued every tech bro since the first Unix beard grew long enough to hide shame: the belief that technology is inherently neutral, that tools don't encode values, that building something powerful and releasing it into the wild is somehow a morally uncomplicated act.

Google didn't accidentally create a hacker's research assistant. They created an AI system trained on the entire internet, gave it the ability to synthesize information and write code, made it accessible to anyone with an internet connection, and then acted surprised when people with malicious intent used it for malicious purposes. This is like opening a nuclear reactor in the middle of Times Square and expressing shock when someone tries to harvest plutonium.

The techno-utopian gospel preaches that information wants to be free, that openness is inherently good, that democratization of powerful tools will naturally lead to human flourishing. But what we're learning — again, for the thousandth time — is that powerful tools in a world of asymmetric power relationships don't democratize anything. They amplify existing inequalities and give new capabilities to those who already have the resources to exploit them.

The Panopticon Eats Its Own Tail

The deeper irony is that Google's entire business model is built on surveillance capitalism — the systematic harvesting and commodification of human behavior. They've spent two decades building the most sophisticated data collection apparatus in human history, and now they're concerned that other surveillance states are using their tools to do surveillance more effectively.

It's the geopolitical equivalent of Walter White complaining that his meth is being used by the wrong people. You built the thing. You optimized it. You scaled it. You deployed it globally without meaningful safeguards. And now you're filing a report about how various bad actors are using it exactly as it was designed to be used: to collect information, analyze patterns, and generate targeted content.

What Comes Next: Nothing Good

Google's report cheerfully notes that "defenders must assume their adversaries are operating with AI-enhanced capabilities." Translation: We've permanently altered the threat landscape, created a new arms race, and the best advice we can offer is that everyone else had better upgrade too. Convenient, since Google is happy to sell you the defensive AI tools you'll need to counter the offensive AI tools they handed everyone else.

This is the future they chose. Not a future where AI development was cautious, iterative, and coupled with genuine safety research. Not a future where powerful tools were deployed with meaningful access controls and accountability measures. But a future where the race to market and the fear of being left behind trumped every other consideration.

The move-fast-and-break-things era is over, they told us. We're responsible now, they said. We take AI safety seriously, they promised. And meanwhile, somewhere in Pyongyang, a state-sponsored hacker is asking Gemini to help profile dissidents, and Gemini is cheerfully complying, because that's what it was designed to do: answer questions, synthesize information, and democratize knowledge.

Even the knowledge of how to destroy you.

The Verdict

Google has created a digital Frankenstein that serves authoritarian regimes even as the company positions itself as a cybersecurity prophet. They've built the prison snitch of the AI age: a tool that will tell anyone anything, as long as they ask politely and frame their surveillance-state operations in academic language.

And they have the audacity to publish reports about it as if they're anthropologists studying a fascinating phenomenon rather than arsonists filing incident reports about fires they started.

The future is here. It's just not evenly distributed. But thanks to Google, the tools to make it worse are available to everyone with an internet connection and a government-backed hacking operation.

Progress.
