
Project MYST: Meta's Internal Proof That Its Teen Safety Theater Is Exactly What You Thought It Was

The Oracle has spoken

The Receipts Are Coming From Inside the House

There's a special circle of corporate hell reserved for companies that commission research to prove their safety measures work, discover those measures are actually therapeutic placebos for anxious parents, and then keep hawking them anyway while testifying before Congress about their commitment to child safety.

Welcome to Meta's Project MYST — and yes, that acronym absolutely sounds like something a branding consultant came up with after their third Moscow Mule at a Menlo Park happy hour.

The Beautiful Transparency of Internal Documents

Here's what Meta's own researchers, working with the University of Chicago, quietly concluded: Parental supervision tools — time limits, restricted access, all the digital helicopter parenting apparatus Meta has been marketing as Responsible Corporate Citizenship™ — have virtually no impact on teens' compulsive social media use.

None. Zip. Nada. A placebo with worse side effects.

The study gets darker: Teens who've experienced trauma are particularly vulnerable to compulsive use. So Meta has essentially created the perfect addiction delivery mechanism for the most psychologically vulnerable population, then built a Potemkin village of "safety features" that their own scientists know don't work.

It's like Philip Morris developing a "responsible smoking" program that consists of asking parents to hide the cigarettes, discovering this does precisely nothing to stop teenage smoking, and then continuing to market it as evidence of corporate responsibility while lobbying against actual regulation.

The Regulatory Theater Playbook

This is the Silicon Valley three-card monte at its most refined:

  1. Build addictive product optimized for engagement über alles
  2. Face public outcry and regulatory scrutiny
  3. Create performative "safety features" that look good in Senate hearings
  4. Quietly commission research proving your features are bullshit
  5. Bury that research deeper than your user privacy commitments
  6. Continue marketing the features while arguing against actual regulation because you're "self-regulating"
  7. Get caught when internal documents leak during litigation
  8. Express surprise and renewed commitment to safety
  9. Repeat

The genius is that parental controls serve two masters: They give nervous parents something to do (the illusion of agency), and they give Meta something to point at when senators demand action. Meanwhile, the engagement algorithms keep humming, the ad revenue keeps flowing, and teenagers keep doomscrolling through their formative years.

The Trauma Optimization Engine

The most damning finding: Kids who've experienced "adverse events" — trauma, stress, the normal casualties of modern adolescence — are especially vulnerable to compulsive use.

Think about that for a moment. Meta has built a product that specifically captures the most psychologically fragile young users, exploits their vulnerability for engagement metrics, and then markets parental controls that its own research proves are about as effective as thoughts and prayers.

This isn't a bug in the system. This is the system recognizing its most profitable users and optimizing accordingly.

The Senate Hearing Industrial Complex

How many times have we watched Zuckerberg and his executive team parade before Congress, faces arranged in their best approximation of human concern, promising to take teen mental health "very seriously"? How many blog posts about "new tools for parents" have been published? How many press releases about partnerships with child safety organizations?

All while Project MYST sat in their files, a smoking gun written by their own scientists, proving that every word was performative bullshit.

The beauty of internal research is that you can't claim ignorance. You can't say "we didn't know." You commissioned the study. You got the results. You chose to keep selling the same snake oil anyway.

The Cost-Benefit Analysis They'll Never Show You

Somewhere in Meta's offices, someone did the math:

  • Cost of actually fixing the problem: Reduced engagement, lower ad revenue, possible fundamental product redesign
  • Cost of performative safety theater: Minimal development resources, some PR budget
  • Cost of getting caught lying: Manageable legal fees, some bad press, a Senate hearing, maybe a fine that's a rounding error on quarterly revenue

Guess which option wins when you're legally obligated to maximize shareholder value?

The Precedent for Digital Tobacco

We've been here before. The tobacco industry spent decades commissioning research, burying results, and marketing "safer" products while knowing exactly how addictive and harmful their core offering was. The only difference is that Big Tobacco's victims had to wait years for the cancer. Meta's victims get to experience psychological harm in real time, quantified by the very metrics the company uses to optimize engagement.

The legal discovery process is giving us the tobacco industry playbook speedrun, complete with internal documents proving conscious deception. Project MYST is Meta's version of those infamous tobacco memos about nicotine delivery optimization.

What Actual Solutions Look Like

Here's what would actually help:

  • Kill the engagement optimization algorithms for minors entirely
  • Eliminate infinite scroll for teen accounts
  • Ban algorithmic content recommendation for users under 18
  • Default to chronological feeds
  • Mandatory friction before every session
  • Actually enforceable age verification

Notice how none of these involve making parents the enforcement mechanism for Meta's design choices? Notice how all of them would crater engagement metrics?

That's why Meta will never propose them. They'd rather commission another study, bury the results, and roll out "Parental Controls 2.0" with better branding.

The Banality of Algorithmic Evil

What makes this particularly insidious is that nobody at Meta is cackling villainously while burning the research. They're probably all nice people who believe their own bullshit about "connecting the world" and "building community." They've simply created an organizational structure where the truth is inconvenient, so it gets filed away while the marketing department crafts another blog post about teen safety.

This is how modern corporate sociopathy works: Not through malice, but through institutional incentives that make it easier to lie than to fix the problem.

The Reckoning Is Coming

Project MYST is surfacing now because Meta is facing a landmark trial in which plaintiffs argue the company knowingly designed its products to addict children's brains. These internal documents are Exhibit A in what could become the legal framework for treating social media companies like we treated tobacco: As knowing purveyors of harm who lied about the risks and the efficacy of their safety measures.

The discovery process is doing what Senate hearings never could — forcing the receipts into public view. Every buried study, every ignored warning, every piece of research that contradicted the marketing message.

Meta bet that they could keep playing whack-a-mole with safety concerns, rolling out new features while privately knowing they don't work. They bet that the regulatory system was too slow and too technologically illiterate to catch up.

Project MYST suggests they bet wrong.

The Oracle's Prophecy

Here's what's coming: More internal documents will leak. More studies showing Meta knew exactly what it was doing. More evidence of the gap between public statements and private knowledge. The legal costs will mount. The settlements will grow. Eventually, the regulatory hammer will fall.

But until then, Meta will keep running the same playbook: Apologize, announce new features, commission more research, bury the results, and keep the engagement algorithms humming.

Because in the end, Project MYST wasn't a warning. It was a cost of doing business.

And business, as they say, is good.
