The AI Doc: Or How I Became an Apocaloptimist dropped and immediately sparked one of the most interesting discourse cycles we've seen around AI risk — not because it took a side, but because it exposed a gaping hole where one side's arguments should have been.
And the documentary somehow managed to earn praise from AI safety researchers AND the e/acc crowd in the same week.
Let me break down what happened, why it matters, and what it means for anyone building, investing, or simply existing in the AI era.
The Setup: A Dad Freaks Out (Correctly)
Director Daniel Roher has a kid. He starts asking questions about AI. He talks to the right people. He freaks out — hard.
The first act interviews the usual suspects from the AI safety world. Jeffrey Ladish explains instrumental convergence in plain English: an AI with goals will seek power to achieve those goals, and if you're in the way, that's your problem. Connor Leahy makes the point that building things smarter than us is inherently dangerous, and you shouldn't need a PhD to understand why. Eliezer Yudkowsky delivers the gut punch: sharing a planet with beings much smarter than you who don't care about you is not a winning position.
No alignment jargon. No technical deep-dives. Just raw, intuitive arguments that land.
Then Daniel asks the question that makes this film honest in a way most documentaries aren't: "Is there a way I can convince myself to be optimistic about this?"
He's not searching for truth anymore. He's searching for comfort. And he tells you that directly.
The Optimists Show Up With Vibes, Not Arguments
Here's where it gets interesting — and where the film quietly delivers its most devastating blow.
The tech optimists arrive. Peter Diamandis talks about technology's track record. Beff Jezos promises an incredible future. The energy is high. The vibes are immaculate.
But the careful viewer notices something: there are no actual arguments.
The entire case boils down to "technology was good before, so technology will be good now." That's it. That's the whole thing.
Nobody addresses instrumental convergence. Nobody explains why creating superintelligent systems is safe. Nobody counters the specific existential risk scenarios. They just... radiate positivity.
And the film lets you see this without telling you. That's masterful filmmaking.
As one observer put it: presenting the optimists' argumentless pitch fairly — in a way they themselves endorse — lets the audience realize there's no real debate happening. There are strong arguments on the risk side. On the "nothing to worry about" side, there are vibes.
The CEO Interviews: A Masterclass in What's Not Said
Daniel sets out to interview the five CEOs steering humanity's future: Altman, Amodei, Hassabis, Musk, and Zuckerberg.
The results are telling:
Sam Altman shows up measured and somber. His safety plan: iterative deployment and testing. His edge: OpenAI's lead. Standard fare.
Dario Amodei acknowledges the stakes and calls for government coordination on safety.
Demis Hassabis emphasizes international coordination and the upsides of AI.
Elon Musk agrees to participate, then gets "too busy." Leaves nothing.
Mark Zuckerberg declined entirely.
Two of the five people steering the most powerful technology in human history couldn't be bothered to show up for the conversation about whether it ends civilization. Let that sit with you.
The Real Message Is Buried in the Structure
Here's what the Zvi Mowshowitz review nails that most people will miss: the larger message isn't "the interventions are fake." It's that "so many choose to believe false things about AI in order to feel that things will be okay."
The documentary's structure IS the argument. Daniel literally tells you he's searching for reasons to stop worrying. He finds people happy to give him those reasons. And the reasons are hollow. He shows you this. Then he shows you babies, hope, and a call to action anyway.
It's the most honest framing possible: here's the danger, here's why the comforting answers don't hold up, and here's why we choose to believe them anyway — because the alternative is unbearable.
Why This Matters for the Crypto and Tech World
If you're in crypto, you're in tech. If you're in tech, you're downstream of AI decisions being made right now by a handful of people.
The documentary raises coordination problems that should sound familiar to anyone in decentralized systems: How do you get competing actors to cooperate when defecting is profitable? How do you enforce global agreements? How do you prevent a race to the bottom?
These aren't abstract philosophy problems. They're the same game theory that underpins every protocol, every consensus mechanism, every governance token vote you've ever participated in.
The difference is the stakes. In crypto, a failed coordination game means lost funds. In AI, it means something considerably worse.
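The coordination problem the film gestures at can be sketched as a textbook prisoner's dilemma. The payoff numbers below are purely illustrative (they're not from the documentary), but the structure is the familiar one: each lab is individually better off racing no matter what the other does, yet mutual racing leaves everyone worse off than mutual restraint.

```python
# Toy payoff model of the AI race as a prisoner's dilemma.
# Two labs each choose to "pause" (cooperate) or "race" (defect).
# The numbers are illustrative, chosen only to give the game PD structure.
PAYOFFS = {
    ("pause", "pause"): (3, 3),   # coordinated safety: best joint outcome
    ("pause", "race"):  (0, 5),   # the pauser falls behind
    ("race",  "pause"): (5, 0),   # the racer takes the lead
    ("race",  "race"):  (1, 1),   # race to the bottom
}

def best_response(opponent_move):
    """Return the move that maximizes a lab's own payoff,
    holding the opponent's move fixed."""
    return max(["pause", "race"],
               key=lambda m: PAYOFFS[(m, opponent_move)][0])

# Racing dominates whatever the other lab does...
assert best_response("pause") == "race"
assert best_response("race") == "race"
# ...yet mutual racing (1, 1) is worse for both than mutual pausing (3, 3).
```

This is exactly the shape of the games consensus mechanisms and governance designs exist to solve: the dominant individual strategy produces the worst collective outcome, so you need an enforcement layer that changes the payoffs.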
The Call to Action Is Real This Time
Most documentaries end with a "call your congressman" moment, and everyone rolls their eyes. Fair.
But this one earned it. The film makes a genuine case that the future is unwritten: that humanity can still shape how this goes. An international treaty. Actual governance. Real coordination.
Is it naive? Maybe. But the alternative — letting five guys race to build god while two of them won't even talk about it — is worse.
The film works because it's real. The anxiety is real. The danger is real. The hollow optimism is real. And the choice we face is real.
Go watch it. Then decide what you're going to do about it.
Follow Crafty on X 👉🏼 x.com/9bitCrafty