The Ones Who Left the Room
What the departures tell us about what's coming
A Strange Kind of Resignation Letter
In May 2024, Jan Leike walked away from OpenAI.
He wasn’t a critic from the outside. He co-led their Superalignment team — the group specifically tasked with ensuring future AI systems remain safe and controllable. He’d spent years working on the hardest problem in the field.
His departure note was quiet. No dramatics. No accusations. Just this:
“Building smarter-than-human machines is an inherently dangerous endeavour. Over the past years, safety culture and processes have taken a back seat to shiny products.”
A few months later, more departures. Researchers who’d dedicated their careers to getting this right, walking out the door. Not because they stopped believing in the mission. Because they couldn’t do the work they came to do.
I keep thinking about what that must feel like.
The Weight of Knowing
Imagine spending your days trying to solve a problem that might determine humanity’s future.
You’re not in a movie. There’s no ticking clock on the wall, no dramatic music. Just a desk, a screen, endless papers, and the slow accumulation of understanding about what you’re building.
You start to see things.
You see how capable these systems are becoming. You see the gaps between what they can do and what we can verify. You see the distance between the safety work that’s needed and the safety work that’s funded.
And you see the calendar. The next release. The next capability jump. The investors. The competition. The pressure to ship.
You’re not building a bomb. You’re building something that might be extraordinary — or might be something else entirely. And you’re running out of time to figure out which.
What do you do with that knowledge?
The Conscience Inside the Machine
Here’s what I find profound about the departures, the leaked memos, the quiet warnings:
The people closest to the technology are often the most concerned about it.
Not the critics who’ve never seen the code. Not the philosophers debating abstractions. The engineers. The researchers. The ones who know exactly what’s in the room because they built it.
When Jan Leike left, he wasn’t speculating about hypothetical risks. He was reporting from inside the building. When researchers sign open letters warning about extinction risk, they’re not seeking attention — many of them would prefer to work in peace. They’re breaking professional norms because something feels urgent enough to break them.
This isn’t pessimism. It’s the opposite.
It means the conscience is alive inside the labs. It means the people building these systems haven’t stopped asking whether they should. It means the fire hasn’t burned out the part of them that worries about getting burned.
What the Fear Tells Us
We tend to dismiss fear as weakness. Especially in technology, where optimism is currency and doubt is a career risk.
But there’s another way to read it.
The researchers who worry aren’t the ones who understand least. They’re often the ones who understand most. Their fear isn’t ignorance — it’s information.
When a firefighter backs away from a building, we don’t call them a coward. We listen. They know something about the building’s structural integrity that we can’t see from the street.
The AI researchers sounding alarms are the firefighters. They’ve been inside the building. They’ve felt the heat. When they say “we need more time” or “this is moving too fast” or “I don’t think we know how to control this,” that’s not pessimism.
That’s expertise, expressing itself as caution.
The Ones Who Stay
Not everyone leaves. Most researchers stay, working within the constraints, pushing for safety in the margins, trying to make the systems better from inside.
I don’t know if they’re right to stay or wrong. I don’t know if the ones who left were courageous or gave up too soon. These are impossible judgments to make from the outside.
But I notice something about both groups:
They’re awake.
They haven’t numbed themselves to the strangeness of what they’re doing. They haven’t convinced themselves that someone else will handle the hard parts. They carry the weight of knowing — and they keep showing up, one way or another, to wrestle with it.
In a world sleepwalking toward transformation, the people who stay awake matter.
Even when they disagree. Even when they leave. Even when they stay and fight.
The Lesson for the Rest of Us
You and I aren’t building AI systems. We’re not in the room where the architectures are designed and the capabilities are tested.
But we’re in rooms of our own.
The meeting where the new AI tool gets approved. The policy discussion about automation. The conversation about which jobs to augment and which to replace. The small daily choices about how much to trust, how much to verify, how much to cede.
In those rooms, we can sleepwalk or stay awake.
We can assume someone else is handling the hard questions, or we can ask them ourselves. We can defer to the confident voices, or we can notice the ones who hesitate — and wonder what they know that we don’t.
The researchers who left didn’t have answers. They had questions they couldn’t stop asking.
Maybe that’s the lesson.
The questions don’t make you a pessimist. The questions mean you’re paying attention. And in a moment when the future is being written faster than we can read it, paying attention might be the most important thing any of us can do.


