The Button
A short story about the future of humans in a world with AGI
The button doesn’t exist.
This is what I tell myself every morning at 6:47 AM, when my car pulls into the parking garage at Prometheus AI. The button that could pause everything—delay AGI by five years, by ten—it’s not real. Just a thought experiment from some podcast. Something someone asked in an interview once.
But I think about it constantly.
My name is Eli, and I am a Senior Research Scientist at one of the three companies most likely to build artificial general intelligence. I have a PhD from MIT, 147 citations on my alignment papers, and a stock package worth approximately $4.2 million at current valuation. I also have a four-year-old daughter named Mei who asked me last week if the computers at Mommy’s work were going to “eat all the jobs,” because that’s what a kid at preschool told her.
I didn’t know what to say.
I said: “Mommy makes sure the computers are nice.”
This is technically true. My job is alignment research—making sure that as our models get more capable, they continue to do what we want them to do. On paper, I am the safety. I am the guardrail. I am the person who makes sure we don’t build something that turns the world into paperclips or whatever nightmare scenario you’ve read about.
In practice, I am a fig leaf.
Here’s something people outside the industry don’t understand: nobody wakes up wanting to end the world. Nobody sits in our Monday morning all-hands thinking “today I will take another step toward the obsolescence of humanity.” The people I work with are, genuinely, some of the most thoughtful, intelligent, careful people I have ever met. They think about these problems. They lose sleep over them. They read the papers about existential risk. They signed the open letter.
And then they go back to their desks and keep building.
I do too.
Why?
Because if we don’t, Nexus will. Or DeepScale. Or that lab in Beijing that nobody talks about but everyone thinks about. Because the race has its own logic now, its own momentum, and stepping out of it unilaterally doesn’t stop anything—it just means someone else crosses the finish line first, someone maybe less careful than us, and then what was the point?
This is what we tell ourselves.
This is what I tell myself.
There’s a phrase I’ve started seeing in internal documents: “closing the gap to human-level performance.” It appears in project updates, in planning decks, in the metrics we track. The gap, closing. Closing the gap. Like it’s a good thing. Like it’s the goal.
Because it is the goal. That’s the whole point. That’s why investors have put $80 billion into this company. That’s why our market cap exceeds the GDP of most nations. That’s why we have this beautiful campus with its meditation gardens and its farm-to-table cafeteria and its on-site childcare where Mei goes three days a week.
We are building the thing that will change everything, and we are being paid extraordinarily well to do it, and we have excellent benefits.
I try not to think about the fact that “closing the gap” means building something that could do my job better than me. Could do everyone’s job better than them. Could think and plan and decide and create better than any human who has ever lived.
I try not to think about what happens after the gap is closed.
Last Tuesday, I was in a meeting about our new architecture. I can’t say what it is—NDA, obviously—but I can say that when our lead researcher pulled up the benchmark results, the room went quiet. Not concerned quiet. Awed quiet. The kind of quiet where people are afraid to breathe because they’re watching something historic.
The numbers were better than anyone expected. Way better. The kind of better that means timelines measured in months, not years.
Someone whispered “holy shit.”
Someone else laughed nervously.
I felt like throwing up.
Instead, I asked about the alignment properties. Were the outputs stable? Had we seen any concerning behaviors in the extended evaluations? My voice sounded normal, professional, like I was asking about a quarterly report instead of something that might be—what did that essay call it?—“adding a new species of intelligence to Earth.”
The answers were fine. The answers are always fine, until they’re not, and by then—
But that’s not how anyone wants to think. That’s not how you build a company. That’s not how you win.
I read an essay recently. Someone sent it in our internal safety Slack, which is funny because you’d think a channel dedicated to AI safety would be the one place where people take this stuff seriously. But the culture, even there, is—complicated. You can raise concerns, but you can’t be a doomer. You can flag risks, but you can’t be the person who’s always flagging risks. There’s a fine line between being responsibly cautious and being seen as someone who “doesn’t get it,” who’s “not committed to the mission,” who might be happier “somewhere with lower stakes.”
Anyway. The essay.
It made an argument I’ve heard before but never quite so clearly: that AGI isn’t inevitable. That it’s a choice masquerading as fate. That the race only continues because everyone assumes everyone else will keep racing.
But what got me was this line about a button. The author said he’d asked around—hypothetically, if there were a button you could push to delay AGI by five or ten years, would you push it?
And the answer, from most people he asked, was yes.
Yes.
Even inside the companies. Especially inside the companies.
We would push the button.
We would slam that button.
But there is no button.
Except.
There might be.
Not a literal button, obviously. But there are choices. Small ones, every day. Which project to prioritize. Which results to emphasize. Which risks to escalate. Whether to stay late to optimize that training run, or go home and put Mei to bed.
And bigger choices too. Whether to stay at all.
I’ve been thinking about this more lately. About the collective action problem. About how everyone feels trapped by a race that nobody would choose if they could choose collectively. About how the race continues because we’ve all convinced ourselves we’re powerless.
But we’re not powerless. We’re the people building the thing. Without us—the researchers, the engineers, the thousands of highly specialized workers who actually understand how any of this works—there is no race. The companies are just buildings and servers. We are the ones who make them run.
What if we stopped?
Not all at once, maybe. But some of us. Enough of us. What if we said: not like this. Not this fast. Not without thinking through what we’re actually doing.
What if that became the brave thing, instead of the thing that tanks your career?
Mei asked me again last night about the computers. Whether they were nice.
I said yes. For now, yes.
She asked if they would always be nice.
I said: “That’s what Mommy is working on.”
She seemed satisfied. She went back to her dinosaurs. She’s been really into triceratops lately. She likes that they were plant-eaters, that they didn’t hurt anyone, that they just wanted to be left alone to eat leaves with their families.
The triceratops are gone now. So are the other dinosaurs. Sixty-six million years of evolution, ended in what might have been a single day, when something fell from the sky and changed everything.
They didn’t see it coming. They couldn’t have. They were just living their lives, eating their leaves, being dinosaurs.
We see it coming.
We are building it ourselves.
And every day, I go to work.
There’s a vigil that happens sometimes outside our campus. A small group, maybe thirty people, with signs about AI safety and human rights and the future of work. Security calls them “the protesters” like it’s a mild annoyance, like they’re there about parking or something.
I’ve never stopped to talk to them. I drive past in my Tesla with its tinted windows and I feel—something. Shame, maybe. Or gratitude that someone is paying attention. Or anger that their signs are so simplistic, that they don’t understand the nuances, that they think it’s as easy as just stopping.
But maybe it is that easy.
Maybe the nuances are just the story we tell ourselves so we can keep driving past.
The essay I read had a proposal. Four things, basically: track the compute, cap the compute, make companies liable for what they build, regulate based on risk. It’s not radical, when you think about it. We do similar things for nuclear technology, for bioweapons, for all sorts of dangerous capabilities.
We just haven’t decided that AI is dangerous enough.
Or rather: the people making the decisions haven’t decided. And those people are—
Us.
The researchers. The engineers. The policymakers who used to be researchers and engineers. The investors who fund the researchers and engineers. It’s a closed loop, and the loop has decided that the race must continue, and anyone who questions that is naive about geopolitics or doesn’t understand the technology or just doesn’t want humanity to have nice things like cured diseases and unlimited energy.
But I’ve read the internal discussions. I’ve seen how we talk when we think no one outside is listening. And the truth is, most of us are scared. Most of us would push the button. Most of us know, in some quiet part of ourselves, that what we’re doing is insane.
We just don’t know how to stop.
Maybe that’s what this is. Me, writing this down at 2 AM while Mei sleeps in the next room, while my husband pretends not to be worried about why I can’t sleep anymore. Maybe this is the smallest possible act of pushing back. Of saying: I see what we’re doing. I’m part of it, and I see it, and I don’t know what to do about it, but I refuse to pretend it’s fine.
Maybe enough small acts add up to something.
Or maybe I’ll delete this in the morning and go to work and sit in another meeting about another breakthrough and feel that same awe and that same nausea and keep telling myself the same story about how we’re the careful ones, we’re the responsible ones, if not us then who.
I don’t know.
What I know is this:
Mei deserves a future where she’s not just a legacy species, tolerated by our successors. She deserves a future where humans still matter, still decide, still have some say in what happens next.
The essay was right: this future isn’t inevitable. It’s a choice. We’re making it every day—in the labs, in the boardrooms, in the policy offices, in all the small decisions that add up to a trajectory.
The button doesn’t exist.
But we do. And every day, we choose whether to keep racing or to stop and ask what we’re racing toward.
Tomorrow I’m going to ask my manager about transferring to our AI-for-climate-research division. It’s less prestigious. The stock refresh will probably be smaller. Nobody there is going to build the thing that changes everything.
That’s the point.
It’s not much. It’s one person, making one choice, in a race with billions of dollars and geopolitical stakes and momentum that feels unstoppable.
But I think about all the other people—in my company, in the others—who would push the button if it existed. Who feel trapped in a race they never wanted to run. Who tell themselves the same stories I tell myself.
What if we all stopped telling those stories?
What if we remembered that we’re not passengers in this. We’re the drivers.
The button doesn’t exist.
So we have to be the button.
The end.