P(doom) is AI’s latest apocalypse metric. Here’s how to calculate your score

December 07, 2023

So, what’s your p(doom) score?

The term began as a half-serious inside joke on tech message boards to describe the odds that AI destroys humankind, and it has now broken into the mainstream. The buzzword is p(doom), and it gives AI experts and average know-nothings alike a common scale for describing where they stand on the question of whether AI is going to kill us. It’s “the morbid new statistic that is sweeping Silicon Valley,” the New York Times writes.

P(doom) officially stands for “probability of doom,” and as its name suggests, it refers to the odds that artificial intelligence will cause a doomsday scenario. According to tech columnist Kevin Roose:

It’s become a common icebreaker among techies in San Francisco—and an inescapable part of AI culture. I’ve been to two tech events this year where a stranger has asked for my p(doom) as casually as if they were asking for directions to the bathroom. “It comes up in almost every dinner conversation,” Aaron Levie, the chief executive of the cloud data platform Box, told me.

The scale runs from zero to 100, and the higher you score yourself, the more convinced you are that AI is not only willing to eliminate humankind if necessary but, in fact, going to succeed at the task.

As AI grows more sophisticated, fears over the consequences grow too. P(doom) offers an unequivocal way to show where you stand on this pressing existential question. For instance, it’s fair to assume that someone whose number is 80 will be horrified when industry leaders warn, as the CEOs of OpenAI, Google DeepMind, and Anthropic did this May, that future AI could wipe us out as readily as a pandemic or a nuclear bomb. To seasoned doomers, it may all sound like the singularity debate redux, just scored with a much more tweetable metric.

Anthropic CEO Dario Amodei has put his p(doom) between 10 and 25, depending on which day you ask. FTC Chair Lina Khan, often cast as one of the AI industry’s leading antagonists, told the Times hers sits at a comparatively calm 15. Meanwhile, Emmett Shear, OpenAI’s 72-hour interim CEO during Sam Altman’s weekend-long interregnum last month, says his p(doom) can shoot as high as 50 on bad days. (On good ones, it’s 5.)

By now, most people active in the AI field have volunteered some kind of number, even if they believe it’s a stupid question. A recent survey of AI engineers found an average p(doom) of 40. Ethereum cofounder Vitalik Buterin, who has warned about AI’s existential threat, is only a 10. OpenAI superalignment co-lead Jan Leike puts it anywhere from 10 to 90, while Center for AI Safety director Dan Hendrycks recently updated his number from 20 to greater than 80. Elon Musk has said he is a 20 or 30. Tech reporter Casey Newton is a 5, which appears to be Times columnist Kevin Roose’s number as well.

Renowned software engineer Grady Booch says his score equals “P(all the oxygen in my room spontaneously moving to a corner thereby suffocating me).” And Eliezer Yudkowsky, cofounder of Berkeley’s Machine Intelligence Research Institute and an OG AI doomer, says his number is higher than 95. (“The designers of the RBMK-1000 reactor that exploded in Chernobyl understood the physics of nuclear reactors vastly better than anyone will understand artificial superintelligence at the point it first gets created,” he tweeted just this week.)

Just like in Las Vegas, though, a pile of p(doom)s, even those of smart people, doesn’t reveal whether the house is going to win. “Nobody knows whether AI is 10% or 2% or 85.2% likely to kill us, of course,” Roose points out, noting there are myriad ways to complicate the odds, too: “Would it still count as ‘doom’ if only 50% of humans died as a result of AI? What if nobody died, but we all ended up jobless and miserable? And how would AI take over the world, anyway?”

To many, those may sound like worthy follow-up questions. But probably not to everyone. Take last week’s poll of X users by Garett Jones, Bluechip’s chief economist. He asked the chances that p(doom) is actually negative, which is mathematically impossible for a probability and, in spirit, a bet that AI turns out to be a net good. Nearly one-third of his respondents put those chances above 90%.
