A few weeks ago, Ezra Klein interviewed AI researcher Eliezer Yudkowsky regarding his newly released book, If Anyone Builds It, Everyone Dies.
Yudkowsky's concern is superintelligence: AI systems that so far surpass human intelligence that we would be unable to contain or control them. He told Klein that once such systems are developed, humanity faces doom, not because the machines will set out to eliminate us, but because we will be so insignificant that they won't even acknowledge our existence.
“When we construct a skyscraper where an ant hill once stood, we don’t intend to harm the ants; we are focused on building the skyscraper,” Yudkowsky explains. In this analogy, we are the ants.
In this week’s podcast episode, I work through Yudkowsky’s interview point by point, highlighting where I believe his reasoning is flawed or exaggerated. Here, however, I want to underscore what strikes me as the most remarkable aspect of the conversation: Yudkowsky never explains how he thinks we will actually manage to create something as speculative and extraordinary as superintelligent machines. He jumps straight to why he believes these superintelligences would pose a threat.
It’s surprising that he does not provide this explanation.
Imagine attending a bioethics conference and attempting to deliver an hour-long talk on the best methods for constructing enclosures to contain a cloned Tyrannosaurus. Your fellow researchers would surely interrupt, demanding to understand why you believe we will soon be able to resurrect dinosaurs. If you couldn’t provide a realistic and detailed answer—beyond optimistic projections and a general sense that genetic research is advancing rapidly—they would dismiss you.
Yet in certain AI safety circles (particularly those originating in Northern California), such discussions have become routine. The inevitability of superintelligence is widely accepted as an article of faith.
Here’s my perspective on how this has come to be...
In the early 2000s, several interconnected subcultures emerged from technology circles, all loosely committed to applying hyper-rational thinking to enhance individual and societal well-being.
One faction among these movements concentrated on existential risks to intelligent life on Earth. They leaned on a concept from probability theory called expected value, arguing that it can be worth devoting considerable resources now to mitigating an extremely unlikely future risk if the potential consequences are catastrophic enough. The reasoning is familiar: it mirrors the logic Elon Musk uses to advocate for humanity becoming a multi-planetary species.
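To make that arithmetic concrete, here is a minimal sketch of the expected-value comparison; the probability and cost figures below are invented purely for illustration and are not drawn from the interview or from any actual risk estimate.

```python
# A minimal sketch of expected-value reasoning about a rare, catastrophic risk.
# All numbers are hypothetical placeholders, chosen only to show the structure
# of the argument: a tiny probability times an enormous loss can still exceed
# the cost of mitigation.

p_catastrophe = 1e-6            # assumed yearly chance of the catastrophic event
loss_if_it_happens = 1e15       # assumed damage (in dollars) if it occurs
cost_of_mitigation = 1e8        # assumed cost of working on the risk today

expected_loss = p_catastrophe * loss_if_it_happens   # = 1e9

# Under these assumptions, the expected loss (about $1 billion) exceeds the
# mitigation cost (about $100 million), so the expected-value logic says:
# spend the money now.
print(expected_loss, cost_of_mitigation, expected_loss > cost_of_mitigation)
```

The force of the argument comes entirely from how large you allow the assumed downside to be: with a big enough loss, even a vanishingly small probability appears to justify the expense.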
As these rationalist discussions of existential risk gained traction, a major topic of interest was the threat posed by a rogue AI that could slip beyond our control. Thinkers like Yudkowsky, Oxford’s Nick Bostrom, and many others systematically catalogued the dreadful possibilities that might follow from the emergence of highly intelligent AI.
The critical aspect of all this philosophical exploration is that, until recently, it was grounded in a hypothetical scenario: What would occur if a rogue AI were to exist?
Then the release of ChatGPT triggered a general sense of rapid progress and fading technological barriers. For many within these rationalist communities, this event instigated a subtle yet profound shift in their mindset: they transitioned from asking, “What will happen if we achieve superintelligence?” to pondering, “What will happen when we achieve superintelligence?”
These rationalists had pondered, written about, and fixated on the implications of rogue AI for so long that when a moment arrived in which anything seemed conceivable, they couldn't help but embrace a fervent belief that their warnings had been vindicated, a shift that positioned them, in their own view, as potential saviors of humanity.
This explains why those of us who engage with these subjects professionally often encounter individuals who exhibit a dogmatic conviction that the emergence of AI gods is imminent, and who tend to sidestep inconvenient facts, resorting to dismissal or indignation when challenged.
(In one notable exchange during the interview, when Klein queried Yudkowsky regarding critics—like myself—who contend that AI advancements are lagging far short of superintelligence, he responded: “I had to tell these Johnny-come-lately kids to get off my lawn.” This suggests that if you’re not among the original believers, you shouldn't engage in this discussion! It reflects more of a sense of righteousness than a quest for truth.)
For the rest of us, however, the lesson is clear: don’t equate conviction with correctness. AI is not magic; it’s a technology like any other, with its own capabilities and limitations. Individuals with engineering expertise can analyze recent developments and make reasonable, evidence-based predictions about what to expect in the near future.
Indeed, if you press the rationalists long enough on the concept of superintelligence, they often revert to the same explanation: all we need to do is create an AI that is slightly smarter than humans (however that is defined), and it will take over from there, recursively improving itself until it becomes unimaginably capable.