✨🤖 3 ways that AI could kill us all
Good news: These scenarios are really unlikely and preventing them doesn't require banning or halting the further development of artificial intelligence
My fellow pro-growth/progress/abundance Up Wingers,
OK, here it is: Advanced AI might destroy the world. Let’s just put that out there.
It probably won't happen, of course. Still, experts and lawmakers shouldn't totally ignore the risk. A provocative new RAND report, “On the Extinction Risk from Artificial Intelligence,” explores exactly this unsettling apocalyptic premise: how AI could contribute to human extinction through all-out nuclear war, synthetic pandemics, or runaway climate engineering.
(Sorry, no scenarios based on more exotic technologies like a nanotechnology “gray goo” disaster since they “involve too much uncertainty to perform a useful evaluation of the extinction threat.”)
Again, the scenarios are implausible (indeed, the report highlights many of the barriers and bottlenecks) but not impossible, in my view. For governments, the trick is to take such tail risks seriously without freaking out and abandoning the light-touch, pro-innovation ethos that has underpinned the digital economy's success. That's especially true given that the potential upside from AI might far exceed anything we've seen from computers and the internet so far.
Routes to extinction
Three elements worth noting in RAND’s scenario-building:
First, the report doesn't guess when super-smart AI will arrive, but extinction-level risks would require very advanced systems: machines able to operate autonomously, control physical infrastructure, survive without human assistance, and, crucially, deceive or persuade when needed in some scenarios. Such systems would be at least as smart as humans, and in some ways smarter.
Second, RAND distinguishes human goals from AI goals. In some scenarios, the machine is merely an efficient executor of malign human plans. In others, it becomes the architect of its own ends. Extinction, in RAND’s telling, is unlikely without both capability and purpose:
We assess that such an event involving these technologies could not happen merely by accident; it would require a threat actor to actively pursue the goal of human extinction. The capabilities and concerted efforts required to pose an extinction threat to human beings are immense, primarily because of the inherent adaptability and resilience of humans. Given the opportunity, it is expected that humans would actively respond and effectively implement measures to mitigate any such threats and survive the aftermath.
Third, “extinction” to RAND means not just catastrophic loss of life, but the complete and irreversible end of humanity. The report distinguishes this from scenarios where civilization collapses but some humans endure. (So also no “world in chains” outcome in which an AI-empowered Big Brother creates a totalitarian global society.) Even global catastrophes like pandemics or nuclear war would leave pockets of survivors unless a malevolent and determined actor, human or machine, persistently hunted them down. The distinction matters: It frames extinction not as an accident, but as a deliberate, sustained effort requiring capability, intent, and time to execute.
OK, here are the three RAND scenarios: