✨☢ A (pricey) Manhattan Project for AI Safety?
One study suggests spending at least $250 billion, or nearly 10 times the World War II atomic effort.
Recall Elon Musk’s 2014 quote — almost a decade before his involvement with Grok and xAI — about the potential risks posed by advanced artificial intelligence:
We need to be very careful with artificial intelligence. Increasingly scientists think there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence, we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.
That dramatic — even cinematic — warning and its infernal vibe were top of mind for me when writing my previous essay, “AI acceleration: the solution to AI risk.” I focused on the recent report “Existential Risk and Growth” by Leopold Aschenbrenner, a former OpenAI employee who gained prominence for predicting human-level AI, or artificial general intelligence, by the late 2020s, and Philip Trammell of Stanford’s Digital Economy Lab.
Racing through technological progress
In their analysis, the researchers argue that accelerating technological progress is what may actually reduce extinction risk, and that it does so through two key mechanisms: first, faster progress means less time exposed to risk as the technology develops and improves toward superintelligence. (To quote the 2006 country hit by Rodney Atkins: “If you're goin' through hell keep on going / Don't slow down if you're scared don't show it / You might get out before the devil even knows you're there.”)
Second, according to Aschenbrenner and Trammell, increased wealth enables greater investment in safety measures. I agree. In his earlier analysis, Aschenbrenner predicted the US government will take control of AGI development by 2027-2028 through a Manhattan Project-style initiative, citing inadequate private-sector security and the risk of Chinese theft of research. Specifically, such an effort would a) merge leading AI labs under government oversight, b) enhance security against foreign threats, c) establish proper command structures, and d) mobilize national resources (such as energy and compute) to ensure the US develops safe superintelligent AI before China or other competitors. So, yeah, it’s a lot.
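To make that intuition a bit more concrete, here is a minimal toy simulation (my own rough sketch, not the model in the Aschenbrenner-Trammell paper, and every number in it is an illustrative assumption): the baseline hazard is fixed, accumulated safety research pushes it down, and the safety budget scales with the size of the economy, so faster growth buys down risk sooner.

```python
# Toy sketch of the two mechanisms (illustrative assumptions throughout, not the authors' model):
# (1) faster progress shortens the time spent exposed to a given level of hazard, and
# (2) a richer economy funds more safety research, which suppresses that hazard.
import numpy as np

def survival_probability(growth_rate, safety_share=0.02, years=100, dt=0.1):
    """Probability of avoiding catastrophe over the horizon (hypothetical toy model).

    growth_rate  -- assumed annual growth of output / technological capability
    safety_share -- assumed fraction of output spent on safety research each year
    """
    wealth = 1.0            # index of economic output
    safety_stock = 1.0      # index of accumulated safety knowledge
    cumulative_hazard = 0.0
    base_hazard = 0.01      # assumed annual catastrophe risk with no safety progress
    for _ in np.arange(0, years, dt):
        hazard = base_hazard / safety_stock          # safety knowledge suppresses risk
        cumulative_hazard += hazard * dt
        wealth *= (1 + growth_rate * dt)             # mechanism 2: the economy gets richer...
        safety_stock += safety_share * wealth * dt   # ...and buys more safety research
    return np.exp(-cumulative_hazard)

for g in (0.02, 0.05, 0.10):
    print(f"growth {g:.0%}: P(no catastrophe over 100 years) ~ {survival_probability(g):.3f}")
```

Under these made-up numbers, the faster-growth scenarios come out with higher survival probabilities because the economy reaches the safety-rich regime sooner. The paper's actual model is far richer, but this is the direction of the effect the authors are arguing for.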
But let’s think more broadly about that point: How much should we invest in AI safety research? How do we even begin to think about the problem?