🤔🤖 How an economist thinks about the AI dilemma: growth versus existential risk
'AI could raise living standards by more than electricity or the internet. But it may pose risks that exceed those from nuclear weapons.'
Quote of the Issue
“But we don’t need the Singularity or even something just short of it to create a fantastic future. We only need the sort of economic and tech-driven productivity growth that has already happened in the real world—although that happening again with the American economy would stun most forecasters.” - James Pethokoukis, The Conservative Futurist: How to Create the Sci-Fi World We Were Promised
I have a new book out. The Conservative Futurist: How To Create the Sci-Fi World We Were Promised is currently available pretty much everywhere. I’m very excited about it! Let’s gooooo! 🆙↗⤴📈
The Essay
🤔🤖 How an economist thinks about the AI dilemma: growth versus existential risk
Amid America’s burgeoning panic (especially among its elites) about the latest advances in artificial intelligence, let’s take a breath: AI, particularly the large language models that power ChatGPT and other chatbots, presents both opportunities and risks.
Can we agree on this? They might greatly boost technological progress and economic growth by enhancing innovation and labor productivity. (Yay!) Yet many people worry that AI could also one day pose a catastrophic threat to humanity if an artificial superintelligence diverges from human values, potentially causing disaster or even human extinction. (Boo!) Or as Stanford University economist Charles I. Jones frames the issue in his new NBER working paper, “The A.I. Dilemma: Growth versus Existential Risk”:
More succinctly, A.I. could raise living standards by more than electricity or the internet. But it may pose risks that exceed those from nuclear weapons. Moreover, these possibilities — however likely or unlikely — are correlated. It is precisely the state of the world in which A.I. could lead to profound increases in living standards that seems most likely to pose existential risk.
Jones, whose research at Stanford’s Graduate School of Business focuses on economic growth, doesn’t offer any concrete conclusions. How could he? We don’t know what AI’s future capabilities will be, or even fully understand everything that generative AI is capable of today. Rather, he explains how an economist might go about thinking through the relevant issues. He does this by presenting two models, with the second building on the first.
The big choice we have to make
So: Imagine a scenario where AI is developing at a rapid pace and is starting to do the jobs of humans, particularly in creating new ideas and inventions. “Next stop: AGI!” Instead of the US economy growing a few percent a year, it’s now growing at a white-hot 10 percent annually. Yet the more we use this ever-evolving AI, the higher the risk of something going horribly wrong, potentially ending human civilization. On the one hand, you have the potential for spectacular economic growth. On the other hand, there’s a chance that everything could end. How do you weigh that possible reward against that risk?
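To make the tradeoff concrete, here’s a minimal back-of-the-envelope sketch in Python. To be clear, this is my illustration, not the model from Jones’s paper: it assumes log utility over consumption, a fixed horizon, constant growth rates, a constant annual probability of catastrophe on the AI path, and it normalizes the extinction state to zero utility. Every parameter value is made up purely for illustration.

```python
import numpy as np

# Illustrative sketch of the growth-vs-existential-risk tradeoff.
# All parameters are hypothetical, not taken from Jones's paper.
beta = 0.97   # annual discount factor
T = 300       # horizon in years
c0 = 1.0      # initial consumption, normalized

def expected_utility(g, delta):
    """Discounted expected lifetime utility when consumption grows at
    rate g and civilization survives each year with probability 1 - delta.
    The extinction state is normalized to zero utility."""
    t = np.arange(T)
    survival = (1.0 - delta) ** t       # probability we're still here at t
    consumption = c0 * np.exp(g * t)    # consumption path
    return np.sum(beta**t * survival * np.log(consumption))

u_safe = expected_utility(g=0.02, delta=0.0)   # ~2% growth, no added risk
u_ai   = expected_utility(g=0.10, delta=0.01)  # 10% growth, 1%/yr doom risk

print(f"Safe path: {u_safe:.2f}")
print(f"AI path:   {u_ai:.2f}")
print("Adopt AI" if u_ai > u_safe else "Decline AI")
```

With these made-up numbers, the fast-growth path wins despite a 1 percent annual chance of catastrophe. But the verdict is highly sensitive to how you specify preferences: swap np.log for a more risk-averse, bounded utility function and the comparison can flip. That sensitivity is exactly the kind of thing an economist’s model is built to expose.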