🤔 Is Sam Altman being ... tricksy?
The OpenAI boss seems to be downplaying just how disruptive and game-changing human-level AI would likely be. Not the worst idea, though
Quote of the Issue
“You can quibble about if general intelligence is akin to human level intelligence, or is it like human-plus, or is it some far-future super intelligence. But to me, the important part is actually the breadth of it, which is that intelligence has all these different capabilities where you have to be able to reason and have intuition.” - Mark Zuckerberg
The Conservative Futurist: How To Create the Sci-Fi World We Were Promised
“With groundbreaking ideas and sharp analysis, Pethokoukis provides a detailed roadmap to a fantastic future filled with incredible progress and prosperity that is both optimistic and realistic.”
The Essay
🤔 Is Sam Altman being ... tricksy?
In a new essay for Time magazine, I give readers a full dose of Up Wing techno-optimism:
We live in an emerging age of technological signs and wonders. Among the tantalizing possibilities: Genetic medicine to cure Alzheimer’s and cancer. Reusable rockets to build an orbital economy and Moon colony. New kinds of nuclear reactors that are easier to build and could supply nearly limitless clean energy. And the next U.S. president just might step to the podium in the West Wing and announce that an American technology company has created an artificial intelligence as smart as the best human mind. Such a seemingly sci-fi advance could radically change the job market, government finances, scientific research, and, really, the entire American way of life—yet almost certainly for the better overall, as I explain in my new book, The Conservative Futurist: How to Create the Sci-Fi World We Were Promised.
A radically better world is my hope and, yes, my guarded expectation. Then again, perhaps I’ve fallen victim to AI hypesters and all the great memework from the burgeoning e/acc movement — as well as my own natural optimism. Maybe there really are no economically significant use cases for generative AI and those hallucination-prone large language models. Furthermore, maybe even achieving artificial general intelligence — AI that could outperform us carbon-based lifeforms on most or all thinking tasks — would be no big deal. Humanity would be facing neither Singularity nor Robopocalypse. We would just keep on keepin’ on.
That’s hardly a crazy notion. Think about this: Since the late 19th century, real GDP per person in the US has grown at roughly 2 percent a year, what Stanford University economist Charles Jones calls “a remarkably steady” pace. Good enough, Jones adds, to boost per capita GDP by 17-fold. Yet if you glance at the chart below, what you see is an economy that keeps on keepin’ on (other than the Great Depression) seemingly no matter what great inventions come on the scene — whether electric motors, internal combustion engines, modern industrial chemistry, telephones, communications satellites, or the computer-internet combo.
Now, if all those great or really important inventions had stopped coming, we might’ve veered far away from that trend line, veered downward. Even so, the steady nature of that trend line should give pause to those techno-optimists forecasting AI advances that generate ahistorically high economic growth for America and other rich countries.
Keeping that economic context in mind, here’s what Sam Altman, CEO of startup OpenAI (a company that’s explicitly trying to create AGI), said at Davos during an interview with The Economist: “I believe that someday we will make something that qualifies as an AGI by whatever fuzzy definition you want. The world will have a two-week freakout and then people will go on with their lives.”
Altman urged AI boomers and doomers alike — and, one would think, regulators — to think of AGI as evolutionary rather than revolutionary, likening it “to the evolution of the iPhone, where no single new model represented a big leap but the jump from the first version to the latest one has been extraordinary,” as The Economist described his perspective.
The framing here, as I see it: AGI will be important but not shocking or scary, at least not in the foreseeable future. It will be a general-purpose technology, but we’ve had lots of those before, such as smartphones. The history of technology should be a first-order guide even if we don’t really know exactly what happens after AGI. AGI will exist within the ken of human understanding. No need for a regulatory freakout (although some rules would be helpful) or a societal neo-Luddite backlash. AGI isn’t the atomic bomb, and I’m not Oppenheimer! (Although, really, he thinks he might be, or at least used to think so.)