✨⏩ My chat (+transcript) with ... economist Robin Hanson on AI, innovation, and economic reality

Faster, Please! — The Podcast #62

In this episode of Faster, Please! — The Podcast, I talk with economist Robin Hanson about a) how much technological change our society will undergo in the foreseeable future, b) what form we want that change to take, and c) how much we can ever reasonably predict.

Hanson is an associate professor of economics at George Mason University. He was formerly a research associate at the Future of Humanity Institute at Oxford, and is the author of the Overcoming Bias Substack. In addition, he is the author of the 2017 book, The Elephant in the Brain: Hidden Motives in Everyday Life, as well as the 2016 book, The Age of Em: Work, Love, and Life When Robots Rule the Earth.

In This Episode

  • Innovation is clumpy (1:21)

  • A history of AI advancement (3:25)

  • The tendency to control new tech (9:28)

  • The fallibility of forecasts (11:52)

  • The risks of fertility-rate decline (14:54)

  • Window of opportunity for space (18:49)

  • Public prediction markets (21:22)

  • A culture of calculated risk (23:39)

Below is a lightly edited transcript of our conversation.


Innovation is clumpy (1:21)

Do you think that the tech advances of recent years — obviously in AI, and what we're seeing with reusable rockets, or CRISPR, or different energy advances, fusion, perhaps, even Ozempic — do you think that the collective cluster of these technologies has put humanity on a different path than perhaps it was on 10 years ago?

. . . most people don't notice just how much stuff is changing behind the scenes in order for the economy to double every 15 or 20 years.

That’s a pretty big standard. As you know, the world has been growing exponentially for a very long time, new technologies have been appearing for a very long time, and the economy doubles roughly every 15 or 20 years. That can't happen without a whole lot of technological change, so most people don't notice just how much stuff is changing behind the scenes in order for the economy to double every 15 or 20 years. To say that we're doing more than that is a really high standard, and I don't think it meets that standard. Maybe the standard it meets is this: people were worried about a stagnation or slowdown a decade or two ago, and I think this might weaken those concerns. I think you might say, well, we're still on target.
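
As a rough check on the arithmetic in that answer: a steady doubling time of 15 to 20 years corresponds to roughly 3.5 to 4.7 percent annual growth. A minimal sketch:

```python
# Annual growth rate implied by a fixed doubling time T: 2**(1/T) - 1.
def annual_growth_rate(doubling_years: float) -> float:
    return 2 ** (1 / doubling_years) - 1

for t in (15, 20):
    print(f"doubling every {t} years -> {annual_growth_rate(t):.1%} per year")
# doubling every 15 years -> 4.7% per year
# doubling every 20 years -> 3.5% per year
```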

Innovation's clumpy. It doesn't just come out at an entirely smooth rate. Once in a while there are lumpier innovations than usual, and those boost growth higher than expected sometimes, lower than expected other times, and maybe in the last ten years we've had a higher-than-expected clump. The main thing that does is make you doubt less than you did when you had the lower-than-expected clump in the previous 10 or 20 years, when people had seen this long-term history and thought, “Lately we're not seeing so much. I wonder if this is done. I wonder if we're running out.” I think the last 10 years tells you: well, no, we're kind of still on target. We're still having big important advances, as we have for two centuries.

A history of AI advancement (3:25)

People who are especially enthusiastic about the recent advances with AI, would you tell them their baseline should probably be informed by economic history rather than science fiction?

[Y]es, if you're young, and you haven't seen the world for decades, you might well believe that we are almost there, we're just about to automate everything — but we're not.

By technical history! We have 70-odd years of AI history. I was an AI researcher full-time from ’84 to ’93. If you look at the long sweep of AI history, we've had some pretty big advances. We couldn't be where we are now without a lot of pretty big advances all along the way. Just think about the very first digital computer in 1950 or so and all the things we've seen since: we have made large advances — and they haven't been completely smooth, they've come in clumps.

I was enticed into the field in 1984 because of a recent set of clumps then, and for a century, roughly every 30 years, we've had a burst of concern about automation and AI, and we've had big concern in the sense people said, “Are we almost there? Are we about to have pretty much all jobs automated?” They said that in the 1930s, they said it in the 1960s — there was a presidential commission in the 1960s: “What if all the jobs get automated?” I jumped in in the late ’80s when there was a big burst there, and I as a young graduate student said, “Gee, if I don't get in now, it'll all be over soon,” because I heard, “All the jobs are going to be automated soon!”

And now, in the last decade or so, we've had another big burst, and I think people who haven't seen that history, it feels to them like it felt to me in 1984: “Wow, unprecedented advances! Everybody's really excited! Maybe we're almost there. Maybe if I jump in now, I'll be part of the big push over the line to just automate everything.” That was exciting, it was tempting, I was naïve, and I was sucked in, and we're now in another era like that. Yes, if you're young, and you haven't seen the world for decades, you might well believe that we are almost there, we're just about to automate everything — but we're not.

I like that you mentioned the automation scare of the ’60s. Just going back and looking at that, it really surprised me how prevalent and widespread it was, and how seriously people took it. I mean, you can find speeches by Martin Luther King talking about how our society is going to deal with the computerization of everything. So it does seem to be a recurrent fear. What would you need to see to think it is different this time?

The obvious relevant parameter to be tracking is the percentage of world income that goes to automation, and that has been creeping up over the decades, but it's still less than five percent.

What is that statistic?

If you look at the percentage of the economy that goes to computer hardware and software, or other mechanisms of automation, you're still looking at less than five percent of the world economy. It's been creeping up slowly, maybe three percent decades ago, even one percent in 1960, and obviously, when that gets to be 80 percent, game over, the economy has been replaced. But that number is creeping up slowly, and you can track it, so when you start seeing it going up much faster, or becoming a large number, that's the time to say, “Okay, looks like we're close. Maybe automation will, in fact, take over most jobs, when it's getting most of world income.”
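
To make the tracking idea concrete, here is a purely illustrative extrapolation. The ~1 percent and ~5 percent endpoints are Hanson's rough conversational figures, and the constant-growth-rate assumption is ours, not his:

```python
import math

# Illustrative extrapolation of the automation share of world income.
# The 1% (1960) and <5% (today) endpoints are rough figures from the
# conversation; the constant exponential trend is an assumption.
years = [1960, 2024]
share = [0.01, 0.05]

rate = math.log(share[1] / share[0]) / (years[1] - years[0])
print(f"implied growth of the share: {rate:.2%} per year")   # ~2.5%/yr

# Under that trend, when would the share hit 80% ("game over")?
t80 = years[1] + math.log(0.80 / share[1]) / rate
print(f"share reaches 80% around {t80:.0f}")                 # ~2134
```

On those made-up-but-plausible inputs, the "game over" threshold is more than a century out, which is consistent with Hanson's point that today's share gives no sign we're close.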

If you're looking at economic statistics, and you're looking at different forecasts, whether by the Fed or CBO or Wall Street banks, and the forecasts are, “Well, we expect, maybe because of AI, productivity growth to be 0.4 percentage points higher over this kind of time. . .” Those kinds of numbers, where we're talking about a few tenths of a point, that's not the kind of singularity-emergent world that some people think or hope or expect we're on.

Absolutely. You've got young, enthusiastic tech people, et cetera, and they're exaggerating. The AI companies — even they are trying to push as big and dramatic an image as they can. And then all the stodgy, conservative old folks are afraid of seeming behind the times, and not up with things, and not getting it — that was the big phrase in the Internet Boom: Who “gets it” that this is a new thing?

I'm proud to be a human, to have been part of the civilization that has done this . . . but we've seen that for 70 years: new technologies, we get excited, we try them out, we try to apply them, and that's part of what progress is.

Now it would be #teamgetsit.

Exactly, something like that. They're trying to lean into it, they're trying to give it the best spin they can, but they have some self-respect, so they're going to give you, “Wow, 0.4 percent!” They'll say, “That's huge! Wow, this is a really big thing, everybody should be into this!” But they can't go above 0.4 percent because they've got some common sense. Yet we've even seen management consulting firms over the last decade or so predict that 10 years in the future, half of all jobs would be automated. So we've seen this long history of really extreme predictions a decade out, and none of them remotely happened, of course. But people do want to be in with the latest thing, and this is obviously the latest round of technology, and it's impressive. I'm proud to be a human, to have been part of the civilization that has done this, and I’d like to try these tools out, and see what I can do with them, and think of where they could go. That's all exciting and fun, but we've seen that for 70 years: new technologies, we get excited, we try them out, we try to apply them, and that's part of what progress is.

The tendency to control new tech (9:28)

Not to talk just about AI, but do you think AI is important enough that policymakers need to somehow guide the technology to a certain outcome? Daron Acemoglu, one of the Nobel Prize winners, has for quite some time, and certainly recently, said that this technology needs to be guided by policymakers so that it helps people, it helps workers, it creates new tasks, it creates new things for them to do, not automate away their jobs or automate a bunch of tasks.

Do you think that there's something special about this technology that we need to guide it to some sort of outcome?

I think those sorts of people would say that about any new technology that seemed like it was going to be important. They are not actually distinguishing AI from other technologies. This is just what they say about everything.

It could be “technology X”: we must guide it to the outcome that I have already determined.

As long as you've said, “X is new, X is exciting, a lot of things seem to depend on X,” then their answer would be, “We need to guide it.” It wouldn't really matter what the details of X were. That's just how they think about society and technology. I don't see anything distinctive about this, per se, in that sense, other than the fact that — look, in the long run, it's huge.

Space, in the long run, is huge, because obviously in the long run almost everything will be in space, so clearly, eventually, space will be the vast majority of everything. That doesn't mean we need to guide space now or do anything different about it, per se. At the moment, space is pretty small, and it's pretty pedestrian, but it's exciting, and the same goes for AI. At the moment, AI is pretty small and minor; it is not remotely threatening to cause harm in our world today. If you look at harmful technologies, this is way down the scale. The demonstrated harms of AI in the last 10 years are minuscule compared to those of construction equipment, or drugs, or even television, really. This is small.

Ladders for climbing up on your roof to clean out the gutters, that's a very dangerous technology.

Yeah, somebody should be looking into that. We should be guiding the ladder industry to make sure they don't cause harm in the world.

The fallibility of forecasts (11:52)

I'm not sure how much confidence we should ever have in long-term economic forecasts, but have you seen any reason to think they might be less reliable than they've always been? That we might be approaching some sort of change? That those 50-year forecasts of entitlement spending might be all wrong because the economy's going to be growing so much faster, or longevity is going to be increasing so much faster?

Previously, the world had been doubling roughly every thousand years, and that had been going on for maybe 10,000 years, and then, within the space of a century, we switched to doubling roughly every 15 or 20 years. That's a factor of 60 increase in the growth rate, and it happened after a previous transition from foraging to farming, roughly 10 doublings before.

It was just a little over two centuries ago when the world saw this enormous revolution. Previously, the world had been doubling roughly every thousand years, and that had been going on for maybe 10,000 years, and then, within the space of a century, we switched to doubling roughly every 15 or 20 years. That's a factor of 60 increase in the growth rate, and it happened after a previous transition from foraging to farming, roughly 10 doublings before.

So you might say we can't trust these trends to continue more than maybe 10 doublings, and then who knows what might happen? You could just say — that's 200 years, if you double every 20 years — we just can't trust these forecasts more than 200 years out. Big changes happened in the past after that many doublings, and you might say, therefore, expect something else big to happen on that sort of timescale. That's not crazy to say. It's just not very specific.
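
The arithmetic behind that reasoning, spelled out (using 17.5 years as the midpoint of the 15-to-20-year range):

```python
# The growth-mode arithmetic from the answer above, spelled out.
farming_doubling = 1000     # years per doubling in the farming era
industry_doubling = 17.5    # years per doubling today (15-to-20-year range)

print(f"speedup: ~{farming_doubling / industry_doubling:.0f}x")  # ~57, the "factor of 60"

# Past modes lasted on the order of 10 doublings, so the window over
# which current trends can reasonably be trusted is roughly:
print(f"10 doublings x ~20 years = {10 * 20} years")
```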

And then if you ask, well, what is the thing people most often speculate could cause a big change? They do say AI, and we actually have a concrete reason to think AI would change the growth rate of the economy: at the moment, we make most stuff in factories, and a factory typically pushes out as much value as the factory itself embodies, in economic terms, in a few months.

If you could have factories make factories, the economy could double every few months. The reason we can't do that now is that we have humans in the factories, and factories don't double the humans. But if you could make AIs in factories, and the AIs made more factories that made more AIs, that could double every few months. So the world economy could plausibly double every few months once AI dominates the economy.

That's the size of the jump: doubling every few months versus doubling every 20 years. It's a magnitude similar to the one we saw before, from farming to industry, so it fits together as saying: sometime in the next few centuries, expect a transition that might increase the growth rate of the economy by a factor of 100. Now, that's an abstract claim on a long timeframe; it's not about the next 10 or 20 years. It's saying that economic modes only last so long, something should come up eventually, and this is our best guess of a thing that could come up, so it's not crazy.
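
A toy comparison makes the size of that jump vivid. Reading "every few months" as a 3-month doubling time is our assumption, chosen only for illustration:

```python
# One decade of growth under today's regime versus a hypothetical
# AI-factory regime ("every few months" read as a 3-month doubling).
YEARS = 10

human_doublings = YEARS / 17.5      # ~17.5-year doubling time today
ai_doublings = YEARS * 12 / 3       # 3-month doubling time

print(f"decade of today's growth:    x{2 ** human_doublings:.1f}")   # ~x1.5
print(f"decade of AI-factory growth: x{2 ** ai_doublings:.1e}")      # ~x1.1e12
```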

The risks of fertility-rate decline (14:54)

Are you a fertility-rate worrier?

If the population falls, the best models say innovation rates would fall even faster.

I am, and in fact, I think we have a limited deadline to develop human-level AI; if we miss it, we won't get it for a long pause, because falling fertility really threatens innovation rates. This is something we economists understand that I think most other people don't: You might have thought that a falling population could easily be compensated by a growing economy, and that we would still have rapid innovation because we would just have a bigger economy with a lower population, but apparently that's not true.

If the population falls, the best models say innovation rates would fall even faster. The population is roughly predicted to peak in three decades and then start to fall, and if it falls, it would fall by roughly a factor of two every generation or two, depending on which populations dominate. And if it fell by a factor of 10, the innovation rate would fall by more than a factor of 10, and that means a slower rate of new technologies and, of course, also a reduction in the scale of the world economy.
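
A minimal sketch of "innovation falls faster than population." The superlinear exponent here is a made-up illustrative value, not taken from any particular model Hanson has in mind:

```python
# "Innovation falls faster than population": in simple idea-production
# models, innovation scales like N**lam with lam > 1. lam = 1.25 is a
# made-up value used only for illustration.
lam = 1.25

for pop_fall in (2, 10):
    innov_fall = pop_fall ** lam
    print(f"population falls {pop_fall}x -> innovation falls ~{innov_fall:.1f}x")
# population falls 2x  -> innovation falls ~2.4x
# population falls 10x -> innovation falls ~17.8x
```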

And I think that plausibly also has the side effect of a loss of liberality. I don't think people realize how much it was innovation and competition that drove much of the world to become liberal: the winning nations in the world were liberal, and the rest were afraid of falling too far behind. But when innovation goes away, nations won't be so eager to be liberal in order to be innovative, because innovation just won't be a thing, and so much of the world will just become a lot less liberal.

There's also the risk that — well, computers are, in principle, a very durable technology. Typically we don't make them that durable, because every two years they get twice as good, but when innovation goes away, they won't get better very fast, and then you'll be much more tempted to just make very durable computers. And once some generation makes very durable computers that last hundreds of years, the next generation won't want to buy new computers; they'll just use the old durable ones as the economy shrinks, and then the industry that makes computers might just go away. And then it could be a long time before people felt a need to rediscover those technologies.

I think the larger-scale story is that there's no obvious process that would prevent this continued decline, because there's no level at which, when you reach it, some process kicks in and makes us say, “Oh, we need to increase the population.” The most likely scenario is just that the Amish and [Hutterites] and other insular, fertile subgroups who have been doubling every 20 years for a century will just keep doing that and then come to dominate the world, much like Christians took over the Roman Empire: They took it over by doubling every 20 years for three centuries. That's my default future, and then if we don't get AI or colonize space before this decline — which I've estimated would take roughly 70 years’ worth more of progress at previous rates — then we don't get them again until the Amish not only take over the world but rediscover a taste for technology and economic growth, and then eventually all the great stuff could happen, but that could be many centuries later.
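
The compounding behind that Roman Empire analogy is easy to verify:

```python
# Doubling every 20 years, compounded.
for years in (100, 300):
    doublings = years // 20
    print(f"{years} years -> x{2 ** doublings:,}")
# 100 years -> x32      (the Amish century so far)
# 300 years -> x32,768  (early Christians in the Roman Empire)
```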

This does not sound like an issue that can be fundamentally altered by tweaking the tax code.

You would have to make a large . . .

Large turn of the dial, really turn that dial.

People are uncomfortable with larger-than-small tweaks, of course, and we're not in an era that's at all eager for vast changes in policy; we are in a pretty conservative era that just wants to tweak things. Tweaks won't do it.

Window of opportunity for space (18:49)

We can't even manage things like daylight saving time, which some people want to change. You mentioned this window: Elon Musk has talked about a window for expansion into space, and a couple of years ago he said, “The window has closed before. It's open now. Don't assume it will always be open.”

Is that right? Why would it close? Is it because of higher interest rates? Because the Amish don't want to go to space? Why would the window close?

I think, unfortunately, we've got a limited window to try to jumpstart a space economy before the earth economy shrinks and isn't getting much value from a space economy.

There's demand for space stuff at the moment mostly to service Earth, like the satellite internet circling the earth, which is Elon's big project to fund his spaceships. And there's also demand for satellites to do surveillance of the earth, et cetera. As the earth economy shrinks, the demand for that stuff will shrink. At some point, they won't be able to afford the fixed costs.

A big question is about marginal costs versus fixed costs. How much is the fixed cost just to have the capacity to send stuff into space, versus the marginal cost of adding each new rocket? If it's dominated by marginal costs and they make the rockets cheaper, okay, they can just do fewer rockets less often, and they can still send satellites up into space. But if there's a key scale you need to get past even to support the industry, then it's a different story.
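
A toy break-even model shows why the fixed-versus-marginal distinction matters. All the dollar figures are hypothetical, chosen only to exhibit the structure:

```python
# Toy break-even model for a launch industry. All numbers are hypothetical.
fixed_cost = 2_000_000_000   # annual cost just to keep launch capacity alive
marginal_cost = 50_000_000   # cost of each additional rocket
price = 80_000_000           # revenue per launch

# Launches per year needed before revenue covers the fixed cost:
breakeven = fixed_cost / (price - marginal_cost)
print(f"break-even: {breakeven:.0f} launches/year")   # ~67

# If a shrinking earth economy demands fewer launches than this,
# the industry can't shrink gracefully -- it disappears.
```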

So if you're thinking about a Mars economy, or even a moon economy, or a solar-system economy, you're looking at a scale thing. That thing needs to be big enough to be self-sustaining and economically cost-effective, or it's just not going to work. So I think, unfortunately, we've got a limited window to try to jumpstart a space economy before the earth economy shrinks and isn't getting much value from a space economy. The space economy needs to be big enough to support itself, and that's a problem, because the humans in space are the same humans we have down here on earth, and they're going to have the same fertility problems up there unless they somehow figure out a way to make a very different culture.

A lot of people just assume, “Oh, you could have a very different culture on Mars, and so they could solve our cultural problems just by being different,” but I'm not seeing that. I think they would have a very strong interconnection with earth culture, because they're going to have rapid, high-bandwidth exchange back and forth, and their fertility culture and all sorts of other culture will be tied closely to earth culture, so I'm not seeing how a Mars colony really solves earth's cultural problems.

Public prediction markets (21:22)

The average person is aware that these things exist, whether it's betting markets or these online consensus prediction markets: that you can bet on presidential races, and you can make predictions about a superconductor breakthrough, or something like that, or about when we're going to get AGI.

To me, it seems like they have, to some degree, broken through the filter, and people are aware that they're out there. Have they come of age?

. . . the big value here isn't going to be betting on elections, it's going to be organizations using them to make organizational decisions, and that process is being explored.

In this presidential election, there's been a lot of discussion that points to them. And people were pretty open to that until Trump started to be favored, and then people said, “No, no, that can't be right. There must be a lot of whales out there manipulating, because it can't be that Trump's winning.” So the openness to these things often depends on what their message is.

But honestly, the big value here isn't going to be betting on elections, it's going to be organizations using them to make organizational decisions, and that process is being explored. Twenty-five years ago, I invented this concept of decision markets for use in organizations, and now, in the last year, I've actually seen substantial experimentation with them, so I'm excited to see where that goes, and I'm hopeful there, but that's not so much about the presidential markets.

Roughly a century ago, there was more money bet in presidential betting markets than in stock markets. Betting markets were very big then, and then they declined, primarily because scientific polling was declared a more scientific approach to estimating elections than betting markets, and all the respectable people wanted to report on scientific polls. And then, of course, the stock market became much, much bigger. Interest in presidential markets will wax and wane, but there's actually not that much social value in having a better estimate of who's going to win an election. That doesn't really tell you who to vote for, so there are other markets that would be much more socially valuable, like ones predicting the consequences of who's elected as president. We don't really have many markets on those, but maybe we will next time around. But there is a lot of experimentation going on in organizational prediction markets at the moment, compared to, say, 10 years ago, and I'm excited about those experiments.
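
For readers new to the mechanism, here is a minimal, purely illustrative sketch of a decision market. The option names and prices are hypothetical, and a real market would also unwind trades whose condition never occurs:

```python
# Sketch of a decision (conditional) market: traders bet on an outcome
# *conditional* on a choice being made, with bets refunded if that choice
# isn't taken. Prices then estimate the outcome under each option.
from dataclasses import dataclass

@dataclass
class ConditionalMarket:
    condition: str   # the decision being considered, e.g. "hire candidate A"
    price: float     # market estimate of P(good outcome | condition is chosen)

def decide(markets: list[ConditionalMarket]) -> ConditionalMarket:
    """Pick the option whose conditional market forecasts the best outcome."""
    return max(markets, key=lambda m: m.price)

markets = [
    ConditionalMarket("hire candidate A", 0.62),
    ConditionalMarket("hire candidate B", 0.48),
]
best = decide(markets)
print(f"decision markets favor: {best.condition} (est. {best.price:.0%})")
```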

A culture of calculated risk (23:39)

I want a culture in which, when one of these new nuclear reactors, or these reactors that are restarting, or these new small modular reactors has some sort of leak, or when a new SpaceX Starship flies and some astronaut gets killed, we just don't collapse as a society. That we say, well, things happen, we're going to keep moving forward.

Do you think we have that kind of culture? And if not, how do we get it, if at all? Is that possible?

That's the question: Why has our society become so much more safety-oriented in the last half-century? Certainly one huge sign of it is the way we way overregulated nuclear energy, but we've also now been overregulating even kids going to school. Apparently they can't just take their bikes to school anymore, they have to go on a bus because that's safer, and in a whole bunch of ways, we are just vastly more safety-oriented, and that seems to be a pretty broad cultural trend. It's not just in particular areas and it's not just in particular countries.

I've been thinking a lot about long-term cultural trends and trying to understand them. The basic story, I think, is that we don't have a good reason to believe long-term cultural trends are actually healthy when they are trends in norms and status markers that everybody shares. Cultural things that can vary within cultures, like different technologies and firm cultures, we're doing great on; we have great evolution of those things, and that's why we're having all these great technologies. But something like safetyism is more of a shared cultural norm, and we just don't have good reasons to think those changes are healthy, and they don't fix themselves, so this is just another example of something that’s going wrong.

They don't fix themselves because if you have a strong, very widely shared cultural norm, and someone has a different idea, they need to be prepared to pay a price, and most of us aren’t prepared to pay that price.

If we had healthy cultural evolution and competition among nations, this would be fine. The problem is we have this global culture, a monoculture, really, that enforces conformity on everybody.

Right. If, for example, our 200 countries were actually independent experiments, each with different cultures going in different directions, then I'd feel great: okay, the cultures that choose too much safety will lose out to the others, and eventually that excess will be weeded out. If we had healthy cultural evolution and competition among nations, this would be fine. The problem is we have this global culture, a monoculture, really, that enforces conformity on everybody.

At the beginning of Covid, all the usual public health experts said all the usual things, and then world elites got together and talked about it, and a month later they said, “No, that's all wrong. We have a whole different thing to do. Travel restrictions are good, masks are good, distancing is good.” And then the entire world did it the same way, and there was strong pressure on any deviation, even on Sweden, which dared to deviate from the global consensus.

If you look at many kinds of regulation, there's very little deviation worldwide. We don't have 200, or even 100, independent policy experiments; we basically have one main global civilization that does things the same way, and maybe one or two deviants that are allowed somewhat different behavior but pay a price for it.




On sale everywhere: The Conservative Futurist: How To Create the Sci-Fi World We Were Promised






