
🚀 Faster, Please! — The Podcast #6

🔮 A conversation about futurism, innovation, longtermism, and economic growth with economist Robin Hanson of George Mason University

Few economists think more creatively and also more rigorously about the future than Robin Hanson, my guest on this episode of Faster, Please! — The Podcast. So when he says a future of radical scientific and economic progress is still possible, you should take the claim seriously. Robin is a professor of economics at George Mason University and author of the Overcoming Bias blog. His books include The Age of Em: Work, Love, and Life when Robots Rule the Earth and The Elephant in the Brain: Hidden Motives in Everyday Life.

In This Episode:

  • Economic growth over the very long run (1:20)

  • The signs of an approaching acceleration (7:08)

  • Global governance and risk aversion (12:19)

  • Thinking about the future like an economist (17:32)

  • The stories we tell ourselves about the future (20:57)

  • Longtermism and innovation (23:20)

Next week, I’ll feature part two of my conversation with Robin, where we discuss whether we are alone in the universe and what alien life means for humanity's long-term potential.

Below is an edited transcript of our conversation.

Economic growth over the very long run

James Pethokoukis: Way back in 2000, you wrote a paper called “Long-Term Growth as a Sequence of Exponential Modes.” You wrote, “If one takes seriously the model of economic growth as a series of exponential … [modes], then it seems hard to escape the conclusion that the world economy will likely see a very dramatic change within the next century, to a new economic growth mode with a doubling time perhaps as short as two weeks.” Is that still your expectation for the 21st century?

Robin Hanson: It's my expectation for the next couple of centuries. Whether it's the 21st isn’t quite so clear.

Has anything happened in the intervening two decades to make you think that something might happen sooner rather than later … or rather, just later?

Just later, I'm afraid. I mean, we have a lot of people hyping AI at the moment, right?

Sure, I may be one of them on occasion.

There are a lot of people expecting rapid progress soon. And so, I think I've had a long enough baseline there to think, “No, maybe not.” But let's go with the priors.

Is it a technological mechanism that will cause this? Is it AI? Is it that we find the right general-purpose technology, and then that will launch us into very, very rapid growth?

That would be my best guess. But just to be clear for our listeners: if we just look at history, we seem to see these exponential modes. There are, say, four of them so far (if we go pre-human). The modes are relatively steady and then have pretty sharp transitions. That is, the transition to a growth rate of 50 or 200 times faster happens within less than a doubling time.

So what was the last mode?

We're in industry at the moment: doubles roughly every 15 years, started around 1800 or 1700. The previous mode was farming, doubled every thousand years. And so, in roughly less than a thousand years, we saw this rapid transition to our current thing, less than the doubling time. The previous mode before that was foraging, where humans doubled roughly every quarter million years. And in definitely less than a quarter million years, we saw a transition there. So then the prediction is that we will see another transition, and it will happen in less than 15 years, to a faster growth mode. And then if you look at the previous increases in growth rates, they were, again, a factor of 60 to 200. And so, that's what you'd be looking for in the next mode. Now, obviously, I want to say you're just looking at a small data set here. Four events. You can't be too confident. But, come on, you’ve got to guess that maybe a next one would happen.
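To make the arithmetic concrete, here is a minimal, illustrative sketch of the doubling-time math Robin describes above; the mode durations and the 60 to 200 speed-up factors are the rough figures from this conversation, and the “two weeks” figure comes from the 2000 paper quoted earlier.

```python
# Rough, illustrative growth-mode arithmetic (figures rounded from the conversation).
modes = {
    "foraging": 250_000,  # approximate doubling time in years
    "farming":    1_000,
    "industry":      15,
}

# Speed-up between successive modes = ratio of their doubling times.
print("foraging -> farming :", modes["foraging"] / modes["farming"])   # ~250x
print("farming  -> industry:", modes["farming"] / modes["industry"])   # ~67x

# If the next mode is 60 to 200 times faster than industry's ~15-year doubling:
for factor in (60, 200):
    weeks = 15 * 52 / factor
    print(f"{factor}x faster -> doubling every ~{weeks:.0f} weeks")
# Prints roughly 13 weeks at 60x and 4 weeks at 200x, i.e. doubling every month
# or quarter, in the ballpark of the "two weeks" figure from the 2000 paper.
```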

If you go back to that late ‘90s period, there was a lot of optimism. If you pick up Wired magazine back then, [there was] plenty of optimism that something was happening, that we were on the verge of something. One of my favorite examples, and a sort of non-technologist example, was a report from Lehman Brothers from December 1999. It was called “Beyond 2000.” And it was full of predictions, maybe not talking about exponential growth, but how we were in for a period of very fast growth, like 1960s-style growth. It was a very bullish prediction for the next two decades. Now Lehman did not make it another decade itself. These predictions don't seem to have panned out — maybe you think I'm being overly pessimistic on what's happened over the past 20 years — but do you think it was because we didn't understand the technology that was supposedly going to drive these changes? Did we do something wrong? Or is it just that a lot of people who love tech love the idea of growth, and we all just got too excited?

I think it's just a really hard problem. We're in this world. We're living with it. It's growing really fast. Again, doubling every 15 years. And we've long had this sense that it's possible for something much bigger. So automation, the possibility of robots, AI: It sat in the background for a long time. And people have been wondering, “Is that coming? And if it's coming, it looks like a really big deal.” And roughly every 30 years, I'd say, we've seen these bursts of interest in AI and public concern, like media articles, you know…

We had the ‘60s. Now we have the ‘90s…

The ‘60s, ‘90s, and now again, 2020. Every 30 years, a burst of interest and concern about something that's not crazy. Like, it might well happen. And if it was going to happen, then the kind of precursor you might expect to see is investors realizing it's about to happen and bidding up assets that were going to be important for that to really high levels. And that's what you did see around ‘99. A lot of people thought, “Well, this might be it.”

Right. The market test for the singularity seemed to be passing.

A test that is not actually being passed quite so much at the moment.

Right.

So, in some sense, you had a better story then in terms of, look, the investors seem to believe in this.

You could also look at harder economic numbers, productivity numbers, and so on.

Right. And we've had a steady increase in automation over, you know, centuries. But people keep wondering, “We're about to have a new kind of automation. And if we are, will we see that in new kinds of demos or new kinds of jobs?” And people have been looking out for these signs of, “Are we about to enter a new era?” And that's been the big issue. It's like, “Will this time be different?” And so, I’ve got to say this time, at the moment, doesn't look different. But eventually, there will be a “this time” that'll be different. And then it'll be really different. So it's not crazy to be watching out for this and maybe taking some chances betting on it.

The signs of an approaching acceleration

If we were approaching a kind of acceleration, a leap forward, what would be the signs? Would it just be kind of what we saw in the ‘90s?

So the scenario is, within a 15-year period, maybe a five-year period, we go from a current 4 percent growth rate, doubling every 15 years, to maybe doubling every month. A crazy-high doubling rate. And that would have to be on the basis of some new technology, and therefore, investment. So you'd have to see a new promising technology that a lot of people think could potentially be big. And then a lot of investment going into that, a lot of investors saying, “Yeah, there's a pretty big chance this will be it.” And not just financial investors. You would expect to see people — like college students deciding to major in that, people moving to wherever it is. That would be the big sign: investment of all kinds moving toward it. And the key thing is, you would see actual big, fast productivity increases. There'd be some companies and cities that were just booming. You were talking about stagnation recently: The ‘60s were faster than now, but that's within a factor of two. Well, we're talking about a factor of 60 to 200.

So we don't need to spend a lot of time on the data measurement issues. Like, “Is productivity up 1.7 percent, 2.1?”

If you're a greedy investor and you want to be really in on this early so you buy it cheap before everybody else, then you’ve got to be looking at those early indicators. But if you’re like the rest of us wondering, “Do I change my job? Do I change my career?” then you might as well wait and wait till you see something really big. So even at the moment, we’ve got a lot of exciting demos: DALL-E, GPT-3, things like that. But if you ask for commercial impact and ask them, “How much money are people making?” they shrug their shoulders and they say “Soon, maybe.” But that's what I would be looking for in those things. When people are generating a lot of revenue — so it’s a lot of customers making a lot of money — then that's the sort of thing to maybe consider.

Something I've written about, probably too often, is the Long Bets website. And two economists, Robert Gordon and Erik Brynjolfsson, have made a long bet. Gordon takes the role of techno-pessimist, Brynjolfsson techno-optimist. Let me just briefly read the bet in case you don't happen to have it memorized: “Private Nonfarm business productivity growth will average over 1.8 percent per year from the first quarter of 2020 to the last quarter of 2029.” Now, if it does that, that's an acceleration. Brynjolfsson says yes. Gordon says no…

But you'd want to pick a bigger cutoff. Productivity growth in the last decade is maybe half that, right? So they're looking at a doubling. And a doubling is news, right? But, honestly, a doubling is within the usual fluctuation. If you look over, say, the last 200 years, sometimes some cities grow faster, some industries grow faster. You know, we have this steady growth rate, but it contains fluctuations. I think the key thing, as always, when you're looking for a regime change — there's an average and a fluctuation — is: when is a new fluctuation out of the range of the previous ones? And that's when I would start to really pay attention, when it's not just the typical magnitude. So honestly, that's within the range of the typical magnitudes you might expect if we just had an unusually productive new technology, even if we stay in the same mode for another century.
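As an aside, here is a minimal sketch of the “outside the usual fluctuation range” test Robin is describing; the decade-by-decade growth numbers below are hypothetical placeholders, not real productivity data.

```python
# Hypothetical decade-average productivity growth rates (percent per year),
# made up purely to illustrate the test.
history = [2.8, 2.2, 1.4, 1.6, 2.5, 2.9, 1.3]
latest = 2.1  # e.g., a doubling from a ~1 percent decade

lo, hi = min(history), max(history)

# Crude regime-change test: only pay attention when the new value falls
# outside the band of fluctuations seen within the current growth mode.
if lo <= latest <= hi:
    print("Within the usual fluctuation range: likely the same growth mode")
else:
    print("Outside the usual range: worth watching for a possible regime change")
```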

When you look at the enthusiasm we had at the turn of this century, do you think we did the things that would encourage rapid growth? Did we create a better ecosystem of growth over the past 20 years or a worse one?

I don’t think the past 20 years have been especially a deviation. But I think slowly since around 1970, we have seen a decline in our support for innovation. I think increasing regulations, increasing size of organizations in response to regulation, and just a lot of barriers. And even more disturbingly, I think it’s worth noting, we’ve seen a convergence of regulation around the world. If there were 150 countries, each of which had different independent regulatory regimes, I would be less concerned. Because if one nation messes it up and doesn’t allow things, some other nation might pick up the slack. But we’ve actually seen pretty strong convergence, even in this global pandemic. So, for example, challenge trials were an idea voiced early on, but no nation allowed them. Anywhere. And even now, they’ve hardly been tried. And if you look at nuclear energy, the electromagnetic spectrum, organ sales, medical experimentation — just look at a lot of different regulatory areas, even airplanes — you just see an enormous convergence worldwide. And that's a problem because it means we're blocking innovation the same everywhere. And so there's just no place to go to try something new.

Global governance and risk aversion

There's always concern in Europe about their own productivity, about their technological growth. And they’re always putting out white papers in Europe about what [they] can do. And I remember reading that somebody decided that Europe's comparative advantage was in regulation. Like that was Europe’s superpower: regulation.

Yeah, sure.

And speaking of convergence, a lot of people who want to regulate the tech industry here have been looking to what Europe is doing. But Europe has not shown a lot of tech progress. They don't generate the big technology companies. So that, to me, is unsettling. Not only are we converging, but we're converging sometimes toward the least productive areas of the advanced world.

In a lot of people's minds, the key thing is the unsafe dangers that tech might provide. And they look to Europe and they say, “Look how they're providing security there. Look at all the protections they're offering against the various kinds of insecurity we could have. Surely, we want to copy them for that.”

I don't want to copy them for that. I’m willing to take a few risks.

But many people want that level of security. So I'm actually concerned about this over the coming centuries. I think this trend is actually a trend toward not just stronger global governance, but stronger global community or even mobs, if we call it that. That is the reason why nuclear energy is regulated the same everywhere: the regulators in each place are part of a world community, and they each want to be respected in that community. And in order to be respected, they need to conform to what the rest of the community thinks. And that's going to just keep happening more over the coming centuries, I fear.

One of my favorite shows, and one of the more realistic science-fiction shows and book series, is The Expanse, which takes place a couple hundred years in the future, where there's a global government — which seems to be a democratic global government. I’m not sure how efficient it is. I’m not sure how entrepreneurial it is. Certainly the evidence seems to be that global governance does not lead to a vibrant, trial-and-error, experimenting kind of ecology. But just the opposite: one that focuses on safety and caution and risk aversion.

And it’s going to get a lot worse. I have a book called The Age of Em: Work, Love, and Life when Robots Rule the Earth, and it’s about very radical changes in technology. And most people who read about that, they go, “Oh, that's terrible. We need more regulations to stop that.” I think if you just look toward the longer run of changes, most people, when they start to imagine the large changes that will be possible, they want to stop that and put limits and control it somehow. And that's going to give even more of an impetus to global governance. That is, once you realize how our children might become radically different from us, then that scares people. And they really, then, want global governance to limit that.

I fear this is going to be the biggest choice humanity ever makes, which is, in the next few centuries we will probably have stronger global governance, stronger global community, and we will credit it for solving many problems, including war and global warming and inequality and things like that. We will like the sense that we've all come together and we get to decide what changes are allowed and what aren't. And we limit how strange our children can be. And even though we will have given up on some things, we will just enjoy … because that's a very ancient human sense, to want to be part of a community and decide together. And then a few centuries from now, there will come this day when it's possible for a colony ship to leave the solar system to go elsewhere. And we will know by then that if we allow that to happen, that's the end of the era of shared governance. From that point on, competition reaffirms itself, war reaffirms itself. The descendants who come out there will then compete with each other and come back here and impose their will here, probably. And that scares the hell out of people.


Indeed, that’s the point of [The Expanse]. It's kind of a mixed bag with how successful Earth’s been. They didn't kill themselves in nuclear war, at least. But the geopolitics just continues and that doesn't change. We're still human beings, even if we happen to be living on Mars or Europa. All that conflict will just reemerge.

Although, I think it gets the scale wrong there. I think as long as we stay in the solar system, a central government will be able to impose its rule on outlying colonies. The solar system is pretty transparent. Anywhere in the solar system you are, if you're doing something somebody doesn't like, they can see you and they can throw something at you and hit you. And so I think a central government will be feasible within the solar system for quite some time. But once you get to other star systems, that ends. It's not feasible to punish colonies 20 light-years away when you don't get the message of what they did [until] 20 years later. That just becomes infeasible then. I would think The Expanse is telling a more human story because it's happening within this solar system. But I think, in fact, this world government becomes a solar system government, and it allows expansion to the solar system on its terms. But it would then be even stronger as a centralized governance community which prevents change.

Thinking about the future like an economist

In a recent blog post, you wrote that when you think about the future, you try to think about it as an economist. You use economic analysis “to predict the social consequences of a particular envisioned future technology.” Have futurists not done that? Futurism has changed. I've written a lot about the classic 1960s futurists who were these very big, imaginative thinkers. They tended to be pretty optimistic. And then they tended to get pessimistic. And then futurism became kind of like marketing, like these were brand awareness people, not really big thinkers. When they approached it, did they approach it as technologists? Did they approach it as sociologists? Are economists just not interested in this subject?

Good question. So I'd say there are three standard kinds of futurists. One kind of futurist is a short-term marketing consultant who's basically telling you which way the colors will go or the market demand will go in the short term.

Is neon green in or lime green in, or something.

And that's economically valuable. Those people should definitely exist. Then there's a more aspirational, inspirational kind of futurist. And that's changed over the decades, depending on what people want to be inspired by or afraid of. In the ‘50s, ‘60s, it might be about America going out and becoming powerful. Or later it's about the environment, and then it's about inequality and gender relations. In some sense, science fiction is another kind of futurism. And these two tend to be related in the sense that science fiction mainly focuses on an indirect way to tell metaphorical stories about us. Because we're not so interested in the future, really, we're interested in us. Those are futures serving various kinds of communities, but neither of them are that realistically oriented. They're not focused on what's likely to actually happen. They're focused on what will inspire people or entertain people or make people afraid or tell a morality tale.

But if you're interested in what's actually going to happen, then my claim is you want to just take our standard best theories and just straightforwardly apply them in a thoughtful way. So many people, when they talk about the future, they say, “It's just impossible to say anything about the future. No one could possibly know; therefore, science fiction speculations are the best we can possibly do. You might as well go with that.”

And I think that's just wrong. My demonstration in The Age of Em is to say, if you take a very specific technology scenario, you can just turn the crank with Econ 101, Sociology 101, Electrical Engineering 101, all the standard things, and just apply it to that scenario. And you can just say a lot. But what you will find out is that it's weird. It's not very inspiring, and it doesn't tell the perfect horror story of what you should avoid. It's just a complicated mess. And that's what you should expect, because that's what we would seem to our ancestors. [For] somebody 200 or 2000 years ago, our world doesn't make a good morality tale for them. First of all, they would just have trouble getting their head around it. Why did that happen? And [what] does that even mean? And then they're not so sure what to like or dislike about it, because it's just too weird. If you're trying to tell a nice morality tale [you have] simple heroes and villains, right? And this is too messy. The real futures you should just predict are going to be too messy to be a simple morality tale. They're going to be weird, and that's going to make them hard to deal with.

The stories we tell ourselves about the future

Do you think it matters, the kinds of stories we tell ourselves about what the future could hold? My bias is, I think it does. If all we paint for people is a really gloomy one, then not only is it depressing, it's like, “What are we even doing here?” Because if we're going to move forward, if we're going to take risks with technology, there needs to be some sort of payoff. And yet, it seems like a lot of the culture continues. We mentioned The Expanse, which, by the modern standard of a lot of science fiction, I find to be pretty optimistic. Some people say, “Well, it's not optimistic because half the population is on a basic income and there's war.” But, hey, there are people. Global warming didn't kill everybody. Nuclear war didn't kill everybody. We continued. We advanced. Not perfect, but society seems to be progressing. Has that mattered, do you think, the fact that we've been telling ourselves such terrible stories about the future? We used to tell much better ones.

The first-order theory about change is that change doesn't really happen because people anticipated or planned for it or voted on it. Mostly this world has been changing as a side effect of lots of local economic interests and technological interests and pursuits. The world is just on this train with nobody driving, and that's scary and should be scary, I guess. So to the first order, it doesn't really matter what stories we tell or how we think about the future, because we haven't actually been planning for the future. We haven't actually been choosing the future.

It kind of happens while we're doing something else.

The side effect of other things. But that's the first order, that's the zeroth-order effect. The next-order effect might be … look, places in the world will vary in the extent to which they win or lose over the long run. And there are things that can radically influence that. So being too cautious and playing it safe too much and being comfortable, predictably, will probably lead you to not win the future. If you're interested in having us — whoever us is — win the future or have a bright, dynamic future, then you’d like “us” to be a little more ambitious about such things. I would think it is a complement: The more we are excited about the future, and the future requires changes, the more we are telling ourselves, “Well, yeah, this change is painful, but that's the kind of thing you have to do if you want to get where we're going.”

Longtermism and innovation

If you've been reading the New York Times lately or the New Yorker, a lot of the coverage is related to something called “effective altruism,” which is the idea that there are big, existential problems facing the world, and we should be thinking a lot harder about them because people in the future matter too, not just us. And we should be spending money on these problems. We should be doing more research on these problems. What do you think about this movement? It sounds logical.

Well, if you just compare it to all the other movements out there and their priorities, I’ve got to give this one credit. Obviously, the future is important.

They are thinking directly about it. And they have ideas.

They are trying to be conscious about that and proactive and altruistic about that. And that's certainly great compared to the vast majority of other activity. Now, I have some complaints, but overall, I'm happy to praise this sort of thing. The risk is, as with most futurism, that even though we're not conscious of it, what we're really doing is sort of projecting our issues now into the future and sort of arguing about future stuff by talking about our stuff. So you might say people seem to be really concerned about the future of global warming in two centuries, but they're not at all interested in all the other stuff that might happen in two centuries. It's like, what's the difference there? They might say global warming lets them tell this anti-materialist story that they'd want to tell anyway: why it's bad to be materialist, and so why cutting back on material stuff is good. And it's sort of a pro-environment story. I fear that that's also happening to some degree in effective altruism. But that's just what you should expect for humans in general. Effective altruists, in terms of their focus on the future, are overwhelmingly focused, as far as I can tell, on artificial intelligence risk. And I think that's a bit misdirected. In a big world I don’t mind it …

My concern is that we'll be super cautious and before we have developed anything that could really create existential risk … we will never get to the point where it's so powerful because, like the Luddites, we'll have quashed it early on out of fear.

A friend of mine is Eric Drexler, who years ago was known for talking about nanotechnology. Nanotechnology is still a technology in the future. And he experienced something that made him a little unsure whether he should have said all the things he said, which is that once you can describe a vivid future, the first thing everybody focuses on is almost all the things that can go wrong. Then they set up policy to try to focus on preventing the things that can go wrong. That's where the whole conversation goes. And then people distance themselves from it. He found that many people distanced themselves from nanotechnology until they could take over the word, because in their minds it reflected these terrible risks. So people wanted to not even talk about that. But you could ask: if he had just inspired people to make the technology but not talked about the larger policy risks, maybe that would have been better? It might in fact be true that the world today is so broken that if ordinary people and policymakers don't know about a future risk, the world's better off, because at least they won't mess it up by trying to limit it and control it too early and too crudely.

Then the challenge is, maybe you want the technologists who might make it to hear about it and get inspired, but you don't want everybody else to be inspired to control it and correct it and channel it and prepare for it. Because honestly, that seems to go pretty badly. I guess the question is: for what technology that people did see well ahead of time did they not come up with terrible scenarios to worry about?

For example, television: People didn't think about television very much ahead of time. And when it came, a lot of people watched it. And a lot of people complained about that. But imagine you could have seen ahead of time that in 20 years people were going to spend five hours a day watching this thing. If that's an accurate prediction, people would've freaked out.

Or cars: As you may know, in the late 1800s, people just did not envision the future of cars. When they envisioned the future of transportation, they saw dirigibles and trains and submarines, even, but not cars. Because cars were these individual things. And if they had envisioned the actual future of cars — automobile accidents, individual people controlling a thing going down the street at 80 miles an hour — they might have thought, “That's terrible. We can't allow that.” And you have to wonder… It was only in the United States, really, that cars took off. There's a sense in which the world had rapid technological progress around 1900 or so because the US was an exception worldwide. A lot of technologies were only really tried in the US, like even radio, and then the rest of the world copied and followed because the US had so much success with them.

I think if you want to pick a point where that optimistic ‘90s came to an end, it might have been, speaking of Wired magazine, the Bill Joy article, “Why the Future Doesn't Need Us.” Talking about nanotech and gray goo… Since you brought up nanotech and Eric Drexler, do you know what the state of that technology is? We had this nanotechnology initiative, but I don't think it was working on that kind of nanotech.

No, it wasn’t.

It was more like materials science. But as far as creating these replicating tiny machines…

The federal government had a nanotechnology initiative, where they basically took all the stuff they were doing that was dealing with small stuff and they relabeled it. They didn't really add more money. They just put it under a new initiative. And then they made sure nobody was doing anything like this sort of dangerous stuff that could cause what Eric was talking about.

Stuff you’d put in sunscreen…

Exactly. So there was still never much funding there. There's a sense in which, in many kinds of technology areas, somebody can envision ahead of time a new technology that would be possible if a concentrated effort went into a certain area in a certain way. And they're trying to inspire that. But absent that focused effort, you might not see it for a long time. That would be the simplest story about nanotech: We haven't seen the focused effort and resources that he had proposed. Now, that doesn't mean that, had we made those efforts, he would've succeeded. He could just be wrong about what was feasible and how soon. But nevertheless, that still seemed to be an exciting, promising technology that would've been worth the investment to try. And still is, I would say.

One concern I have about the notion of longtermism is that it seems to place a lot of emphasis on our ability to rally people, get them thinking long term, and take preparatory steps. And we've just gone through a pandemic which showed that we don't do that very well. And the way we dealt with it was not through preparation, but by being a rich, technologically advanced society that could come up with a vaccine. That's my kind of longtermism, in a way: being rich and technologically capable so you can react to the unexpected.

And that's because we allowed an exception in how vaccines were developed in that case. Had we gone with the usual way vaccines had been developed before, it would've taken a lot longer. So the problem is that when we make too many structures that restrain things, then we aren't able to quickly react to new circumstances. You probably know that most companies, they might have a forecasting department, but they don't fund it very much. They don't actually care that much. Almost everything they do is reactive in most organizations. That's just the fact of how most organizations work. Because, in fact, it is hard to prepare. It’s hard to anticipate things.

I'm not saying we shouldn't try to figure out ways to deflect asteroids. We should. To have this notion of longtermism over a broad scope of issues … that's fine. But I hope we don't forget the other part, which is making sure that we do the right things to create those innovative ecosystems where we do increase wealth, we do increase our technological capabilities to not be totally dependent on our best guesses right now.

Here's a scary example of how this thinking can go wrong, in my mind. In the longtermism community, there's this serious proposal that many people like, which is called the Long Reflection.

The Long Reflection, which is, we’ve solved all the problems and then we take a time out.

We stop allowing change for a while. And for a good long time, maybe a thousand years or even longer, we’re in this period where no change substantially happens. Then we talk a lot about what we could do to deal with things when things are allowed to change again. And we work it all out, and then we turn it back on and allow change. That's giving a lot of credit to this system of talking.

Who's talking? Are these post-humans talking? Or is it people like us?

It would be before the change, remember. So it would be people like us. I actually think this is this ancient human intuition from the forager world, before the farming era, where in the small band the way we made most important decisions was to sit down around the campfire and discuss it and then decide together and then do something. And that's, in some sense, how everybody wants to make all the big decisions. That's why they like a world government and a world community, because it goes back to that. But I honestly think we have to admit that just doesn't go very well lately. We're not actually very capable of having a discussion together and feeling all the options and making choices and then deciding together to do it. That's how we want to be able to work. And that's how we maybe should, but it's not how we are. I feel, with the Long Reflection, once we institutionalize a world where change isn't allowed, we would get pretty used to that world.

It seems very comfortable, and we'd start voting for security.

And then we wouldn’t really allow the Long Reflection to end, because that would be this risky step into the strange world. We would like the stable world we were in. And that would be the end of that.

I should say that I very much like Toby Ord's book, The Precipice. He's also one of my all-time favorite guests. He's really been a fantastic guest. Though I do have concerns about the Long Reflection.

Come back next Thursday for part two of my conversation with Robin Hanson.

