
💥 My chat (+transcript) with economist Eli Dourado on creating a fantastic future

Faster, Please! — The Podcast #60

Eli Dourado is on a mission to end the Great Stagnation, that half-century period of economic and technological disappointment that began in the 1970s (what I refer to in my 2023 book, The Conservative Futurist, as the Great Downshift). If we want to turn the page on this chapter of slow progress and deserved skepticism, we’re going to have to accept some creative destruction.

Dourado believes that the courage to embrace major change is key to meeting our potential. Today on Faster, Please! — The Podcast, I talk with Dourado about the future of the US job market and energy production in a world of AI.

Dourado is chief economist at the Abundance Institute, and author of his own Substack newsletter.

In This Episode

  • The dawn of a productivity boom? (1:26)

  • Growing pains of job market disruption (7:26)

  • The politics of productivity growth (15:20)

  • The future of clean energy (23:35)

  • The road to a breakthrough (30:25)

  • Reforming NEPA (35:19)

  • The state of pro-abundance (37:08)

Below is a lightly edited transcript of our conversation.


The dawn of a productivity boom? (1:26)

Pethokoukis:  Eli, welcome to the podcast.

Dourado: Thanks for having me on, Jim.

I would like to think that what we are experiencing here in the 2020s is the beginning of an extended productivity boom. We have some good economic data over the past year and a half. I know this is something that you care about, as I do . . . What's your best guess?

I think the seeds of a boom are there. There's plenty of low-hanging fruit, but I'd say the last few quarters have not been that great for TFP growth, which is what I follow most closely. So we actually peaked in TFP in the US in Q4 2021.

Now what is that, what is TFP?

Total factor productivity. So that's like if you look at inputs and how they translate into outputs.

Capital, labor . . .

Capital and labor, adjusting for quality, ideally. We've gotten less output for the amount of inputs in the last quarter than we did at the end of 2021. So slight negative growth over the last three years or so, but I think that you're right that there is room for optimism. Self-driving cars are coming. AI has immense potential.
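In standard growth accounting, TFP is measured as the Solow residual, the part of output growth that measured inputs cannot account for. A minimal sketch, assuming a Cobb-Douglas aggregate with capital share $\alpha$ (the functional form and the share parameter are textbook conventions, not something specified in the conversation):

$$
\mathrm{TFP}_t = \frac{Y_t}{K_t^{\alpha} L_t^{1-\alpha}},
\qquad
\frac{\Delta \mathrm{TFP}}{\mathrm{TFP}} \;\approx\; \frac{\Delta Y}{Y} - \alpha\,\frac{\Delta K}{K} - (1-\alpha)\,\frac{\Delta L}{L}
$$

Here $Y$ is output and $K$ and $L$ are the (ideally quality-adjusted) capital and labor inputs; a residual that has been roughly flat or slightly negative since late 2021 is what Dourado is describing.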

My worry with AI is that other sociopolitical limits in the economy will hold us back, and you kind of see it in the news breaking today as we're recording this: there's a strike at the ports on the East Coast, and what's at issue there is whether we're allowed to automate those jobs. Are the owners of the ports allowed to automate those jobs? And if the answer ends up being “no,” then you can say goodbye to productivity gains there. And so I really think the technology is there to do a lot more to kick off a productivity boom, but it's the sociopolitical factors that are slowing us down.

And I definitely want to talk about those sociopolitical factors, and the port strike is hopefully not a harbinger. But before I leave this topic, I suppose the super bullish case for productivity is that AI will be so transformative, and so transformative throughout the economy, both automating some things, helping us do other things more efficiently, and creating brand new high-productivity things for us to do that we will have maybe an extended 1990s, maybe more, I might hope?

What is your bullish case, and does that bullish case require what they call artificial general intelligence, or human-level, or human-level plus intelligence? Is that key? Because obviously some people are talking about that.

Can we have an important productivity boom from AI without actually reaching that kind of science-fictional technology?

I don't actually think that you need one-to-one replacement for humans, but you do need to get humans out of the loop in many, many more places. So if you think about the Baumol effect, the idea here is if there are parts of the economy that are unevenly growing in productivity, then that means that the parts of the economy where there is slow productivity growth, perhaps because you have human labor still being the bottleneck, those parts are going to end up being massive shares of the economy. They're going to be the healthcares, the educations, the parts of the economy where we have lots of inflation and increased costs. So the real boom here, to me, is can you replace as many humans as possible? Over the short run, you want to destroy jobs so that you can create a booming economy in which the jobs are still available, but living standards are much higher.
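The Baumol effect Dourado invokes can be made concrete with a stylized two-sector sketch (the setup is illustrative, not something spelled out in the conversation). If both sectors hire from the same labor market at wage $w$ and competitive prices track unit labor costs, then with labor productivity $a_i$ in sector $i$:

$$
p_i = \frac{w}{a_i}
\quad\Longrightarrow\quad
\frac{p_B}{p_A} = \frac{a_A}{a_B}
$$

If productivity $a_A$ in the dynamic sector grows at rate $g$ while $a_B$ in the stagnant sector stays flat, the stagnant sector's relative price rises at roughly $g$; with fairly inelastic demand for things like health care and education, its share of nominal spending climbs, which is the pattern described above.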

If you think about these big chunks of GDP like health, housing, energy, transportation, that's what you need to revolutionize, and so I can think of lots of ways in health that we could use AI to increase productivity. And I also have very little doubt that even current levels of AI could massively increase productivity in health. I think the big question is whether we will be allowed to do it.

So you don't need AGI that is as good as a human in every single thing that a human might do to limit the number of humans that are involved in providing healthcare. Housing, I think there's construction robots that maybe could do it, but I think the main limits are more sociopolitical, like land-use regulation. In energy, it's kind of the same thing, NIMBYism is kind of the biggest thing. Maybe there's an R&D component that AI could contribute to. And then in transportation, again, we could automate a lot of transportation. Some of that's happening with autonomous cars, but we are having trouble automating our ports, for example, and we're having trouble automating cargo railroads for similar make-work reasons.

I think the bull case is you don't need AGI, really, really sophisticated AI that can do everything, but you do need to be able to swap out human workers for even simpler AI functions.

I don't actually think that you need one-to-one replacement for humans, but you do need to get humans out of the loop in many, many more places.

Growing pains of job market disruption (7:26)

I'm sure that some people are hearing you talk about swapping out human workers, replacing human workers. They're thinking, this is a world of vast technologically-driven unemployment; that is what you are describing. Is that what you're describing?

Not at all. If we had the kind of productivity boom we're talking about, the economy would be so incredibly hot, and you need that hot market. People have all kinds of fantasies about how good AI could get. Can it substitute for a human in every single thing? And I'm not even positing that. I'm saying if we could just get it good enough to substitute in some things, the economy's going to be booming, it's going to be hot, there will still be things that humans can do that AIs can’t. There's lots of things that maybe we want a human to do, even if the AI can do it, and we will be able to afford that a lot better.

I think that the world I'm thinking about is one where living standards are way higher for everybody — and higher levels of equality, even. If you have the sort of uneven productivity gains that we've had for the last several decades, where tech does really well, but every other part of the economy does badly, well, that drives a lot of regional inequality, that drives a lot of different kinds of demographic inequality, and if we had broad-based productivity growth, that means better living standards for everybody, and I think that's what we should aim for.

When I talk about what you've been referring to as these sociopolitical factors or how we might slow down progress, slow down automation, the whimsical example I use is there being a law saying that yes, you can have kiosks in every McDonald's, but you have to have an employee standing next to the kiosk to actually punch the buttons.

As you mentioned with this port worker strike, we don't need my scenario. That is kind of what's happening on these ports, where there could be a lot more automation, but because of both unions and our acquiescence to these unions, we don't have the kind of automation — forget about sci-fi — that already exists in other places in the world. And I wonder if that doesn't sort of encapsulate, at least in this country, the challenge: Can we get our heads around the idea that it's okay in the long run, that there will be some downsides, and some people might be worse off, and we need to take care of those people, but that's the disruption we need to tolerate to move forward?

You can't have a growing economy where there's no churn, where there's no displacement, where there's no dynamism. You need to be able to accept some level of change. I sympathize with people whose jobs get destroyed by automation. It is hard, but it's much less hard if the economy is super hot because we've been prioritizing productivity growth, and if that were the case, I think we'd find new jobs for those people very quickly. The process is not automatic, and it's much slower when you have low productivity growth and a stagnant economy than it is when you have high productivity growth and a booming economy.

The question I always get is, what about the 60-year-old guy? What's he going to do? And I'm not sure I have a much better answer. Maybe there's other jobs, but it's tough to transition, so maybe the answer there is you cut him a check, you cut that 60-year-old a check, and if you have a high-productivity economy, you have the resources for that to be an option.

Right! So that's the other thing: we can afford to be generous with people if we have a really rapidly growing economy. We don't have the resources if we're stagnating, and if we're already overextended fiscally, that's a terrible position to be in because you can't actually afford to be generous. And if there are people that truly, like you said, maybe they're very old and it doesn't make sense to retrain, or something like that, they're near retirement, then yeah, absolutely, we can afford that much better when GDP is much higher.

Where do you think, as a nation, our head is at as far as embracing or not being fearful of disruption from technological change? If I only looked at where our head was at with trade, I would be very, very worried about entering a period of significant technological disruption, and I would assume that we will see lots and lots of pushback if AI, for instance, is the kind of important, transformative, general purpose technology that I hope it is.

Again, if I look at trade, I think, “Boy, there's going to be a lot of pushback.” Then again, when I think about risk broadly, and maybe it's not quite the same thing, I think, “Well, then again, we seem to be more embracing of nuclear energy, which shows maybe — it's not the same thing, but it shows a greater risk tolerance.” And I'm always thinking, what's our societal risk tolerance? Where do you think we're at right now?

I think most people, most Americans, don't actually think in those terms. I think most Americans just think about, “How are things going for me?” They kind of evaluate their own life, and if their communities, or whatever, have been struggling due to trade stuff, or something like that, they'll be against it. So I think the people who think in these more high-level terms are the societal elites, and I think normal people who have just lived under 50 years of stagnation are kind of distrustful of the elites right now: “I don't pay attention to policy that closely, and my life is bad, or at least in some dimensions it's not as good as I wanted it to be; it hasn't had the increase that my parents' generation had,” or something like that. And they're very distrustful of elites, and they're very mad, and you see this nihilistic populism popping up.

You see kind of a diverse array of responses to this nihilistic populism. Some people might say, “Well yeah, elites really have messed up and we need to do what the common people want.” And then other people are like, “No, we can't do that. We need to stay the course.” But I think that there's a hybrid response, where it's like, the elites really have done badly, but we don't just want to do what the populists want, we want to have better elite-led policies, which include things like: we have to take productivity growth seriously, we can't just paper over a lot of the tensions and the conflicts that arise from that, we need to embrace them head-on and do everything we can to produce an economy that is productive, that works for everybody, but maybe not in the way that the populists think it will work.

You can't have a growing economy where there's no churn, where there's no displacement, where there's no dynamism. You need to be able to accept some level of change.

The politics of productivity growth (15:20)

I would love to see what American politics looks like if the rest of this decade we saw the kind of economic productivity and wage growth that we saw in the fat part of the 1990s. We act like the current environment, that's our reality, and that's our reality as far as the eye can see, but I'll tell you, in the early ’90s, there was a lot of gloom and doom about the economy, about productivity, how fast we could grow, the rise and fall of great powers, and America was overstretched, and after really three or four years of strong growth, it's like America Triumphant. And I’m wondering if that would be the politics of 2030 if we were able to generate that kind of boom.

Yeah, I think that's totally right. And if you look at total factor productivity, which is my KPI [key performance indicator] or whatever, if you look at 1995 to 2005, you were back to almost two percent growth, which is what we had from 1920 to 1973. So you had a slow period from 1973 to 1995, and an even slower period since 2005, and you get back to that two percent. That's the magic number. I think if we had TFP at two percent, that changes everything. That's a game-changer for politics, for civility, for social stability, we'd really be going places if we had that.
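A back-of-the-envelope compounding illustration of why the two percent figure is such a game-changer (the half-percent comparison rate is an illustrative stand-in for the post-2005 pace, not a number cited in the conversation):

$$
1.02^{25} \approx 1.64
\qquad\text{vs.}\qquad
1.005^{25} \approx 1.13
$$

Over a 25-year horizon, two percent TFP growth leaves the economy roughly 64 percent more productive from technology and organization alone, versus about 13 percent at a half-percent pace.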

I was mentioning our reaction to trade and nuclear power. The obvious one, which I should have mentioned, is how we are reacting to AI right now. I think it's a good sign that Congress has not produced some sort of mega regulation bill, that this recent bill in California was not signed by Governor Newsom. Congress has spent time meeting with technologists and economists trying to learn something about AI, both the benefits and risks.

And I think the fact that it seems like, even though there was this rush at some point where we needed to have a pause, we needed to quickly regulate it, that seems to have slowed down, and I think that's a good sign that perhaps we're able to hit a good balance here between wanting to embrace the upside and not utterly panicking that we're producing the Terminator.

Absolutely. I think AI is something where the benefits are very clear, we're starting to see them already. The harms are extremely hypothetical, it's not evidence-based, it’s really a lot of sci-fi scenarios. I think the right attitude in that kind of world is to let things ride for a while. If there are harms that arise, we can address them in narrowly tailored ways.

I think government is sometimes criticized for being reactive, but reactive is the right approach for a lot of issues. You don't want to slow things down preemptively. You want to react to real facts on the ground. And if we need to react quickly, okay, we'll react quickly, but in a narrowly tailored way that addresses real harms, not just hypothetical stuff.

I love what you're saying there about reaction. I'm a big preparer. I love preparation. If I'm going to go anywhere, I over-prepare for all eventualities; I will bring a messenger bag so if the world should end while I'm out, I'll be okay. I love to prepare. But one lesson I draw from the pandemic is that preparation only gets you so far, because before the pandemic, there were a gazillion white papers about the possibility of a pandemic, all kinds of plans. As a culture, we were sort of marinating in pandemic-apocalypse films, maybe about turning us into zombies rather than giving us a disease.

And then when we finally have a pandemic, it's like, “Where are the respirators? Where's this, where's that? We didn't have enough of this.” And so, while I'm sure preparation is great, what really helped us is that we reacted. We reacted in real time because we're a rich country, we're a technologically advanced country, and we came up with a technological fix in a vaccine. To me — and again, I'm not sure if this is how you meant it — the power of being able to react effectively, boy, that's a pretty good capability for a well-functioning country.

Yeah, and a slight difference between the pandemic and AI is that it was not the first pandemic. AI is just such a unique set of theorized risks that people are like, nothing like this has ever happened before. This is like the introduction of a brand new super-intelligent species to the planet. This is the first time two intelligent species — if you want to count humans as an intelligent species — will share the planet at the same time. And the theorization here is just so far out of the spectrum of our experience that it is hard to even see how you could prepare if those risks materialize. The only intelligent thing that is likely to do any good is to keep our eyes open and see what the harms are as they materialize.

The problem with coming up with remedies for theorized harms is that the remedies never go away once they're implemented. Safety regulation never gets laxer over time. And so if you're implementing safety regulations because of real safety problems, okay, fair play, to some extent. I think in some dimensions we're too safe, but it kind of makes sense. But if you're doing it to just theorized harms that have never materialized, I think that's a big mistake.

And you've written about this fairly recently. To me, there's a good kind of complexity in an economy: you have a high-functioning economy where people can connect, and colleges and universities, and businesses, and entrepreneurs, these networks work together to produce computer chips or large language models. That's a good kind of complexity.

But then there's the other kind of complexity, in which you just have layer after layer of bureaucracy, and programs meant to solve a problem that was a problem 20 years ago and is no longer a problem, and that kind of complexity, that's not the kind we want, right?

Yeah, I think you want the sophistication in the economy, but in a way that works for everybody. There have to be benefits to it. If you increase the burden of complexity without producing any net benefits, then people start to rebel against it, they start to be indifferent to or apathetic about the health of society. And there's an anthropologist, Joseph Tainter, who wrote this book, The Collapse of Complex Societies, and his theory is that once you have complexity without the marginal benefits of complexity, you're in for a shock, at some point, when people start becoming apathetic or hostile to the current order. And complexity grows and shrinks as a system; you can't ever just control it, like, “Oh, let's do more, or let's do one percent less complexity.” Once people start to rebel against it, it snowballs and you could end up with a very bad situation.

The problem with coming up with remedies for theorized harms is that the remedies never go away once they're implemented. Safety regulation never gets laxer over time.

The future of clean energy (23:35)

Nuclear versus solar versus geothermal: What do you like there?

Solar panels have come down massively in cost, and — in terms of the number of doublings of deployment, in sort of log-deployment space — we're not that far away from the cost being so low that . . . you could almost round the panels' cost to free. It almost makes sense. And the problem is, if you look at the solar electricity costs on utility-scale farms, they have not really moved in the last few years. And I think this is in large part because we're designing the solar farms wrong. We're not designing them for the era of cheap panels; we're still designing them to track the sun, with complex mechanisms, and too much space between the panels, and too much mowing required, and all that. So as we adapt to the new paradigm of very, very cheap panels, I think that you'll get lower solar costs.
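The “doublings of deployment” framing is the standard learning-curve relationship, sometimes called Wright's law. As a sketch, with cumulative deployment $x$, an experience exponent $b$, and a learning rate $LR$ per doubling (the parameter values, and the roughly 20 percent per-doubling figure often quoted for solar modules, are outside estimates rather than numbers Dourado gives here):

$$
C(x) = C_0 \left(\frac{x}{x_0}\right)^{-b},
\qquad
LR = 1 - 2^{-b}
$$

Each doubling of cumulative deployment multiplies module cost by $2^{-b}$; Dourado's point is that the rest of the system, farm design, trackers, land, and maintenance, has not been falling on the same curve.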

I think the other thing that is complementary to all of these sources, actually, is battery innovation. I'm very excited about one particular new cathode chemistry that maybe could drive the cost way, way down for lithium-ion batteries. And so you're in a world where solar and batteries are potentially very, very cheap. And so for nuclear and geothermal, they have some advantages over solar.

If batteries get cheap, the advantage is not the firmness. I think people think that the advantage of these sources versus solar is just that solar is variable and the other sources are constant, but that's less of an advantage if batteries are cheap, and I think you also want batteries to be able to respond to the fluctuations in demand. If we had an entirely nuclear-powered economy, the nuclear plants would actually want to run at a constant output. You don't want to ramp them up and down very quickly, but demand fluctuates. And so you still want batteries to be a buffer there and be the lowest-cost way to balance the network.

So the thing that nuclear and geothermal can really compete on is land density — even gigawatt-scale nuclear, where you have these giant exclusion zones and tons of land around them and so on, is still denser per acre than solar, and geothermal is maybe even denser because you don't need that exclusion zone, and so they could be much, much better in terms of density.

There's an advantage there — if you want a lot of power in a city, you probably want that to be supplied by nuclear. If you're more rural, you could do solar. Another possibility is portability. There are future versions of nuclear that are more mobile. People have talked about space-based nuclear for being able to go to Mars or something like that; you want nuclear thermal propulsion, and you can't do that with solar. Or powering submarines and stuff. So I think there's always a place for nuclear.

And then the other advantage for both nuclear and geothermal is if you don't need to produce electricity. So if you're producing just the heat — it turns out a big part of the cost of any sort of thermal source is converting it to electricity. You have to have these giant steam turbines that are very capital intensive. And so, if you just need heat, say up to 600 degrees C heat for nuclear and maybe 400 degrees C heat for deep geothermal, those are really good sources for doing that, and maybe if we had continued advances in drilling technology for geothermal or if we could figure out the regulatory stuff for nuclear, I think you could have very cheap industrial thermal energy from either of those sources.

Nuclear and geothermal are competing against a backdrop where we'll probably have pretty cheap solar, but there's still some advantages and these sources still have some utility and we should get good at both of them.

What do you think that energy mix looks like in 25 years, the electrical generation mix for this country?

It would be surprising if it wasn't a lot of solar. My friend Casey Handmer thinks it's going to be 90-plus percent solar, and I think that's a little crazy.

Do you happen to know what the percent is now?

Oh, I don't know. It's probably like three or four percent or something like that, off the top of my head, maybe less. The other question is, what's the base? I think a lot of people just want to replace the energy we have now with clean energy, and we need to be thinking much more about growing the energy supply. And so I think there's a question of how much solar we could deploy, but then also how much other stuff are we deploying? Let's do a lot of everything. You do have to drive the cost of some of these sources down a bit for it to make sense, but I think we can.

And then the real gains happen when maybe some of these . . . What if you could do some sort of conversion without steam turbines? What if you had ways to convert the thermal energy to electricity without running a steam cycle, which is essentially hundreds-of-years-old technology?

You're just finding a new way to heat it up.

Yeah, so look at why solar has come down so much: it's solid-state, it's easy to manufacture, and any manufacturing process improvements just carry forward to all future solar panels. If we had thermoelectric generators or other ways of converting the heat to electricity, that could be really great, and then there are other kinds of nuclear that are solid-state conversion, like alphavoltaics and things like that. So you could have a box with cobalt-60 in it that's decaying and producing particles that you're converting to electricity, and that would be solid state. It's sometimes called a “nuclear battery”; it's not really a battery, but that would maybe be a way to power cars, something like that. That would be awesome.

Nuclear and geothermal are competing against a backdrop where we'll probably have pretty cheap solar, but there's still some advantages and these sources still have some utility and we should get good at both of them.

The road to a breakthrough (30:25)

When, if ever, this century, do you think we get AGI, and when, if ever, this century, do you think we get a commercial fusion reactor?

AGI, I'm still not really 100 percent clear on how it's defined. I think that AI will get increasingly more capable, and I think that's an exciting future. Do we even need to emulate every part of the human brain in silicon? I don't think so. Do we need it to have emotions? Do we need it to have its own independent drive? We definitely don't need it to be a perfect replica of a human brain in terms of every capability, but I think it will get more capable over time. I think there are going to be a lot of hidden ways in which AGI, or powerful AI, or highly capable AI is going to happen slower than we think.

I think my base reasoning behind this is, if you look at neurons versus transistors, neurons are about a million times more energy efficient. So six orders of magnitude is kind of what we have to traverse to get something that is equally capable. And maybe there's some tricks or whatever that you can do that means you don't have to be equally capable on an energy basis, but you still need to get four orders of magnitude better. And then the other thing about it is that, if you look at current margins that people are working on, things like the ChatGPT o1 model, it's a lot slower, it does a lot of token generation behind the scenes to get the answer, and I think that that's the kind of stuff that could maybe drive progress.
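The arithmetic behind the “four orders of magnitude” remark, spelled out (the two orders of magnitude allowed for architectural and algorithmic tricks is implied by the quote rather than stated precisely):

$$
\underbrace{10^{6}}_{\text{neuron vs. transistor energy gap}} \;\div\; \underbrace{10^{2}}_{\text{assumed savings from tricks}} \;=\; 10^{4}
$$

Even granting a hundredfold saving from shortcuts, a ten-thousandfold efficiency gap remains, which is why Dourado expects highly capable AI to arrive more slowly and expensively than the most aggressive forecasts assume.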

Let's say we have a world where you ask an AI for a cure for cancer, and you run it on a big data center, and it runs for six months or a year, and then it spits out the answer: here's the cure for cancer. That's still a world where we have very, very powerful AI, but it's slow and consumes a lot of resources, but it's still ultimately worth it. I think that might be where we're headed, in a way: that kind of setup. And so is that AGI? Kind of. It's not operating the same way as humans are. So this is different.

You're not going to fall in love with it. It's nothing like that.

I'm pretty uncertain about AGI: partly what it means, but also what it even looks like in the end.

Fusion, I'll give you a hot take here, which is, I think there will be net energy gain fusion developed in this decade. I think that someone will have it. I think that probably the first people to get it will be doing it in a completely uneconomical way that will never work economically. Most of the people that are working on fusion are working on DT fusion, which is another one of these sources that basically produces heat, and then you use a steam turbine, and then that produces electricity. I think that the steam turbine is just a killer in terms of the added costs.

So all these sources are basically fancy ways of boiling water and then running a steam turbine. So what you want to look at is: What is the cheapest way to boil water? With fission, you just hold two magic rocks together and they boil water. With geothermal, you drill a hole in the ground and send water down there and it boils. With these DT fusion reactors, you build the most complex machine mankind has ever seen, and you use that to boil water — that's not going to be as cheap as fission should be. So I think that we'll struggle to compete with fission if we can ever get our act together.

There's other kinds of fusion called aneutronic fusion. That's harder to do. I think it’s still possible, maybe this decade, that someone will crack it, but that's harder to do. But the nice thing about that is that you can harvest electricity from those plasmas without a steam turbine. So if it's going to be economical fusion, I think it's plausible by 2030 somebody could crack it, but it would be that aneutronic version, and it is just technically a bit harder. You'll see some reports in a couple of years, like, “Oh, these people, they got net energy out of a fusion reactor.” It's like, okay, it's a scientific breakthrough, but look for the cost. Is it going to be competitive with these other sources?

Do we even need to emulate every part of the human brain in silicon? I don't think so . . . We definitely don't need it to be a perfect replica of a human brain in terms of every capability, but I think it will get more capable over time.

Reforming NEPA (35:19)

Do you think we've sort of got a handle on it, that we've begun to wrangle the National Environmental Policy Act [NEPA] to the ground? Where are we on reforming it so that it is not the kind of obstacle to progress that you've written so much about and been a real leader on?

My base scenario is we're going to get reforms on it every two years. So we had some a year and a half ago with the Fiscal Responsibility Act, and I think we're possibly going to get some in the lame-duck session this year in Congress. None of these reforms are going to go far enough, is the bottom line. I think that the problem isn't going to go away, and so the pressure is going to continue to be there, and we're just going to keep having reforms every two years.

And a lot of this is driven by the climate movement. So say what you will about the climate movement, they're the only mainstream movement in America right now that's not complacent, and they're going to keep pushing for, we’ve got to do something that lets us build. If we want to transform American industry, that means we've got to build, and NEPA gets in the way of building, so it's going to have to go.

So I think my baseline case is we get some reforms this year in the lame duck, probably again two years later, probably again two years later, and then maybe like 2030, people have kind of had enough and they just say, “Oh, let's just repeal this thing. We keep trying to reform it, it doesn't work.” And I think you could repeal NEPA and the environment would be fine. I am pro-environment, but you don't need NEPA to protect the environment. I think it's just a matter of coming to terms with, this is a bad law and probably shouldn't exist.

I am pro-environment, but you don't need NEPA to protect the environment. I think it's just a matter of coming to terms with, this is a bad law and probably shouldn't exist.

The state of pro-abundance (37:08)

What is the state of, broadly, a pro-abundance worldview? What is the state of that worldview in both parties right now?

I think there's a growing, but very small, part of each party that is thinking in these terms, and I think the vision is not really concrete yet. I think they don't actually know what they're trying to achieve, but they kind of understand that it's something in this general direction that we've been talking about. My hope is that, obviously, the faction in both parties that is thinking this way grows, but then it also develops a little bit more of a concrete understanding of the future that we're trying to build, because I think without that more-concrete vision, you're not actually necessarily tackling the right obstacles, and you need to know where you're trying to go for you to be able to figure out what the obstacles are and what the problems you need to address are.

