🚀 Faster, Please! — The Podcast #28

📈 A conversation with economist Simon Johnson on his new book 'Power and Progress' with Daron Acemoglu

Does technological progress automatically translate into higher wages, better standards of living, and widely shared prosperity? Or is it necessary to steer the development of technological improvement to ensure the benefits don't accrue only to the few? In a new book, two well-known economists argue the latter. I'm joined in this episode by one of the authors, Simon Johnson.

Simon is the Kurtz Professor of Entrepreneurship at MIT. He and Daron Acemoglu are authors of the new book Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity. Simon is also co-author with Jonathan Gruber of 2019's Jump-Starting America, now out in a new paperback.

In This Episode

  • Is America too optimistic about technology? (1:24)

  • Ensuring progress is widely shared (11:10)

  • What about Big Tech? (15:22)

  • Can we really nudge transformational technology? (19:54)

  • Evaluating the Biden administration’s science policy (24:14)

Below is an edited transcript of our conversation.


Is America too optimistic about technology?

James Pethokoukis: Let me start with a sentence or two from the prologue: “People understand that not everything promised by Bill Gates, Elon Musk, or even Steve Jobs will likely come to pass. But, as a world, we have become infused by their techno-optimism. Everyone everywhere should innovate as much as they can, figure out what works, and iron out the rough edges later.” Later, you write that we are living in a “blindly optimistic” age.

Rather, I see a lot of pessimism about AI. A very high percentage of people want an AI pause. People are very down on the concept of autonomous driving. They're very worried that these new technologies will only make climate change worse. We don't seem techno-optimistic to me, and we certainly don't see it in our media. So let me start with this: Why do you think we're techno-optimistic right now, outside of Silicon Valley?

Simon Johnson: Well, Silicon Valley is a very influential culture, as you know, nationally and internationally. So I think there's a deep-running techno-optimistic trend, Jim. But I also think you put your finger on something very important, which is since we finished the book and turned in the final version in November, I think the advance of ChatGPT and some of our increased awareness that this is not science fiction — this is actual, this is real, and the people who are developing this stuff have no idea how it works, for example—I wouldn't call it pessimism, but I think there's a moment of hesitation and concern. So good, let's have the discussion now about what we're inventing, and why, and could we put it on a better path?

When I think about past periods where a lot of tech progress was reflected in our economic statistics, whether in productivity growth or economic growth more broadly, those were also periods of very rapid wage growth that people think very fondly about. I would love to have a repeat of 1995-2000. If we had technologies that could manage that kind of impact on the economy, what would be the downside? It seems like that would be great.

I would love a repeat of the Henry Ford experience, actually, Jim. Henry Ford, as you know, automated the manufacturing of cars. We went from producing tens of thousands of cars in the US to, 30 years later, producing millions of cars because of Ford's automation. But at the same time Ford and all the people around him — a lot of entrepreneurs, of course, working with Ford and rivals to Ford — created a lot of new jobs, new tasks. And that's the key balance. When you have a big phase of automation, and we did have another one during and after World War II, you also need to create a lot of new tasks, new jobs. Demand for labor was very strong. And I think it's that balance we need. A lot of the concerns, the justified concerns about AI you were mentioning a moment ago, are about losing jobs very quickly, faster than we can create other tasks, jobs, and demand for labor in other, non-automating parts of the economy.

Your book is a book of deep economic history. It's the kind of book I absolutely love. I wonder if you could just give us a bit of a flavor of the history of what's interesting in this book about those two subjects and how they interact.

We tried to go back as far as possible in economic and human history, recorded history, to understand technological transformations. Big ones. And it turns out you can go back about 1,000 years with quite reliable information. There are some things you can say about earlier periods, a little bit more speculative, to be honest. But 1,000 years is a very interesting time period, Jim, because as you know, that's pretty much the rise-of-Europe timeframe. A thousand years ago, Europe was a nothing place on the edge of a not very important part of one continent. And through a series of technological transformations, which took a long time to get going — and that's part of the medieval story that we explore — [there was] a huge amount of innovativeness in those societies. But it did not translate into shared prosperity, and it was very stop-start. I'm talking about over the period of centuries.

Then, eventually, we get this Industrial Revolution, which is initially in Britain, in England, but it's also shared fairly quickly around northwest Europe: individual entrepreneurship, private capital, private ownership, markets as a dominating part of how you organize that economy. And eventually, not immediately, but eventually that becomes the basis for shared prosperity. And of course, that becomes the basis for American society. And the Americans by the 1850s to 1880s, depending how you want to cut it, have actually figured out industrial technology and boosted the demand for labor more than the Europeans ever imagined. Then the Americans are in the lead, and we had a very good 20th century combining private capital, private innovation with some (I would say) selective public interventions where a private initiative didn't work. And this actually carried a lot of countries, including countries in that European tradition, through to around 1980. Since 1980, it's become much more bumpy. We've had a widening of income inequality and much more questioning of the economic and political model.

Going back into the history: Oftentimes people treat the periods before the steam engine and the loom as periods of no innovation. But there was innovation. It just didn't have the same impact, and it wasn't sustained. We were doing things as a society before the Industrial Revolution. There was progress.

There was technological progress, technological change. Absolutely.

The compass, the printing press, gunpowder — these are advances.

Right. The Europeans, of course, were sort of the magpies of the world at that point. A lot of those innovations began in China. Some of them began in the Arab world. But the Europeans got their hands on them and used them, sometimes for military purposes. They figured out civilian uses as well. But they were very innovative. Some people got rich in those societies, but only a very few people, mostly the kings and their hangers-on and the church. Broadly shared prosperity did not come through, because it was mostly forced labor. People did not own their labor. There was some private property, but there weren't individual rights of the kind that we regard as absolutely central to prosperity in the United States. They're in the Constitution for a reason: it was feudalism, and the remains of that feudal system, that our ancestors in the United States were escaping from. So they said, “Let's enumerate those rights and make sure we don't lose them.” That's coming out of 800 years of hard-learned history, I would say, at that point. And that's one reason why, not at the moment of independence but within 50 to 70 years, the American economy was really clicking and innovating and breaking through on multiple technologies and sharing prosperity in a way that nobody had ever seen before in the world.

Before that period in the 1800s, the problem was not the occasional good idea that changed something or made somebody rich; it was having sustained progress, sustained prosperity that eventually spread out wide among the people.

Absolutely. And I think it was a question of who benefited and who was empowered and who could go on and invent the next things. Joel Mokyr, who's an economic historian at Northwestern, one of our favorite authors, has written about the sort of revolution of tinkerers. And that's actually my family history. My family, as far back as we can go, were carpenters out of Chesterfield in the north of England. They made screws for a hundred years starting in the mid-19th century in Sheffield. They would employ a couple of people at any one time, maybe no more than eight, maybe as few as two. They probably initially polished blades of knives and eventually ended up making specialized screws. But very, very small scale. There was not a lot of formal education in the family or among the workforce; it was all about relationships with other manufacturers. It was being plugged into that community. Alfred Marshall talked about these clusters and cities of regional entrepreneurship. That's exactly where I'm from. So, yes, I think that was a really key breakthrough: having the institutions, the politics, and the social pressure that could sustain that kind of economic initiative.

In the middle of the Industrial Revolution, late 1800s, what were the changes that we saw that made sure the gains from this economic progress were widely shared?

If we're talking about the United States, of course, the key moment is the mechanization of agriculture, particularly across the West. So people left their farms in Nebraska or somewhere and moved to Chicago to work for McCormick, making the reapers that allowed more people to leave their farms. You needed a couple of things for that. One was, of course, better sanitation and basic infrastructure in the big cities. Chicago grew from nothing to be one of the largest cities in the world in a period of about a decade and a half. That requires infrastructure that comes from local government. And then there's the key piece, Jim, which is education. There was what's known as a “high school movement.” Again, very local. I don't think the national government knew much about it until it was upon them. [It was] pushing to educate more people in basic literacy and numeracy and to be better workers. At the same time, we did have from the national government, of course, particularly in the context of the Civil War, the land grant universities, of which MIT, by the way, is very proudly one — one of only two that became private, for various reasons. But we were initially founded to support the manufacturing arts in Massachusetts. That was a state initiative, but it was made possible by a funding arrangement, a land swap, actually, with the federal government.

Ensuring progress is widely shared

The kinds of interventions you've already mentioned — education and infrastructure — seem like very non-controversial, public-good kinds of things. How do those kinds of interventions translate into the 2020s and 2030s in advanced countries, including the United States? Do we need to do something different?

Well, I think we should do those, particularly education, better and more, and update them really quickly. I think people are going to agree on that in principle; there may be argument about how exactly you do it. I do think there are three things that should be on the table for serious discussion and even potential bipartisan agreement. The first is what Jaron Lanier calls “data dignity,” which is basically [that] you and I should own the data that we produce. This is an extension of private property rights, from the right of the political spectrum; the left would probably have other terminology for it. But what's basically happening, and where the value is being created in these large language models, is that those models are taking data that they find for free — actually, it's not really free, but it's digital data that's not well protected on the internet — and they're using that to train these very large models. And it's that training process that's already generating, and will generate even more, huge value and potential monopoly power for the incumbents there. So Jaron's point is, that's not right. Let's have a proper organization and recognition of property rights, and you can pay for it. And then it also gives consumers the ability to bargain, potentially, with these large monopolies to get some technologies developed rather than others.

The second thing is surveillance. I think everyone on the right and the left should be very uncomfortable with where we are on surveillance, Jim, where we've slipped to already on surveillance, and also where AI is going to take us. Shoshana Zuboff has a great book, The Age of Surveillance Capitalism, on exactly this, going through where we are in the workplace and where we are in our society. And then of course there's China and what they're doing in terms of surveillance, which I'm sure we're not going to do. In fact, I think the next division of the world may be between the low-surveillance or safeguarded-surveillance places, which I hope will include the US, and the high-surveillance places, which will be pretty much authoritarian places, I would suggest. That's a really different approach to the technology of how you interact with workers, citizens, everybody in all their various roles in life.

The third one we're probably not going to agree on right away, but I do want us to have some serious discussion about it, is corporate taxation. Kim Clausing from UCLA, a former senior Treasury person, points out that while we do have a graduated corporate tax system in the US, bigger companies pay less: smaller companies' effective tax rate is higher than bigger companies', because the big ones move their profits around the globe. That's not fair and that's not right. And she proposes that we tax mega profits, above $10 billion, for example, at a higher rate than we tax smaller profits, to give the big companies that are very successful, very profitable an incentive to make themselves smaller. The reason I like Kim's proposal is I want competition, not just between companies directly in terms of what they're offering, but also between business models and mental models. And I think what we're getting too much of from Microsoft and Google and the others who are likely to become the big players is machine intelligence, as they call it, which basically means replacing people as much as possible. We argue for machine usefulness, which is also, by the way, a strong tradition in computer science — it's not the ascendant tradition or ascendant idea right now — that is, focusing technology on making humans more effective. Like this Zoom call is making us more effective: we didn't have to get ourselves in the same room, we are able to leverage our time, we're able to organize our lives differently.

Find those kinds of opportunities, particularly for lower-income workers. We are not getting that right now because, I think, we lack competition in the development of these models; there's too much concentration, Jim. You joked at the beginning that Silicon Valley has the only optimists. Maybe that's true, but they're the optimists that matter, because they're the ones who control the development of the technology. Almost all those strings are in their hands right now, and you need to give them an incentive to give up some of that. I'm sure we can agree that having the government, or the courts, break things up is going to be a big mess and not where we want to go.

What about Big Tech?

Does it suggest caution, as far as worrying about corporate size or breaking up these companies, that these big advances, which could revolutionize the economy, are coming from the very companies you're worried about and are interested in breaking up? Doesn't it argue that they're kind of doing something right, if that's the source of this great innovation, which may be one of the biggest innovations of our life?

Yes, potentially. We're trying to be modest and we're trying to be careful here, Jim. We're saying, if you make these really big profits, you pay the higher tax rate. And then you have a conversation with your shareholders about, do we really need to be so big? When Standard Oil was broken up before World War I, it was broken into 25 or 26 pieces, and Rockefeller became richer. That created value for shareholders. More competition was also good; I think we can say safely at this distance that it was good for consumers. Competition for consumers is something I think we should always attempt to pursue, but also competition in mental models, competition for ideas, getting more plurality of ideas out there in the tech sphere. I think that's really important, Jim. I believe this can be — and we wrote the book in part because we believe it is — a very big moment in the technological choices that we humans have made and will continue to make. This is a big one. But if it's all in the hands of a few people, we're less likely to get better outcomes than if it's in the hands of hundreds of people or thousands of people. More competition for ideas, more competition to develop ways to make machines and algorithms useful to people. That's our focus.

You have OpenAI, a company which Microsoft invested in, and Google/Alphabet is working on their version. And I think now you have Facebook and Amazon devoting more resources. Elon Musk is talking about creating his own version. Plus you have a lot of companies taking those models and doing things with them. It seems like there are a lot of things going on, a lot of ferment. It doesn't seem to me like the kind of staid business environment where you have one or two companies doing something. It seems like a fairly vibrant innovation ecology right now.

Of course, if you're right, Jim, then nobody is going to make mega excess profits, and then we don't have to worry about the tax rate proposal that I made. My proposal, or Kim's proposal, would have bite only if there are a couple of very big winners that make hundreds of billions of dollars. I'm not a computer scientist, I’m an economist, but it seems…

Right, but it seems like those mega profits might be competed away, so I'd be careful right now about breaking up Google into eight Googlettes.

Fine. I'm not trying to break them up. I'm saying give them a tax system so they confront that incentive and they can discuss it with their shareholders. The people who follow this closely, my computer science colleagues at MIT, for example, feel that Microsoft and OpenAI are in the lead by some distance. Google, which is working very closely with Anthropic, which broke away from OpenAI, is probably either a close second or a slightly distant second. It's sort of like Manchester City versus the rest of the Premier League right now. But the others you mentioned, Facebook, Amazon, are some years behind. And years are a big deal here. Elon Musk, of course, proposed a pause in AI development and then suggested he get to launch his own AI business — I suppose to take advantage of the pause.

That’s a little suspicious.

There's not going to be a pause. And there's not going to be a pause in part because we know that China is developing AI capabilities. While I am not arguing for confrontation with China over this or other things necessarily, we do have to be cognizant that there's a major national security dimension to this technology. And it is not in the interest of the United States to fall behind anyone. And I'm sure the Chinese are having the same discussion. That's going to keep us going pretty much full speed. And I think it is also the case that many corporate executives can see this is a potential winner-take-all. And on the applications, the thinking is that we're going to be talking very soon about a sort of supply chain where you have these fundamental large language models, the [general-purpose technology] type, at the bottom, and then people can build applications on top of them. Which would make a lot of sense, right? You can focus on healthcare, you can focus on finance, but you'll be choosing between, right now it looks like, one or two of the large language models. Which does suggest really big upstream profits for those fundamental suppliers, just as Microsoft has been making money since the mid-1980s, really.

Can we really nudge transformational technology?

With an important technology that will evolve in directions we can't predict, can we really nudge it with a little bit of tax policy, equalizing tax rates on capital and labor? Can we really nudge it in the kind of direction that we might want? If generative AI, or machine learning more broadly, is as significant as some people say, including folks at MIT and Stanford, I just wonder if we're really operating at the margins here. The technology is going to be what the technology is. And maybe you make sure we can retrain people, and we can change education, and maybe we need to worry a bit about taxing this profit away if you're worried about corporate power. But as far as how the technology interacts with the workplace and the tasks people do, can we really influence it that much?

I think that's the big question of the day, Jim. Absolutely. This is a book, not a policy memo, because we feel that the bigger issue is to have the discussion. To confront the question, as you pose it, and to discuss, what do we as a society want? How do we develop the technology that we need? Are we solving the problems that we really want to solve? Historically, of course, we didn't have many of those conversations. But we weren't as rich then as we are now. Hopefully we're more aware of our history now and more aware of the impact of these choice points. And so it's exactly to have that discussion and to say, if this is as big as people say, how are we going to move it in various directions?

I like, as you know, to propose specific policy. I do think, particularly in Washington, it's the specifics that people want to seize. “What do we mean by surveillance? What do we mean by safeguards over surveillance? How could you operationalize protections against excessive surveillance? By whom? By employers, by the police, by companies from whom you buy stuff? By your local government?” That conversation still needs to be had. And it's a very big, broad conversation. So let's have it quickly, because the technology is moving very quickly.

What lessons should we draw from the more recent history of concerns about technology? I think of nuclear technology, about which there were lots of concerns, and we passed lots of rules. We basically paused that technology. And now we're sitting here in the 2020s worried about climate change. That, to me, is a recent, powerful example of the dangers of trying to slow or delay a technology that may evolve in ways you don't understand, but that can also solve problems you don't anticipate. To me, the history of technology, at least in the United States, over the past half century has been one of being overly cautious, not pedal-to-the-metal, gung-ho, let's-just-keep-going-as-fast-as-possible.

As I think you may remember, Jim, I'm a big advocate for more science spending and more innovation in some fundamental sense across the whole economy, because I think that generates prosperity and jobs. In my previous book, Jump-Starting America, we went through the nuclear history, as you flag. And I think the key thing there is that at the beginning of that industry, right after World War II, there was over-optimism on the part of the engineers. The Atomic Energy Commission chair famously promised free electricity, and there was very little discussion about safety. And people who raised the issues of safety were kind of shunted to one side, with the result that Three Mile Island, a little bit, and Chernobyl, a lot, came as a big shock to public consciousness about the technology. I'm in favor of more innovation…

I wonder if we've overlearned that lesson, you know? I think we may have overlearned it.

Yes. I think that's quite possibly right. And we are not calling for an end to innovation in AI just because somebody made a movie in which AI takes over the world. Not at all. What we're saying is that there are choices: you can go more towards replacing people, that's automation, or more towards new task creation, that's machine usefulness. And that's not a new thing. That's a very old, thousand-year or maybe longer tension in the history of innovations and how we manage them. And we have an opportunity now, because we're a more conscious, aware, and richer society, to try and pull ourselves through various means — and it might not be tax policy, I'll grant you that, but through various means — towards what we want. And I think what we want is more good jobs. We always want more good jobs, Jim. And we always want to produce useful things. We don't want to replace people just for the sake of replacement.

Evaluating the Biden administration’s science policy

Since you brought it up, I'm going to take the opportunity to ask you a final question about some of your other work about trying to create technology hubs across America. It seems like those ideas have to some degree made their way into policy during the Biden administration. What do you think of its efforts on trying to spend more on R&D and trying to spread that spending across America and trying to make sure it's not just Austin and Boston and New York and San Francisco and LA as areas of great innovation?

In the Chips and Science Act, there are two parts: chips and science. The part that we were really advocating for is the science part. And it's exactly what you said, Jim: you spend more on science and spread it around the country. There are a lot of people in this country who are innovative, or want to be innovative. There are some really good resources, private sector but also public sector, public-sector universities, for example, in almost every state, where you could have more innovation in some basic knowledge-creation sense. And that can become commercialized, that can become private initiative, that can generate jobs. That's what we are supporting. And I think the Science Act absolutely did internalize that, in part because people learned some hard lessons during COVID, for example.

The CHIPS Act is not what we were advocating for, and it's going to be really interesting to see how that plays out. That's more, I would say, conventional, somewhat old-fashioned industrial policy: pick a sector, back a sector, invest in the sector from the public-sector perspective. Chips are of course a really important sector, and the discussion of AI is absolutely part of that. And of course we're also worried, in part because of COVID but also because of the rise of China, about the security of supply chains, including chips that are produced in, let's say, parts of Asia. I think there are some grounds for that. There are also some issues: How much does it cost to build and operate a state-of-the-art fab in the US versus Taiwan or South Korea, or even China for that matter? Those issues need to be confronted and measured. I think it's good that we're having a go. I'm a big believer in more science, more science spending, more responsible deployment of it, and more discussion of how to do that. The chips industrial policy, we'll see. I hope something like it works. It would be quite interesting to pursue further, but we have had some bumps in those roads before.


Welcome to Faster, Please! — The Podcast. Several times a month, host Jim Pethokoukis will feature a lively conversation with a fascinating and provocative guest about how to make the world a better place by accelerating scientific discovery, technological innovation, and economic growth.