Faster, Please!
Faster, Please! — The Podcast
🤖 My chat (+transcript) with Google economist Guy Ben-Ishai on seizing the historic AI moment


Faster, Please! — The Podcast #52

Artificial intelligence may revolutionize the American economy, but whether we see that potential actualized depends on a few key factors: whether generative AI is a general purpose technology, whether the labor force makes a smooth pivot, how employers prioritize their resources, and whether the US chooses to take the lead in AI’s deployment. These are just a few of the topics I cover on the podcast today with Guy Ben-Ishai.

Ben-Ishai is the head of economic policy research at Google. He previously served as a principal at the Brattle Group and as chief economist in the office of the attorney general of the state of New York. He is also a co-author of the paper “AI and the Opportunity for Shared Prosperity: Lessons from the History of Technology and the Economy.”

In This Episode

  • Is gen AI a general purpose tech? (1:22)

  • Risks and benefits (7:46)

  • Barriers to a boom (14:27)

  • Investing in employees (19:16)

  • Human-complementing AI (25:29)

Below is a lightly edited transcript of our conversation.


Is GenAI a general purpose tech? (1:22)

Pethokoukis: Do you have any doubt that generative AI, and perhaps machine learning more broadly, is an important general purpose technology that will eventually make a substantial and measurable impact in the economic statistics and productivity and economic growth?

Ben-Ishai: The immediate response is absolutely, but let me unpack that: Do I have doubts about the immense potential of the technology? No, and I'm saying that very confidently, which is uncommon for an economist. We put together the paper that you initially cited at Google to look exactly at that question: When we say that AI marks a pivotal moment in human history, what does that actually mean for an economist? And I think the conclusion there, that we're looking not at an ordinary technology but rather at a general purpose technology, that is immense. That means that we're probably looking at the most transformative economic development of our generation. And to think, Jim, that the two of us are having a conversation about that today, that is historic. I feel incredibly privileged and lucky to think and work on these issues in our day and age.

But the second part of your question alluded not to the potential, but to the actual impact. And if there's one takeaway from that exercise, from the paper that we put together and from my conversations with so many academics and policymakers around the world, it is that this is not just a watershed moment, it's not just a pivotal moment in human history, it's a fragile moment as well. This story can easily be a story of missed opportunity. I think that we so easily take for granted the fact that, yes, we will of course develop AI and deploy it and apply it very successfully. And it's so easy to get caught in the moment, particularly as the nation that advanced the science. I think somewhere in the back of the minds of all of us, there's that presumption that we will be the global leaders in the deployment of AI. I am actually worried about that. To ensure that we are, it's a tumultuous, fragile, and delicate process that we've got to be really thoughtful about, with a lot of deliberate action about what we do, how we proceed, and how we ensure that we are indeed the ones that capitalize on the potential.

It's remarkable how quickly the narrative around American tech has shifted. Not long ago, Silicon Valley faced criticism for focusing on social media rather than groundbreaking innovations like the Apollo program or cancer cures. Now, they've unveiled generative AI, potentially the most significant technology of our era.

Regarding fragility, it's worth considering why AI might need special handling. Unlike the seamless diffusion of technologies like the internal combustion engine or electricity, AI seems more akin to nuclear power - a technology that was stifled by regulation. Do we need a proactive agenda to prevent AI's potential from being similarly constrained?

That's a great question, Jim. I'm so tempted to go back to the first part of your question about the importance of digital technologies. Economists get a really bad rap, but try to be a librarian these days. We tend to overlook the tremendous importance of information as a driver of economic growth in our economies. Just look at small businesses and the tremendous opportunities that digital technologies have provided them. To think that a mom-and-pop store today can actually run a marketing campaign, analyze its customer base on large databases, and export products to far markets: those are things that used to be the exclusive domain of just a few large companies and that today are available broadly and widely through digital technologies. And maybe it's the fault of economists that we are not shouting from the mountaintops frequently enough about the tremendous power of information, digital technologies, and the accumulation of knowledge as drivers of economic growth.

The application of knowledge and intelligence — that seems to me to be pretty important.

I cannot agree more! And I think, to a great degree, it explains some of the tremendous optimism around AI as a technology that really reduces the barriers to interact with technology and democratizes its use in a way that we haven't seen before.

Risks and benefits (7:46)

We quickly shifted from marveling at AI's potential to fixating on its risks — existential threats, job losses, and disinformation. But let's step back for a moment. Can you elaborate on why you see this as an exciting technology with significant benefits? It seems many people aren't fully aware of its upside potential.

That's a great question, and this is really the reason why we at Google, too, paused for a minute and wanted to think about this. We're in a sector where enthusiasm is in no short supply, so what does it actually mean when we say that this is a pivotal moment in human history? What does it mean for economists? I think it really boils down to this question of: Is AI an ordinary technology, or is it really a general purpose technology? That is the term of art that economists use, and I think it's actually important to pause for a minute and think about that, because it's critical. A general purpose technology is not just pervasive in use; it is a technology that enables productivity-enhancing applications to be applied across all segments of entire economies in ways that are not just advancing and accelerating economic growth, but are also expanding the frontier of innovation and technology. It's a source of ongoing and continual innovations.

And if you think about it for a minute, if you think about the prior general purpose technologies that we've had, if it's electricity, if it's personal computers, or it's the steam engine, their impact was tremendous. And at the time that they were launched, I think nobody had the perfect vision of where . . . we of course knew where we started, in the very same way that we do today about AI, but it's really difficult, if not impossible, to know where we will end. The compounding nature of these technologies is immense, particularly when you're looking at a general use technology and multi-domain technology that can lead to applications on such a broad basis. I don't think that today we can envision what new occupations, new applications, new sectors will emerge as a result of AI. And I think the fact that it's not an ordinary technology, but rather a general purpose technology, that is important, that does imply that we're probably looking at the most profound economic transformation in our generation. That is huge.

It's relatively straightforward to assess AI's ability to replicate current human tasks. But predicting the new possibilities it might unlock, like accelerating scientific discovery, is far more challenging. These potential upsides are difficult to quantify or model economically.

While we can more easily grasp potential downsides like job automation (which isn't necessarily negative), the upsides are less tangible. They depend on entrepreneurs creating new businesses and scientists leveraging AI for breakthroughs. This makes it harder to definitively argue that the benefits will outweigh any drawbacks.

Oh my God, Jim, I cannot agree more. I think there are really two issues, and you have written about this just recently, that come up, at least in my mind, as a reaction to some of the studies that really focus on measurement. We're trying to drill down on this question of, “What will be the productivity gain from AI over the next five or 10 years?” I don't want to dismiss that question —

And can you give it to me within three decimal points, right?

Exactly! But we're doing such a huge disservice as economists when we focus on that. I think it really pertains to two reasons that you brought up. The first one relates to measurement. These studies are primarily based on occupational exposure of existing work streams. Little do we know today about what new work streams, occupations, tasks, creativity, or human endeavors will actually be triggered by this new technology. In a way, we're really just looking under a flashlight rather than thinking about the broader issue, the broader economic benefits that will emerge, kind of like the unknown unknowns of this technology.

Just to put it in perspective, think about the printing press that led to a scientific revolution, the steam engine that led to an industrial revolution, the electronic circuit that led to the digital age. We are at that point with AI today, and to think that we're looking at SOC, standard occupational codes, to gauge the future impact on productivity, I think minimizes the value of our profession.

The other point that you touched on, which I think is so incredibly important: We're missing the point. It's really not about the third decimal point of our estimates. It's about the fact that we can reshape technology. Rather than measuring its benefit, let's actually make sure that we can capitalize on the potential. That is far more important than anything else. And at some point, we'll go back to your other question about fragility, but there are genuine barriers that we need to address collectively as a society. And if we are not going to do it, other countries will, right? And I think economists have a role in that conversation. I think that is the critical issue that we need to focus on.

Barriers to a boom (14:27)

We have a technology that is fast evolving right now, but it seems pretty darn important. It's hard to believe that we've only really been having this specific conversation about generative AI publicly for maybe a year and a half or so. So what are the barriers? If this turns out not to be an important technology that's widely diffused throughout the American economy, what went wrong? What are the barriers that concern you?

I think there are three main categories that we focus on. First and foremost, you need digital infrastructure. I think it's a misconception, and I think we will learn that very quickly over the next few years, that AI or digital infrastructure is limited to broadband. It is increasingly becoming more so about access to data, large data centers, and compute power. And I think not just the US, but many other countries, will realize, or are in the process of understanding very soon, that those are the type of investments that one needs to make in order to deploy the technology. That's one category.

Another one is the regulatory environment and the legal standards. You know, Jim, and this is something you've of course written about a lot, I don't think any single country deliberately chooses to fall behind, and I think that we often fail to recognize the long-term impact and unintended consequences of regulations. We, of course, have a duty to protect, and there are areas that raise concern, but we have to balance that duty to protect with the desire to capitalize on the potential, to foster innovation, and to make sure, at the end of the day, that we emerge as the global leaders of this technology, that we lead its deployment. I think the legal ecosystem is incredibly important in that respect, and an important dimension of the current and future competition between countries over the deployment of AI.

And the third one is our workforce readiness. We need a workforce transition strategy. Let me pause here for a minute. If our workforce is not ready for an AI transition, our employers and our companies will find it very difficult to actually implement and adopt AI. It's as simple as that. And if our companies do not adopt AI applications or technologies, we will quickly find out that we will fall behind. If you look at the history of our labor markets, we have been not just resilient, but consistently resilient, in our institutions and labor market operations, and we've also been highly effective at transitioning individuals from low-productivity to high-productivity occupations.

We used to be a primarily agrarian economy in the 19th century. We transitioned successfully to manufacturing, which at some point was about 27 percent of our workforce, now it's below 10 percent. From there, we switched on to services. We absorbed women into the workforce in an effective way. We have highly effective labor markets, which is a competitive advantage when we're thinking about global competition.

At the very same time, it's not without a cost. And in a lot of ways, I do think that it can be a double-edged sword, because the competitiveness of our labor markets also implies, at least factually, that the relationships between employer and employee tend to be less permanent than they are in other economies, and that implies that employers have less of an incentive to actually invest in employees. That may put us at a relative disadvantage compared to other countries that have longer relationships between employers and employees and can afford for employers to actually participate, whether it's through apprenticeships or training programs, in making sure that their workforce is ready for an AI transition. That, Jim, worries me. I think it's more than just making sure that individuals who may lose their jobs re-enter the workforce; it's really a strategic economic objective for us. Unless we take care of our workforce, we will find it exceedingly difficult to implement AI on an economy-wide basis.

Investing in employees (19:16)

While Washington isn't dictating data center construction, companies are investing heavily in this infrastructure. Shouldn't the same logic apply to workforce development? If understanding and working with AI technology is crucial for business survival, there's a strong private incentive to invest in employees' skills. This holds true even considering the unique structure of the American labor market compared to, say, Europe's.

Let me pause for a minute and take it back one step so that we can think about why investment in worker training and vocational programs is so difficult. Why are these programs so challenging, why do they perhaps create externalities more broadly, the way that we just discussed, and why are they ultimately a strategic concern for the US economy? Look, these programs are exceptionally difficult to get right in the ordinary course of business. We at Google have invested a tremendous amount of resources in these programs, which are not a core product for us. They're not even a monetizable product for us. And we're not the only ones; a lot of other tech companies have done the same, to be honest.

Now, what are the challenges with these programs? First and foremost, they have to provide education and skills that are actually relevant, that keep up to date with the advancements in technology. That is something that's really difficult to do. You also have to make sure that employers are actually buying in. We may have the best program, but unless it enables the individuals who graduate from it to signal to employers that they are highly qualified because they went through a program, let's say at Google, the program is simply not going to work.

And then the third thing, think about it from the employee perspective: For an individual, it's not about handing somebody a pamphlet and saying, “Hey, let's participate in this great program.” It's really about whether you can take time off, at a tremendous opportunity cost in time with your family, career, and work, to invest in a serious program whose outcome actually lands you in a better career, a more stable job that is better paid. Those things are tremendously difficult in the ordinary course of business, let alone when we're going through a transition where we don't even know today how tasks and occupations will evolve. Now, as I mentioned, private tech firms, because of their market expertise and access to occupational data that is, in a lot of ways, far better than the government data we have on occupations, are perfectly positioned to carry out those programs. The question is whether they can actually be carried out independently, unilaterally, without the collaboration of government agencies, whether local, state, or federal, and without the participation of employers, colleges, and other institutions —

It sounds to me like employers will have to be part of this.

For sure. Jim, maybe let me just mention one thing that we don't want to do. We've been in this movie before. Following NAFTA (the North American Free Trade Agreement), we had the trade adjustment programs, where we invested a lot of money in reskilling and retraining employees, and the results were minimal, at best. So I do think that this is the type of grand challenge, if you will, that no single actor can really solve independently. And I know that, as economists, we're naturally hesitant about government intervention, but what better role for a government can you think of than identifying a market failure that is of strategic importance to the US economy and, in a thoughtful way, collaborating with other relevant constituents to come up with solutions that are effective? Scaling those programs to a national level is going to be a real challenge. And I do think that there's a role for governments to actually lead those efforts, in collaboration with other constituents.

And of course you are aware of the sort of deep skepticism among people about these programs.

Yes.

So obviously we talk about innovation, technological innovation, we also need program innovation, education innovation here.

Jim, I’ve got to be perfectly honest here — and this is just my individual experience — as an economist, I would be lying if I said I didn't share that skepticism and concern. At the very same time, I think that we need to consider the other ramifications and what is truly at issue. We are at a certain disadvantage because of the lower incentives that our businesses, our employers, have to invest in employees, and we see it. Apprenticeship programs are one example of something that works phenomenally well in other places, but not in the US. So I do worry about that. In the paper, we didn't offer any prescriptive solutions, but we really highlight the challenge here: How do we find market-based, thoughtful solutions to scaling vocational programs so that our workforce can be ready for an AI transition?

Human-complementing AI (25:29)

I'm skeptical of the idea that we can guide AI development through policy to ensure it complements rather than just automates human work. It's unclear what policy levers could effectively achieve this — tweaking the tax code seems unlikely to produce specific AI outcomes.

But where we can make a difference is in human capital development. If we want AI that complements human skills and enables new business creation, we need to ensure people understand this technology. Currently, many don't, given its novelty. Focusing on education and skill development seems a more practical approach to shaping AI's impact.

You know, Jim, it’s really interesting. Chris Pissarides, the Nobel Prize winner from the London School of Economics, has a phenomenal paper about this question. He comes up with a very interesting finding: Countries that invest in the right regulatory environments and legal standards, that have the right infrastructure, and that have the right environment to foster innovation ultimately see fewer concerns about substitution, because the technology that's being advanced tends to be more complementary. As an economist, that makes a lot of sense to me.

Let me pause for a minute and explain why: There is a genuine concern about whether AI is being deployed or used to substitute for labor. And if you think about the Turing Trap that Erik Brynjolfsson has written about, this notion that you can come up with the most myopic, plug-and-play, cheapest AI application, put it into some individual function in your business, and replace existing work streams that are being done by humans, that is a genuine concern, particularly for firms that are looking for the highest rate of return, at the lowest cost, without really investing in and transforming their business.

As an economist, I can understand why that happens, but keep in mind that when it happens, those businesses are really failing to leverage and capitalize on the full potential of the technology. They're going for the plug-and-play, cheapest applications. That's not good for labor, because it leads to substitution. But it's certainly not good for the business itself, either. In a competitive market—and I think one thing that we need to stress is the importance of competition in our markets—you'd anticipate that the firms that actually go through the effort to invest, to reform, to transform their businesses, to reinvent themselves, are the ones that will prevail.

And I would think that would be a very powerful lesson for other businesses if that is indeed the case, right?

Exactly. And the question is, how can we promote this broader, more meaningful, more valuable application and adoption of AI? And I think it goes back to the fundamentals: You need the AI infrastructure, the right legal institutions and regulatory standards, and ultimately a workforce that is ready to transform. And I think once you put those together, I do believe (and I’m deliberately saying belief, because I don't know that we can really study this explicitly) that that will lead to more complementarity and augmentation, and less substitution.

Historical precedent suggests that extreme job loss scenarios, like robots taking all jobs, are unlikely. While AI will undoubtedly cause disruption, do you believe it will follow the pattern of past technologies? That is, replacing some tasks, enhancing others, and creating entirely new job categories. Given the policies we've discussed, are you confident that this balanced outcome is achievable with AI, or do you have doubts?

Oh my God, that's a tough question, Jim. You saved it for last.

I'll add an addendum, I'll add a qualifier: within the next 20 years. I don't know what it'll look like in 100 years from now, but within our immediate lifetimes as workers, you and me.

Let me address it this way. Yes, absolutely, I am, I want to say, cautiously optimistic, because if we learned one thing from the last century, a period that reflects the most advanced technological progress in human history, it's that we didn't witness an increase in unemployment and we didn't witness a decline in labor participation. That leads me to be optimistic about the future of AI as well. Having said that — and I think you alluded to this — history doesn't always repeat itself, and we've never faced a technology that can automate such a wide range of human tasks and activities. So that should be concerning for us. We should also mention that, even if AI does not lead to mass unemployment or to a net loss of jobs, there will be significant occupational and sectoral shifts even if we get this right, which I am optimistic about us doing. So that will be something that we will need to consider as well. So I would say, optimistic: absolutely. Cautiously optimistic: that's probably more correct.




Welcome to Faster, Please! — The Podcast. Several times a month, host Jim Pethokoukis will feature a lively conversation with a fascinating and provocative guest about how to make the world a better place by accelerating scientific discovery, technological innovation, and economic growth.