My fellow pro-growth/progress/abundance Up Wingers,
The innovation landscape is facing a difficult paradox: Even as R&D investment has increased, productivity per dollar invested is in decline. In his recent co-authored paper, The next innovation revolution—powered by AI, Michael Chui explores AI as a possible solution to this dilemma.
Today on Faster, Please! — The Podcast, Chui and I explore the vast potential for AI-augmented research and the challenges and opportunities that come with applying it to the real world.
Chui is a senior fellow at QuantumBlack, McKinsey’s AI unit, where he leads McKinsey research in AI, automation, and the future of work.
In This Episode
The R&D productivity problem (01:21)
The AI solution (6:13)
The business-adoption bottleneck (11:55)
The man-machine team (18:06)
Are we ready? (19:33)
Below is a lightly edited transcript of our conversation.
The R&D productivity problem (01:21)
All the easy stuff, we already figured out. So the low-hanging fruit has been picked, things are getting harder and harder.
Pethokoukis: Do we understand what explains this phenomenon where we seem to be doing lots of science, and we're spending lots of money on R&D, but the actual productivity of that R&D is declining? Do we have a good explanation for that?
I don't know if we have just one good explanation. The folks that we both know have been working both on the causes of this and on some of the potential solutions, but I think it's a bit of a hidden problem. I don't think everyone understands that there is a set of people who have looked at this — quite notably Nick Bloom at Stanford, who published a somewhat famous paper that some people are familiar with. But it is surprising in some sense.
At one level, it's amazing what science and engineering have been able to do. We continue to see these incredible advances, whether it's in AI, or biotechnology, or whatever; but also, what Nick and other researchers have discovered is that we are producing less for every dollar we spend on R&D. That's the bit of a paradox, or the challenge, that we see. Some of the research we've been doing tries to understand: Can AI contribute to bending those curves?
. . . I'm a computer scientist by training. I love this idea of Moore's Law: Every couple of years you can double the number of transistors you can put on a chip, or whatever, for the same amount of money. There's something called “Eroom's Law,” which is Moore spelled backwards, and basically it says: For decades in the pharmaceutical industry, the number of compounds or drugs you would produce for every billion dollars of R&D got cut in half every nine years. That's obviously moving in the wrong direction. That's a challenge I don't think everyone is aware of, but one that we need to address.
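Eroom's Law is just exponential decay with a nine-year half-life. A minimal sketch of the arithmetic (the starting figure here is hypothetical, chosen only to make the numbers round):

```python
# Eroom's Law: drug approvals per billion R&D dollars halve roughly every
# nine years. The starting output below is hypothetical, for illustration.
def eroom_output(start_output: float, years: float, halving_period: float = 9.0) -> float:
    """Approvals per $1B of R&D after `years`, halving every `halving_period` years."""
    return start_output * 0.5 ** (years / halving_period)

# Four halving periods (36 years) cut output per R&D dollar 16-fold.
print(eroom_output(16.0, 36.0))  # → 1.0
```

Run forward, the same compounding explains why the trend alarms people: each additional decade roughly doubles the R&D cost of a new drug, all else equal.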
I suppose, in a way, it does make sense that as we tackle harder problems, and we climb the tree of knowledge, that it's going to take more time, maybe more researchers, the researchers themselves may have to spend more time in school, so it may be a bit of a hidden problem, but it makes some intuitive sense to me.
I think there's a way to think about it that way, which is: All the easy stuff, we already figured out. So the low-hanging fruit has been picked, and things are getting harder and harder. It's amazing. You could look at some of the early papers in any field and they have a handful of authors, right? The DNA paper, three authors — although it probably should have included Rosalind Franklin . . . Now you look at a physics paper or a computer science paper — the author list just goes on sometimes for pages. These problems are harder. They require more and more effort, whether it's people's talents, or computing power, or large-scale experiments; things are getting harder to do. I think there are ways in which that makes sense. Are there other ways in which we could improve processes? Probably, too.
We could invest more in research, make it more efficient, and encourage more people to become researchers. To me, what’s more exciting than automating different customer service processes is accelerating scientific discovery. I think that’s what makes AI so compelling.
That is exactly right. Now, by the way, I think we need to continue to invest in basic research and in science and engineering, I think that's absolutely important, but —
That's worth noting, because I'm not sure everybody thinks that, so I'm glad you highlighted that.
I don't think AI means that everything becomes cheaper and we don't need to invest in both human talent as well as in research. That's number one.
Number two, as you said, we spend a lot of time, and appropriately so, talking about how AI can improve productivity, make things more efficient, do the things we already do cheaper and faster. I think that's absolutely true. But we had the opportunity to look over history, and what has actually improved the human condition over decades, and centuries, and millennia is, in fact, discovering new ideas, having scientific breakthroughs, and turning those breakthroughs into engineering that becomes products and services — everything from expanding our lifespans to providing us with food and more energy. All of those things require innovation, require R&D, and what we've discovered is the potential for AI not only to make things more efficient, but to produce more innovation, more ideas that hopefully will lead to breakthroughs that help us all.
The AI solution (6:13)
I think that's one of the other potentials of using AI, that it could both absorb some of the experience that people have, as well as stretch the bounds of what might be possible.
I've heard this described as an “IMI”: it's an invention that makes more invention, an invention of a method of invention. That sounds great — how's it going to do that?
There are a couple of ways. We looked at three different channels through which AI could improve this process of innovation and R&D. The first one is just increasing the volume, velocity, and variety of different candidates. One way you could think about innovation is that you create a whole bunch of candidates and then you filter them down to the ones that might be most effective. Number one, you can just fill that funnel faster, better, and with greater variety.
The candidates could be a molecule, it could be a drug, it could be a new alloy, it could be lots of things.
Absolutely, or a design for a physical product. One of the interesting things is, this quote-unquote “modern AI” — AI's been around for 70 years — is based on foundation models, these large artificial neural networks trained on huge amounts of data, and they produce unstructured outputs. In many cases that output is language, which is why we talk about LLMs.
The interesting thing is, you can train these foundation models not just to generate language, but you can generate a protein, or a drug candidate, as you were saying. You can imagine the prompt being, “Please produce 10 drug candidates that address this condition, but without the following side effects.” That’s not exactly how it works, but roughly speaking, that's the potential to generate these things, or generate an electrical circuit, or a design for an air foil or an airframe that has these characteristics. Being able to just generate those.
The interesting thing is, not only can you generate them faster, but there's this idea that you can create more variety. We're rightfully proud as humans of our creativity, but the judgment or the training that we have, that experience, sometimes constrains it. The famous example is AlphaGo, a machine some folks created to compete against the world champion in this game called Go, a very complex strategic game. It famously beat the world champion, and one of the things it did was Move 37, a move that every Go expert said, “That is nuts. Why would you possibly do that?” Because the machine was a little bit more unconstrained, it actually came up with what you might describe as a creative idea. I think that's one of the other potentials of using AI, that it could both absorb some of the experience that people have, as well as stretch the bounds of what might be possible.
So you come up with the design, and then a variety of options, and then AI can help model and test them.
Exactly. So you generate a broader and more voluminous set of potential designs, candidates, whether it's molecules, or chemicals, or what have you. Now you need to narrow that down. Traditionally you would narrow it down through physical testing — put something into a wind tunnel, run it through the water if you're looking at a boat design, or put it in an electromagnetic chamber and see how the antenna operates. Or, as lots of people figured out, you use physics, mathematical equations, to create “digital twins.” So you have these long acronyms like CFD, for computational fluid dynamics, basically a virtual wind tunnel. Or you have finite element analysis, another way to model how a structure might perform, or computational electromagnetic modeling. All these ways that you can use physics to simulate things, and that's been terrific.
But some of those models actually take hours, sometimes days, to run. That might still be faster than building the physical prototype and testing it (sometimes you just stress something until it breaks in failure testing, and you can now do that in a computer using these models), but sometimes they take a really long time. One of the really interesting discoveries in “AI” is that you can take the same neural networks we've used to simulate cognition or intelligence and use them to simulate physical systems. So in some ways it's not AI, because you're not creating an artificial intelligence; you're creating an artificial wind tunnel. It's just a different way to model physics. Sometimes these problems get even more complicated . . . If you're trying to put an antenna on an airplane, you need to know how the airflow is going to go over it, but you also need to know whether or not the radio frequency works out, all that RF stuff.
So with these multiphysics models, the complexity is even higher, and you can train these neural nets . . . even faster than these physics-based models. So we have these things called AI surrogate models. They're sort of surrogates — two steps removed, in some ways, from actual physical testing . . . We've literally seen models that can run in minutes rather than hours, or an hour rather than a few days. That can accelerate things. We see this in weather forecasting, among a number of other areas. If you can generate more candidates and then test them faster, you can imagine the whole R&D process really accelerating.
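The surrogate pattern Chui describes can be sketched in a few lines. This is a toy, not anything from the paper: the "expensive simulation" is a made-up function standing in for a slow physics solver, and the surrogate is a simple polynomial fit rather than a deep neural network, purely to show the sample-fit-screen loop:

```python
import numpy as np

# Toy illustration of the surrogate-model pattern: sample an "expensive"
# simulator offline, fit a cheap approximator, then screen many candidate
# designs with the approximator. The physics here is invented for the demo.

def expensive_simulation(x: float) -> float:
    """Stand-in for a slow solver (e.g., drag vs. a design parameter)."""
    return np.sin(3 * x) + 0.5 * x**2  # pretend each call takes hours

# 1) Run the slow simulator on a small training set of designs.
train_x = np.linspace(0.0, 2.0, 40)
train_y = np.array([expensive_simulation(x) for x in train_x])

# 2) Fit a cheap surrogate to those samples (here: polynomial least
#    squares; in practice, a neural network trained on the same data).
surrogate = np.poly1d(np.polyfit(train_x, train_y, deg=8))

# 3) Screen thousands of candidates in milliseconds, keeping only the
#    most promising for high-fidelity simulation or physical testing.
candidates = np.random.default_rng(0).uniform(0.0, 2.0, 10_000)
scores = surrogate(candidates)
best = candidates[np.argsort(scores)[:5]]  # five lowest-"drag" designs
print(best)
```

The speedup comes from step 3: the surrogate answers in microseconds what the solver answers in hours, so the slow solver is reserved for the handful of survivors.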
The business-adoption bottleneck (11:55)
We know that companies are using AI surrogates, deep learning surrogates, already, but is it being applied as many places as possible? No, it isn't.
Does achieving your estimated productivity increases depend more on further technological advances or does it depend more on how companies adopt and implement the technology? Is the bottleneck still in the tech itself, or is it more about business adaptation?
Mostly number two. The technology is going to continue to advance. As a technologist, I love all that stuff, but as usual, a lot of the challenges here are organizational challenges. We know that companies are using AI surrogates, deep learning surrogates, already, but is it being applied as many places as possible? No, it isn't. A lot of these things are organizational. Does it match your strategy, for instance? Do you have the right talent and organization in place?
Let me just give one very specific example. In a lot of R&D organizations we know, there's a separate organization for physical testing and a separate organization for simulations. Simulation, in many cases, is physics-based, but you add these deep-learning surrogates as well. That doesn't make sense at some level. I'm not saying physical testing goes away, but you need to figure out when you should physically test, when you should use which simulation methods, when you should use deep-learning surrogates or AI techniques, et cetera. That's just one organizational change you could make if you were in an organization that was taking this whole testing regime seriously, actually working out the optimal mix of physical testing versus simulation, et cetera. There are a number of areas where that's true.
Even before AI, historically, there was a gap between what novel technologies can do in lab settings and how they're applied in real-world research or business environments. Closing that gap, I would guess, probably requires companies to rewire how they operate, which takes time.
It is indeed, and it's funny that you use the word “rewiring.” My colleagues wrote a book entitled Rewired, which is literally about the different things, together, that you need to do to, as you say, rewire or change the way an organization operates. Only one of its six chapters is about the tech stack. That's still absolutely important; you've got to get all that stuff right. But it is mostly all of the other things around how you change the way an organization operates in order to bring the full value of this together and reach scale.
We also talk about pilot purgatory: “We did this cool experiment . . .” but when is it good enough that the CFO talks about it at the quarterly earnings report? That requires the organization to change the way it operates. That's the learning we've seen all the time.
We've been surveying thousands of executives on their use of AI for seven years now. Nearly 80 percent of organizations say they're regularly using AI someplace in the business, but in a separate survey, only one percent say they're mature in that usage. There's this giant gap between just using AI and actually having the value be created. And by the way, organizations that are creating that value are widening their performance lead. If you have a much more productive R&D organization that churns out products that are successful in the market, you're going to be ahead of your competitors, and that's what we're seeing too.
Is there a specific problem that comes up over and over again with companies in their implementation of AI? Maybe they don't trust it, or they don't know how to use it? What do you think is the problem?
Unfortunately, I don't think there's just one thing. My colleagues who do this work on Rewired, for instance — you kind of have to do all those things. You do have to have the right talent and organization in place. You have to figure out scaling, for instance. You have to figure out change management. All of those things together are what underpin outsized performance, so all those things have to be done.
So if companies are successful, what is the productivity impact you see? We're talking about basically the current technology level, give or take. We're not talking about human-level AI, superintelligence, we're talking about AI more or less as it exists today. Everybody wants to accelerate productivity: governments around the world, companies. So give me a feel for that.
There are different measures of productivity, but here what we're talking about is basically: How many new, successful products can you put out in the market? Our modeling says, depending on your industry, you could double your R&D productivity — in other words, put out double the number of new products and services that you have previously.
Now, that's not true for every industry, and the impact differs across industries, because some industries depend on it more. In pharmaceuticals, the majority of your value comes from producing new products and services over time, because eventually the patent runs out. There are other industries — we talk about science-based industries like chemicals, for instance — where the new-product development process is very, very close to the science of chemistry. So with the levers I just talked about, producing more candidates, being able to evaluate them more quickly, and all the other things that LLMs can do, in general we could see a potential doubling in the pace at which innovation happens.
On the other hand, the chemicals industry — let's leave out specialty chemicals, but the commodity chemicals — they'll still produce ethylene, right? So to a certain extent, while the R&D process can be accelerated a great deal, the EBIT [Earnings Before Interest and Taxes] impact on the industry might be lower than it is for pharmaceuticals, for instance. But still, it's valuable. And then, again, if you're in specialty chem, it means a lot to you. So depending on where you sit in your position in the market, it can vary, but the potential is really high.
The man-machine team (18:06)
At least for the medium term, we're not going to be able to get rid of all the people. The people are going to be absolutely important to the process.
Will future R&D look more like researchers augmented by AI or AI systems assisted by researchers? Who's the assistant in this equation? Who’s working for who?
It's “all of the above,” and it depends on how you decide to use these technologies, but we even write in our paper that we need to be thoughtful about where you put the human in the loop. In every study the conditions matter, but there are lots of studies that say the combination of machines and humans — AI and researchers — is the most powerful combination. Each brings its respective strengths, but the funny thing is that sometimes the human biases actually decrease the performance of the overall system, and so maybe in those places we should just go with machines. At least for the medium term, though, we're not going to be able to get rid of all the people. The people are going to be absolutely important to the process.
When is it that people either are necessary to the process or can be helpful? In many cases, it is around things like, when is it that you need to make a decision that's a safety-critical decision, a regulatory decision where you just have to have a person look at it? That's the sort of necessity argument for people in the loop. But also, there are things that machines just don't do well enough yet, and there's a little bit of that.
Are we ready? (19:33)
. . . AI is one of those things that can produce potentially more of those ideas that can underpin, hopefully, an improved quality of life for us and our children.
If we can get more productive R&D, and then businesses get better at incorporating this into their processes and they could potentially generate more products and services, do we have a government ready for that world of accelerated R&D? Can we handle that flow? My bias says probably not, but please correct me if I'm wrong.
I think one of the interesting things is that people talk about AI regulation, but in many of these industries, the regulations already exist. We have regulations for what goes out in pharmaceuticals, for instance. We have regulations in the aviation industry, we have regulations in the automobile industry, and in many ways, AI in the R&D process doesn't change that. Maybe it should; people talk about whether you can actually accelerate the process of approving a drug, for instance, but that wasn't the thing that we studied. In some ways, those processes already apply now, so that's something that doesn't necessarily have to change.
That said, are some of these potential innovations gated by approval processes or clinical-trials processes? Absolutely. In some of those cases, the clinical-trials gate is not necessarily a regulation: we know there's a big problem just finding enough potential subjects to do clinical trials. That's not a regulatory problem; that's a problem of finding people who are good candidates for actually testing these drugs.
So yes, in some cases, even if we were able to double the number of candidates that go through the funnel, there will be these exogenous issues that constrain society's ability to bring things to market. You squeeze the balloon here and it opens up there. But let's go solve each of these problems, and one of the problems we said AI can help solve is increasing the number of things you could potentially put into the market, if they can get past those other constraints.
So much of what the general public hears about AI tends to be about job loss, or stealing copyrighted material, or huge advances that people talk about but aren't seeing yet. What is your optimistic elevator pitch? You may have worries about the impact of AI, but why are you excited about it?
By the way, I think all of those concerns are really important, including how we reskill the workforce, and we've done work on that as well. But the thing that I'm excited about is that we need innovation, we need new ideas, we need scientific advancements and the engineering that turns them into products, in order for us to improve the human condition, whether it's living longer lives, living a higher-quality life, or having the energy to support all of that in a way that doesn't cause other problems. We need all of those things, and what we've discovered is that AI is one of the things that can potentially produce more of the ideas that underpin, hopefully, an improved quality of life for us and our children.
On sale everywhere The Conservative Futurist: How To Create the Sci-Fi World We Were Promised
Micro Reads
▶ Economics
One Way to Ease the US Debt Crisis? Productivity - Bberg Opinion
▶ Business
Meta Pivots on AI Under the Cover of a Superb Quarter - Bberg Opinion
▶ Clean Energy/Climate
How Trump Rocked EV Charging Startups - Heatmap
▶ Robotics/Drones/AVs
Coal-Powered AI Robots Are a Dirty Fantasy - Bberg Opinion
▶ Up Wing/Down Wing
A Revolutionary Reflection - WSJ Opinion