My fellow pro-growth/progress/abundance Up Wingers,
Once-science-fiction advancements like AI, gene editing, and advanced biotechnology have finally arrived, and they’re here to stay. These technologies have seemingly set us on a course towards a brand new future for humanity, one we can hardly even picture today. But progress doesn’t happen overnight, and it isn’t the result of any one breakthrough.
As Jamie Metzl explains in his new book, Superconvergence: How the Genetics, Biotech, and AI Revolutions Will Transform Our Lives, Work, and World, tech innovations work alongside and because of one another, bringing about the future right under our noses.
Today on Faster, Please! — The Podcast, I chat with Metzl about how humans have been radically reshaping the world around them since their very beginning, and what the latest and most disruptive technologies mean for the not-too-distant future.
Metzl is a senior fellow at the Atlantic Council and a faculty member of NextMed Health. He has previously held a series of positions in the US government, and was appointed to the World Health Organization’s advisory committee on human genome editing in 2019. He is the author of several books, including two sci-fi thrillers and his international bestseller, Hacking Darwin.
In This Episode
Unstoppable and unpredictable (1:54)
Normalizing the extraordinary (9:46)
Engineering intelligence (13:53)
Distrust of disruption (19:44)
Risk tolerance (24:08)
What is a “newnimal”? (30:11)
Inspired by curiosity (33:42)
Below is a lightly edited transcript of our conversation.
Unstoppable and unpredictable (1:54)
The name of the game for all of this . . . is to ask “What are the things that we can do to increase the odds of a more positive story and decrease the odds of a more negative story?”
Pethokoukis: Are you telling a story of unstoppable technological momentum or are you telling a story kind of like A Christmas Carol, of a future that could be if we do X, Y, and Z, but no guarantees?
Metzl: The future of technological progress is like the past: It is unstoppable, but that doesn't mean it's predetermined. The path that we have gone over the last 12,000 years, from the domestication of crops to building our civilizations, languages, industrialization — it's a bad metaphor now, but — this train is accelerating. It's moving faster and faster, so that's not up for grabs. It is not up for grabs whether we are going to have the capacities to engineer novel intelligence and re-engineer life — we are doing both of those things now in the early days.
What is up for grabs is how these revolutions will play out, and there are better and worse scenarios that we can imagine. The name of the game for all of this, the reason why I do the work that I do, why I write the books that I write, is to ask “What are the things that we can do to increase the odds of a more positive story and decrease the odds of a more negative story?”
Progress has been sort of unstoppable for all that time, though, of course, with fits and starts and periods of stagnation —
— But when you look back at those fits and starts — the scale of the Black Plague, or World War II wiping out Berlin, and Dresden, and Tokyo, and Hiroshima, and Nagasaki — in spite of all of those things, it's one-directional. Our technologies have gotten more powerful. We've developed more capacities, greater ability to manipulate the world around us, so there will be fits and starts but, as I said, this train is moving. That's why these conversations are so important, because there's so much that we can, and I believe must, do now.
There’s a widely held opinion that progress over the past 50 years has been slower than people might have expected in the late 1960s, but we seem to have some technologies now for which the momentum seems pretty unstoppable.
Of course, a lot of people thought, after ChatGPT came out, that superintelligence would happen within six months. That didn’t happen. After CRISPR arrived, I’m sure there were lots of people who expected miracle cures right away.
What makes you think that these technologies will look a lot different, and our world will look a lot different than they do right now by decade’s end?
They certainly will look a lot different, but there's also a lot of hype around these technologies. You use the word “superintelligence,” which is probably a good word. I don't like the words “artificial intelligence,” and I have a six-letter framing for what I believe about AGI — artificial general intelligence — and that is: AGI is BS. We have no idea what human intelligence is. We define our own intelligence so narrowly that it's just this very narrow form of thinking, and then we say, “Wow, we have these machines that are mining the entirety of digitized human cultural history, and wow, they're so brilliant, they can write poems — poems in languages that our ancestors invented, based on the work of humans.” So we humans need to be very careful not to belittle ourselves.
But we're already seeing, across the board, if you say, “Is CRISPR on its own going to fundamentally transform all of life?” The answer to that is absolutely no. My last book was about genetic engineering. If genetic engineering is a pie, genome editing is a slice and CRISPR is just a tiny little sliver of that slice. But the reason why my new book is called Superconvergence, the entire thesis is that all of these technologies inspire, and influence, and are embedded in each other. We had the agricultural revolution 12,000 years ago, as I mentioned. That's what led to these other innovations like civilization, like writing, and then the ancient writing codes are the foundation of computer codes which underpin our machine learning and AI systems that are allowing us to unlock secrets of the natural world.
People are imagining that AI equals ChatGPT, but that's really not the case (AI equals ChatGPT like electricity equals the power station). The story of AI is empowering us to do all of these other things. As a general-purpose technology, already AI is developing the capacity to help us just do basic things faster. Computer coding is the archetypal example of that. Over the last couple of years, the speed of coding has improved by about 50 percent for the most advanced human coders, and as we code, our coding algorithms are learning about the process of coding. We're just laying a foundation for all of these other things.
That's what I call “boring AI.” People are imagining exciting AI, like there's a magic AI button and you just press it and AI cures cancer. That's not how it's going to work. Boring AI is going to be embedded in human resource management. It's going to be embedded in all sorts of processes, just giving us a lot of capabilities to do things better and faster than we've done them before. It doesn't mean that AIs are going to replace us. There are a lot of things that humans do that machines can just do better than we can. That's why most of us aren't doing hunting, or gathering, or farming: because we developed machines and other technologies to feed us with much less human labor input, and we have used that reallocation of our time and energy to write books and invent other things. That's going to happen here.
The name of the game for us humans is two things: One is figuring out what it means to be a great human and over-indexing on that, and two is laying the foundation so that these multiple overlapping revolutions, as they play out in multiple fields, can be governed wisely. That is the name of the game. So when people say, “Is it going to change our lives?” I think people are thinking of it in the wrong way. This shirt that I'm wearing — this same shirt five years from now, you'll ask, “Well, is there AI in your shirt?” — because it doesn't look like AI — and what I'm going to say is, “Yes: in the manufacturing of this thread, in the management of the supply chain, in figuring out who gets to go on vacation, and when, in the company that's making these buttons.” It's all these little things. People will just call it progress. People are imagining magic AI, but all of these interwoven technologies will just feel like accelerating progress, and that will just feel like life.
Normalizing the extraordinary (9:46)
20, 30 years ago we didn't have the internet. I think things get so normalized that this just feels like life.
What you're describing is a technology that economists would call a general-purpose technology. It's a technology embedded in everything, it's everywhere in the economy, much as electricity.
What you call “boring AI,” the way I think about it is: I was just reading a Wall Street Journal story about Applebee's talking about using AI for more efficient customer loyalty programs, and they would use machine vision to look at their tables to see if they were cleaned well enough between customers. That, to people, probably doesn't seem particularly science-fictional. It doesn't seem world-changing. Of course, faster growth and a more productive economy is built on those little things, but I guess I would still call those “boring AI.”
What to me definitely is not boring AI is the sort of combinatorial aspect that you're talking about where you're talking about AI helping the scientific discovery process and then interweaving with other technologies in kind of the classic Paul Romer combinatorial way.
I think a lot of people, if they looked back at their lives 20 or 30 years ago, would say, “Okay, more screen time, but probably pretty much the same.”
I don't think they would say that. 20, 30 years ago we didn't have the internet. I think things get so normalized that this just feels like life. If you had told us 30 years ago, “You're going to have access to all the world's knowledge in your pocket.” You and I are — based on appearances, although you look so youthful — roughly the same age, so you probably remember, “Hurry, it's long distance! Run down the stairs!”
We live in this radical science-fiction world that has been normalized, and even with the things that you are mentioning: If you open up your newsfeed and see that there's been this incredible innovation in cancer care — whether it's gene therapy, or autoimmune stuff, or whatever — you're not thinking, “Oh, that was AI that did that,” because you read the thing and it's like, “These researchers at University of X.” But it is AI, it is electricity, it is agriculture. It's because our ancestors learned how to plant seeds and grow plants where they were stationed, and not have to do hunting and gathering, that you have had this innovation that is keeping your grandmother alive for another 10 years.
What you're describing is what I call “magical AI,” and that's not how it works. Some of the stuff is magical: the Jetsons stuff, the self-driving cars, the autopilot airplanes. We live in a world of magical science fiction, and then whenever something shows up, we think, “Oh yeah, no big deal.” We got ChatGPT, and now ChatGPT is no big deal?
If you had taken your grandparents, or your parents, and just said, “Hey, I'm going to put you behind a screen. You're going to have a conversation with something, with a voice, and you're going to do it for five hours” — let's say they'd never heard of computers, and it was all this pleasant voice — and in the end you said, “You just had a five-hour conversation with a non-human, and it told you about everything in all of human history, and it wrote poems, and it gave you a recipe for kale mush or whatever you're eating,” they'd say, “Wow!” I think that we are living in that sci-fi world. It's going to get faster, but with every innovation, we're not going to say, “Oh, AI did that.” We're just going to say, “Oh, that happened.”
Engineering intelligence (13:53)
I don't like the word “artificial intelligence” because artificial intelligence means “artificial human intelligence.” This is machine intelligence, which is inspired by the products of human intelligence, but it's a different form of intelligence . . .
I sometimes feel, in my own writing and as I peruse the media, like I read a lot more about AI, the digital economy, and information technology, and I certainly write much less about genetic engineering and biotechnology, which are obviously key themes in your book. What am I missing right now that's happening — that may seem normal five years from now, or 10 — but if I were to read about it or understand it now, I'd think, “Well, that is kind of amazing”?
My answer to that is kind of everything. As I said before, we are at the very beginning of this new era of life on earth where one species, among the billions that have ever lived, suddenly has the increasing ability to engineer novel intelligence and re-engineer life.
We have evolved by the Darwinian processes of random mutation and natural selection, and we are beginning a new phase of life, a new Cambrian Revolution, where we are creating — certainly with this novel intelligence that we are birthing — I don't like the word “artificial intelligence” because artificial intelligence means “artificial human intelligence.” This is machine intelligence, which is inspired by the products of human intelligence, but it's a different form of intelligence, just like dolphin intelligence is a different form of intelligence than human intelligence, although we are related because of our common mammalian root. That's what's happening here, and our brain function is roughly the same as it's been, certainly at least for tens of thousands of years, but the AI machine intelligence is getting smarter, and we're just experiencing it.
It's become so normalized that you can even ask that question. We live in a world where we have these AI systems that are just doing more and cooler stuff every day: driving cars, you talked about discoveries, we have self-driving laboratories that are increasingly autonomous. We have machines that are increasingly writing their own code. We live in a world where machine intelligence has been boxed in these kinds of places like computers, but very soon it's coming out into the world. The AI revolution, and machine-learning revolution, and the robotics revolution are going to be intersecting relatively soon in meaningful ways.
AI has advanced more quickly than robotics because it hasn't had to navigate the real world like we have. That's why I'm always so mindful of not denigrating who we are and what we stand for. Four billion years of evolution is a long time. We've learned a lot along the way, so it's going to be hard to take the AI and have it out functioning in the world, interacting in this world that we have largely, but not exclusively, created.
But that's all what's coming. Some specific things: 30 years from now, my guess is many people who are listening to this podcast will be fornicating regularly with robots, and it'll be totally normal and comfortable.
. . . I think some people are going to be put off by that.
Yeah, some people will be put off and some people will be turned on. All I'm saying is it's going to be a mix of different —
Jamie, what I would like to do is be 90 years old and be able to still take long walks, be sharp, not have my knee screaming at me. That's what I would like. Can I expect that?
I think this can help, but you have to decide how to behave with your personalized robot.
That's what I want. I'm looking for the alleviation of human suffering. Will there be a world of less human suffering?
We live in that world of less human suffering! If you just look at any metric of anything, this is the best time to be alive, and it's getting better and better. . . We're living longer, we're living healthier, we're better educated, we're more informed, we have access to more and better food. This is by far the best time to be alive, and if we don't massively screw it up, and frankly, even if we do, to a certain extent, it'll continue to get better.
I write about this in Superconvergence: We're moving in healthcare from our world of generalized healthcare based on population averages to precision healthcare — to predictive and preventive care. In education, some of us, like you and myself, have had access to great education, but not everybody has that. We're going to have access to fantastic, personalized education everywhere, for students based on their own styles of learning, and capacities, and native languages. This is a wonderful, exciting time.
We're going to get all of those things that we can hope for and we're going to get a lot of things that we can't even imagine. And there are going to be very real potential dangers, and if we want to have the good story, as I keep saying, and not have the bad story, now is the time where we need to start making the real investments.
Distrust of disruption (19:44)
Your job is the disruption of this thing that's come before. . . stopping the advance of progress is just not one of our options.
I think some people, when they hear about all these changes, will think what you're telling them is “the bad story.”
I just talked about fornicating with robots — that's the bad story?
Yeah, some people might find that the bad story. But listen, we live in an age where people have recoiled against the disruption of trade, for instance. People are very allergic to the idea of economic disruption. I think about all the debate we had over stem cell therapy back in the early 2000s, around 2002. There certainly is going to be a certain contingent for whom what they're going to hear in what you're saying is: You're going to change what it means to be a human. You're going to change what it means to have a job. I don't know if I want all this. I'm not asking for all this.
And we've seen where that pushback has greatly changed, for instance, how we trade with other nations. Are you concerned that that pushback could create regulatory or legislative obstacles to the kind of future you're talking about?
All of those things — and some of that pushback, frankly, is healthy. These are fundamental changes, but those people who are pushing back are benchmarking their own lives to the world that they were born into and, in most cases, not recognizing how radical those lives already are. Unless the people you're talking about are hunter-gatherers in some remote place who've not gone through the domestication of agriculture, and industrialization, and all of these kinds of things — that's like, wow, you're going from being this little hunter-gatherer tribe in the middle of Atlantis and all of a sudden you're going to be in a world of gene therapy and shifting trading patterns.
But the people who are saying, “Well, my job as a computer programmer, as a whatever, is going to get disrupted,” your job is the disruption. Your job is the disruption of this thing that's come before. As I said at the start of our conversation, stopping the advance of progress is just not one of our options.
We could do it, and societies have done it before, and they've lost their economies, they've lost their vitality. Just go to Europe: Europe is having this crisis now because, for decades, they saw their economy and their society, frankly, as a museum to the past, where they didn't want to change, didn't want to think about the implications of new technologies and new trends. I'm just back from Italy. It's wonderful — I love visiting these little farms where they're milking the goats like they've done for centuries and making cheese like they've made for centuries — but their economies are shrinking with incredible rapidity while ours and China's are growing.
Everybody wants to hold onto the thing that they know. It's a very natural thing, and I'm not saying we should disregard those views, but the societies that have clung too tightly to the way things were tend to lose their vitality and, ultimately, their freedom. That's what you see in the war with Russia and Ukraine. Let's just say there are people in Ukraine who said, “Let's not embrace new disruptive technologies.” Their country would disappear.
We live in a competitive world where you can opt out like Europe opted out solely because they lived under the US security umbrella. And now that President Trump is threatening the withdrawal of that security umbrella, Europe is being forced to race not into the future, but to race into the present.
Risk tolerance (24:08)
. . . experts, scientists, even governments don't have any more authority to make these decisions about the future of our species than everybody else.
I certainly understand that sort of analogy, and compared to Europe we look like a far more risk-embracing kind of society. Yet I wonder how resilient that attitude is — because obviously I would've said the same thing maybe in 1968 about the United States, and yet a decade later we stopped building nuclear reactors. I wonder how resilient we are to anything going wrong: something going on with an AI system where somebody dies, or something that looks like a cure that kills someone. Or even — there seems to be this nuclear power revival — how resilient would that be to any kind of accident? How resilient do you think we are right now to the inevitable bumps along the way?
It depends on who you mean by “we.” Let's just say “we” means America, because a lot of these dawns aren't the first ones. You talked about gene therapy. This is the second dawn of gene therapy. The first dawn came crashing to a halt in 1999, when a young man at the University of Pennsylvania died as a result of an error carried out by the treating physicians using what had seemed like a revolutionary gene therapy. It's the second dawn of AI after there was a lot of disappointment. There will be accidents . . .
Let's just say, hypothetically, there's an accident — some kind of self-driving car is going to kill somebody or whatever — and let's say there's a political movement, the Luddites, that is successful, and let's just say that every self-driving car in America is attacked and destroyed by mobs, and that all of the companies that are making these cars are no longer able to produce or deploy them. That's going to be bad for self-driving cars in America — it's not going to be bad for self-driving cars. . . They're going to be developed in some other place. There are lots of societies that have lost their vitality. That's the story of every empire that we read about in history books: There was political corruption, sclerosis. That's very much an option.
I'm a patriotic American and I hope America leads these revolutions as long as we can maintain our values for many, many centuries to come, but for that to happen, we need to invest in that. Part of that is investing now so that people don't feel that they are powerless victims of these trends they have no influence over.
That's why all of my work is about engaging people in the conversation about how do we deploy these technologies? Because experts, scientists, even governments don't have any more authority to make these decisions about the future of our species than everybody else. What we need to do is have broad, inclusive conversations, engage people in all kinds of processes, including governance and political processes. That's why I write the books that I do. That's why I do podcast interviews like this. My Joe Rogan interviews have reached many tens of millions of people — I know you told me before that you're much bigger than Joe Rogan, so I imagine this interview will reach more than that.
I'm quite aspirational.
Yeah, but that's the name of the game. With my last book tour, in the same week I spoke to the top scientists at Lawrence Livermore National Laboratory and the seventh and eighth graders at the Solomon Schechter Hebrew Academy of New Jersey, and they asked essentially the exact same questions about the future of human genetic engineering. These are basic human questions that everybody can understand and everybody can and should play a role and have a voice in determining the big decisions and the future of our species.
To what extent is the future you're talking about dependent on continued AI advances? If this is as good as it gets, does that change the outlook at all?
One, there's no conceivable way that this is as good as it gets, because even with LLMs — large language models — this is not the last word on algorithms; there will be many other philosophies of algorithms. But let's just say that LLMs are the end of the road, that we've just figured out this one thing, and that's all we ever have: Just using the technologies that we have in more creative ways is going to unleash incredible progress. But it's certain that we will continue to have innovations across the field of computer science, in energy production, in algorithm development, in the ways that we have to generate and analyze massive data pools. So we don't need anything more to have the revolution that's already started — but we will have more.
Politics always, ultimately, can trump everything if we get it wrong. But even then, even if . . . let's just say that the United States becomes an authoritarian, totalitarian hellhole. One, there will be technological innovation like we're seeing now even in China, and two, these are decentralized technologies, so free people elsewhere — maybe it'll be Europe, maybe it'll be Africa or whatever — will deploy these technologies and use them. These are agnostic technologies. They don't have, as I said at the start, an inevitable outcome, and that's why the name of the game for us is to weave our best values into this journey.
What is a “newnimal”? (30:11)
. . . we don't live in a state of nature, we live in a world that has been massively bio-engineered by our ancestors, and that's just the thing that we call life.
When I was preparing for this interview and my research assistant was preparing, I said, “We have to have a question about bio-engineered new animals.” One, because I couldn't pronounce your name for these . . . newminals? So pronounce that name and tell me why we want these.
It's a made up word, so you can pronounce it however you want. “Newnimals” is as good as anything.
We already live in a world of bio-engineered animals. Go back 50,000 years: Find me a dog, find me corn that is recognizable, find me rice, find me wheat, find me a cow that looks remotely like the cow in your local dairy. We already live in that world; it's just that people assume our bioengineered world is some kind of state of nature. We already live in a world where the size of a broiler chicken has tripled over the last 70 years. What we have would have been unrecognizable to our grandparents.
We are already genetically modifying animals through breeding, and now we're at the beginning of wanting to make those same kinds of modifications directly — whether it's producing more milk, producing more meat, living in hotter environments and not dying, or whatever it is that we're aiming for in these animals that we have, for a very long time, seen not as ends in themselves but as means to the ultimate end of our consumption.
We're now in the early stages of xenotransplantation — modifying the hearts, and livers, and kidneys of pigs so they can be used for human transplantation. I met one of the women who has received a genetically modified pig kidney — and, so far, she seems to be thriving. We have 110,000 people in the United States on the waiting list for transplant organs. I really want these people not just to survive, but to survive and thrive. That's another area where we can grow.
Right now . . . in the world, we slaughter about 93 billion land animals per year. We consume 200 million metric tons of fish. That's a lot of murder, that's a lot of risk of disease. It's a lot of deforestation and destruction of the oceans. We can already do this, but if and when we can grow bioidentical animal products at scale without having all of these negative externalities of whether it's climate change, environmental change, cruelty, deforestation, increased pandemic risk, what a wonderful thing to do!
So we have these technologies and you mentioned that people are worried about them, but the reason people are worried about them is they're imagining that right now we live in some kind of unfettered state of nature and we're going to ruin it. But that's why I say we don't live in a state of nature, we live in a world that has been massively bio-engineered by our ancestors, and that's just the thing that we call life.
Inspired by curiosity (33:42)
. . . the people who I love and most admire are the people who are just insatiably curious . . .
What sort of forward thinkers, or futurists, or strategic thinkers of the past do you model yourself on, do you think are still worth reading, inspired you?
Oh my God, so many, and the people who I love and most admire are the people who are just insatiably curious, who are saying, “I'm going to just look at the world, I'm going to collect data, and I know that everybody says X, but it may be true, it may not be true.” That is the entire history of science. That's Galileo. That's Charles Darwin, who just went around and said, “Hey, with an open mind, how am I going to look at the world and come up with theses?” And then he thought, “Oh shit, this story that I'm coming up with for how life advances is fundamentally different from what everybody in my society believes and organizes their lives around.” To my mind, that's the model, and there are so many people like that — that's the great thing about being human.
That's what's so exciting about this moment: Everybody has access to these super-empowered tools. We have eight billion humans, but about two billion of those people are just kind of locked out because of crappy education, poor water and sanitation, and unreliable electricity. We're on the verge of a world where everybody who has a smartphone has the possibility of getting a world-class personalized education in their own language. How many new innovations will we have when little kids in the slums of India, or Pakistan, or Nairobi, or wherever, who have promise, can educate themselves, and grow up and cure cancers, or invent new machines, or new algorithms? This is pretty exciting.
To sum up: The people from the past I admire are kind of like the people in the present I admire the most — the people who are just insatiably curious and always learning. And now we have a real opportunity for everybody to be their own Darwin.
On sale everywhere The Conservative Futurist: How To Create the Sci-Fi World We Were Promised
Micro Reads
▶ Economics
AI Hype Is Proving to Be a Solow's Paradox - Bberg Opinion
Who Needs the G7? - PS
Economic Sentiment and the Role of the Labor Market - St. Louis Fed
▶ Business
AI valuations are verging on the unhinged - Economist
▶ AI/Digital
Is the Fed Ready for an AI Economy? - WSJ Opinion
Exploring the Capabilities of the Frontier Large Language Models for Nuclear Energy Research - Arxiv
▶ Clean Energy/Climate
The AI Boom Can Give Rooftop Solar a New Pitch - Bberg Opinion
▶ Robotics/Drones/AVs
OpenExo: An open-source modular exoskeleton to augment human function - Science Robotics
▶ Up Wing/Down Wing
We Need More Millionaires and Billionaires in Latin America - Bberg Opinion
▶ Substacks/Newsletters
State Power Without State Capacity - Breakthrough Journal