
🌎 My chat (+transcript) with political scientist Francis Fukuyama on technological change and liberal democracy. Some sci-fi, too!

🚀 Faster, Please! — The Podcast #37

More than 20 years ago, the political scientist Francis Fukuyama characterized the Information Technology revolution as "benign" but cautioned that "the most significant threat posed by contemporary biotechnology is the possibility that it will alter human nature and thereby move us into a post-human stage of history." From Twitter to CRISPR to ChatGPT, a lot has changed since then. In this episode of Faster, Please! — The Podcast, Dr. Fukuyama shares his thoughts on those developments and the recent advances in generative AI, as well as the cultural importance of science fiction.

Dr. Fukuyama is the Olivier Nomellini Senior Fellow at Stanford University's Freeman Spogli Institute for International Studies. His books include The End of History and the Last Man, Our Posthuman Future, and 2022's Liberalism and Its Discontents, among many others. Other writings can be found at American Purpose.

In This Episode

  • The consequences of the IT revolution (1:37)

  • Can government competently regulate AI? (8:14)

  • AI and liberal democracy (17:29)

  • The cultural importance of science fiction (24:16)

  • Silicon Valley’s life-extension efforts (31:11)

Below is an edited transcript of our conversation.


The consequences of the IT revolution

James Pethokoukis: In Our Posthuman Future more than 20 years ago, you wrote, “The aim of this book is to argue that [Aldous] Huxley was right [in Brave New World], that the most significant threat posed by contemporary biotechnology is the possibility that it will alter human nature and thereby move us into a ‘posthuman’ stage of history. This is important, I will argue, because human nature exists, is a meaningful concept, and has provided a stable continuity to our experience as a species.” But then you added, “It may be that, as in the case of 1984” — and, I think, parenthetically, information technology — “we will eventually find biotechnology’s consequences are completely and surprisingly benign.” After 20 years, and the advent of social media, and now it seems like possibly a great leap forward in AI, would you still characterize the IT revolution as “benign”?

Francis Fukuyama: That's obviously something that's changed considerably since I wrote that book because the downside of IT has been clear to everybody. When the internet was first privatized in the 1990s, most people, myself included, thought it would be good for democracy because information was power, and if you made information more widely available, that would distribute power more democratically. And it has done that, in fact. A lot of people have access to information that they can use to improve their lives, to mobilize, to agitate, to push for the protection of their rights. But I think it's also been weaponized in ways that we perhaps didn't anticipate back then.

And then, there was this more insidious phenomenon: it turns out that the hierarchies that controlled information, whose elimination we celebrated back then, were actually pretty important. If you had a kind of legacy media that cared about journalistic standards, you could trust the information that was published. But the internet really undermined those legacy sources and replaced them with a world in which anyone can say anything. And they do. Therefore, we have this cognitive chaos right now where conspiracy theories of all sorts get a lot of credibility because people don't trust these hierarchies that used to be the channels for information. Clearly, we've got a big problem on our hands. That doesn't mean that the biotech is not still going to be a big problem; it's just that I think the IT part has moved ahead very rapidly. But I think the biotech will get there in time.

While I think most of the concern I've heard expressed about AI, in particular, has been about these science fiction-like existential risks or job loss, your concern, as in Our Posthuman Future, obviously has more to do with how it will affect our liberal democracy. And you point out some of the downsides of the IT revolution that weren't obvious 30 years ago but seem plainly obvious today.

To me, the coverage of AI has been really very, very negative, and we've had calls for an AI pause. Do you worry that maybe we've overlearned that lesson? That rather than going into this with kind of a Pollyannaish attitude, we're immediately approaching AI with deep concerns? Is there a risk of overcorrecting?

The short answer is, yes. I think that because of our negative experience with social media and the internet lately, we expect the worst from technology. But I think that the possibilities for AI actually making certain social problems much better are substantial. I think that the existential worries about AI are just absurd, and I really don't see scenarios under which the human species is going to face extinction. That seems to be this Terminator, killer-robot, Skynet scenario, and I know very few serious experts in this area who think that's ever likely to materialize. The bigger fears, I think, are more mundane ones about job loss as a result of advancing technology. And I think that's a very complicated issue. But it does seem to me that, for example, generative AI could actually end up complementing human skills and, in fact, could complement the skills of lower-skilled or lower-educated workers in a way that will actually increase economic equality.

Up till now, I think most economists would blame the advance of computer technology for having vastly increased social inequality, because taking advantage of existing technologies requires a better education, and if you have a better education, you're going to have a higher income and so forth. But it's entirely possible that generative AI will actually slow that trend because it will give people with lower levels of education the ability to do useful things that they weren't able to do previously. There's actually some early empirical work that suggests that that's already been a pattern. So, yes, I think you're right that we've kind of overreacted. I just think in general, predicting where this technology is going to go in the next 50 years is a fool's errand. It's sort of like in the 1880s asking somebody, "Well, what's this newfangled thing called electricity going to do in 50 years?" Anything that was said back then I think would've been overtaken by events very, very rapidly.

Can government competently regulate AI?

Anyone who has sat through previous government hearings on social media has been underwhelmed by the ability of Congress to understand these issues, much less come up with a vast regulatory structure. Are you confident in the ability of government to regulate AI, whether it's to regulate deep fakes or what have you — why should I be confident in their ability to do that?

I think you've got to decompose the regulatory challenge a little bit. I've been involved here at Stanford, where we have a Cyber Policy Center, and we've been thinking about different forms of IT regulation. It's a particular challenge for regulators for a number of reasons. One of the questions you come up with in regulatory design is, "Is this something that actually can be undertaken by existing agencies, or do you actually need a new type of regulator with special skills and knowledge?" And I think, to me, pretty clearly the answer is that you do need a new one. But that agency would have to be designed very differently, because in the standard regulatory design, the agency has a certain amount of expertise in a particular sector and they use that expertise to write rules that then get written into law, and then things like the Administrative Procedure Act begin to apply. That's what's been going on, for example, with something like net neutrality, where the FCC put the different regulations up for notice and comment, and you go through this very involved procedure to write the new rules and so forth. I think in an area like AI, that's just not going to work, because the thing is moving so quickly. And that means that you're actually going to have to delegate more autonomy and discretionary power to the regulatory agency, because otherwise, they're simply not going to be able to keep up with the speed at which the technology advances. In normative terms, I have no problem with that. I think that governments do need to exercise social control over new technologies that are potentially very disruptive and damaging, but it has to be done in a proper way.

Can you actually design a regulatory agency that would have any remote chance of keeping up with the technology? The British have done this. They have a new digital regulator that is composed of people coming out of the IT industry, and they've relaxed the civil service requirements to be able to hire people with the appropriate knowledge and backgrounds. In the United States, that's going to be very difficult because we have so many cumbersome HR requirements for hiring and promotion of people that go into the federal civil service. Pay, for one thing, is a big issue because we don't pay our bureaucrats enough. If you're going to hire some hotshot tech guy out of the tech sector and offer him a job as a GS-14, it just isn't going to work. So I don't think that you can answer the question, "Can we regulate adequately or not?" in a simple way. I think that there are certain things you would have to do if you were going to try to regulate this sector. Can the United States do that given the polarization in our politics, given all of these legacy institutions that prevent us from actually having a public sector that is up to this task? That I don't know. As you can tell, I've got a certain skepticism about that.

Is it a worthwhile critique of this regulatory process to think of AI as a discrete technology that you need a certain level of expertise to understand? If it is indeed a general-purpose technology that will be used by a variety of sectors, all sectors perhaps, can you really have an AI regulator that doesn't de facto become an economy regulator?

No, you probably can't. This is another challenge, which is that, as you say, AI in general is so broad. It's already being used in virtually every sector of the economy, and you obviously don't want a "one size fits all" effort to govern the use of this technology. So I think that you have to be much more specific about the areas where you think potential harms could exist. There are also different approaches to this other than regulation. In 2020, I chaired a Stanford working group on platform scale, which was meant to deal with what at that point was a contemporary problem, but now seems like an old one: content moderation on the internet. So how do you deal with this issue that Elon Musk has now revealed to be a real problem: You don't want everything to be available on social media platforms, but how do you actually control that content in a way that serves a kind of general democratic public interest? As we thought about this in the course of this working group deliberation, we concluded that straightforward regulation is not going to work. It won't work in the United States because we're way too polarized. Just think about something like reviving the old fairness doctrine that the FCC used to apply to legacy broadcast media. How are you going to come up with something like that? What's "fair and balanced" coverage of vaccine denialism? It's just not going to happen.

And what we ended up advocating was something we called "middleware," where you would use regulation to create a competitive ecosystem of third-party media content regulators so that when you use a social media platform, you the user could buy the services or make use of the services of a content regulator that would tailor your feed or your search on Google to criteria that you specified in advance. So if you leaned progressive, you could get a progressive one. If you only like right-wing media, you could get a content regulator that would deliver what you want. If you wanted to buy only American-made products, you could get a different one. The point is that you would use competition in this sphere because the real threat, as we saw it, was not actually so much this compartmentalization as the power of a single big platform. There are really only three platforms with this kind of power: Google, Meta, and now X, formerly Twitter. The danger to a democracy was not that you could say anything on the internet; the danger was the power of a single big platform owned by a private, for-profit company to have an outsized role over political discourse in the United States. Elon Musk and Twitter are a perfect example of that. He apparently has his own foreign policy, which is not congruent with American foreign policy, but as a private owner of this platform, he's got the power to pursue this private foreign policy. So that was our idea.

In that particular case, you could use competition as an alternative to state regulation, because what you really wanted to do was to break up this concentrated power that was exercised by the platforms. So that's one approach to one aspect of digital regulation. It doesn't deal with AI. I don't know whether there's an analog in the AI sphere, but I think it's correct that what you don't want is a single regulator that then tries to write broad rules that apply to what is actually just an enormously broad technology that will apply in virtually every sector of the economy.


AI and liberal democracy

In response to the call for a six-month "AI pause," critics of that idea pointed to competition with China. They suggested that given the difficulties of regulating AI, we might risk losing the "AI race" to the Chinese. Do you think that's a reasonable criticism?

This is a general problem with technologies. Certain technologies distribute power and other technologies concentrate it. So the old classic 19th-century coal- and steel- and fossil fuel–based economy tended to concentrate power. And certainly nuclear weapons concentrate power, because you really need to be a big entity in order to build a nuclear weapon, to do all the uranium processing and so forth. But other technologies, like biotech, actually do not concentrate power. Any high school student can actually now use CRISPR to do genetic engineering. And they make biotech labs that will fit in individual shipping containers. So the regulatory problem is quite different.

Now, the problem with AI is that it appears that these large language models really require a lot of resources. In fact, it's interesting, because we used to think the problem was actually having big data sets. But that's actually not the problem; there's plenty of data out there. It's actually building a parallel computer system that's powerful enough to process all the words on the internet, and that's been the task that only the largest companies can do. I think that it's correct that if we had told these companies not to do this, we would be facing international competitive pressures that would make that a bad decision. However, I do think that it's still a risk to allow that kind of power to be not subject to some form of democratic control. If it's true that you need these gigantic corporations to do this sort of thing, those corporations ought to be serving American national interests.

And again, I hate to keep referring to Elon Musk, but we're seeing this right now with Starlink. It turns out Starlink is extremely valuable militarily, which has been demonstrated very clearly in Ukraine. Should the owner of Starlink be allowed to make important decisions as to who is going to use this technology on the battlefield and where that technology can be used? I don't think so. I don't think that one rich individual should have that kind of power. And actually, I'm not quite sure, I thought that the Defense Department had actually agreed to start paying Musk for the Ukrainian use of Starlink. I think that's the actual appropriate answer to that problem, so that it should not be up to Elon Musk where Starlink can be used. It should be up to the people that make American foreign policy: the White House and the State Department and so forth. And so, I think by analogy, if you develop this technology that requires really massive scale and big corporations to develop it, it should nonetheless be under some kind of state control such that it is not the decision of some rich individual how it's going to be applied. It should be somehow subject to some kind of democratic control.

On a normative level, I think that's very clear, but the specific modalities by which you do that are complicated. For example, let's say there's a gigantic corporation that is run by some lunatic that wants to use it for all sorts of asocial reasons, proliferating deep fakes or trying to use it to undermine general social trust in institutions and so forth. Is that okay? Is that a decision that should be up to a private individual or isn't there some public interest in controlling that in some fashion? I hate speaking about this in such general terms, but I think you have to settle this normative question and then you can get into the narrower technical question of, is it possible to actually exert that kind of control and how would you do that?

You've questioned in your previous writings whether liberal democracy could survive a world with both humans and posthumans and where we're manipulating human nature. Can it survive in a world where there are two different intelligences? If we had a human intelligence and we had an artificial general intelligence, would such an entity pose a challenge to our civilization, to a democratic capitalist civilization?

It's hard to answer that question. You can imagine scenarios where it obviously would pose a challenge. One of the big questions is whether this general intelligence somehow escapes human control, and that's a tough one. I think that the experts that I trust think that that's not going to happen. That ultimately, human beings are going to be able to control this thing and use it for their own purposes. So again, the whole Skynet scenario is really not likely to happen. But that doesn't solve the problem, because even if it's under human control, how do you make sure it's the right humans, right? Because if this falls into the wrong hands, it could be very, very destructive. And that then becomes a political question. I'm not quite sure how you're going to want to answer it.

The cultural importance of science fiction

You mentioned Skynet from the Terminator franchise. Do you worry that we're too steeped in dystopian science fiction? It seems like we can only see the downside when we're presented with a new technology like a biotechnology breakthrough or an AI breakthrough. Is that how it seems to you?

I actually wrote a blog post about this. I really read a lot of science fiction. I have my whole life. There's a big difference between the sorts of stories that you saw back in the 1950s and '60s and the stuff that has come out recently. It's hard to generalize over such a vast field, but space odysseys and space travel were very common, and a lot of that was extremely optimistic: that human beings would colonize Mars and then the distant planets and you'd have a warp drive that would take you out of the solar system and so forth. And it was kind of a paean to unlimited human possibilities. Whereas I do think that, especially with the rise of environmentalism, there was a greater consciousness of the downsides of technological advance. So you got more and more dystopian kinds of imaginings. Now, it is not a universal thing. For example, I also wrote a blog post about two recent global warming–related science-fiction books. One is The Ministry for the Future by Kim Stanley Robinson. And that actually is a very optimistic take on global warming, because it's set in the 2050s and basically the human race has figured out how to deal with global warming. They do it, I think, through a bunch of very implausible political scenarios, but there's a ministry for the future that wisely…

That book seems a little too comfortable with violence and compulsion for my taste.

The other one is Neal Stephenson's Termination Shock: Basically, there's a single rich oligarch in Texas who takes it upon himself to put all this sulfur dioxide in the upper atmosphere to cool the earth, and he succeeds, and it then changes the climate in China and India. I don't know whether that's optimistic or pessimistic. But I actually do think that it's very useful to have this kind of science fiction, because you really do have to imagine to yourself what some of both the upsides and downsides will be. So it's probably the case that there's more dystopian fiction, but I do think that if you didn't have that, you wouldn't have a concrete idea of what to look for.

If you think about both 1984 and Brave New World, these were the big dystopian futures that were imagined in the 1950s. And both of them came true in many ways. They gave us a vocabulary, like "Big Brother," the "Telescreen," or "Epsilons," and "Gammas," and "Alphas," and so forth, by which we can actually kind of interpret things in the present. I think if you didn't have that vocabulary, it would be hard to have a discussion about what it is that we're actually worried about. So yes, I do think that there is a dystopian bias to a lot of that work that's done, but I think that you've got to have it. Because you do have to try to imagine to yourself what some of these downsides are.

You mentioned a couple of books. Are there any films or television shows that you've watched that you feel provide a plausible optimistic vision?

I don't know whether it's optimistic. One of my favorite book series and then TV series was The Expanse, written by a couple of guys who go by a pseudonym. It's not optimistic, in the sense that it projects all of our current geopolitical rivalries forward into a future in which human beings have colonized, not just the outer planets, but also intergalactically, figured out how to move from one place to another, and they're still having these fights between rich and poor and so forth. But I guess the reason that I liked it, especially the early parts of that series, when you just had an Epstein Drive, I mean, it was just one technological change that allowed you to move. It's sort of like the early days of sailing ships, where you could get to Australia, but it would take you six months to get there. So that was the situation early on in the book, and that was actually a very attractive future. All of a sudden, human beings had the ability to mine the asteroid belt, they could create gigantic cities in space where human beings could actually live and flourish. That's one of the reasons I really liked that: because it was very human. Although there were conflicts, they were familiar conflicts. They were conflicts that we are dealing with today. But it was, in a way, hopeful because it was now done at this much larger scale that gave hope that human beings would not be confined to one single planet. And actually, one of the things that terrifies me is the idea that in 100 years, we may discover that we actually can't colonize even Mars or the Moon. That the costs of actually allowing human beings to live anywhere but on earth just make it economically impossible. And so we're kind of stuck on planet Earth and that's the human future.

I wrote a small essay about The Expanse where I talked about having a positive vision. As I saw it, this is several hundred years in the future, and we're still here. We've had climate change, but we're still here. We've expanded throughout the universe. If an asteroid should hit the earth, there's still going to be humanity. And people were angry about that essay, because it's a future where there are still problems. Yes, because we're still part of that future: human beings.

Silicon Valley’s life-extension efforts

Getting back to biotechnology and transhumanism and living forever, these things you wrote about in Our Posthuman Future: What do you make of the efforts by folks in Silicon Valley to try to extend lifespans? From a cultural perspective, from your perspective as a political scientist, what do you make of these efforts?

I think they're terrible. I actually wrote about this and have thought about this a lot, about life extension. In fact, I think human biomedicine has produced a kind of disastrous situation for us right now because by the time you get to your mid-80s, roughly half of the population that's that old has some kind of long-term, chronic, degenerative disease. And I think that it was actually a much better situation when people were dying of heart attacks and strokes and cancer when they were still in their 70s. It's one of those things where life extension is individually very desirable because no individual wants to die. But socially, I think the impact of extending life is bad. Because quite frankly, you're not going to have adaptation unless you have generational turnover. There's a lot of literature now about how important generations are; Neil Howe has just written a new book on this. There's this joke that economists tell: that the field of economics progresses one funeral at a time. Because, basically, you're born into a certain age cohort, and to the end of your life, you're going to retain a lot of the views of the people who were born into it and went through the same kinds of life experiences. And sometimes they're just wrong. And unless that generation dies off, you're just not going to get the kind of social movement that's necessary.

We've already seen a version of this with all these dictators like Franco and Castro who refuse to die, and modern medicine keeps them alive forever. And as a result, you're stuck with their kind of authoritarian governments for way too long. And so I think that, socially, there's a good reason why under biological evolution you have population turnover and we humans don't live forever. What's the advantage of everybody being able to live 200 years as opposed to let's say 80 or 90 years? Is that world going to be better? It's going to have all sorts of problems, right? Because you're going to have all of these 170-year-old people who won't get out of the way. How are you going to get tenure if all the tenured people are 170 years old and there's no way of moving them out of the system? With these tech billionaires, I think it's a kind of selfishness: they've got the money to fund all this research in the hope that they can keep themselves alive, because they are afraid of dying. I think it's going to be a disaster if they're ever successful in bringing about this kind of population-level life extension. And I think we're already in a kind of disastrous situation where a very large proportion of the human population is going to be of an age where they're going to be dependent on the rest of the society to keep them alive. And that's not good economically. That's going to be very, very hard to sustain.



Micro Reads

IBM Tries to Ease Customers' Qualms About Using Generative A.I. - Steve Lohr, NYT

Six Months Ago Elon Musk Called for a Pause on AI. Instead Development Sped Up - Will Knight, WIRED

AI is getting better at hurricane forecasting - Gregory Barber, Ars Technica

The promise — and peril — of generative AI - John Thornhill, FT

Uber Freight Taps AI to Help Compete in Tough Cargo Market - Thomas Black, Bloomberg

Why AI Doesn't Scare Me - Gary Hoover, Profectus

A top economist who studies AI says it will double productivity in the next decade: 'You need to embrace this technology and not resist it' - Geoff Colvin, Yahoo! Finance

Meta is putting AI chatbots everywhere - Alex Heath, Verge

The Big AI Risk We're Not Talking About - Brent Skorup, Discourse

Mark Zuckerberg can't quit the metaverse - Laura Martins, Verge

This robotic exoskeleton can help runners sprint faster - Rhiannon Williams, MIT Technology Review

The bizarre new frontier for cell-cultivated meat: Lion burgers, tiger steaks, and mammoth meatballs - Jude Whiley, Vox

A power grab against private equity threatens the US economy - Drew Maloney, FT

Risks Are Growing of a Double-Dip 'Vibecession' - Jonathan Levin, Bloomberg

NSF partners with the Institute for Progress to test new mechanisms for funding research and innovation - NSF

It's Too Easy to Block a Wind Farm in America - Robinson Meyer, Heatmap

Can we finally reverse balding with these new experimental treatments? - Joshua Howgego, New Scientist

