My fellow pro-growth/progress/abundance Up Wingers,
Artificial intelligence may prove to be one of the most transformative technologies in history, but like any tool, its immense power for good comes with a unique array of risks, both large and small.
Today on Faster, Please! – The Podcast, I chat with Miles Brundage about extracting the most out of AI's potential while mitigating harms. We discuss the evolving expectations for AI development and how to reckon with the technology's most daunting challenges.
Brundage is an AI policy researcher. He is a non-resident fellow at the Institute for Progress, and formerly held a number of senior roles at OpenAI. He is also the author of his own Substack.
In This Episode
Setting expectations (1:18)
Maximizing the benefits (7:21)
Recognizing the risks (13:23)
Pacing true progress (19:04)
Considering national security (21:39)
Grounds for optimism and pessimism (27:15)
Below is a lightly edited transcript of our conversation.
Setting expectations (1:18)
It seems to me like there are multiple vibe shifts happening at different cadences and in different directions.
Pethokoukis: Earlier this year I was moderating a discussion between an economist here at AEI and the CEO of a leading AI company, and when I asked each of them how AI might impact our lives, our economist said, "Well, I could imagine, for instance, a doctor's productivity increasing because AI could accurately and deeply translate and transcribe an appointment with a patient in a way that's far better than what's currently available." So that was his scenario. And then I asked the same question of the AI company CEO, who said, by contrast, "Well, I think within a decade, all human death will be optional thanks to AI-driven medical advances." On that rather broad spectrum – more efficient doctor appointments and immortality – how do you see the potential of this technology?
Brundage: It's a good question. I don't think those are necessarily mutually exclusive. I think, in general, AI can both augment productivity and substitute for human labor, and the ratio of those things is kind of hard to predict and might be very policy dependent and social-norm dependent. What I will say is that, in general, it seems to me like the pace of progress is very fast, and so both augmentation and substitution seem to be picking up steam.
It's kind of interesting watching the debate between AI researchers and economists, and I have a colleague who has said that the AI researchers sometimes underestimate the practical challenges in deployment at scale. Conversely, the economists sometimes underestimate just how quickly the technology is advancing. I think there's maybe some happy middle to be found, or perhaps one of the more extreme perspectives is true. But personally, I am not an economist, so I can't really speak to all of the details of substitution, and augmentation, and all the policy variables here. What I will say is that the technical potential for very significant amounts of augmentation of human labor, as well as substitution for human labor, seems pretty likely in even well under 10 years – and certainly within 10 years things will change a lot.
It seems to me that the vibe has shifted a bit. When I talk to people from the Bay Area and I give them the Washington or Wall Street economist view, to them I sound unbelievably gloomy and cautious. But it seems the vibe has shifted, at least recently, to where a lot of people think that major advancements like superintelligence are further out than they previously thought – like we should be viewing AI as an important technology, but more like what we've seen before with the Internet and the PC.
It's hard for me to comment. It seems to me like there are multiple vibe shifts happening at different cadences and in different directions. It seems like several years ago there was more of a consensus that what people today would call AGI was decades away or more, and it does seem like that kind of timeframe has shifted closer to the present. There's still debate between the "next few years" crowd versus the "more like 10 years" crowd. But that is a much narrower range than we saw several years ago, when there was a wider range of expert opinions. People who used to be seen as on one end of the spectrum – for example, Gary Marcus and François Chollet, who were seen as kind of the skeptics of AI progress – even they now are saying, "Oh, it's like maybe 10 years or so, maybe five years for very high levels of capability." So I think there's been some compression in that respect. That's one thing that's going on.
There's also a way in which people are starting to think less abstractly and more concretely about the applications of AI – seeing it less as this kind of mysterious thing that might happen suddenly, and thinking of it more as incremental, more as something that requires some work to apply in various parts of the economy, something that has some friction associated with it.
Both of these aren't inconsistent, they're just kind of different vibe shifts that are happening. So getting back to the question of whether this is just a normal technology, I would say that, at the very least, it does seem faster in some respects than some other technological changes that we've seen. I think ChatGPT's adoption going from zero to double-digit percentages of use across many professions in the US – in a matter of a high number of months, a low number of years – is quite stark.
Would you be surprised if, five years from now, we viewed AI as something much more important than just another incremental technological advance, something far more transformative than technologies that have come before?
No, I wouldn't be surprised by that at all. If I understand your question correctly, my baseline expectation is that it will be seen as one of the most important technologies ever. I'm not sure that there's a standard consensus on how to rate the internet versus electricity, et cetera, but it does seem to me like it's of the same caliber as electricity, in the sense that electricity essentially converts one kind of energy into various kinds of useful economic work. Similarly, AI is converting electricity into various kinds of cognitive work, and I think that's a huge deal.
Maximizing the benefits (7:21)
There's also a lot of value being left on the table in terms of finding new ways to exploit the upsides and accelerate particularly beneficial applications.
However you want to define society or the aspect of society that you focus on – government, businesses, individuals – are we collectively doing what we need to do to fully exploit the upsides of this technology over the next half-decade to decade, as well as to minimize potential downsides?
I think we are not, and something I sometimes find frustrating about the way that the debate plays out is that there's sometimes this zero-sum mentality of doomers versus boomers – a term that Karen Hao uses – and this idea that there's this inherent tension between mitigating the risks and maximizing the benefits. There are some tensions, but I don't think that we are on the Pareto frontier, so to speak, of those issues.
Right now, I think there's a lot of value being left on the table in terms of fairly low-cost risk mitigations. There's also a lot of value being left on the table in terms of finding new ways to exploit the upsides and accelerate particularly beneficial applications. I'll give just one example, because I write a lot about the risks, but I am also very interested in maximizing the upside: protecting critical infrastructure and improving the cybersecurity of various parts of critical infrastructure in the US. Hospitals, for example, get attacked with ransomware all the time, and this causes real harm to patients because machines get bricked, essentially, and they have one or two people on the IT team who are kind of overwhelmed by these hackers – not even always that sophisticated, but more sophisticated than the defenders. That's a huge problem. It matters for patients' lives, and it matters for national security in the sense that this is something that China and Russia and others could hold at risk in the context of a war. They could threaten this critical infrastructure as part of a bargaining strategy.
And I don't think that there's that much interest among the Big Tech companies in helping hospitals have a better automated cybersecurity engineer helper – because there aren't that many hospital administrators. . . I'm not sure if it would meet the technical definition of a market failure, but it's at least a national security failure in that it's a kind of fragmented market. There's a water plant here, a hospital administrator there.
I recently put out a report with the Institute for Progress arguing that philanthropists and government could put some additional gasoline in the tank of cybersecurity by incentivizing innovation that specifically helps these under-resourced defenders, more so than the usual customers of cybersecurity companies, such as Fortune 500 firms.
I'm confident that companies and entrepreneurs will figure out how to extract value from AI and create new products and new services, barring any regulatory slowdowns. But since you mentioned low-hanging fruit, what are some examples of that?
I would say that transparency is one of the areas where a lot of AI policy experts seem to be in pretty strong agreement. Obviously there is still some debate and disagreement about the details of what should be required, but just to give you some illustration, it is typical for the leading AI companies, sometimes called frontier AI companies, to put out some kind of documentation about the safety steps that they've taken. It's typical for them to say, here's our safety strategy, and here's some evidence that we're following this strategy. This includes things like assessing whether their systems can be used for cyber-attacks, assessing whether they could be used to create biological weapons, or assessing the extent to which they make up facts and make mistakes but state them very confidently, in a way that could pose risks to users of the technology.
That tends to be totally voluntary, and there started to be some momentum as a result of various voluntary commitments that were made in recent years. But as the technology gets more high-stakes, and there's more cutthroat competition, and there are maybe more lawsuits where companies might be tempted to retreat a bit in terms of the information that they share, I think that things could kind of backslide, and at the very least not advance as far as I would like from the perspective of making sure that there's sharing of lessons learned from one company to another, as well as making sure that investors and users of the technology can make informed decisions about, okay, do I purchase the services of OpenAI, or Google, or Anthropic. Making those informed decisions, and making informed capital investments, seems to require transparency to some degree.
This is something that is actively being debated in a few contexts. For example, in California there's a bill, SB-53, that has that and a few other things. But in general, we're at a bit of a fork in the road in terms of how certain regulations will be implemented, such as in the EU: Is it going to become an actually adaptive, nimble approach to risk mitigation, or is it going to become a compliance checklist that just kind of makes Big Four accounting firms richer? So there are those questions, and then there are just "does the law pass or not?" kinds of questions here.
Recognizing the risks (13:23)
. . . I'm sure there'll be some things that we look back on and say it's not ideal, but in my opinion, it's better to do something that is as informed as we can make it, because it does seem like there are these kinds of market failures and incentive problems that are going to arise if we do nothing . . .
In my probably overly simplistic way of looking at it, I think of two buckets. In one, you have issues like: Are these things biased? Are they giving misinformation? Are they interacting with young people in a way that's bad for their mental health? And I feel like we have a lot of rules, and we have a huge legal system for liability, that can probably handle those.
Then, in the other bucket, are what may, for the moment, be science-fictional kinds of existential risks, whether it's machines taking over or just AI giving humans the ability to do very bad things in a way we couldn't before. Within that second bucket, I think, it sort of needs to be flexible. Right now, I'm pretty happy with voluntary standards, and market discipline, and maybe the government creating some benchmarks, but I can imagine the technology advancing to where the voluntary aspect seems less viable and there might need to be actual mandates about transparency, or testing, or red teaming, or whatever you want to call it.
I think that's a reasonable distinction, in the sense that there are risks at different scales. There are some that are kind of these large-scale catastrophic risks, which might have lower likelihood but higher magnitude of impact. And then there are things that are, I would say, literally happening millions of times a day, like ChatGPT making up citations to articles that don't exist, or Claude saying that it fixed your code when actually it didn't fix the code and the user's too lazy to notice, and so forth.
So there are these different kinds of risks. I personally don't make a super strong distinction between them in terms of different time horizons, precisely because I think things are going so quickly. I think science fiction is becoming science fact much sooner than many people expected. But in any case, I think a similar logic applies: let's make sure that there's transparency even if we don't know exactly what the right risk thresholds are, and we want to allow a fair degree of flexibility in what measures companies take.
It seems good that they share what they're doing and, in my opinion, ideally go another step further and allow third parties to audit their practices – to make sure that if they say, "Well, we did a rigorous test for hallucination or something like that," that's actually true. And so that's what I would like to see for both what you might call the mundane risks and the more science-fictional ones. But again, I think it's kind of hard to say how things will play out, and different people have different perspectives on these things. I happen to be on the more aggressive end of the spectrum.
I am worried about the spread of the apocalyptic, high-risk AI narrative that we heard so much about when ChatGPT first rolled out. That seems to have quieted, but I worry about it ramping up again and stifling innovation in an attempt to reduce risk.
These are very fair concerns, and I will say that there are lots of bills and laws out there that have, in fact, slowed down innovation in certain contexts. The EU, I think, has gone too far in some areas around social media platforms. I do think at least some of the state bills that have been floated would lead to a lot of red tape and burdens for small businesses. I personally think this is avoidable.
There are going to be mistakes. I don't want to be misleading about how high-quality policymakers' understanding of some of these issues is. There will be mistakes, even in cases like California's, where there was a kind of blue-ribbon commission of AI experts producing a report over several months, and then that directly informing legislation, and a lot of industry back-and-forth and negotiation over the details. I would say SB-53 is probably the high-water mark of fairly stakeholder- and expert-informed legislation. Even there, I'm sure there'll be some things that we look back on and say it's not ideal, but in my opinion, it's better to do something that is as informed as we can make it, because it does seem like there are these kinds of market failures and incentive problems that are going to arise if we do nothing, such as companies retrenching and holding back information in ways that make it hard for the field as a whole to tackle these issues.
I'll just make one more point, which is that adapting to the compliance capability of different companies – how rich are they, how expensive are the models they're training – is, I think, a key factor in the legislation that I tend to be more sympathetic to. Just to make a contrast, there's a bill in Colorado that was kind of one-size-fits-all – regulate all kinds of algorithms – and that, I think, is very burdensome to small businesses. Something like SB-53, by contrast, says, okay, if you can afford to train an AI system for $100 million, you can probably afford to put out a dozen pages about your safety and security practices.
Pacing true progress (19:04)
. . . some people . . . kind of wanted to say, "Well, things are slowing down." But in my opinion, if you look at more objective measures of progress . . . there's quite rapid progress happening still.
Hopefully Grok did not create this tweet of yours, but if it did, well, there we go. You won't have to answer it, but I just want to understand what you meant by it: "A lot of AI safety people really, really want to find evidence that we have a lot of time for AGI." What does that mean?
What I was trying to get at – and I guess this is not necessarily just AI safety people; I sometimes kind of try to poke at people in my social network who I'm often on the same side of, but also try to be a friendly critic to, and that includes people who are working on AI safety – is that I think there's a common tendency to grasp at what I would consider straws when reading papers and interpreting product launches, in a way that kind of suggests, well, we've hit a wall, AI is slowing down, this was a flop, who cares?
I'm doing my kind of maybe-uncharitable psychoanalysis here. What I was getting at is that I think one reason why some people might be tempted to do that is that it makes things seem easier and less scary: "Well, we don't have to worry about really powerful AI-enabled cyber-attacks for another five years, or biological weapons for another two years," or whatever. Maybe, maybe not.
I think the specific example that sparked that was GPT-5, where there were a lot of people who, in my opinion, were reading the tea leaves in a particular way and missing important parts of the context. For example, GPT-5 wasn't a much larger or more expensive-to-train model than GPT-4, which may be surprising given the name. And I think OpenAI did kind of screw up the naming and gave people the wrong impression, but from my perspective, there was nothing particularly surprising. To some people, though, it was kind of a flop, and they kind of wanted to say, "Well, things are slowing down." But in my opinion, if you look at more objective measures of progress – like scores on math, and coding, and the reduction in the rate of hallucinations, and solving chemistry and biology problems, and designing new chips, and so forth – there's quite rapid progress happening still.
Considering national security (21:39)
I want to avoid a scenario like the Cuban Missile Crisis – or ways in which that could have been much worse than the actual Cuban Missile Crisis – happening as a result of AI and AGI.
I'm not sure if you're familiar with some of the work being done by former Google CEO Eric Schmidt, who's been doing a lot of work on national security and AI. His work doesn't use the word AGI, but it talks about AI that is certainly smart enough to have certain capabilities which our national security establishment should be aware of and should be planning for, and those capabilities, I think to most people, would seem sort of science-fictional: being able to launch incredibly sophisticated cyber-attacks, or to improve itself, or to create some other sorts of capabilities. And from that, I'm like, whether or not you think that's possible, to me the odds of it being possible are not zero, and if they're not zero, some bit of the bandwidth of the Pentagon should be thinking about that. I mean, is that sensible?
Yeah, it's totally sensible. I'm not going to argue with you there. In fact, I've done some collaboration with the RAND Corporation, which has a pretty heavy investment in what they call the geopolitics of AGI and kind of studying what the scenarios are, including AI and AGI being used to produce "wonder weapons" and super-weapons of some kind.
Basically, I think this is super important, and in fact, I have a paper coming out pretty soon that was written in collaboration with some folks there. I won't spoil all the details, but if you search "Miles Brundage US China," you'll see some things that I've discussed there. And basically my perspective is that we need to strike a balance between competing vigorously with countries like China and Russia on the commercial side of AI – more so China; Russia is less of a threat on the commercial side, at least – and also making sure that we're fielding national security applications of AI in a responsible way, while also recognizing that there are these ways in which things could spiral out of control in a scenario of totally unbridled competition. I want to avoid a scenario like the Cuban Missile Crisis – or ways in which that could have been much worse than the actual Cuban Missile Crisis – happening as a result of AI and AGI.
If you think that, again, the odds are not zero that a technology which is fast-evolving – and that we have no previous experience with because it's fast-evolving – could create the kinds of doomsday scenarios that there are new books out about and that people are talking about . . . So if you think there's not a zero percent chance that could happen, but there is kind of a zero percent chance that we're going to stop AI and smash the GPUs, then, as someone who cares about policy, are you just hoping for the best? Or are the kinds of things we've already talked about – transparency, testing, maybe that testing becoming mandatory at some point – enough?
It's hard to say what's enough, and I agree that . . . I don't know if I'd give it zero; maybe if there's some major pandemic caused by AI, and then Xi Jinping and Trump get together and say, okay, this is getting out of control, maybe things could change. But yeah, it does seem like continued investment in, and large-scale deployment of, AI is the most likely scenario.
Generally, the way that I see this playing out is that there are kind of three pillars of a solution. There's kind of some degree of safety and security standards. Maybe we won't agree on everything, but we should at least be able to agree that you don't want to lose control of your AI system, you don't want it to get stolen, you don't want a $10 billion AI system to be stolen by a $10 million-scale hacking effort. So I think there are sensible standards you can come up with around safety and security. I think you can have evidence produced or required that companies are following these things. That includes transparency.
It also includes, I would say, third-party auditing, where there are third parties checking the claims and making sure that these standards are being followed, and then you need some incentives to actually participate in this regime and follow it. And I think the incentives part is tricky, particularly at an international scale. What incentive does China have to play ball, other than that they obviously don't want their AI to kill them or overthrow their government or whatever? So where exactly are the interests aligned or not? Is there some kind of system of export control policies or sanctions or something that would drive compliance, or is there some other approach? I think that's the tricky part, but to me, those are kind of the rough outlines of a solution. Maybe that's enough, but I think right now it's not even really clear what the rough rules of the road are, or who's playing by the rules, and we're relying a lot on goodwill and voluntary reporting. I think we could do better, but is that enough? That's harder to say.
Grounds for optimism and pessimism (27:15)
. . . it seems to me like there is at least some room for learning from experience . . . So in that sense, I'm more optimistic. . . I would say, in another respect, I'm maybe more pessimistic in that I am seeing value being left on the table.
Did your experience at OpenAI make you more optimistic or more worried that, when we look back 10 years from now, AI will have, overall on net, made the world a better place?
I am sorry to not give you a simpler answer here, and maybe I should sit on this one and come up with a kind of clearer, more optimistic or more pessimistic answer, but I'll give you kind of two updates in different directions, and I think they're not totally inconsistent.
I would say that I have gotten more optimistic about the solvability of the problem in the following sense. I think that things were very fuzzy five, 10 years ago, and when I joined OpenAI, almost seven years ago now, there was a lot of concern that it could kind of come about suddenly – that one day you don't have AI, the next day you have AGI, and then on the third day you have artificial superintelligence, and so forth.
But we don't live to see the fourth day.
Exactly, and so it seems more gradual to me now, and I think that is a good thing. It also means that – and this is where I differ from some of the more extreme voices in terms of shutting it all down – it seems to me like there is at least some room for learning from experience, iterating, kind of taking the lessons from GPT-5 and translating them into GPT-6, rather than it being something that we have to get 100 percent right on the first shot and there being no room for error. So in that sense, I'm more optimistic.
I would say, in another respect, I'm maybe more pessimistic in that I am seeing value being left on the table. It seems to me like, as I said, we're not on the Pareto frontier. It seems like there are pretty straightforward things that could be done – for a very small fraction of, say, the US federal budget, or a very small fraction of billionaires' personal philanthropy or whatever – that, in my opinion, would dramatically reduce the likelihood of an AI-enabled pandemic or various other issues, and would dramatically increase the benefits of AI.
It's been a bit sad to continuously see those opportunities being neglected. I hope that as AI becomes more of a salient issue to more people, and people start to appreciate, okay, this is a real thing, the benefits are real, the risks are real, there will be more of a kind of efficient policy market and people will take those opportunities. But right now it seems pretty inefficient to me. That's where my pessimism comes from. It's not that it's unsolvable, it's just, okay, from a political economy and kind of public-choice perspective, are the policymakers going to make the right decisions?
On sale everywhere: The Conservative Futurist: How To Create the Sci-Fi World We Were Promised