
🤖🧠 My chat (+transcript) with Google DeepMind's Séb Krier on AGI and public policy

Faster, Please! — The Podcast #55

In a world of Artificial General Intelligence, machines would be able to match, and even exceed, human cognitive abilities. AGI might still be science fiction, but Séb Krier sees this technology as not only possible, but inevitable. Today on Faster, Please! — The Podcast, I chat with Krier about how our public policy should facilitate AGI’s arrival and flourishing.

Krier is an AI policy expert, adviser, and attorney. He currently works in policy development and strategy at Google DeepMind. He previously served as Head of Regulation for the UK Government’s Office for Artificial Intelligence and was a Senior Tech Policy Researcher at Stanford’s Cyber Policy Center.

In This Episode

  • The AGI vision (1:24)

  • The risk conversation (5:15)

  • Policy strategy (11:25)

  • AGI: “if” or “when”? (15:44)

  • AI and national security (18:21)

  • Chatbot advice (20:15)

Below is a lightly edited transcript of our conversation.


Pethokoukis: Séb, welcome to the podcast.

Krier: Thank you. Great to be here.

The AGI vision (1:24)

Let's start with a bit of context that may influence the rest of the conversation. What is the vision or image of the future regarding AI — you can define it as machine learning or generative AI — that excites you, that gets you going each day, that makes you feel like you're part of something important? What is that vision?

I think that's a great question. In my mind, AI has been going on for quite a long time, but the aim has always been artificial general intelligence. And in a sense, I think of this as a huge deal, and the vision I have for the future is being able to have a very, very large supply of cognitive resources that you can allocate to quite a wide range of different problems. Whether that's energy, healthcare, or governance, there are many, many ways in which this technology can be applied as a general purpose technology. And so I guess my vision is seeing that being used to solve quite a wide range of problems that humans have had for decades, centuries, millennia. And I think you could go in so many different directions with that, whether it's curing diseases, or optimizing energy grids, and more. But I think, broadly, that's the way I think about it. So the objective, in a sense, is safe AGI [Artificial General Intelligence], and from that I think it can go even further. And I think in many ways, this can be hugely beneficial to science, R&D, and humanity as a whole. But of course, that also comes with ways in which this could be misused, or accidents, and so on. And so there's a huge emphasis on the safe development of AGI.

So you're viewing it as a tool, as a way to apply intelligence across a variety of fields and a variety of problems to solve those problems, and of course, the word in there doing a lot of lifting is “safely.” Given the discussion over the past 18 months about that word, “safely”: someone who maybe only pays passing attention to this issue might think that it's almost impossible to do it safely without jeopardizing all those upside benefits. But you're confident that those two things can ultimately be in harmony?

Yeah, absolutely, otherwise I wouldn't be working on AGI policy. So I'm very confident this can be done well. I think it also depends on what we mean by “safety” and what kind of safety we have in mind. With any technology, there will be costs and trade-offs, but of course the upside here is enormous and, in my mind, very much outweighs the potential downsides.

However, I think for certain risks, things like potentially catastrophic risks and so on, there is an argument for treading a careful path and making sure this is done scientifically, with the scientific method in mind, and doing that well. But I don't think there's fundamentally a necessary tension, and I think, in fact, what many people sometimes underestimate is how AI itself, as a technology, will be helpful in mitigating a lot of the risks we're foreseeing and thinking about. There are obviously ways in which AI can be used for cyber offense, but also many ways in which you can use it for defense, for example. I'm cautiously optimistic about how this can be developed and used in the long run.

The risk conversation (5:15)

Since these large language models and chatbots were rolled out to public awareness in late 2022, has the safety regulatory debate changed in any way? It seems to me that there was a lot of talk early on about existential risks. Now I seem to be hearing less about that and more about issues like disinformation or bias. From your perspective, has that debate changed, and has it changed for the better or worse?

I think it has evolved quite a lot over the past — I've been working in AI policy since 2017, and there have been different phases: at first, a lot of skepticism around AI even being useful, or hype, and so on, and then seeing more and more of what these general models could do. Initially, a lot of the concerns were around things like bias, and discrimination, and errors. Early on, even things like facial-recognition technologies were very problematic in many ways: not just the ways in which they were applied, but they were prone to a lot of errors and biases that could be unfair. They're much better now, and therefore the concern now is more about misuse than about a system accidentally misidentifying someone, I would say. So I think, in that sense, these things have changed. And then a lot of the discourse around existential risk and so on peaked a bit last year, and then switched a bit towards more catastrophic risks and misuse.

There are a few different things. Broadly, I think it's good that these risks are taken seriously. So, in some sense, I'm happy that these have taken more space, in a way, but I think there's also been a lot of alarmism and unnecessary doomerism, of crying wolf a little bit too early. I think what happens is that sometimes people conflate a capability of a system with how that fits within a wider risk or threat model; and the latter is often under-defined, and there's a tendency for people to see the worst in technology, particularly in certain regions of the world. So I think sometimes a lot has been a little bit exaggerated or overhyped.

But, having said that, I think it's very good that there's lots of research going on into the many ways in which this could potentially be harmful. Certainly on the research side, the evaluation side, there's a lot of great work. We've published some papers on sociotechnical evaluations, dangerous capabilities, and so on. All of that is great, but I think there have also been some more polarized voices calling for excessive measures, whether regulatory, or pausing AI, and so on, that I think have been a little bit too trigger-happy. So I'm less happy about those bits, but there's been a lot of good as well.

And much of the debate about policy has been about the right sort of policy to prevent bad things from happening. How should we think about policy that maximizes the odds of good things happening? What should policymakers do to help AI reshape science, to help AI diffuse as efficiently as possible throughout an economy? How do we optimize the upside through policy rather than just focusing on making sure the bad things don't happen?

I think the very first thing is not rushing into regulation. I'm not personally a huge fan of the Precautionary Principle, and I think that, very often, regulations can cause quite a lot of harm downstream, and they're very sticky and hard to remove.

The other thing you can do, beyond avoiding bad policy: I think a lot of the levers for making sure that development goes well aren't necessarily all directly AI-related. So it'll be things like immigration: attracting a lot of talent, for example, will be very important, so immigration is a big one. Power and energy: you want there to be a lot more of it — I'm a big fan of nuclear, so I think that kind of thing is also very helpful in terms of the expected needs for AI development in the future. And then there are certain things governments could potentially do in some narrow domains, like Advance Market Commitments, for example, although that's not a panacea.

Commitments to do what?

Oh, Advance Market Commitments: pull mechanisms to create a market for a particular solution. So, like Operation Warp Speed, but you could have an AI equivalent for certain applications. Of course, there are a lot of parameters in doing that well, and I wouldn't want a large industrial-policy-type approach to AI. But I think generally it's around ensuring that all the enablers, all the different ingredients and factors of a rich research and development ecosystem, continue to thrive. And so I think, to a large extent, avoiding bad regulation and ensuring that a lot of things like energy, immigration, and so on go well is already a huge part of the battle.

How serious a potential bottleneck is the energy issue? It seems to me like it's a serious issue that's coming fast, but the solutions seem like they'll take more time, and I'm worried about the mismatch between the problem and finding a solution to the problem.

I suspect that, over the coming years, we will see more and more of these AI systems being increasingly useful, capable, and then integrated into economic systems, and I think, as you start seeing these benefits more and more, it'll be easier to make the case for why you need to solve some of these kinds of policy issues a bit faster.

And I also think these solutions aren't that difficult, ultimately. There's a lot that can be done around nuclear, and wind, and solar, and so on, and many regulatory processes could be simplified, accelerated, and improved to avoid the vetocracy we're in at the moment. So I don't think the solutions are that difficult; mustering the political will might be, right now, but I expect that to be less of a challenge in the coming years, with AI showing more and more promise.

Policy strategy (11:25)

Speaking of vetocracy, whatever the exact substance of the regulation, at least in the United States we have 50 states, and perhaps even more bodies if you look at cities, that all have a lot of ideas about AI regulation, and I'm extremely concerned that that sort of fractured policy landscape will create a bottleneck.

Can we get to where we need to go if that's the regulatory environment we are looking at, at least in the United States? And does there ultimately need to be a federal . . . I think the technical word is “preemption” of all those efforts? So there's a federal approach, rather than a federal approach plus a 50-state approach plus a 175-city approach to regulation. Because if it's going to be what I just described, that seems like a very difficult environment to deal with.

I'm not wildly optimistic about a patchwork of different state-level regulatory systems. I think that will come with various externalities; you'll have distortionary effects. It will be a very difficult environment, from a commercial perspective, to operate in smoothly. I'm a lot more open to something at a federal level at some point, rather than a big patchwork of city-level or state-level regulation. Now, it depends on exactly what we're talking about. There might be domain-, context-, and application-specific regulations that make sense in one state and not another, but in general, from a first-principles level at least, I think that would probably not be desirable.

A second regulatory concern — and maybe this is dissipating as policymakers, especially at the federal level, learn more about AI — is that, at least initially, it seems to me that whatever your policy idea was for social media, or about content moderation or what have you, you just kind of took that policy framework and applied it to AI, because that was what you had. You pulled that baby right off the shelf. Are we still seeing that, or are people beginning to think, “This is its own thing, and my ideas for social media may be fine for social media, but I need to think differently about AI”? Obviously the technology is different; also, I think both the risks and potential rewards are very different.

Yeah, totally. I think that has been an issue. Now, I wouldn't say that's the case for everyone. There have been some groups and some institutions doing very careful work that really thinks about AI, and AGI, and so on in careful, more calibrated ways; but I've also seen quite a lot of reports where you could have easily imagined the same text being about social media platforms, or some other policy issue, or blockchain, just being repurposed for AI. And there's a lot of stuff out there that's just very high level, and it's hard to disagree with at a high level, but it's far harder to apply and look at from an operational or practical perspective.

So I've been seeing quite a lot of that; however, I think over time the field is maturing more and more, and you're seeing better thinking around AI: what it really is, what's appropriate at the model level versus at the application level, and the existing landscape of laws and regulation and how these might apply as well. That last part is often forgotten; you have lots of academics coming in and just trying to re-regulate everything from first principles, and then you're like, “Well, there's tort law, and there's this and that over there.” You've got to do your gap analysis first before coming out with all this stuff.

But I think we are seeing the field of AI governance and policy maturing in that space, and I expect that to continue. I still, of course, see a lot of bad heuristics and poor thinking here, particularly an underestimation of the benefits of AI and AGI. I think there's a tendency to always think of the worst for everything, and it's necessary, you need to do that too, but few are really internalizing how significant AGI would be for growth, for welfare, and for solving a lot of the issues that we've been talking about in the first place.





AGI: “if” or “when”? (15:44)

Is AGI an “if” issue, or is it a “when” issue, and if it's a “when,” when? And I say this with the caveat that predictions are difficult, especially about the future.

In my mind, it's definitely a “when” question. I see no real strong reason why it would be an “if,” with AGI being completely impossible. And there have been many, many examples over the last 10 years of people saying, “Well, this is not possible with neural networks,” and then, 10 minutes later, it is proven to be possible. So that's a recurring theme, and that alone may not be sufficient to think that AGI is feasible and possible, but I'm pretty confident for a variety of reasons. On AGI, by the way, I think there's an excellent paper by Morris and others, “Levels of AGI for Operationalizing Progress on the Path to AGI,” and I think it's a very good paper for framing one's thinking about AGI.

And that goes back to one point I made earlier: at some point, you'll have systems that will be capable of quite a lot of things and can probably do anything that your average human can do, starting at least virtually, remotely, and eventually in the physical world. I think they'll be capable in that sense. Now, there's a difference between these systems being capable in an individual lab setting and them being actually deployed and used in industrial processes and commercial applications, in ways that are productive, add value, create profits, and so on, and I think there's a bit of a gap here. So I don't think we'll have a day where we wake up and say, “Oh, that's it, today we have AGI.” I think it'll be more of a blurry spectrum, but gradually it'll be harder and harder for people to deny that we have reached AGI, and as this stuff gets integrated into production systems, I think the effects on growth and the economy will speak for themselves.

As to when exactly: for the capabilities, at least, I would expect that in the next five years you could easily see a point where people could make a very confident claim that, yeah, we've got systems now that are AGI-level. They're generally capable, and they are pretty competent, or even expert-level, at least the 90th percentile of skilled adults, and then the challenge will be operationalizing that and integrating it into a lot of systems. But in my mind, it's definitely not an “if,” and I would say the next five to 10 years is the kind of relevant period I have in mind, at least. It could be longer, and I think the tech community has a tendency to sometimes over-index, particularly on the R&D side.

AI and national security (18:21)

Do you have any thoughts, and maybe you don't have any thoughts, about the notion that, as AGI perhaps seems closer, and maybe the geopolitical race intensifies, this becomes more of a national security issue, the government takes a greater role, maybe the government makes itself a not-so-silent partner with tech companies, and it really becomes almost like a Manhattan Project kind of deal to get there first? Leopold Aschenbrenner wrote this very, very long paper — is that an issue that you have any thoughts on? Is it something that you discuss, or does it seem just science-fictional to you?

Yeah, I do a lot of thinking on that, and I've read Leopold's report, and I think there are a lot of good things in there. I don't necessarily agree with everything. I think things like security are really critical, and thinking about things like alignment, and so on, is important. One thing I really agree with in Leopold's report, and that I'm glad he emphasized, was the need to secure and cement liberal democracy, the “the free world must prevail” kind of thing. I think that is indeed true, and people tend to underestimate the implications on that front. Now, what that looks like, what that means and requires in practice, is not fully clear to me yet. People talk about a Manhattan Project, but there are many other potential blueprints or ways to think about that. There could be just normal procurement partnerships; there could be different models for this. At some point, something like that could be defensible, but it's very hard to predict that in advance, given, particularly . . . well, how hard it is to predict anything with AI to start with. And secondly, there are loads of trade-offs with all these different options, and some might be a lot better than others, so certainly more work might be needed there. But, in principle, the idea doesn't seem completely crazy or science fiction to me.

Chatbot advice (20:15)

You recently posted on X that you were baffled at how many people don't use these language models or chatbots daily. I think a lot of people don't know what they would use them for. Do you have any recommendations for ways that people who are not in your line of work, who are not coders, can use them? Do you use them in ways that are applicable to how regular people might use them?

Yeah, I think so, and under the post I gave a few examples of how I use it. Now, admittedly, most of these aren't things just anyone would do, but I thought about this last weekend when I was seeing my parents and trying to get them to understand what Claude or Gemini is and how to think about it: what kinds of questions are worth asking and what kinds are not, and it's very hard to come up with a crisp way of sharing these intuitions. I think the first piece of advice I'd give is probably to just take one of these models and have a very long conversation with it about some topic. Try to poke holes, try to contradict it, and I think that starts giving you a few better intuitions about what it can do, as opposed to just treating it as some sort of question-and-answer, Oracle-type search engine, which I think is not the right use case.

That is probably the most unsatisfying way to look at it, just treating it as a better Google search engine. You mean really leaning into that sort of conversational, curious aspect, rather than saying, “Find me a link.” “Find me a link” isn't a great use.

Exactly, and people will often do that. They'll try one thing, they'll get some incorrect answer, or hallucination, or whatever, and then they'll say, “Oh, these things are not good, they're not accurate,” and they'll stop using it, and to me, that is just crazy. It is very fundamentally incurious, and I think there are ways of using these models and thinking about them that are very useful. So what have I done recently? I'm trying to think of an example . . .

I had some papers that I couldn't understand very well, and I would just ask it for better analogies and explanations, try to dig into certain concepts and ideas, and just play around with them until the insights and intuitions were easier for me to internalize and understand. And I think you could do that at different levels, and regular people also want to understand things, so that might potentially be an example. But the very first thing I would do is simply have long, protracted conversations to really get a sense of how far the model can go, and then, as you do that, you'll find things that are a bit more creative than, “Can you please rewrite this email for me? Can you find typos?” or “Can you fill in my tax report?” or something. I think one way a friend used it — and of course, there are obvious limitations to that, get a lawyer and everything — but he had a legal contract that someone sent him, and he couldn't afford a lawyer straight away, so he just said, “Can you help me find potential issues and errors in here? Here's who I am in this contract. Here's what I'm concerned about.” And it's a first starting point. It can be useful. It gives you interesting insights. It doesn't mean it replaces a lawyer straight away, but it is one potentially interesting way that everyday people could use these tools.

