🤖 How should we regulate AI without strangling it?
A long-read conversation with technology policy analyst Dean Ball
Quote of the Issue
"The Founding Fathers had their eyes on the future; we have ours on the past, when not absolutely averted in shame. We tend to think less about where we are going than about where we have been, about simpler times, and about opportunities missed." - Michael Kammen, People of Paradox: An Inquiry Concerning the Origins of American Civilization
The Conservative Futurist: How To Create the Sci-Fi World We Were Promised
“With groundbreaking ideas and sharp analysis, Pethokoukis provides a detailed roadmap to a fantastic future filled with incredible progress and prosperity that is both optimistic and realistic.”
The Essay
There’s so much we don’t yet understand about AI, which makes policymaking tricky. We can, however, learn from our past mistakes. Highly regulating nuclear energy, housing, and the natural environment may have seemed like the “safe” choices at the time … only to lead to innovation bottlenecks and unforeseeable problems down the line. In the future, we might learn that taking the “safe” route on AI policy had dangerous repercussions. I sat down with Dean Ball to talk about smart AI policy — including existential risks, future AI capabilities, proactive vs reactive regulation, divergent state regulatory approaches, and the appropriate federal role in AI policy.
Ball is a research fellow at George Mason University’s Mercatus Center, where his research focuses on AI, emerging technologies, and government. He also runs his own Substack, Hyperdimensional. For more on Ball’s views on how legislators should approach artificial intelligence, check out his article, “What Good AI Policy Looks Like.”
James Pethokoukis: To what degree do concerns about existential risk continue to influence policymakers as they begin to think about regulation?
Dean Ball: I think less and less over time. I think when ChatGPT and other generative AI applications first came out in 2022, that was very prominent, and I think it's gone down over time as people have gotten more hands-on experience with the technology and as a broader diversity of voices has entered the conversation.
One of the interesting things about AI is that it seems to me to be — maybe with the exception of nuclear weapons — the first major technology that people have been worrying and fretting about since before it was really real. And that's actually part of the problem with the existential risk conversation. A lot of the people who theorized about that stuff, people like Eliezer Yudkowsky and the East Bay rationalist community where those concerns come from, were theorizing about these things far before the deep learning revolution of 2012. They were talking about this stuff in the early 2000s, and what they were really imagining were these abstract machines that existed in a realm of pure reason. One of Eliezer’s famous thought experiments imagines a system that has never experienced any aspect of the real world, sees three frames of an apple falling from a tree, and from that predicts not just gravity, but also relativity. That is nothing like the language models that we have now. … A lot of these existential risk concerns contemplated the idea of a system that was a pure reasoning engine without any human priors, necessarily. It would sort of deduce everything from first principles in a superintelligent way. Instead, it's getting all of our first principles because we're imbuing it with our values via all the text that humans have ever written.
There's not a clear technical path to the kinds of capabilities that the existential risk theorists have been articulating.
Do existential risk concerns, such as AI becoming uncontrollable and causing catastrophic harm, remain more relevant for the very latest "frontier" AI models? The worry, as I see it, is that even if current state-of-the-art systems seem safe, a cutting-edge model we don't fully understand could become self-aware, spread through the internet, and wreak havoc on critical infrastructure or something. If those concerns aren't valid even for frontier models, why is there still so much worry about them?
There's not a clear technical path to the kinds of capabilities that the existential risk theorists have been articulating. A lot of the characteristics that such a system would have are simply not characteristics of current language models. It's not obvious, just from looking at the technical architecture of those systems, how we would get from here to there on this approach. There might be other approaches that we take in the future that are more like creating something that has its own sense of agency and whatnot, but I don't really see that … that's not a risk that's predominant for me.
Now, are there other risks that models at the frontier could possibly present? The next generation of models will probably have agentic capabilities. So instead of you just asking ChatGPT, “Tell me about particle physics” or something, what you might do is say, “I want you to go research a paper,” the way you might ask a research assistant, and it will go out on the internet and it might fill out some forms and it might download PDFs and analyze them and sort of make decisions in an agentic way.
The core of that word is “agent,” right?
Agent, yes. You are the principal and it's the agent, just like that. I think they'll be able to act as assistants in that way. I think we'll take baby steps. I think that they're not going to be doing very sophisticated things at first, but I think we'll start to see — we've already seen some of that — I think we'll see more capabilities like that come online.
What are the dynamics when millions or billions of people have access to that kind of capability, to an AI agent that can actually go out on the internet and take action for you? Well, eventually the AI agents interact with one another, and there's all kinds of strange dynamics that you can think of emerging from that. I wouldn't categorize any of them as existential risks, necessarily, but they are things that I think are worth considering. I don't really know how much from a policy perspective we can do about that right now, because I think it's just sort of hard to say what would happen. But I think that there will be all kinds of governance mechanisms that come into place as that takes off. But no, from an existential risk perspective, I just don't see a clear technical path.
Are there technologists who believe there is a realistic path by which AI could pose an existential threat, and who are therefore cited by those pushing for urgent AI regulation?
Yeah, Geoffrey Hinton would be a great example, one of the founders of the deep learning revolution. Yoshua Bengio is another example of that. They do. They have that opinion. A lot of other preeminent figures in the scientific community do not. I think frankly, the scientific consensus is probably more in favor of the people who don't really see that path coming. At the technology level, certainly companies like OpenAI, Anthropic, the leaders of those companies have said things in the past about existential risk. I think they take that seriously. I think they ought to.
I do think to a certain extent there was a bid for regulation that would sort of entrench their own positions.
Should we give special credence to technologists who predict that future AI systems, which don't yet exist, could pose an existential risk? While their speculation may be more informed than the average person's, they are still ultimately making predictions about the capabilities of technologies that have not been developed.
It was roughly a year ago now that Sam Altman and Dario Amodei from Anthropic and a couple other people testified before the Senate, and they played up some of these concerns. I think you hear them talking about it less now. I do think to a certain extent there was a bid for regulation that would sort of entrench their own positions. It's very unclear what the long-term business model is going to look like for these model makers. Will it be a highly competitive business where all the margin just gets competed away? That could happen. That happens a lot in software.
Are the AI companies pushing for regulation doing so purely out of self-interest, or is there also genuine concern motivating them? The theory is that they're "talking their own book"— influencing regulation to benefit themselves and disadvantage smaller competitors who can't handle the compliance costs. While regulatory capture like this is common, do you sense that there's also some legitimate worry behind their push for AI regulation, rather than it being driven entirely by anti-competitive strategizing?