⚠ The risk of preemptively tackling AI risk
We don't know what we don't know, and we shouldn't risk delaying the benefits of tech progress under the arrogant pretence of knowledge
✈ A quick note: I will be traveling through the middle of July, so I will be posting a bit less often than usual, and my posts may be a bit shorter than usual.
There’s a pro-regulation argument about artificial intelligence that goes like this: Comprehensively and strictly regulating AI today — generative AI and large language models, specifically — is somehow pro-innovation. How so? Because tough regulation today would supposedly prevent problems that could lead to harsh regulation tomorrow. One version of this theory was presented to me by New York Times columnist Ezra Klein when I appeared on his podcast back in May.
Here’s Klein:
A disagreement I have with the people who call themselves A.I. accelerationists, the people who are just like, let it rip, is, I think they’re the real decelerationists. I think if you let the Marc Andreessens of the world and so on in charge of A.I., that is a perfect recipe to get very aggressive, very early regulation. Because, one, terrible things are going to happen, but two, people are not going to trust them, whereas, in fact, that [Sam] Altman and Demis Hassabis and Dario Amodei and a bunch of the others seem very cautious and seem very concerned about what could go wrong, is almost paradoxically leading to less regulation. And I somewhat know this from reporting on these meetings they’re having with members of Congress. Because the members of Congress trust that they’re going to be careful and that they’re sort of harm-aware. Now, whether or not that proves to be true, I don’t know. But I do think that there’s a much more complicated relationship between wise regulation and the social tolerance for innovation and innovative risk than people sometimes give credit for.
I replied to Klein with skepticism. As I told him on the podcast, it was unclear to me why he was confident that we would get regulation right at such an early stage in GenAI’s progress.
Consider the National Environmental Policy Act: Shortly after its passage, significant problems with the law became apparent, including its impact on nuclear power expansion. Yet we failed to fix those problems a half-century ago, and we have kept failing ever since. Given this history, why should we expect better results now?
We shouldn’t. This precautionary, “C’mon, safety first!” approach suffers from plenty of defects. Let’s run through a few:
⏩ First, the AI Safetyist approach assumes we can accurately predict, and regulate against, future risks from a fast-evolving technology embedded in a complex ecosystem of universities, companies, local governments, and national entities. My AEI colleague Bronwyn Howell notes that the EU’s new regulations mostly focus on known risks and specific uses of AI, but they don’t address the unpredictable, surprising behaviors that can emerge from complex AI systems. A more thorough approach to AI governance would recognize the limitations of current rules and bring together many different groups (researchers, companies, and policymakers) to work on unforeseen challenges as they arise. As Dean Ball, a tech policy analyst at the Mercatus Center, told me: “I think if we could identify the law that would allow us to get all the good things and minimize all the harms, if we could just a priori say what that is, we would do it, but we don't know.”
⏩ Second, the AI Safetyist approach overlooks that safety and innovation in technology are intrinsically linked, not opposing forces. While new technologies can cause accidents, companies typically build safety into their design and engineering processes from the start. Market incentives push firms to prioritize safety because product success depends on it. Ball said, “An iPhone is an electrical product that could explode, and Apple doesn't need a regulator overseeing the electrical circuitry of the iPhone . . . Instead, that's just heavily integrated into Apple's own engineering process because good engineering is safe engineering.”
In the AI industry, particularly among companies building language models, firms already invest heavily in safety measures without any regulator requiring it. This suggests that in rapidly evolving fields like AI, market forces and internal engineering practices may be more effective at ensuring safety than external regulation. AEI economist Michael Strain adds:
It's important to recognize there's an enormous financial incentive for businesses to figure out how to use AI technology to defend against those threats. And I think we should have confidence that businesses will figure out ways to counter the harms of AI technology using AI technology.