🤖 A Quick Q&A with … AI policy analyst Jennifer Huddleston
'There's a possibility that we could see something that's very good AI policy.'
Quote of the Issue
“Society is most likely to oppose a new technology if it perceives that the risks are likely to occur in the short run and the benefits will only accrue in the long run. Technological tensions are often heightened by perceptions that the benefits of innovation will accrue only to small sections of society, while the risks will be more widely distributed.” - Calestous Juma, Innovation and Its Enemies: Why People Resist New Technologies
I have a book out: The Conservative Futurist: How To Create the Sci-Fi World We Were Promised is currently available pretty much everywhere. I’m very excited about it! Let’s gooooo! ⏩🆙↗⤴📈
Q&A
In the wake of President Joe Biden’s executive order on artificial intelligence, it’s important to remember the global policy ecosystem in which the US exists. In my conversation with Jennifer Huddleston, we address the surprising influence of European regulation on US policy, the pressure applied by the tech race with China, and the power of American policymakers at a pivotal moment in technological innovation. Huddleston is a technology policy research fellow at the Cato Institute. She recently wrote a great blog post on AI regulation, “What Might Good AI Policy Look Like? Four Principles for a Light Touch Approach to Artificial Intelligence.”
1/ What does good AI policy look like? It's an evolving technology. Do we even know what that policy might look like?
It's hard to know because AI is a general-purpose technology in a way we haven't really seen since the internet, in that we've seen this huge disruption in so many different areas at once. So when we're talking about “good AI policy,” that's really hard because it can be so broad. What I often suggest is that we take a step back and, first off, ask ourselves, “What is it we're actually concerned about? What is it that we actually think there needs to be policy around?” Probably two things:
The first is, how do we maximize all of the benefits that can come with AI, these wonderful applications? And secondly, if there are harms that we're specifically concerned about, are there places where we need to see those addressed? But rather than jumping to the idea that we need new laws and new policies, in many cases on the harm side we already have laws that address the things bad actors may do. Whether it's concerns about discrimination or concerns about fraud, there are already laws on the books that didn't disappear just because AI gained popularity. We want to prevent policy that would take away those benefits without being clearly addressed to harms.
I think there are some wonderful opportunities to get this right. When we look at what allowed the US to be a leader in innovation during the internet age, we're seeing those same opportunities arise around AI. Part of the idea there is that we don't know what all the potential applications are, so when we're looking at this from a policy side, we want to be open to that uncertainty and pursue policy interventions only when absolutely necessary.
2/ When we think about AI, the internet does not seem to be the example that I'm seeing folks in Washington talk about. What I'm seeing is, “Let's think about social media and these platforms and our missed opportunity to regulate these early.” That seems to be what [policymakers] are hearkening back to. Why is that right or wrong as the model for AI regulation?