Washington’s initial thinking about AI regulation has evolved from a knee-jerk fear response to a more nuanced appreciation of its capabilities and potential risks. Today on Faster, Please! — The Podcast, I talk with technology policy expert Neil Chilson about national competition, defense, and federal vs. state regulation in this brave new world of artificial intelligence.
Chilson is the head of AI policy at the Abundance Institute. He is a lawyer, computer scientist, and former chief technologist at the Federal Trade Commission. He is also the author of “Getting Out of Control: Emergent Leadership in a Complex World.”
In This Episode
The AI risk-benefit assessment (1:18)
AI under the new Trump Administration (6:31)
An AGI Manhattan Project (12:18)
State-level overregulation (15:17)
Potential impact on immigration (21:15)
AI companies as national champions (23:00)
Below is a lightly edited transcript of our conversation.
The AI risk-benefit assessment (1:18)
Pethokoukis: We're going to talk a bit about AI regulation, the future of regulation, so let me start with this: Last summer, the Biden administration put out a big executive order on AI. I assume the Trump administration will repeal that and do their own thing. Any idea what that thing will be?
We have a lead on the tech, we have the best companies in the world. I think a Trump administration is really going to amp up that rhetoric, and I would expect the executive order to reflect the need to keep the US in the lead on AI technology.
Chilson: Repealing the Biden executive order is actually part of the GOP platform, which does not say a lot about AI, but it does say that it's definitely going to get rid of the Biden executive order. I think that's the first order of business. As for the repeal and replace process . . . the previous Trump administration actually had a couple of executive orders on AI, and they were very big-picture. They were not nearly as pro-regulatory as the Biden executive order, and they saw a lot of the potential.
I'd expect a shift back towards a vision of AI as a force for good, and I'd expect a shift towards the international dynamics here, that we need to keep ahead of China in AI. We have a lead on the tech, we have the best companies in the world. I think a Trump administration is really going to amp up that rhetoric, and I would expect the executive order to reflect the need to keep the US in the lead on AI technology.
That emphasis differs from the Biden emphasis in what way?
The Biden emphasis, when you read the executive order, it has some nice language up top about how this is a great new technology, it's very powerful, but overwhelmingly the Biden executive order is directed at the risks of AI and, in particular, not existential risk, but more the traditional risks that academics have raised about the internet for a long time: risks of bias, or risks to privacy, or risks to safety, or deepfakes. And to be honest, there are risks to all of these technologies, but the Biden executive order really pounded that home; the emphasis was very much on what are the problems that this tech could cause and what do we as the federal government need to do to get in here and make sure it's safe for everybody?
I would expect that to be a big change. I don't see, especially on the bias front, a Trump administration emphasizing that as a primary thing that the federal government needs to fix about AI. In fact, with people like Elon Musk having the ear of the president, I would expect it maybe to go in the opposite direction: that these ideas around bias are inflated, that these risks aren't really real, and, to the extent that they are, that it's no business of the federal government to step in and tell companies how to bias or de-bias their products.
One thing that sort of confuses me on the Elon Musk angle is that he seemed, at least in the past, very concerned about these somewhat science-fictional existential risks of AI. My concern is that we'll get that version of Musk again talking to the White House, and maybe he says, “I'm not worried about bias, but I'm still worried about it killing us all.” Is there any concern that that theme, which I think has faded a little bit from the public conversation (maybe I'm wrong), will reemerge?
I agree with you that I think that theme has faded. The early Senate hearings were very much in that vein; they were about the existential risk, and some of that was the people who were up there talking. This is something that's been on the minds of some of the leaders at the cutting edge of the tech space, and it's part of the reason why they got into it. There's always been a tension there. There is some sort of dynamic here where they're like, “This stuff is super dangerous and super powerful, so I need to be the one creating it and controlling it.” I think Musk still kind of falls in that bucket, so I share a little bit of that concern, but I think you're right that Congress has said, “Oh, those things seem really farfetched. That's not how we're going to focus our time.” I would expect that to continue even with a Musk-influenced administration.
I actually don't think that there is necessarily a big tension between that and a pushback against the sort of red-tape regulatory approach to AI that was kind of the more traditional pessimistic, precautionary approach to technology generally. I think Musk is a guy who hates red tape. I think he's seen it in his own businesses, how it's slowed down launches of all sorts. I think you can hate red tape and be worried about this existential risk. They're not necessarily in tension, but it'll be interesting to see how those play out, how Musk influences the policy of the Trump administration on AI.
AI under the new Trump Administration (6:31)
One issue that seems to come up over and over again is the differing opinions among technologists and venture capitalists about open source. How does that play out heading into a Trump administration? When I listen to the Andreessen Horowitz podcast, those guys seem very concerned.
They're going to get this software. They're going to develop it themselves. We can't out-China China. We should lean into what we're really good at, and that is a dynamic software-development environment, of which open source is a key component.
So there are a lot of disagreements about how open source plays out. Open source, it should be pointed out first, is a core technology across everything that people who develop software use. Most websites run on open source software. Most development tools have a huge open source component, and one of the best ways to develop and test technology is by sharing it with people and having people build on it.
I do think it is a really important technology in the AI space. We've seen that already: people are building smaller models and doing new things in open source that cost a lot of money to do in the first instance, maybe in a closed-source setting.
The concern that people raise, especially in the national security space or around national competition, is that this sort of exposes our best research to other countries. I think there are a couple of responses to that.
The first one is that closed source is no guarantee that those people don't have that technology as well. In fact, most of these models fit on a thumb drive. Most of these AI labs are not run like nuclear facilities, and it's much easier to smuggle a thumb drive out than it is to smuggle a gram of plutonium or something like that. They're going to get this software. They're going to develop it themselves. We can't out-China China. We should lean into what we're really good at, and that is a dynamic software-development environment, of which open source is a key component.
It also offers, in many ways, an alternative to centralized sources of artificial intelligence models, which can offer a bunch of user-interface-based benefits. They're just easier to use. It's much easier to log into OpenAI and use ChatGPT than it is to download and build your own model, but it is really nice, as a competitive gap filler, to have thousands and thousands of other models that might do something specific, or have a specific orientation, which you can train on your own. And those exist because of the open source ecosystem. So I think it solves a lot of problems, probably a lot more than it creates.
So what would you expect — let's focus on the federal level — for this Congress, for the Trump administration, to do other than broadly affirm that we love AI and hope it continues? Will there be any sort of regulatory rule, any sort of guidance, that would in any way constrain or direct this technology? Maybe it's in the area of the frontier models, I don't know.
I think we're likely to see a lot of action at the use level: What are the various uses of various applications and how does AI change that? So in transportation and healthcare . . . this is a general purpose technology, and so it's going to be deployed in lots of spaces, and a lot of these spaces already have a lot of regulatory frameworks in place, and so I think we'll see lots of agencies looking to see, “Hey, this new technology, does it really change anything about how we regulate medical devices? If it does, how do we need to accommodate that? What are the unique risks? What are the unique opportunities that maybe the current framework doesn't really allow for?”
I think we'll see a lot of that. I think, once you get up to the abstract model level, it's much harder to figure out both what problem we're trying to solve at the model level and whether we have the capability to solve it at the model level. If we're worried about people developing bioweapons with this technology, is making sure the model doesn't allow that useful? Is it even possible? Or should we focus that attention instead on making sure people can't secure the components that they need to execute a biohazard? Would that be a more productive place? I don't see a lot of action, honestly, at the model level.
Maybe there'll be some reporting requirements or training requirements. The executive order had those, although it relied on something called the Defense Production Act, and I think how they used it was probably unconstitutional. But that's going to go away. If that gets filled in by Congress with some sort of reporting regime, maybe that's possible, but Congress doesn't seem to be able to get those types of really high-level tech regulations across the line. They haven't done it with privacy legislation for a long time, and everybody seems to think that would be a good idea.
I think we'll continue to see efforts at the agency level. One thing Congress might do is spend some money in this space, so maybe there will be some new investment, or maybe the national laboratories will get some money to do additional AI research. That has its own challenges, but most of them are financial challenges; they're not so much about whether or not it's going to impede the industry. So that's how I think it'll likely play out at the federal level.
An AGI Manhattan Project (12:18)