✨ Getting AI policy right: A Quick Q&A with … economist Samuel Hammond
'We will need permitting reform, grid modernizations, and a broader supply-side liberalization to stay ahead in AI over the medium term'
My fellow pro-growth/progress/abundance Up Wingers in the USA and around the world:
AI policy in 2025 is a delicate balancing act: Regulation should simultaneously facilitate diffusion, mitigate potential risks, and keep the technology globally competitive. In these early days, while the technology is still young but rapidly accelerating, it's imperative that policymakers create the best possible ecology for a bright AI future. I asked Samuel Hammond a few quick questions about the state of play in AI policy, and what we can learn from the competition.
Hammond is chief economist at the Foundation for American Innovation, where his research centers on AI and emerging technology more broadly. He is also a senior fellow at the Niskanen Center, where he was formerly the director for social policy.
Agentic commerce and end-to-end AI-run corporations will spawn totally new institutional forms, just as the agricultural revolution gave rise to city states. The basic machinery of government will need to rapidly co-evolve or be supplanted.
1/ How can policymakers attempt to keep regulations in step with this rapidly evolving technology?
One possible answer is that we simply don’t. The US government is slow-moving under the best of circumstances, given protracted interagency processes, lengthy judicial reviews, and rule-makings that take multiple years by default. Even when an agency can move quickly in principle, deeper structural reforms are foreclosed by statute, making Congress the ultimate bottleneck.
That’s what animates my worry that transformative AI risks a broader institutional regime change, akin to how Uber and Lyft rapidly displaced regulated taxi commissions. If we see an explosion in AI-assisted drug discovery and engineering, for instance, the legacy drug approval process may simply collapse under a kind of Distributed Denial of Service attack.
I support efforts to accelerate clinical trials and inject AI tools into the FDA, but an AI co-pilot is no substitute for changing the fundamental process, which again requires Congress. In lieu of radical structural reform, we could see system failure and the emergence of strategies to end-run institutional bottlenecks, from Right to Try-style laws to innovation simply moving off-shore.
The FDA’s bottlenecks are just a microcosm of the sorts of dynamics I think the advent of artificial general intelligence will drive for the whole of government. Agentic commerce and end-to-end AI-run corporations will collapse transaction costs dramatically and spawn totally new institutional forms, just as the agricultural revolution gave rise to city states, or the printing press presaged the modern nation state. The basic machinery of government will need to rapidly co-evolve or be supplanted.
To date, my primary policy thesis for addressing the existential risks from AGI and superintelligence is intensive oversight of the frontier AI companies.
2/ How do you think about regulation of frontier models and existential risk?
I sometimes distinguish between two dimensions of AI acceleration: horizontal and vertical. Accelerating along the horizontal axis means accelerating the adoption and diffusion of AI models and applications throughout the real economy, while the vertical axis captures model scaling and emergent capabilities.
Policy needs to carefully distinguish between these two dimensions of acceleration, as reaping the benefits of AI will require driving adoption and diffusion, not just innovating at the frontier. Diffusing existing AI capabilities into every corner of the economy is also relatively low risk, while the sorts of capabilities that keep AI safety researchers up at night are largely still under development.
In particular, the most catastrophic AI threat models are largely downstream of model autonomy. That includes both speculative scenarios where AI goes rogue and we lose control, and the more conventional Chemical, Biological, Radiological, and Nuclear (CBRN)-style risks. It is one thing for an AI to give you a written recipe for a nerve agent; quite another if it can control chemistry equipment, directly or indirectly, and execute all the steps on your behalf. How many terror attacks are implicitly prevented by the static friction of human laziness? I don't necessarily want to find out.
The breakthroughs in reinforcement learning that gave rise to reasoning models and agents are now driving true exponential progress in model autonomy. METR’s research suggests the task-horizon of frontier models is doubling every four to seven months. If this trend holds, AIs that can today perform engineering tasks that take humans two hours to complete will soon be able to perform tasks that take a typical eight-hour work day, and only keep doubling in their agency from there.
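To make that trend concrete, here is a rough back-of-the-envelope sketch (not METR's methodology; the two-hour starting horizon and the four-to-seven-month doubling window are simply the figures cited above, plugged into a toy projection):

```python
# Back-of-the-envelope projection of AI task horizons under a fixed doubling period.
# Assumes a 2-hour task horizon today and a 4-7 month doubling time, per the trend
# described above; these inputs are illustrative, not METR's published model.

def projected_horizon_hours(start_hours: float, months_elapsed: float, doubling_months: float) -> float:
    """Task horizon after `months_elapsed` months, doubling every `doubling_months` months."""
    return start_hours * 2 ** (months_elapsed / doubling_months)

for doubling in (4, 7):
    for months in (0, 12, 24, 36):
        h = projected_horizon_hours(2.0, months, doubling)
        print(f"doubling={doubling}mo, t={months:2d}mo -> ~{h:,.0f} hours")

# With a 4-month doubling time, a 2-hour horizon passes an 8-hour workday in ~8 months;
# with a 7-month doubling time, in ~14 months.
```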
Nor do I discount the inherent risks from building superintelligence per se. It is worth remembering that this is the explicit goal of companies like OpenAI and Anthropic. Everything we’ve seen to date is in some sense a research preview of vastly more powerful capabilities still to come. While progress happens along a continuum, we are starting to see glimmers of recursive self-improvement through AI coding agents like OpenAI’s Codex and Anthropic’s Claude Code helping accelerate AI research and development. As we inch towards closing the R&D loop and letting AIs discover the breakthroughs needed to build their own successors, the leap in capabilities could become quite discontinuous, leaving us little time to react.
Whether or not books like If Anyone Builds It, Everyone Dies overstate the direness of the situation, building an artificial superintelligence is simply intrinsically dangerous. I’m fond of Jack Clark’s expression, “appropriate fear.” Imagine a system like DeepMind’s AlphaZero that has roughly double the chess rating of the current World Chess Champion but which can master any arbitrary domain. Even if we can sandbox superintelligence and steer its goals, unleashing millions of such beings into the world would be seriously destabilizing.
To date, my primary policy thesis for addressing the existential risks from AGI and superintelligence is intensive oversight of the frontier AI companies. It is imperative to national security that the US government have real-time knowledge of where capabilities stand, what new models are being trained, and what security measures are in place. The leading companies should be able to easily report incidents without liability concerns, and have a confidential means to share and collaborate on safety research without running afoul of antitrust law.
At minimum, robust oversight of frontier AI companies would give policymakers early warning of the sorts of capabilities they can expect to proliferate over time as secrets leak and training costs come down. Radical transparency is also a prerequisite if we ever need to intervene to stop a destabilizing system from being trained or deployed. I don't think this is a full solution, but it follows the simple heuristic that, under conditions of immense uncertainty, it is prudent to retain your option value.
Beyond oversight at the frontier, I suspect most other sources of catastrophic risk will be managed in a far more distributed fashion. So long as the cost of training compute continues to fall at an exponential rate, we should expect many worrying capabilities to eventually proliferate by default. This will create various new collective action problems and risk vectors that we will need to adapt to through institutional change, defensive forms of AI, and societal hardening. Rather than regulate for biorisk at the model level, for example, it may be more tractable to invest in preventative medicines and early detection systems while mandating that DNA/RNA synthesis companies screen their customers.
We will need permitting reform, grid modernizations, and a broader supply-side liberalization to stay ahead in AI over the medium term.
3/ Are we going to have enough energy to power our AI ambitions?
Unclear. On the one hand, China is out-building us on new energy generation by an astonishing amount—adding an incremental 300–400 gigawatts every year—while new power generation in the US has essentially flatlined for over a decade. Advanced nuclear is a great long-term solution but will take years to come online. Natural gas is a great bridge solution, allowing new data centers to come online quickly without straining the grid, but surging demand has created a four- to five-year waitlist for new gas turbines.
We will need permitting reform, grid modernizations, and a broader supply-side liberalization to stay ahead in AI over the medium term. By one estimate, AI data centers globally will need more power than all of California (117 GW) by as soon as 2028. By 2030, it's plausible that US data centers alone will add over 50 GW of new power demand, which is approaching five percent of our total generation capacity.
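As a quick sanity check on that "approaching five percent" figure (a back-of-the-envelope calculation assuming roughly 1,250 GW of total US generating capacity, a round number supplied here for illustration rather than one from the interview):

```python
# Rough share of US generation capacity represented by ~50 GW of new data center demand.
new_dc_demand_gw = 50       # projected new US data center demand by 2030, per the estimate above
us_capacity_gw = 1_250      # assumption: approximate total US generating capacity, for illustration

share = new_dc_demand_gw / us_capacity_gw
print(f"{share:.1%} of total capacity")  # ~4.0%, i.e. approaching five percent
```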
It’s possible that these energy constraints can be lessened if we get creative. My colleague Dean Ball, for example, has argued that we can unlock tens of gigawatts of new data center capacity with existing power generation through “demand response” techniques, like modulating data center workloads during periods of peak demand. Additionally, the seam along the Eastern and Western Interconnections happens to be in an ideal location for multi-gigawatt data center projects, and could allow projects to arbitrage energy across each grid.
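For a sense of how the demand-response idea works, here is a minimal, purely illustrative sketch: flexible data center workloads are deferred whenever regional grid demand nears its peak, freeing capacity without building new generation (all thresholds and load figures are hypothetical):

```python
# Toy demand-response rule: curtail flexible data center load when the grid
# approaches its peak. All numbers are hypothetical placeholders.

GRID_PEAK_GW = 100.0      # assumed regional peak capacity
CURTAIL_THRESHOLD = 0.95  # start shedding flexible load at 95% of peak

def datacenter_load(grid_demand_gw: float, firm_gw: float, flexible_gw: float) -> float:
    """Return the data center load to run now, deferring flexible work near peak demand."""
    if grid_demand_gw >= CURTAIL_THRESHOLD * GRID_PEAK_GW:
        return firm_gw                # near peak: defer training jobs, batch work, etc.
    return firm_gw + flexible_gw      # off-peak: run everything

print(datacenter_load(96.0, firm_gw=1.0, flexible_gw=2.0))  # 1.0 -> flexible load curtailed
print(datacenter_load(80.0, firm_gw=1.0, flexible_gw=2.0))  # 3.0 -> full load
```

The point of the technique is that only the flexible slice of the load needs to flex; the firm slice keeps serving latency-sensitive traffic while deferrable work waits for off-peak hours.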
These solutions may be sufficient to accommodate data center build-outs for the next five years or so, but that's just one part of the AI stack. If we hope to also power millions of autonomous EVs, automated factories, armies of humanoid robot nannies, and all the other cool tech AI will enable, we will need massive upgrades to the electrical grid. Meanwhile, if energy remains a constraint, our current lead over China in frontier models and hardware will quickly evaporate.
The biggest lesson from China’s rise is the importance of large-scale productive capacity.
4/ Is there anything we can learn or avoid in Chinese industrial policy?
There is much to learn from Chinese industrial policy, including from their mistakes. As Dan Wang emphasizes in his great new book, Breakneck, China is the engineering state par excellence. They are simply amazing at building lots and lots of stuff — roads, trains, bridges, manufacturing, and all manner of infrastructure. The US political class, in contrast, is dominated by lawyers whose primary output is litigation against such projects.
On the flip side, China’s engineering mindset can also go too far, leading to various forms of overproduction and moral catastrophes like Zero-COVID or the One Child Policy. We can nonetheless afford to move in China’s direction on the margin, whether by attracting more engineers to work in government, or by limiting judicial review and making it much easier to build.
The biggest lesson from China’s rise is the importance of large-scale productive capacity. If — God forbid — we ever go to war with each other, the winner won’t be the one with the best killer drone but the one that can produce the next best drone by the millions. For too long, we’ve discounted applied R&D and scaled-up production in favor of basic science, intellectual property, and marketing and design, as these are ostensibly higher value-added. In reality, much of innovation is driven by a “learning-by-doing” process that benefits from co-locating researchers with manufacturers.
China developed its productive capacity less through central planning than through a kind of economic gardening. China’s Special Economic Zones provide a lightly regulated, geographically concentrated area for building robust manufacturing ecosystems, which are complemented by coordinated investments in human capital, technology transfers, and low-cost inputs. Companies are then forced to compete aggressively, both against each other and in global markets. Export markets help sort the wheat from the chaff, accelerating creative destruction, and allowing the resources of the laggard companies to be recycled into scaling capital for the winners.
China engages in classical forms of central planning as well. But I see their main successes as reflecting a form of state-backed hyper-capitalism, not a rejection of capitalism or the market per se. That messes with the way we usually think about things in the West, as our political tradition tends to sharply distinguish between the public and private sectors.
. . . there are myriad areas where accelerating diffusion will require giving a greenlight to AI-powered applications through wholesale deregulation or the use of regulatory sandboxes to test new approaches on a smaller scale.
5/ What would be your top policy recommendations to the US right now to remain the AI leader?
Retaining our AI leadership will require the US to hold the line on chip and semiconductor manufacturing equipment (SME) export controls while investing in scalable enforcement mechanisms. As we get closer to powerful AGI-like systems, geopolitical power will be increasingly proxied by the distribution of AI computing resources. We should thus work to ensure the US maintains a decisive edge in AI hardware, and promote the diffusion of the US tech stack around the world, ex-China.
Yet limiting China’s access to the most advanced hardware just buys us time. As I mentioned earlier, reaping the benefits of AI will ultimately require leading on diffusion and adoption. It’s not enough to have the world’s best model sitting idle on a server somewhere.
To that end, there are myriad areas where accelerating diffusion will require giving a greenlight to AI-powered applications through wholesale deregulation or the use of regulatory sandboxes to test new approaches on a smaller scale. Even the most beneficial regulations tend to codify practices for a particular technological paradigm, market structure, or mode of production. So whether someone is "pro-" or "anti-" regulation in their politics, there's a case to be made for a comprehensive regulatory reset — a regulatory jubilee — to facilitate AI diffusion as a general purpose technology.
Imagine a future where AI-powered medical clinics are allowed to pop up and provide low-cost medical services in underserved parts of the country. These could easily be blocked in the short run by various forms of regulation, or at a minimum require licensed practitioners to be on site and sign off on diagnoses and prescriptions. But if AI surpasses doctors on every benchmark, can control basic medical equipment, and can't be easily jailbroken, why not allow the AI to write prescriptions directly? You could even have a roboticized pharmacy dispense the prescription in real time. These are the sorts of questions policymakers will soon have to grapple with for everything, everywhere, all at once.
So while some are worried about AI-specific regulations dampening the industry, I worry even more about how all the existing laws and regulations from the pre-AI era will bind on diffusion, especially in highly regulated sectors like finance, health, and education. Solving this will require policymakers at every level of government to get engaged.
On sale everywhere: The Conservative Futurist: How To Create the Sci-Fi World We Were Promised



