🔁 The negative feedback loop that could ‘doom’ AI
'The prevailing regulatory and legal responses to generative AI will limit or even negate its benefits'
Quote of the Issue
“The only way to discover the limits of the possible is to go beyond them into the impossible.” - Arthur C. Clarke
The Conservative Futurist: How To Create the Sci-Fi World We Were Promised
“With groundbreaking ideas and sharp analysis, Pethokoukis provides a detailed roadmap to a fantastic future filled with incredible progress and prosperity that is both optimistic and realistic.”
The Essay
🔁 The negative feedback loop that could ‘doom’ AI
It’s easy to find examples of technological progress and economic growth being stifled by regulation, from environmental reviews stalling clean-energy transmission projects … to European data privacy rules suppressing venture capital investment … to NIMBY housing rules restricting density in some of America’s highest productivity regions. (Oh, and by the way, despite a supposed need for vast amounts of abundant clean energy, there are no reactors currently under construction in the US.)
Even worse: some of these laws go back decades with little to no effort to improve them even after their downsides became obvious. Take the National Environmental Policy Act, the federal environmental review law that took effect in 1970. Although there have been recent efforts to reform NEPA, by both the Trump and Biden administrations, it was obvious almost from the very beginning that the law would be serious sand in the gears of the American economic engine. As I write in The Conservative Futurist:
One of the earliest NEPA controversies was the 1973 discovery of the snail darter, an endangered species, during construction of the Tellico Dam on the Little Tennessee River. Construction had begun in 1967, before NEPA became law. But due to NEPA delays, the dam was not completed until 1979. The absurdist nature of the dispute—a big dam being held up by this little creature—made it big news at the time. NEPA also led to big delays in building the 1970s Trans-Alaska Pipeline, which led Herman Kahn to complain that “it is difficult to find any proposed project related to the important area of new energy supplies that has not been so affected, [including] nuclear power, thermal electric power, transmission lines, pipelines, refineries, petroleum and natural gas, and even geothermal power.”
Forgetting the lessons of history
Still, here we are, a half-century later, only nibbling at NEPA’s edges. So, there’s a history of regulation causing negative unintended consequences and a record showing our inability to reform these rules. What’s more, we have a powerful (and not-so-distant) example of a fantastic outcome from a light regulatory hand on a new technology: Washington adopting a largely hands-off approach to regulating the internet in the 1990s. As a result of that decision, “America does not have a Federal Computer Commission for computing or the internet but instead relies on the wide variety of laws, regulations, and agencies that existed long before digital technologies came along,” explains tech policy analyst Adam Thierer.
Despite all that, there seems to be unwarranted confidence that today’s effort to regulate fast-evolving AI will be far more successful. Indeed, as Eric Goldman, a Santa Clara University law professor, notes in the essay “Generative AI is Doomed,” regulators today are intervening early and aggressively to regulate generative AI:
In the mid-1990s, regulators could not anticipate or predict all of the Internet’s uses that have emerged over the last three decades — or how those developments have benefited society. Had regulators hard-coded their limited and myopic 1990s conceptions of the Internet into law, the Internet never could have achieved those outcomes, and I think the world would be poorer for it. But mid-1990s regulators frequently admitted their myopia and unusually chose regulatory forbearance. Generative AI will not get a similar reception from regulators. Regulators are intervening now, acting on their unenlightened 2020s conceptions of what Generative AI does. Because we can’t anticipate what Generative AI is capable of and how new innovative uses will emerge over time, the interventions taking place today will unavoidably restrict Generative AI’s potential upside.
Goldman offers four reasonable hypotheses for this stark, ‘90s-versus-‘20s contrast in regulatory responses:
Decades of dystopian depictions of AI in media have conditioned the public to fear it, while the Internet debuted without such negative baggage;
What Goldman calls the “Generative AI Epiphany” occurred during an era of "techlash" and techno-pessimism, in contrast to the techno-optimism of the ‘90s;
In today's hyperpartisan environment, any new technology for publishing content will face immediate accusations of political bias from all sides;
GenAI is dominated by large tech incumbents who may actually favor regulation to cement their position, while the early Internet lacked entrenched players.
Check, check, check, and check. It’s a persuasive argument that GenAI is in a precarious political and policy position. This, especially, is a really good point from Goldman: The early internet benefited from Section 230, which still shields websites from liability for user-generated content, and from Supreme Court rulings that granted the internet strong First Amendment protections. But there’s no equivalent law for GenAI that would serve as a legal shield and an incentive against overregulation, particularly on the state level. As a result, Goldman thinks GenAI will likely face a wide range of regulations, many likely ill-conceived or politically motivated:
First, ignorant regulations. Regulators will pass laws that misunderstand the technology or are driven by moral panics instead of the facts.

Second, censorial regulations. Without strong First Amendment protections for Generative AI, regulators will seek to control and censor outputs to favor their preferred narratives. We can preview this process from recent state efforts to regulate the Internet. Despite the First Amendment and Section 230, regulators nevertheless are actively seeking to dictate every aspect of Internet services’ editorial discretion and operations. Those efforts might fail in court. However, if Generative AI never receives strong Constitutional protection, regulators will embrace the most invasive and censorial approaches.

Third, partisan regulations. One particularly pernicious form of censorship would be to steer Generative AI outputs for partisan motivations. Outside of the Generative AI context, we’re already seeing widespread regulatory efforts to control public discussions on partisanized topics, such as vaccines, transgender issues, and abortion. All of those culture wars will hit Generative AI hard, especially if there’s only a weak Constitutional shield.
Here comes the doom loop
Which brings us to what “doom” looks like to Goldman: pretty much what economic history would teach us to expect, it turns out. Goldman sees the GenAI sector consolidating into a few large players due to high regulatory compliance costs. And that likely means less innovation and dynamism from less competition, along with higher costs for consumers from less choice. And so it goes. Goldman: “The incumbents will have so much power that regulators will feel pressure to keep intervening. This creates a negative regulatory feedback loop. The increased interventions raise costs, further consolidating power into a smaller number of players, which necessitates more regulatory interventions.”
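Goldman’s loop is simple enough to sketch as a toy dynamical system. The model below is purely illustrative, every parameter and functional form is my own assumption rather than anything from Goldman’s essay, but it shows how the three steps he describes (intervention raises compliance costs, costs push out smaller firms, concentration invites further intervention) compound over time:

```python
# Toy sketch of the "negative regulatory feedback loop":
# interventions raise compliance costs, higher costs push smaller firms
# out of the market, and a more concentrated market draws yet more
# regulatory intervention. All numbers are illustrative assumptions.

def simulate_doom_loop(steps=5, firms=20, interventions=1):
    history = []
    for _ in range(steps):
        compliance_cost = 1.0 * interventions          # cost scales with rule count
        firms = max(2, int(firms - compliance_cost))   # marginal firms exit
        concentration = 1 / firms                      # crude concentration proxy
        interventions += round(10 * concentration)     # concentration invites rules
        history.append((firms, interventions))
    return history

for firms, interventions in simulate_doom_loop():
    print(f"firms={firms:2d}  interventions={interventions}")
```

Run with these (hypothetical) parameters, the firm count falls every step while the intervention count rises, and each quantity accelerates the other, which is exactly the self-reinforcing character of the loop.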
Interregnum: In The Conservative Futurist, I describe a sort of negative feedback loop that syncs nicely with Goldman’s. I suggest that while America's popular art and culture have long shown an interest in the future, the optimistic visions of the 1960s — as seen in the works of Isaac Asimov, Arthur C. Clarke, and various TV shows and films such as Star Trek and 2001: A Space Odyssey — were gradually overshadowed by more cynical and dystopian portrayals such as Planet of the Apes and Soylent Green. At the same time, the US economy suffered a Great Downshift in productivity growth. The result: a self-reinforcing doom loop or “idea trap” in which bad ideas and bad stories lead to bad policy, bad policy leads to bad growth, and bad growth cements bad ideas and encourages more bad stories.
Goldman concludes:
… I have no good ideas of how we can achieve a better outcome. Calling more attention to the problem is a start, but it won’t move the needle against the decades-long socialization to fear AI and how incumbents will coopt regulators to erect regulatory barriers. In a hypothetical timeline, with a different Overton Window, Congress might enact statutory immunities for Generative AI analogous to Section 230. This would delay the regulatory tsunami and preserve industry dynamism longer. Unfortunately, in the timeline we occupy, the idea that regulators today would take any affirmative step to shield Generative AI is ivory-tower fantasy.
Perhaps our best hope is that the speed of AI progress will wildly outpace the speed of regulatory advancement. Faster, please!
Micro Reads
▶ Business/ Economics
Generative AI is Doomed - SSRN
Microsoft CEO Pledges $2.2 Billion in Latest Asian AI Investment - Bberg
Microsoft’s OpenAI investment was triggered by Google fears, emails reveal - The Verge
Should AI stay or should AI go: The promises and perils of AI for productivity and growth - Vox EU
Sam Altman says helpful agents are poised to become AI's killer function - MIT
The Unsexy Future of Generative AI Enterprise Apps - Wired
Immigration is surging with big economic consequences - Economist
Immigration’s Effect on US Wages and Employment Redux - NBER
Generative AI and the Future of Work: Augmentation or Automation? - SSRN
People Worry That AI Will Replace Workers. But It Could Make Some More Productive - UToronto
▶ Policy
Strongest U.S. Challenge to Big Tech’s Power Nears Climax in Google Trial - NYT
‘Europe is falling behind’ says Swedish unicorn Einride — as it ramps up in the Gulf - Sifted
What Makes a Society More Resilient? Frequent Hardship. - NYT
We risk a lost decade for the world’s poor - FT
Fears of destructive protectionism are overdone - FT
▶ AI/Digital
The Last Stock Photographers Await Their Fate Under Generative AI - WSJ
How good is OpenAI’s Sora video model — and will it transform jobs? - FT
Apple targets Google staff to build artificial intelligence team - FT
How to Actually Implement a Policy - Statecraft
The AI-Generated Population Is Here, and They’re Ready to Work - WSJ
These Models Gave Up Photoshoots to Sell Their AI Likenesses - WSJ
Generative AI Usage and Academic Performance - arXiv
AI Can’t Reject Your No Good, Very Bad Idea - Bberg Opinion
Better & Faster Large Language Models via Multi-token Prediction - arXiv
AI-driven race cars test limits of autonomous driverless technology - NS
Hey, A.I. Let’s Talk - NYT
Amazon Gets More Fuel for AI Race - WSJ
AI Startup Anthropic Debuts Claude Chatbot as an iPhone App - Bberg
Amazon’s Top-to-Bottom Spending on AI Is Paying Off - Bberg Opinion
▶ Biotech/Health
mRNA Cancer Vaccine Reprograms Immune System to Tackle Glioblastoma - Precision Medicine
Can biotech startups upstage Eli Lilly and Novo Nordisk? - Economist
Experimental antibody drug prevents and even reverses diabetic onset - New Atlas
▶ Clean Energy
A Massive U.S. Nuclear Plant Is Finally Complete. It Might Be the Last of Its Kind. - WSJ
Is climate change accelerating after a record year of heat? - NS
Carbon emissions are dropping—fast—in Europe - Economist
Carbon-negative cement can be made with a mineral that helps catch CO2 - NS
▶ Space/Transportation
New space company seeks to solve orbital mobility with high delta-v spacecraft - Ars
Let’s Look Back at Boeing’s 10-Year Struggle to Launch Humans on Starliner - Gizmodo
Flying taxi start-up Volocopter in crunch talks with investors - FT
Elon Musk vs. Jeff Bezos Is America’s New Moon Race - Bberg
▶ Up Wing/Down Wing
Kill the sun! How wild thought experiments drive scientific discovery - NS
Drink the Kool-Aid all you want, but don’t call AI an existential threat - Bulletin of the Atomic Scientists
How Fix Cultural Drift? - Overcoming Bias
Killer Asteroid Hunters Spot 27,500 Overlooked Space Rocks - NYT
▶ Substacks/Newsletters
Visiting the world’s most expensive nuclear station - Notes on Growth
What Good AI Policy Looks Like - Hyperdimensional
Age of Invention: The Second Soul, Part II - Age of Invention