✋ The risk of slowing down AI progress
There are massive opportunity costs that must be considered
Is ChatGPT like a nuclear weapon or deadly pathogen such as COVID-19? To Vox writer Sigal Samuel, both provide constructive analogies for thinking about generative AI, as argued in her new piece, “The case for slowing down AI: Pumping the brakes on artificial intelligence could be the best thing we ever do for humanity.”
What’s her case for slowing down AI progress instead of racing to develop more advanced and powerful AI systems? It boils down to this: It might kill us all. From Samuel’s piece:
What if researchers succeed in creating AI that matches or surpasses human capabilities not just in one domain, like playing strategy games, but in many domains? What if that system proved dangerous to us, not because it actively wants to wipe out humanity but just because it’s pursuing goals in ways that aren’t aligned with our values? That system, some experts fear, would be a doom machine — one literally of our own making. … Imagine that we develop a super-smart AI system. We program it to solve some impossibly difficult problem — say, calculating the number of atoms in the universe. It might realize that it can do a better job if it gains access to all the computer power on Earth. So it releases a weapon of mass destruction to wipe us all out, like a perfectly engineered virus that kills everyone but leaves infrastructure intact. Now it’s free to use all the computer power! In this Midas-like scenario, we get exactly what we asked for — the number of atoms in the universe, rigorously calculated — but obviously not what we wanted.
And if AI indeed poses some sort of existential threat to humanity's survival, or at least modern civilization, Samuel concludes we should “slow things down” and “flatten the curve of AI progress.” She notes that AI research has been moving from academia to industry due to the need for a lot of computing power and top technical talent. That’s supposedly a problem since businesses have a profit motive to accelerate progress. She notes: “By one estimate, the size of the generative AI market alone could pass $100 billion by the end of the decade — and Silicon Valley is only too aware of the first-mover advantage on new technology.”
One solution, according to Samuel, would be to give more resources to academic researchers. Another way to shift incentives is to stigmatize certain types of AI work. Companies care about their reputations, which affect their bottom line, so creating a public consensus that some AI work is unhelpful or unhelpfully fast could change companies’ decisions. Samuel also recommends exploring regulation that would change incentives and altering the publishing system to reduce research dissemination in some cases.
Then there’s the Tech Cold War with China: “Maybe you think the US would be foolish to slow down AI progress because that could mean losing an arms race with China.” Perhaps the US could strike an arms control agreement with Beijing, just as it once did with Soviet Russia over nuclear weapons. But I’m not sure even Samuel thinks China is likely to downshift its AI ambitions. She recounts a meeting with Microsoft executives after the Bing AI launch:
I was told we can’t afford to because we’re in a two-horse race between the US and China. “The first question people in the US should ask is, if the US slows down, do we believe China will slow down as well?” the top Microsoft executive said. “I don’t believe for a moment that the institutions we’re competing with in China will slow down simply because we decided we’d like to move more slowly. This should be looked at much in the way that the competition with Russia was looked at” during the Cold War. … For those who are pessimistic that coordination or diplomacy with China can get it to slow down voluntarily, there is another possibility: forcing it to slow down by, for example, imposing export controls on chips that are key to more advanced AI tools. The Biden administration has recently shown interest in trying to hold China back from advanced AI in exactly this way. This strategy, though, may make progress on coordination or diplomacy harder.
The China piece of this may be the most important. No American policymaker wants to be in the position of President Eisenhower when the Soviets successfully launched the Sputnik satellite in 1957. And such a technological surprise would be far more significant if it involved China not only taking a clear AI lead but also developing the first human-level artificial general intelligence. As former Google CEO Eric Schmidt recently wrote in Foreign Affairs:
Even more powerful than today’s artificial intelligence is a more comprehensive technology—for now, given current computing power, still hypothetical—called “artificial general intelligence,” or AGI. Whereas traditional AI is designed to solve a discrete problem, AGI should be able to perform any mental task a human can and more. Imagine an AI system that could answer seemingly intractable questions, such as the best way to teach a million children English or to treat a case of Alzheimer’s disease. The advent of AGI remains years, perhaps even decades, away, but whichever country develops the technology first will have a massive advantage, since it could then use AGI to develop ever more advanced versions of AGI, gaining an edge in all other domains of science and technology in the process. A breakthrough in this field could usher in an era of predominance not unlike the short period of nuclear superiority the United States enjoyed in the late 1940s.
Then there’s this: Samuel writes that the fight against “genetic modification of foods” may provide a model or at least inspiration for what she’s trying to accomplish: the need to “balance substantial potential benefits and economic value with very real risk.” I find that worrisome.
Here are two ways of thinking about how society should deal with technological progress, especially those advances that look like big leaps forward. One approach is to employ the Precautionary Principle, which the European Commission (of course) defines this way: “Where there is uncertainty as to the existence or extent of risks to human health, the institutions may take protective measures without having to wait until the reality and seriousness of those risks become fully apparent.”
The de facto withdrawal of societal license from nuclear power is one example of the better-safe-than-sorry, guilty-until-proven-innocent Precautionary Principle. Another is the multi-decade resistance to Golden Rice, a type of rice developed by German researchers in the early 2000s that was genetically modified to produce beta-carotene, a substance that our body can convert into vitamin A, in order to prevent blindness and death in underdeveloped countries. This from Golden Rice: The Imperiled Birth of a GMO Superfood on how the Precautionary Principle affected regulation of the rice:
Those regulations, which cover plant breeding, experimentation, and field trials, among other things, are so oppressively burdensome that they make compliance inordinately time-consuming and expensive. Such regulations exist because of irrational fears of GMOs, ignorance of the science involved, and overzealous adherence to the precautionary principle. Ingo Potrykus, one of the co-inventors of Golden Rice, has estimated that compliance with government regulations on GMOs caused a delay of up to ten years in the development of his final product.
Ironically, in view of all the good that Golden Rice could have been doing in ameliorating vitamin A deficiency, blindness, and death during those ten years, it was precisely the government agencies that were supposed to protect people’s health that turned out to be the major impediments to faster development of this life-saving and sight-saving superfood. As it was, countless women and children died or went blind in those intervening years as a result of government-imposed regulatory delays. While that is not a 'crime against humanity,' it is nevertheless a modern tragedy.
(For a deeper dive into Golden Rice and other Precautionary Principle examples, check out “How Many Lives Are Lost Due to the Precautionary Principle?” by Adam Thierer of the Mercatus Center, which is where the above quote comes from.)
A more pro-progress approach to invention and innovation is the Proactionary Principle. It views risk first as an opportunity for learning and improvement, rather than a threat to be avoided or minimized. If you’re too safe, you might be sorry! Here’s one definition: “Encourage innovation that is bold and proactive; manage innovation for maximum human benefit; think about innovation comprehensively, objectively, and with balance.” The Proactionary Principle keeps the concept of opportunity cost — the cost involved in any decision consists of the sacrifices of alternatives required by that decision — front of mind in decision-making.
An example: The Human Genome Project was a massive scientific endeavor that aimed to map and sequence the entire human DNA. It was launched in 1990 and completed in 2003, despite many ethical and social concerns about the implications of such knowledge for human dignity, privacy, identity, health, and diversity. The project was driven by the Proactionary Principle of advancing human understanding and potential through genetic research.
One of the operating assumptions of this newsletter is that the U.S. and other advanced economies have experienced too many public policy decisions driven by the Precautionary rather than Proactionary Principle. And it now looks like we may see this conflict play out in real time thanks to the rise of generative AI. But think about the opportunity costs of delay. This is about a lot more than the ability to quickly summarize academic papers and create cool art with a short prompt. From a new essay by Microsoft co-founder Bill Gates:
The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone. It will change the way people work, learn, travel, get health care, and communicate with each other. Entire industries will reorient around it. Businesses will distinguish themselves by how well they use it. … AIs will dramatically accelerate the rate of medical breakthroughs. The amount of data in biology is very large, and it’s hard for humans to keep track of all the ways that complex biological systems work. There is already software that can look at this data, infer what the pathways are, search for targets on pathogens, and design drugs accordingly. Some companies are working on cancer drugs that were developed this way.
America is already very good at slowing down progress, as we’ve seen for the past half century. Let’s see if we can manage some thoughtful acceleration as well.
Obviously, I thoroughly agree with you, James. Thanks for linking to the Proactionary Principle. Soon, on my own Substack, I'll be publishing a chapter, "The Perils of Precaution," which comes from an unfinished book on the Proactionary Principle.
Agree. In addition, I think it’s quite likely unaligned LLMs will be available on the dark web any time now. Cartels will be putting resources behind them. Our best bet is to continue the development of friendly AI.