"Whenever there has been progress, there have been influential thinkers who denied that it was genuine, that it was desirable, or even that the concept was meaningful. They should have known better." - David Deutsch
The Essay
🤖 No to the AI Pause
Item: More than 1,000 technology leaders and researchers, including Elon Musk, have urged artificial intelligence labs to pause development of the most advanced systems, warning in an open letter that A.I. tools present "profound risks to society and humanity." A.I. developers are "locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one - not even their creators - can understand, predict or reliably control," according to the letter, which was released Wednesday by the nonprofit group Future of Life Institute. Others who signed the letter include Steve Wozniak, a co-founder of Apple; Andrew Yang, an entrepreneur and candidate in the 2020 U.S. presidential election; and Rachel Bronson, the president of the Bulletin of the Atomic Scientists, which sets the Doomsday Clock. - The New York Times, 03/29/23.
I wonder how the past three years might've gone differently if in the late 2010s there had been a "pause" on research into a radical new vaccine technology called mRNA? Or how a 1930s pause on atomic weapons research might have meant the War in the Pacific continuing into 1946? (Indeed, just the opposite happened. In August 1939, Albert Einstein sent a letter - drafted with physicist Leó Szilárd - to President Franklin D. Roosevelt. It warned that Nazi Germany might be working on an atomic fission weapon and urged the US to immediately begin its own nuclear weapons research.)
It was just on Monday, you might recall, that I wrote about the potential of generative AI hugely increasing US productivity growth. No more Long Stagnation. The half-century downshift might soon be over. And the point here isn't about boosting economic statistics. It's about accelerating technological progress and economic growth to create a wealthier, healthier, and more resilient country of greater opportunity. Then fewer than 48 hours later come calls for a research pause.
And it was just last week, in response to a Vox piece that advocated "pumping the brakes" on AI progress, that I made my case for acceleration. I won't apologize for making geopolitics central to my pro-progress case. As former Google CEO Eric Schmidt recently wrote in Foreign Affairs:
Even more powerful than today's artificial intelligence is a more comprehensive technology - for now, given current computing power, still hypothetical - called "artificial general intelligence," or AGI. … The advent of AGI remains years, perhaps even decades, away, but whichever country develops the technology first will have a massive advantage, since it could then use AGI to develop ever more advanced versions of AGI, gaining an edge in all other domains of science and technology in the process. A breakthrough in this field could usher in an era of predominance not unlike the short period of nuclear superiority the United States enjoyed in the late 1940s.
But let's take a step back from big economic and military issues. I'm not an AI technologist. And I take the Pause case at face value, not as some attempt by tech laggards to slow down tech leaders. (Sam Altman, CEO of OpenAI, did not sign the letter.) Even on its own terms, however, I have problems with the Pause - whether or not such a delay is workable across companies and countries, including China. I fear that embedded within the Pause is the better-safe-than-sorry Precautionary Principle that will one day push for a permanent pause with humanity well short of artificial general intelligence. That, whether for concerns economic or existential, would deprive humanity of a potentially powerful tool for human flourishing. I think the Pausers miss that. But Captain James T. Kirk understood a thing or two about humanity and risk:
They used to say if man could fly, he'd have wings. But he did fly. He discovered he had to. Do you wish that the first Apollo mission hadn't reached the moon, or that we hadn't gone on to Mars and then to the nearest star? … Doctor McCoy is right in pointing out the enormous danger potential in any contact with life and intelligence as fantastically advanced as this. But I must point out that the possibilities, the potential for knowledge and advancement is equally great. Risk. Risk is our business. That's what the starship is all about. That's why we're aboard her.
5QQ
💡 5 Quick Questions for … Daniel Castro on regulating AI
Back in February, the Information Technology and Innovation Foundation's Daniel Castro published a report titled "Ten Principles for Regulation That Does Not Harm AI Innovation." The title itself highlights an important - yet often neglected - public policy concern: how regulators should balance legitimate concerns surrounding artificial intelligence with the danger of stifling innovation. It's critical we don't throw out the productivity-enhancing baby with the AI-worrying bathwater.
1/ Whatβs your quick reaction to the call for an "AI pause" put forward by Elon Musk and others?
Fears about out-of-control AI are not new, but the letter from Elon Musk and his acolytes shows that fears about AI are out of control. The letter goes so far as to compare large language models (LLMs), like GPT-4, to human cloning and eugenics. Their argument is that because past LLMs have had emergent features that their creators did not predict, such as being able to perform arithmetic or answer questions, future LLMs might have dangerous emergent features that their creators will not be able to control. Again, this fear is not new. Almost a decade ago, Elon Musk said that AI research risked "summoning the demon" and labeled AI an existential threat to humanity. He was wrong then, and he is wrong now. AI is neither alive nor magic. And there is a big difference between steady advancements in computing technology and computers becoming self-aware.
Perhaps the bigger issue is that stopping AI development would be a mistake. First, under no scenario is China going to stop its development of the technology, so other countries unilaterally halting their development of AI only puts them at a disadvantage. Second, there is no reason that efforts to address risks from AI cannot proceed in parallel with the development of the technology itself. Indeed, the two should go hand-in-hand. Third, the biggest impact of slowing the development of AI would be delaying the deployment of the technology in areas such as health care and education. Policymakers should ignore these calls and focus on accelerating the development and deployment of AI.
2/ Are policymakers' fears about AI legitimate? Or are they based on misconceptions?
People tend to fear what they do not understand, and it is not a stretch to say that most policymakers do not comprehend even the basics about AI. Many critics portray AI as portending technological doom. If the robots aren't killing us, they are destroying our jobs, invading our privacy, and controlling our lives. These fears are not new. Technology panics have accompanied many major scientific advancements, from gas lighting to railroads to electricity. But it is important for policymakers to go beyond the negative headlines and understand where the real risks - and real opportunities - will be in the years ahead.
For example, one common fear is that AI will be biased. The concern itself is legitimate, since systems trained on data reflecting real-world biases will likely capture those biases. But this risk is well known, so companies building products and services can take steps to address it. And most businesses have a strong incentive to do so, because biased results are inaccurate results. For example, if a bank is making biased lending decisions, it is either overestimating risks (and leaving money on the table) or underestimating risks (and costing itself money). Moreover, critiques of bias in AI systems ignore that the decisions these systems make, even when imperfect, may still be better than human ones. And eradicating bias in machines is much easier than eradicating it in people.
3/ How can developers fix biases within AI?
The first step to fixing bias is identifying it. Disparate impact assessments can help developers understand where there are problems. And working closely with subject-matter experts can help developers understand the context in which their AI systems are used. But to prevent and fix bias, developers need better data. Better data means data that is more accurate, more representative, and more timely. Better data leads to better AI. Unfortunately, too much of the conversation about data is about privacy, with proposals to minimize data collection and limit how data can be used. But the bigger risk for some people is not that too much data will be collected about them, but rather that too little data will be collected about them. Without data, some people and communities are invisible in datasets and may miss out on some of the benefits of the emerging AI economy. This is a problem called the data divide - and just like the digital divide, it will require a multi-pronged, long-term effort to address. Creating better training datasets for various AI applications will be one of the biggest challenges in the years ahead. Just as software can improve over time, with bug reports and code updates, datasets will need similar maintenance. But this will require investing in data as essential infrastructure for the digital economy.
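To make the idea of a disparate impact assessment a bit more concrete, here is a minimal Python sketch of one common version of it, the "80 percent rule" comparison of selection rates across groups. The decision data, group labels, and threshold below are illustrative assumptions for this example only, not something drawn from Castro's report:

    # Minimal illustrative disparate impact check (the "80% rule").
    # All records and group labels below are hypothetical placeholders.
    from collections import defaultdict

    # Each record: (group label, model decision), where 1 = approved, 0 = denied.
    decisions = [
        ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
        ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
    ]

    # Count approvals and totals per group.
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, decision in decisions:
        totals[group] += 1
        approvals[group] += decision

    # Selection rate = share of each group the model approves.
    rates = {g: approvals[g] / totals[g] for g in totals}
    best = max(rates.values())

    # Flag any group whose selection rate falls below 80% of the highest rate.
    for group, rate in rates.items():
        ratio = rate / best
        flag = "POTENTIAL DISPARATE IMPACT" if ratio < 0.8 else "ok"
        print(f"{group}: selection rate {rate:.2f}, ratio vs. highest {ratio:.2f} -> {flag}")

Run as-is, this flags the second group, whose approval rate is well under 80 percent of the first group's. A real assessment would of course use the developer's actual decision data and legally relevant group definitions; the point is only that the first step, identifying where outcomes diverge, can be a simple, auditable calculation.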
4/ Could you explain the concept of "pro-human bias" in AI law? Do you see this diminishing as AI utilization increases?
Pro-human biases in law put the use of AI to complete a task at a disadvantage compared to using a human. For example, a law might require that only humans can practice law or write prescriptions, which eliminates the potential for using AI to provide similar services, even if AI systems can perform the same tasks as effectively or better. There are going to be winners and losers in the AI economy, and those who stand to lose economically will likely use laws to protect their interests, such as proposals to tax robots. We've seen this movie before. The Internet created similar problems, and many affected parties scrambled to protect themselves, such as by enacting laws to prohibit buying cars and wine online in order to protect auto dealers and wine wholesalers, respectively.
5/ What industries do you think would benefit the most from more efficient AI regulations?
Some sectors of the economy, such as health care, education, and transportation, stand to benefit the most from AI, yet they may be the slowest to change. Part of the challenge is the pacing problem: technology moves faster than regulation, so highly regulated sectors face the biggest obstacles. Another problem is that fear of technology can make policymakers hesitant to adopt it. For example, even if autonomous and semi-autonomous vehicles reduce overall accidents, deaths, and injuries, policymakers still fear headlines about accidents involving these vehicles. The same is true of using AI in health care or education, where mistakes in using the technology are inevitable. Until policymakers fear headlines about the opportunity cost of delayed deployment of AI, hesitancy will have the upper hand.
Micro Reads
▶ In Sudden Alarm, Tech Doyens Call for a Pause on ChatGPT - Will Knight and Paresh Dave, Wired
▶ Elon Musk, Other AI Experts Call for Pause in Technology's Development - Deepa Seetharaman, WSJ
▶ "Godfather of artificial intelligence" weighs in on the past and potential of AI - CBS News
▶ Bacterial "Nanosyringe" Could Deliver Gene Therapy to Human Cells - Ingrid Wickelgren, Scientific American
▶ How long can humans live? We may not have hit the limit yet - Clare Wilson, NewScientist
▶ Inside the cozy but creepy world of VR sleep rooms - Tanya Basu, MIT Tech Review
▶ The Metaverse Is Quickly Turning Into the Meh-taverse - Meghan Bobrowsky, WSJ
The 6-month ban could easily become a one-year ban, and then be extended further. Like the student loan repayment pause, or the "30 days to flatten the curve."
Scott Alexander is giving a really unfair take on Tyler Cowen's anti-pause piece:
https://astralcodexten.substack.com/p/mr-tries-the-safe-uncertainty-fallacy
Alexander is misrepresenting the argument as "The future of AI is very uncertain, therefore it's safe." He is calling this "The Safe Uncertainty Fallacy". It's very much a straw man argument.