⤴ Up Wing ideas to boost America's productive capacity
A Quick Q&A with … AI researcher Steve Byrnes on different AGI scenarios
Up Wing isn't just a pro-progress attitude and outlook — or producing more techno-optimistic films and TV shows — it's also a pro-progress policy agenda, one in the service of increasing human flourishing through faster technological progress and economic growth. Reminder: My theory of Up Wing progress views the American economy as a techno-organic supercomputer constructed from interconnected human networks that rearranges matter into increasingly complex, valuable configurations.
In short, Up Wing economics is pro-complexity, pro-network, and pro-connection, aiming to ensure each node of these networks — cities, companies, universities, entrepreneurs — operates at maximum creativity and productivity. A successful Up Wing economy is one where an educated, healthy populace can generate and exchange knowledge through various networks to produce what physicist César Hidalgo calls “crystals of imagination”: the sophisticated products that embody economic value and drive progress.
Dig deeper into Up Wing economics — as I do in my 2023 book, The Conservative Futurist: How To Create the Sci-Fi World We Were Promised — and you’ll see the importance of something called “total factor productivity growth,” which measures the growth in output not explained by increases in labor and capital: in effect, an economy’s efficiency and technological progress. In the book, I rename TFP “technologically futuristic productivity growth.”
And with good reason. As I write:
TFP pushes forward the frontier of what an economy can be capable of tomorrow. From 1948 through 1973, TFP accounted for two-thirds of overall productivity growth. TFP is a key piece of the arrow of prosperity: tech progress and innovation (factories shifting to electric motors from steam, jet engines, atomic reactors, the shipping container, the microchip) drive TFP growth → TFP growth drives labor productivity growth → productivity growth drives economic growth → and economic growth drives higher incomes for everyone.
Then came what I call the Great Downshift, whose fiftieth anniversary no one should celebrate, which statistically began in the middle of 1973. Economic growth downshifted, labor productivity downshifted, and, most importantly, TFP growth downshifted. Over the next quarter century, TFP grew at just one-fourth of the rate that it did during the previous quarter century, 0.5 percent versus 2.1 percent. TFP growth then surged to 1.8 percent during the 1995–2004 tech boom before sinking back to its sluggish post-1973 pace.
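To get a feel for what that slowdown compounds to, here is a minimal back-of-the-envelope sketch, assuming (for illustration only) constant annual growth at the two rates quoted above:

```python
# Hedged illustration: the compound effect of the Great Downshift,
# using the TFP growth rates cited above (2.1%/yr before 1973,
# 0.5%/yr after) held constant over a quarter century.
def compound(rate: float, years: int) -> float:
    """Cumulative growth factor from a constant annual growth rate."""
    return (1 + rate) ** years

pre_downshift = compound(0.021, 25)   # ~1.68: TFP level up roughly 68%
post_downshift = compound(0.005, 25)  # ~1.13: TFP level up roughly 13%

print(f"Pre-1973 pace:  {pre_downshift:.2f}x over 25 years")
print(f"Post-1973 pace: {post_downshift:.2f}x over 25 years")
```

In other words, at the earlier pace the economy's TFP level would have risen by roughly two-thirds over 25 years; at the post-1973 pace, by barely an eighth.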
Figuring out ways to boost productivity growth, especially TFP, is the subject of an article in the new issue of International Economy magazine, “America’s Productivity Disappointment.” A number of experts are asked for their pro-growth ideas, with a bunch of those ideas getting mentioned over and over. Among the repeat winners:
Immigration reform and expanding immigration, especially for high-skilled workers. "Allowing in more immigrants, including stapling a green card to all STEM degrees, would make a big contribution." - Jason Furman
Investing in education, skills training, and human capital development. "So policies that promote the development of talent—for example, by ensuring broad access to advanced STEM education for those that cannot afford it—should form the basis for any long-run productivity-enhancing effort." - Stephen G. Cecchetti and Kermit L. Schoenholtz
Increasing government funding for research and development, especially basic research. "Ultimately the laws of physics may impose diminishing returns on the quest for revolutionary new technologies. But the quest must continue, because in the long run a rising standard of living depends almost entirely on technology-driven productivity growth." - Michael Lind
Providing the right incentives and environment for business investment, such as favorable tax policies, regulatory reform, and macroeconomic stability. "Taxes that increase the after-tax costs of capital and deter business investment—either in the form of new taxes or expiration of existing cuts—should be avoided." - Mickey D. Levy
Fostering competition, dynamism and openness in the economy, including reducing trade barriers. "The force of competition remains a fundamental driver of efficiency. Instead, the next president appears destined to rely on mish-mash industrial policies and silent prayers for an artificial intelligence miracle." - Gary Clyde Hufbauer
The good news here: All those ideas (and many, many more) can be found in my book! I must be on the right track. Faster, please!
🤖A Quick Q&A with … AI researcher Steve Byrnes on different AGI scenarios
Generative artificial intelligence made its public debut in 2022 with ChatGPT, the first widely available generative-AI chatbot. Many techno-optimists now eagerly anticipate (and pessimists dread) the coming of Artificial General Intelligence: human-level AI capable of reasoning, creative problem-solving, and beyond. AGI may be the next logical step up from the current technology, but there is no consensus on what it will look like in practice.
Steve Byrnes recently published an essay, “Four visions of Transformative AI success,” that explores several possible outcomes of a future in which AGI is achieved, each with its own set of assumptions and associated risks. According to Byrnes, the technology may take the form of “Helper AIs” (AIs that follow human commands), “Autonomous AIs” (AIs that operate independently), “Supercharged biological human brains” (human intelligence enhanced by or merged with AI), or it might be banned altogether. Each scenario results in an entirely different set of possibilities for humanity. To elaborate on his outlook for the future, I asked Byrnes a few quick questions.
Byrnes is an AGI safety researcher. He currently conducts his research at the Astera Institute, where he applies insights from neuroscience to the problem of AGI safety.
1/ What are your best and worst realistic cases for how AI, and its impact on our society, progresses in the coming years?
As context, when people talk about “big impacts of AI,” what does “big” mean? There’s “big” as in “the internet is having a big impact on our planet,” and then there’s “big” as in “the human species is having a big impact on our planet.” “Transformative AI” (TAI) (a.k.a. Artificial General Intelligence, AGI) is the latter—a much, much bigger deal than the industrial revolution.
Maybe imagine the arrival on our planet of a fast-breeding species of extraordinarily competent and ambitious intelligent aliens. That sounds like sci-fi! But heavier-than-air flight and nuclear weapons were likewise sci-fi tropes before they got invented. TAI hasn’t been invented yet, but it is almost certainly technologically possible. After all, think of everything that human brains can do — inventing brilliant new scientific theories, scouting out business opportunities and founding companies, winning allies through charisma, and so on. Humans don’t do those things via some magic forever beyond the reach of science. They do them via algorithms in their brains, interacting with the world. And chips can run those same algorithms too. … Well, not yet, because nobody knows the right algorithms. But people are trying to figure them out.
So a very realistic case for the coming years — maybe even the coming decades — is that TAI doesn’t happen (yet). AI will still have a “big impact,” for both better and worse, but within the range of normal innovations like the internet. Nobody knows whether TAI will happen in the next decade, but if it does, hoo boy. Again, imagine the arrival on our planet of a fast-breeding species of extraordinarily competent and ambitious aliens. The worst case is that they have callous disregard for human welfare, and they accumulate hard power, leading to human extinction or permanent disempowerment. The best case is unprecedented growth and prosperity, wildly beyond anything ever seen.
2/ Why do you believe that TAI cannot be permanently delayed?
I don’t expect TAI to require rare materials, nor impossible amounts of chips or power. I think it just requires AI knowledge — the invention of certain kinds of algorithms. This kind of knowledge seems hard to govern. For example, in the past, the USA has tried to regulate access to cryptographic algorithms, or access to pirated music, and those efforts mostly failed.
We’re used to thinking of scientific knowledge and know-how as moving inexorably forward. If that continues indefinitely, it would follow that sooner or later people will figure out TAI algorithms and share them widely. I don’t know how soon, but “never” is a long time. Granted, it’s not inevitable that scientific knowledge will move inexorably forward. A giant meteor could hit Earth tomorrow. Europe had the Dark Ages. Of course, these are not things that I expect to happen, and certainly not things that anyone would hope for!
3/ Is it even possible to prevent “Helper AIs” from evolving into “Autonomous AIs,” if regulators decided to do so?
An important background question is governability. Some activities, like making cryptographic algorithms, or inventing board games, are basically ungovernable. Large numbers of people can mess around with these things in their basements. Governments basically can’t stop that, even if they wanted to. Other things, like building nuclear weapons, are much more governable. It involves facilities that show up on satellites, and requires rare materials and parts that can only be sourced from a few places in the world.
When people think about the governability of technologies, they tend to stamp it as either always and everywhere Good or Bad. (“Good” if you’re leftist, “Bad” if you’re libertarian.) I disagree—I think it depends on the technology. I’m very happy to live in a world where cryptography is ungovernable. If politicians had the power to micromanage cryptography, I would expect them to make terrible decisions, so it’s nice that they can’t.
But on the other hand, I’d be very unhappy to live in a world where building nuclear weapons was ungovernable. Thank God that hobbyists can’t make nuclear bombs in their basements. I think it’s insufficiently appreciated that “it didn’t have to be that way.” Imagine a world where enriching uranium was as easy as distilling alcohol, and where uranium was as widespread in the ground as quartz. It’s too terrifying to imagine. Lucky that things didn’t turn out that way!
Above, I talked about future Transformative AI as like the arrival on our planet of a fast-breeding species of extraordinarily competent and ambitious aliens. It seems quite possible for researchers to figure out how to make such AI, while remaining stumped about how to make these AIs care about humans. If so, ungovernability seems bad. But ungovernability might happen anyway. Indeed, my guess is that algorithms will improve to a point where future powerful AI won’t require giant data centers, but rather small numbers of consumer chips.
I think that keeping AIs under close human supervision would be nice, but competition (market forces, international competition, etc.) will be pushing strongly in the opposite direction. So we’ll only keep AIs under supervision if the technology is governable. It might or might not be, but I think we should be pushing in that direction.
4/ You advocate for more technical work towards contingency-planning for Vision 2 (Autonomous AIs). Can you outline what specific technical work you believe is most urgent and why?
If we want autonomous AIs that care about human welfare — and/or about friendship, beauty, or whatever we’re hoping for in the distant future — then there are both technical challenges, and societal challenges.
The technical part is: Somebody needs to write an instruction manual for making AIs with those motivations (so to speak). I think that’s an unsolved problem in AI, and is what I mostly work on myself.
The societal part is: Whoever is programming AIs needs to actually follow those instructions — or, we need our society to be resilient to the possibility that some people won’t. This may require outreach, education, and governance interventions, and I think it’s a much, much harder problem than it sounds.
5/ What do you think this debate looks like a decade from now?
A strong possibility is: AI technology will be more capable in 10 years than it is today, but not radically so (yet!). It would still be just a normal technology, with normal technology costs and benefits—jobs, productivity, etc. The idea of AIs autonomously founding and leading new companies, or autonomously inventing and developing brilliant new science and technology, will still sound as implausible as it does today. … And if so, I’m concerned that very concerned people like me will be accused of “crying wolf.” So to be clear: I don’t know when TAI/AGI will arrive. And no one else does, either! That doesn’t constitute a reason to dismiss it, but rather a reason to urgently prepare — just as people prepare for earthquakes and wars whose start date can’t be pinpointed. We have our work cut out for us, and we need all the preparation time we can get.
Micro Reads
▶ Business/ Economics
The UK is at risk of losing Europe’s tech crown - FT Opinion
A Requiem for Hyperglobalization - Foreign Affairs
▶ Policy/Politics
Impact of Artificial Intelligence on Elections - R Street
We were supposed to be flooded with AI disinformation... now 800 million have voted, where is it? - Warp News
▶ Biotech/Health
‘Extremely impressive’: melanoma jab trial results excite doctors - The Guardian
Population Decline Isn’t the Problem. Hungry Kids Are. - Bberg Opinion
No evidence sperm counts are dropping, researchers find - U of Manchester
▶ Clean Energy/Climate
Nuclear Power Is Hard. A Climate-Minded Billionaire Wants to Make It Easier. - NYT
Can Small Nukes Power a Greener UK Future? - Bberg Opinion
▶ Space/Transportation
If Not in New York, Then Where? - NYT Opinion
▶ Up Wing/Down Wing
Our goal should be a planet with fewer humans - Wapo Opinion
▶ Substacks/Newsletters
How to Build an AI Data Center - Construction Physics
Apple Intelligence and the Shape of Things to Come - Hyperdimensional
Storm Clouds on the U.S. Industrial Policy Horizon? - The Dispatch
It’s the Nuclear Regulation, Stupid - Breakthrough Journal
Tesla Optimus Bots Have Started Working Autonomously in the Factory - next BIG future