🤖 Does ChatGPT mean the Technological Singularity is near? How would we know?
Also: Faster, Please! Week in Review #36
A modest prediction: We’re entering a period when speculation about an approaching Technological Singularity will exceed its turn-of-the-century, Internet Boom peak. And for that, you can thank (or blame) new generative AI/machine learning tools that can write and draw — and will only improve with future iterations. Yes, for the moment, critics can point to ChatGPT essays that get key facts wrong or engage in circular reasoning. And skeptics can highlight DALL-E images that sometimes ignore, for instance, exactly how many fingers and teeth humans typically have. But you really have to be pretty cynical to dismiss what at times appears spookily like genuine intelligence and true creative spark.
It sure seems that way to the Metaculus prediction community, something I occasionally mention in this newsletter. (The site describes itself this way: “Metaculus is an online forecasting platform and aggregation engine that brings together a global reasoning community and keeps score for thousands of forecasters, delivering machine learning-optimized aggregate forecasts on topics of global importance.”) Its “wisdom of the crowd” approach aggregates member forecasts into a simple median community prediction. Now notice the fascinating evolution of the answers to the agree/disagree, binary forecast: “Human/Machine Intelligence Parity by 2040?”
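That median-of-the-crowd aggregation is simple enough to sketch in a few lines. The member forecasts below are made-up numbers for illustration; only the median-taking step reflects the approach Metaculus describes.

```python
from statistics import median

# Hypothetical member probabilities for a binary question (illustrative only)
member_forecasts = [0.35, 0.55, 0.60, 0.72, 0.80, 0.90]

# The community prediction is simply the median of all member forecasts
community_prediction = median(member_forecasts)
print(f"Community prediction: {community_prediction:.0%}")
```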
Agreement with the forecast has surged from 32 percent a year ago to 59 percent in June, followed by another step-up in December that now puts “yes” at 80 percent. And how would Metaculus determine whether the gulf between human and machine intelligence had completely closed? Here are the testing criteria. Think of it as a kind of super-hard Turing test:
Assume that prior to 2040, a generalized intelligence test will be administered as follows. A team of three expert interviewers will interact with a candidate machine system (MS) and three humans (3H). The humans will be graduate students in each of physics, mathematics and computer science from one of the top 25 research universities (per some recognized list), chosen independently of the interviewers. The interviewers will electronically communicate (via text, image, spoken word, or other means) an identical series of exam questions of their choosing over a period of two hours to the MS and 3H, designed to advantage the 3H. Both MS and 3H have full access to the internet, but no party is allowed to consult additional humans, and we assume the MS is not an internet-accessible resource. The exam will be scored blindly by a disinterested third party. Question resolves positively if the machine system outscores at least two of the three humans on such a test prior to 2040.
Now keep in mind: We’re not talking about sentient AI or superintelligent AI. Just parity with the smartest humans. This is the scenario, I think, that would be equivalent to the one outlined by George Mason University economist Robin Hanson in his book The Age of Em: Work, Love and Life when Robots Rule the Earth, where the software that’s already in the human brain is ported to a computer and can then be endlessly copied. As Hanson told me in a 2016 podcast chat, “Those minds in the computers are no smarter than humans are, but they run faster. And the economy can grow much faster.”
So probably not what many would define as a Technological Singularity, properly understood. Certainly not Metaculus, which in its forecasts defines Singularity as “superhuman performance across virtually all questions of interest [and] virtually all human activities of interest.” The leap from human to superhuman is assumed to be so dramatic as to be utterly obvious and uncontroversial when it happens. We’ll know it when we see it.
But when will we see it? Futurists have long speculated that upon the arrival of AGI, the first artificial superintelligence will follow fast on its virtual heels. The median Metaculus forecast: a mere 9 months. Slowly, then all at once. So, yeah, 2040 might be a wild year.
Of course, these are just forecasts. One would expect to see all manner of signs suggesting rapid AI progress before a Singularity. Things like these, perhaps:
The Turing test used to be the gold standard for proving machine intelligence. This generation of bots is racing past it. - The New York Times
ChatGPT bot passes law school exam - CBS News
Researchers just tested ChatGPT on the same test questions as aspiring doctors – and found the AI was 'comfortably within the passing range' - Business Insider
ChatGPT passed a Wharton MBA exam - Fortune
What else? Although the notion of Singularity might seem silly to some, it’s been seriously addressed by economists, perhaps most notably by William Nordhaus in “Are We Approaching an Economic Singularity? Information Technology and the Future of Economic Growth.” (This is such a deep and interesting paper that I’m only scratching the surface here.) To help make a determination, the Nobel laureate economist identifies six supply-side tests, noting that the key accelerationist mechanism from the supply side operates through “accelerating capital deepening,” or more capital per worker. Here are the capital-centric tests:
Elasticity of substitution between capital and labor greater than one
Rising productivity growth
Rising share of capital
Accelerating growth in capital-output ratio
Rising share of information capital
Rising productivity growth hidden because of mismeasurement
And here is what Nordhaus found, at least in the years leading up to the COVID-19 pandemic:
Four of the six tests are negative or ambiguous for Singularity, while two are weakly positive [nos. 3 and 5]. We can also calculate for the two positive tests how far we are from the point of Singularity. I define Singularity as a time when the economic growth rate crosses 20 percent per year. Using simple extrapolation for the two positive tests, the time at which the economy might plausibly cross the Singularity is beyond 2100.
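Nordhaus’s extrapolation exercise can be sketched numerically. The starting growth rate and the pace of acceleration below are illustrative assumptions, not figures from the paper; the point is just to show how modest acceleration keeps a 20-percent-growth Singularity far past 2100.

```python
# Extrapolate an assumed growth trend until it crosses Nordhaus's
# 20%-per-year Singularity threshold. All inputs are illustrative.
start_year = 2020
growth_rate = 0.02      # assumed current annual growth (~2%)
acceleration = 0.0002   # assumed rise of 0.02 percentage points per year
threshold = 0.20        # Nordhaus's Singularity definition

year = start_year
while growth_rate < threshold:
    growth_rate += acceleration
    year += 1

print(year)  # well beyond 2100 under these assumptions
```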
As it happens, I asked this question of Hanson in my chat with him last August:
If we were approaching a kind of acceleration, a leap forward, what would be the signs? Would it just be kind of what we saw in the ‘90s?
So the scenario is, within a 15-year period, maybe a five-year period, we go from a current 4 percent growth rate, doubling every 15 years, to maybe doubling every month. A crazy-high doubling rate. And that would have to be on the basis of some new technology, and therefore, investment. So you'd have to see a new promising technology that a lot of people think could potentially be big. And then a lot of investment going into that, a lot of investors saying, “Yeah, there's a pretty big chance this will be it.” And not just financial investors. You would expect to see people — like college students deciding to major in that, people moving to wherever it is. That would be the big sign: investment moving toward anything. And the key thing is, you would see actual big, fast productivity increases. There'd be some companies in cities who were just booming. You were talking about stagnation recently: The ‘60s were faster than now, but that's within a factor of two. Well, we're talking about a factor of 60 to 200.
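Hanson’s numbers check out arithmetically. Compounding at 4 percent a year doubles the economy in about 17.7 years (close to his round 15), and going from a 15-year doubling time to a one-month doubling time is a factor of 180 — squarely inside the 60-to-200 range he cites:

```python
import math

# Doubling time in years at a constant annual growth rate
def doubling_time(rate):
    return math.log(2) / math.log(1 + rate)

dt_today = doubling_time(0.04)  # ~17.7 years at 4% annual growth

# Speed-up from Hanson's round 15-year doubling to doubling every month
speedup = 15 / (1 / 12)  # a factor of 180

print(round(dt_today, 1), speedup)
```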
Reminder: None of my Up Wing techno-optimism depends on human-level AI, much less superhuman AI. While faster improvement in the underlying technology is important and necessary, so is our ability to productively use it throughout the economy. And not just AI, either. That is why this newsletter is as much about policy and culture as it is about tech. Oh, by the way, here is ChatGPT’s take on the Singularity question:
🚀 Faster, Please! Week in Review #36
My free and paid Faster, Please! subscribers: Welcome to Week in Review+. No paywall! Thank you all for your support! For my free subscribers, please become a paying subscriber today. (Expense a corporate subscription perhaps?)
Melior Mundus
In This Issue
Essay Highlights:
— AI can generate essays, pictures, and, it turns out, huge healthcare savings
— Did Washington fix US infrastructure? Are we good?
— Airships, hyperloops, and intercity rocket travel
Essay Highlights
🤖 AI can generate essays, pictures, and, it turns out, huge healthcare savings
I just couldn’t resist a fresh working paper about AI and healthcare costs: “The Potential Impact of Artificial Intelligence on Healthcare Spending” by Nikhil Sahni, George Stein, Rodney Zemmel, and David M. Cutler. The authors find that “AI adoption within the next five years using today’s technologies could result in savings of 5 to 10 percent of healthcare spending, or $200 billion to $360 billion annually in 2019 dollars, without sacrificing quality and access.” Keep in mind that these considerable savings are the result of AI technology we currently possess and could deploy, not HealthGPT or some such. Healthcare and education are two big chunks of the economy that have been resistant to productivity increases. AI may start to unlock some huge efficiency gains in at least the former.
🌉 Did Washington fix US infrastructure? Are we good?
Does the $550 billion over five years in new spending contained in 2021’s $1.2 trillion Infrastructure Investment and Jobs Act provide enough funding to fully address what most experts and policymakers think is a huge and long-running investment shortfall? As it so happens, one of my all-time favorite economic papers was updated in 2021 and now looks at post-IIJA infrastructure spending. In “U.S. Infrastructure: 1929-2019,” Yale University economist Ray C. Fair documents a decline in the value of most categories of US infrastructure to near an all-time low, with the descent beginning around 1970. Fair finds that the mean ratio of annual nondefense infrastructure spending to GDP over the 1950-2019 period was 0.72 versus 0.61 in 2019. The IIJA’s $550 billion, Fair calculates, “is about 25 percent of the shortfall to get back to the mean and about 10 percent to get back to the 1970 value.”
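Working backward from Fair’s percentages gives a sense of the implied dollar gap. This is just arithmetic on the figures quoted above, not numbers taken directly from the paper:

```python
iija_new_spending = 550  # billions of dollars, new IIJA infrastructure money

# $550B is ~25% of the shortfall back to the 1950-2019 mean ratio,
# and ~10% of the shortfall back to the 1970 value:
shortfall_to_mean = iija_new_spending / 0.25  # ~$2.2 trillion
shortfall_to_1970 = iija_new_spending / 0.10  # ~$5.5 trillion

print(shortfall_to_mean, shortfall_to_1970)
```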
🚅 Airships, hyperloops, and intercity rocket travel
What should the future of American infrastructure — especially in transportation — look like? In this essay, I look at six possibilities: air taxis, high-speed rail, hyperloops, airships, self-driving cars, and intercity rocket travel. All of those modes of transit require lots of building and lots of new infrastructure. An extensive air taxi network needs vertiports, charging, and connectivity to existing transportation hubs. A world of autonomous driving not only needs well-maintained roads, but ones with features that give AVs help. And whether the train is HSR or hyperloop, new lines will need infrastructure and land, with hyperloops needing lots of long straightaways. Bottom line: All the above, with the exception of the hyperloop concept, either are happening or can be realistically imagined happening with a bit more investment and regulatory reform — and time.
Best of the Pod
Kevin Cannon is a professor of space resources and geology and geological engineering at Colorado School of Mines in Golden, Colorado. He's also author of the Planetary Intelligence newsletter on Substack.
I love how you put it in one of your tweets. You wrote, “Space resources are optional to gain a foothold in space, but necessary to gain a stronghold.”
If you look back at what we've done so far in human space exploration, we've landed 12 people on the Moon, they walked around for a few days, and then they came back. Since then, we've sent people up to low-Earth orbit to the International Space Station or the Chinese equivalent. They stay up there for a few months, and they come back. In those cases, it makes sense to bring everything that you need with you: all the food, all the water, all the oxygen. If we have greater ambitions than that, though — if we want to not just walk around on the Moon, but have a permanent installation, we want to start growing a city on Mars that becomes self-sufficient, we want to have these O'Neill cylinders — you simply just can't launch that material with you. And that's because we live in this deep gravity well. We can just barely get these small payloads off the surface with chemical rockets. It just economically, physically does not make sense to try to bring everything with you if you have these larger ambitions. The only way to enable that kind of future is to make use of the material that you find when you get to your destination.
I’m intrigued by the promise of AI to bring productivity to these sectors that have resisted it, i.e., health care and education. Why have they resisted productivity increases? Is it something fundamental or more regulatory? If it is regulatory, then there’s a need to make sure the vested interests can’t shut down progress.