🎇 Sam Altman's 'gentle singularity' message to an anxious public
Keep calm and carry on, he says. A wonderful future awaits, he says. But is his cheery timeline too cheery and too short?
My fellow pro-growth/progress/abundance Up Wingers,
First things first: OpenAI boss Sam Altman is hardly a disinterested observer of the public debate about the present and future of artificial intelligence. The buzzy new essay on his personal blog, “The Gentle Singularity,” should be read as a serious piece meant to inform and also persuade.
I mean, the persuasion part begins with a title that suggests the impact of superintelligence (“OpenAI is a lot of things now, but before anything else, we are a superintelligence research company”) will be big but manageable and even mundane.
Let’s take a look at the superintelligence vision and timeline presented by Altman, a forecast built around the idea of combinatorial power generated by multiple feedback loops operating simultaneously. (AI improves research, creating better AI and robots, which build infrastructure for more powerful AI, rinse and repeat). Altman:
The present foundation (2025)
The "event horizon" has already been crossed. … “The takeoff has started.” Carbon-based lifeforms are “close to building digital superintelligence.” AI agents capable of genuine cognitive work have arrived, fundamentally altering software development. More than mere automation, it's the deployment of systems that can work through complex problems. The infrastructure flywheel has already begun spinning. Anticipated economic value from AI is driving massive data-center buildout, which will enable more powerful AI, which will generate more economic value.
Next steps and breakthroughs (2026–2027)
Altman expects AI systems capable of generating "novel insights," which means not just processing existing knowledge but creating fresh understanding. This represents a qualitative leap from pattern recognition to genuine discovery. The following year “may” bring embodied AI: robots performing real-world tasks, marking the transition from digital to physical capability — a capability typically part of strong “artificial general intelligence” definitions.
The cascade effect (late 2020s)
Now we’re cooking with techno-grease. Once robots can participate in their own supply chains — mining minerals, operating factories, building more robots — the traditional constraints on scaling dissolve. ("If we have to make the first million humanoid robots the old-fashioned way, but then they can operate the entire supply chain … then the rate of progress will obviously be quite different.") Similarly, as AI accelerates AI research itself, what Altman delightfully calls "larval recursive self-improvement" matures. He notes that scientists already report productivity gains of 200–300 percent. (“Advanced AI is interesting for many reasons, but perhaps nothing is quite as significant as the fact that we can use it to do faster AI research.”)
The transformation decade (2030s)
By the early 2030s, intelligence and energy — "ideas, and the ability to make ideas happen" — become wildly abundant. The cost of intelligence converges toward the cost of electricity. (“The rate of new wonders being achieved will be immense. It’s hard to even imagine today what we will have discovered by 2035.”) Altman envisions achievements compressed into impossibly short timeframes: high-energy physics solved one year, space colonization beginning the next; materials science breakthroughs followed immediately by brain-computer interfaces. The Great Acceleration is in full gear.
Altman’s message inside the message
All in all, what Altman presents is pretty upbeat — hardcore Up Wing, really — and it’s a great way to get a sense of what the CEO of a leading AI company is thinking — or at least says he’s thinking.
But what was he trying to accomplish, exactly, with his nearly 2,000-word post? To me, Altman's essay reads as a strategically timed counternarrative to mounting AI anxiety, particularly a) the recent job-apocalypse warnings from his competitor Dario Amodei, Anthropic CEO, and b) public skepticism revealed in recent polling.
For example: Where Amodei predicts 20 percent unemployment, Altman reminds us that “if history is any guide, we will figure out new things to do and new things to want, and assimilate new tools quickly (job change after the industrial revolution is a good recent example).”
While Amodei describes a “white-collar bloodbath,” as headline writers put it, Altman concedes “there will be very hard parts like whole classes of jobs going away, but on the other hand the world will be getting so much richer so quickly that we’ll be able to seriously entertain new policy ideas we never could before.”
So new jobs + a universal-income scenario, I guess. Let me add that I am baseline sympathetic to the “gentle” thesis given the history of important technologies taking a while to make a big productivity impact as they move from the lab to the market.
I would guess Altman is aware of new research from Pew that shows only 17 percent of Americans view AI positively (versus 56 percent of experts). Altman may have seen a need to reframe the conversation, ASAP. His essay acknowledges the magnitude of change while systematically attempting to defuse any mounting panic and dystopian scenarios.
Normalize the extraordinary.
By presenting superintelligence as inevitable but incremental — "wonders become routine"— he makes radical change feel digestible. And by painting OpenAI as building "a brain for the world,” Altman is creating a cultural permission structure for that change. Make the future feel inevitable rather than threatening, and resistance dissolves.
About that timeline …
As AI pundit Ethan Mollick opined on X, “One thing you can definitely say about him and Dario is that they are making very bold, very testable predictions. We will know whether they are right or wrong in a remarkably short time.”
Yet, for now, the only real-world indication of a “remarkably short” timeline is what CEOs are saying and, notably, the amount of investment flowing in the sector. You are not seeing a short timeline scenario playing out in broad economic data, business adoption rates, or the behavior of financial and prediction markets, as seen in the following two charts:
So, faster, please, I guess?
Micro Reads
▶ Economics
Outlook for China - Apollo Academy
Equity Analysts Are Over Liberation Day Tariffs - Bberg Opinion
▶ Business
Can Apple Salvage the AI iPhone in China? - Bberg Opinion
Alphabet’s AI Critics Are Asking the Wrong Questions - Bberg Opinion
Can robotaxis put Tesla on the right road? - The Economist
▶ Policy/Politics
It’s Never Too Early to Start a Trump Account - Bberg Opinion
How nuclear war could start - Wapo Opinion
▶ Biotech/Health
Why China Biotech Is Getting a DeepSeek Moment, Too - Bberg Opinion
The real fertility crisis - UNFPA
▶ Robotics/Drones/AVs
Why humanoid robots need their own safety rules - MIT Tech Review
▶ Space/Transportation
'I was a good, visible target': Jared Isaacman on why Trump pulled his NASA chief nomination - Space
▶ Up Wing/Down Wing
Nine reasons for cautious optimism about individual liberty - Wapo Opinion
▶ Substacks/Newsletters
A bunch of thoughts and evidence on immigration - Noahpinion
Meta is Building an AI Superintelligence Team - AI Supremacy
How California Regulated Itself Into an Energy Crisis - Breakthrough Journal
Taxing Capital Gains Only After Realization - Conversable Economist
The role of aerosol declines in recent warming - Climate Brink