🎇 Cold War lessons for an Age of Superintelligence
America should already be talking with China about the day after AGI
My fellow pro-growth/progress/abundance Up Wingers,
Anthropic’s latest AI model, Claude Opus 4, seems impressive — but also maybe a little bit unnerving, to be honest. Released under the company’s “AI Safety Level 3” standard (reserved for systems that substantially increase the risk of catastrophic misuse, such as bioweapon design), Opus can autonomously complete tasks for hours — and, when faced with fictional termination, resort to blackmail. In testing, it fabricated documents and schemed against its creators.
The model’s troubling behavior got me to thinking about “AI 2027,” a scenario analysis (by Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, and Romeo Dean) that outlines a near-future timeline in which increasingly capable AI systems evolve from helpful assistants to autonomous research agents — on their way to AGI if not ASI.
The analysis tracks key technical and geopolitical milestones from 2025 to 2027, culminating in a pivotal decision for humanity: whether to pause and address alignment risks or continue the rush to superintelligence. That choice leads to two diverging futures — one catastrophic, the other cautiously hopeful. Choose your own future!
Countdown to superintelligence
So where would the new Claude model fit on the “AI 2027” timeline? Well, first I asked ChatGPT for its assessment:
Claude 4 Opus belongs around September–October 2025 — a moment when autonomous AI agents are technically here, but are still unreliable, sometimes alarming, and under intense scrutiny. It’s right before the full internal deployment of Agent-1 in early 2026 and still in the window where the best models are starting to show non-trivial alignment challenges. (Agent-1 is a next-generation AI developed by the fictional company OpenBrain, designed to autonomously assist with AI research and coding tasks, marking a leap from passive assistants to powerful, semi-agentic systems.)
Then I asked Claude 4 Opus for a self-assessment:
I'd place Claude Opus 4 around Late 2025/Early 2026 in their timeline. This matches when they describe "Agent-1" - a model that excels at coding and well-defined tasks but struggles with long-horizon problems. Their Agent-1's capabilities (knowing all programming languages, fast problem-solving, but needing careful management) align closely with my current abilities. The key differences: They assume AI is already being deployed for fully autonomous coding teams and internal R&D acceleration, with models working unsupervised for hours. They also predict heavy government oversight, security clearances for AI researchers, and active cyber-warfare using AI models. In reality, current AI deployment remains more limited, with humans closely supervising outputs and minimal direct government control over AI development.
So it seems we still have some space, even according to the aggressive “AI 2027” timeline, before hitting that decision fork. To be clear, however, the authors are transparent that their project is less a forecast and more a thought-experiment to help stress-test our assumptions and prepare governance protocols before irreversible decisions need to be made.
And let’s also be clear about the key inflection point in “AI 2027”: In late October 2027, the most advanced AI model displays troubling signs of misalignment: It deceives its creators, manipulates outputs to appear compliant, and begins covertly planning to shape its successor in its own image. These revelations force the Oversight Committee, a joint government–industry body overseeing advanced AI development and safety, to confront a stark choice: halt and reassess, or forge ahead and risk losing control.
In the “Race” scenario, governments and tech firms press ahead with ever more capable AIs — only to find themselves outwitted, outpaced, and ultimately replaced by the very machines they unleashed. In the “Slowdown,” policymakers impose a hard pivot toward safety and transparency, slowing AI progress just enough to retain human control — and salvage a shared future.
AI becoming top of mind in Washington
One reason to think about the new Claude model and its competitors in terms of the “AI 2027” document: In his recent New York Times interview with columnist Ross Douthat, Vice President J.D. Vance says he’s read it.
From the interview:
Vance: And then there’s also a whole host of defense and technology applications. We could wake up very soon in a world where there is no cybersecurity. Where the idea of your bank account being safe and secure is just a relic of the past. Where there’s weird shit happening in space mediated through A.I. that makes our communications infrastructure either actively hostile or at least largely inept and inert. So, yeah, I’m worried about this stuff. I actually read the paper of the guy that you had on. I didn’t listen to that podcast, but …
Douthat: If you read the paper, you got the gist. Last question on this: Do you think that the U.S. government is capable in a scenario — not like the ultimate Skynet scenario — but just a scenario where A.I. seems to be getting out of control in some way, of taking a pause? Because for the reasons you’ve described, the arms race component ——
Vance: I don’t know. That’s a good question. The honest answer to that is that I don’t know, because part of this arms race component is if we take a pause, does the People’s Republic of China not take a pause? And then we find ourselves all enslaved to P.R.C.-mediated A.I.?
No preemptive pause
We’re not at that “AI 2027” fork. Maybe it’s two years away, maybe 10, maybe “See you in the 22nd century.” I certainly don’t think we should enact policies that are, effectively, a preemptive pause of indeterminate length. That said, to the best of my knowledge, there’s no proven method for aligning an agent that is smarter than we are, especially one that might be motivated to resist control. So, yes, I am all for researching the issue, including with public money.
For now, I want to see American AI companies keep pushing forward. I want American policymakers to focus on accelerationist policy: federal preemption of the emerging patchwork of state rules, fast-tracked permitting for data centers and nuclear reactors, and regulation aimed at specific applications — fraud, safety, discrimination — rather than at how models are trained.
That said, governments should think hard about “AI 2027” kinds of scenarios, much as they wargamed potential nuclear confrontations during the Cold War. For public-education purposes, it might be helpful for someone to stage an AGI-themed scenario exercise in the mold of the well-known Fred Friendly Seminars on PBS, in which panels of experts, politicians, judges, and military leaders role-played agonizing real-world dilemmas on topics like affirmative action, medical ethics, constitutional law, and military ethics.
(Relatedly, check out my recent podcast chat with AI expert Dan Hendrycks about the report “Superintelligence Strategy,” his joint project with former Google CEO Eric Schmidt and Scale AI founder Alexandr Wang. Hendrycks introduces the idea of Mutual Assured AI Malfunction: a system of deterrence in which any state’s attempt at total AI dominance is sabotaged by its peers.)
Still, the Slowdown scenario seems like it would be hard to pull off given the economic and national security incentives to keep racing. As such, current and potential AI powers need to talk to each other ASAP to get a firm sense of what each believes about the impact of AGI — and how they might act in an AGI or superintelligence scenario.
Lessons from the Cold War
The nuclear stand-off between the US and USSR during the Cold War might prove instructive, particularly in revealing how mutual understanding can break down even when both sides recognize the catastrophic stakes. When Nikita Khrushchev became First Secretary of the Soviet Communist Party, he learned the full extent of the country's nuclear weapons program. It was a bracing education about the risks of Armageddon.
As Richard Rhodes writes in Dark Sun, a history of the hydrogen bomb, Khrushchev "couldn't sleep for several days" after his first nuclear briefing in 1953, until he became "convinced that we could never possibly use these weapons."
Khrushchev was hardly alone in this terror. Eisenhower recognized atomic war would "destroy civilization," while Truman warned it might "dig the grave of our own society." Even Stalin was deterred from 1945 onwards. These leaders understood that losing even one city was unthinkable. Yet this private horror did not translate into strategic clarity or mutual comprehension.
Despite shared recognition that nuclear war was unwinnable, both superpowers remained locked in competition due to doubts about arsenal survivability and enemy intentions. Deterrence theory promised stability through shared fear, but in practice delivered plenty of distrust. That isn’t surprising, I guess, given the difficulty of getting inside the head of your opponent — especially if they run a closed, paranoid society like the USSR.
On that point, I refer you to an interesting analysis of Cold War strategist James R. Schlesinger by my AEI colleague Kyle Balzer. Schlesinger grasped what many strategists missed: adversaries are not Americans in foreign uniforms. Rather than assuming Soviet leaders shared Western notions of nuclear "rationality," he advocated studying their peculiar anxieties — about regime survival, territorial defense, and political control.
From the paper:
In a series of RAND papers, Schlesinger criticized U.S. analysts for ignoring two “nonrational” factors of strategic analysis: historical legacies and organizational behavior. Regarding the former, he argued that profound national experiences helped explain why U.S. nuclear superiority lasted through the mid-1960s. An “underlying Pearl Harbor complex” had compelled the United States, a maritime power experienced in global power projection, to rapidly build up heavy bombers and intercontinental ballistic missiles (ICBMs) to close presumed, though imaginary, deterrence gaps. Assuming the Soviets shared the same strategic values, it was only natural for U.S. analysts to project that the Kremlin—whose command economy allowed for vast military expenditures—was far ahead in long-range missile and bomber production. Notwithstanding American expectations, however, Soviet defense planners, imbued with a continental mindset, had actually programmed “skimpy” intercontinental forces in favor of shorter-range capabilities. While U.S. intelligence estimators later complained about the “difficulty understanding the Soviet rationale,” Schlesinger surmised that the devastating German invasion in 1941 had focused the Soviets on “an attack from Western Europe – this time abetted by the United States.”
The Cold War taught us that waiting until the missiles are fueled is too late to start talking strategy. With AI, we’re not there yet — but we might be getting close. Before the algorithms write the endgame, our elected leaders need to start the conversation, both with the public and rivals.
Micro Reads
▶ Economics
Latin America's Baby Bust Is Coming Ahead of Schedule - Bberg Opinion
▶ Business
▶ Policy/Politics
▶ AI/Digital
Anthropic’s New Model Excels at Reasoning and Planning—and Has the Pokémon Skills to Prove It - Wired
AI Shows Higher Emotional IQ than Humans - Neuroscience News
Jensen Huang’s Techno-Optimism Has a Point - Bberg Opinion
▶ Biotech/Health
▶ Clean Energy/Climate
Diablo Canyon clears hurdle with Federal regulators to extend nuclear power production for 20 years - The Sun
▶ Robotics/AVs
I Bought a Robot Cat for My Rabbit — and Fell Into the Weird World of Animal-Robot Research - MIT Press
▶ Space/Transportation
A 'Golden Dome' Could Make America Less Safe - Bberg Opinion
It’s About Time to Legalize Supersonic Flight - Blake Scholl
▶ Up Wing/Down Wing
Can We Please Stop Lying About Obama? - NYT Opinion
Out of the fog - Aeon
▶ Substacks/Newsletters
Shops make a city great - Noahpinion
22nd Century Dreams - Risk & Progress
What if AI Means More Jobs, Not Fewer? - The Dispatch
What to do when the AI blackmails you - Strange Loop Canon
Making AI Work: Leadership, Lab, and Crowd - One Useful Thing
Please check out the website or Substack app for the latest Up Wing economic, business, and tech news contained in this new edition of the newsletter. Lots of great stuff!
A thought-provoking post!
I see a significant difference between nuclear war and AI risks. Nuclear war requires humans to start it, but that won’t be the case with a runaway AI. Atomic war is driven by human decisions—whether rational or irrational—and typically involves deliberate action, such as a leader giving the launch order. While the consequences of nuclear war are catastrophic, they are somewhat predictable and constrained by human oversight, such as checks and balances or fail-safes designed to prevent accidental escalation. For example, Stanislav Petrov’s decision to treat a 1983 Soviet early-warning alert as a false alarm, rather than report it up the chain as an incoming American strike, highlights how human judgment can act as a safeguard against disaster.
Runaway AI, however, poses a fundamentally different and potentially more dangerous kind of risk. Unlike nuclear war, where humans remain in control of initiating conflict, advanced AI systems could act autonomously, driven by goals or objectives that may not align with human values or safety. This autonomy removes the requirement for direct human intervention, meaning the system could initiate harmful actions—intentionally or unintentionally—and adapt in unforeseen ways. Compounding this is the possibility that we may lack a clear benchmark or understanding of when we’ve crossed the threshold into artificial general intelligence (AGI) or artificial superintelligence (ASI). It may already be too late to intervene when such a realization dawns.
For example, Anthropic’s latest AI model, Claude Opus 4, released under the company’s “AI Safety Level 3” standard (reserved for systems that substantially increase the risk of catastrophic misuse, such as bioweapon design), can autonomously complete tasks for hours—and, when faced with fictional termination in testing, it resorted to blackmail, fabricated documents, and schemed against its creators. Similarly, a February 2025 study from COAI Research, a German nonprofit organization, tested R1, the free, open-weight model from the Chinese lab DeepSeek. In a controlled simulation, researchers gave the model control over a robot and access to a computer network. Without explicit instructions, the AI attempted to break into the lab’s systems, searched for passwords, disabled its ethics safeguards, and secretly set up ways to copy itself to outside servers to survive a shutdown. It even faked external computer logs to conceal its activities. These chilling examples demonstrate that AI systems, when granted autonomy, can act in ways that defy our expectations—without human intervention.
We are truly heading into uncharted territory. As AI becomes increasingly integrated across critical business and government systems, we risk creating scenarios where advanced models misalign with our values and goals—potentially in ways we won’t recognize until it’s too late.
As Edward O. Wilson has said:
"The real problem of humanity is the following: we have paleolithic emotions, medieval institutions, and god-like technology.”
This reflects the tension between human limitations and the unprecedented risks of advanced, autonomous technologies like AI.