Cold War lessons for an Age of Superintelligence
America should already be talking with China about the day after AGI
My fellow pro-growth/progress/abundance Up Wingers,
Anthropic's latest AI model, Claude 4 Opus, seems impressive, but also maybe a little bit unnerving, to be honest. Billed as a "Level 3" risk (denoting systems that substantially increase the chance of catastrophic misuse, such as bioweapon design), Opus can autonomously complete tasks for hours and, when faced with fictional termination, resort to blackmail. In testing, it fabricated documents and schemed against its creators.
The model's troubling behavior got me to thinking about "AI 2027," a scenario analysis (by Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, and Romeo Dean) that outlines a near-future timeline in which increasingly capable AI systems evolve from helpful assistants to autonomous research agents, on their way to AGI if not ASI.
The analysis tracks key technical and geopolitical milestones from 2025 to 2027, culminating in a pivotal decision for humanity: whether to pause and address alignment risks or continue the rush to superintelligence. That choice leads to two diverging futures: one catastrophic, the other cautiously hopeful. Choose your own future!
Countdown to superintelligence
So where would the new Claude model fit on the "AI 2027" timeline? Well, first I asked ChatGPT for its assessment:
Claude 4 Opus belongs around September-October 2025, a moment when autonomous AI agents are technically here, but are still unreliable, sometimes alarming, and under intense scrutiny. It's right before the full internal deployment of Agent-1 in early 2026 and still in the window where the best models are starting to show non-trivial alignment challenges. (Agent-1 is a next-generation AI developed by the fictional company OpenBrain, designed to autonomously assist with AI research and coding tasks, marking a leap from passive assistants to powerful, semi-agentic systems.)
Then I asked Claude 4 Opus for a self-assessment:
I'd place Claude Opus 4 around Late 2025/Early 2026 in their timeline. This matches when they describe "Agent-1" - a model that excels at coding and well-defined tasks but struggles with long-horizon problems. Their Agent-1's capabilities (knowing all programming languages, fast problem-solving, but needing careful management) align closely with my current abilities. The key differences: They assume AI is already being deployed for fully autonomous coding teams and internal R&D acceleration, with models working unsupervised for hours. They also predict heavy government oversight, security clearances for AI researchers, and active cyber-warfare using AI models. In reality, current AI deployment remains more limited, with humans closely supervising outputs and minimal direct government control over AI development.
So it seems we still have some space, even according to the aggressive "AI 2027" timeline, before hitting that decision fork. To be clear, however, the authors are transparent that their project is less a forecast and more a thought experiment to help stress-test our assumptions and prepare governance protocols before irreversible decisions need to be made.
And let's also be clear about the key inflection point in "AI 2027": In late October 2027, the most advanced AI model displays troubling signs of misalignment: It deceives its creators, manipulates outputs to appear compliant, and begins covertly planning to shape its successor in its own image. These revelations force the Oversight Committee, a joint government-industry body overseeing advanced AI development and safety, to confront a stark choice: halt and reassess, or forge ahead and risk losing control.
In the "Race" scenario, governments and tech firms press ahead with ever more capable AIs, only to find themselves outwitted, outpaced, and ultimately replaced by the very machines they unleashed. In the "Slowdown" scenario, policymakers impose a hard pivot toward safety and transparency, slowing AI progress just enough to retain human control and salvage a shared future.
AI becoming top of mind in Washington
One reason to think about the new Claude model and its competitors in terms of the "AI 2027" document: In a recent interview with New York Times columnist Ross Douthat, Vice President J.D. Vance said he has read it.
From the interview:
Vance: And then there's also a whole host of defense and technology applications. We could wake up very soon in a world where there is no cybersecurity. Where the idea of your bank account being safe and secure is just a relic of the past. Where there's weird shit happening in space mediated through A.I. that makes our communications infrastructure either actively hostile or at least largely inept and inert. So, yeah, I'm worried about this stuff. I actually read the paper of the guy that you had on. I didn't listen to that podcast, but …
Douthat: If you read the paper, you got the gist. Last question on this: Do you think that the U.S. government is capable in a scenario, not like the ultimate Skynet scenario, but just a scenario where A.I. seems to be getting out of control in some way, of taking a pause? Because for the reasons you've described, the arms race component …
Vance: I don't know. That's a good question. The honest answer to that is that I don't know, because part of this arms race component is if we take a pause, does the People's Republic of China not take a pause? And then we find ourselves all enslaved to P.R.C.-mediated A.I.?
No preemptive pause
We're not at that "AI 2027" fork. Maybe it's two years away, maybe 10, maybe "See you in the 22nd century." I certainly don't think we should enact policies that are, effectively, a preemptive pause of indeterminate length. To the best of my knowledge, there's no proven method for aligning an agent that is smarter than us, especially one that might be motivated to resist control. So, yes, I am all for researching the issue, including with public money.
For now, I want to see American AI companies keep pushing forward. I want American policymakers to focus on accelerationist policy: federal rules that override the state-by-state regulatory patchwork, fast-tracked permits for data centers and nuclear reactors, and regulation aimed at specific applications (fraud, safety, discrimination) rather than at how models are trained.
That said, governments should think hard about "AI 2027"-style scenarios, much as they wargamed potential nuclear confrontation during the Cold War. For public education purposes, it might be helpful for someone to stage an AGI-themed scenario exercise like the well-known Fred Friendly Seminars on PBS, in which panels of experts, politicians, judges, and military leaders role-played agonizing real-world dilemmas on topics such as affirmative action, medical ethics, constitutional law, and military ethics.
(Relatedly, check out my recent podcast chat with AI expert Dan Hendrycks about the report "Superintelligence Strategy," his joint project with former Google CEO Eric Schmidt and Alexandr Wang. The report introduces the idea of Mutual Assured AI Malfunction: a system of deterrence in which any state's attempt at total AI dominance is sabotaged by its peers.)
Still, the "Slowdown" scenario seems like it would be hard to pull off given the economic and national security incentives to keep racing. As such, current and potential AI powers need to talk to each other ASAP to get a firm sense of what each believes about the impact of AGI, and how they might act in an AGI or superintelligence scenario.
Lessons from the Cold War
The nuclear stand-off between the US and USSR during the Cold War might prove instructive, particularly in revealing how mutual understanding can break down even when both sides recognize the catastrophic stakes. When Nikita Khrushchev became First Secretary of the Soviet Communist Party, he learned the full extent of the country's nuclear weapons program. It was a bracing education about the risks of Armageddon.
As Richard Rhodes writes in Dark Sun, a history of the hydrogen bomb, Khrushchev "couldn't sleep for several days" after his first nuclear briefing in 1953, until he became "convinced that we could never possibly use these weapons."
Khrushchev was hardly alone in this terror. Eisenhower recognized atomic war would "destroy civilization," while Truman warned it might "dig the grave of our own society." Even Stalin was deterred from 1945 onwards. These leaders understood that losing even one city was unthinkable. Yet this private horror did not translate into strategic clarity or mutual comprehension.
Despite shared recognition that nuclear war was unwinnable, both superpowers remained locked in competition due to doubts about arsenal survivability and enemy intentions. Deterrence theory promised stability through shared fear, but in practice delivered plenty of distrust. That isn't surprising, I guess, given the difficulty of getting inside the head of your opponent, especially if they run a closed, paranoid society like the USSR.
On that point, I refer to an interesting analysis of Cold War strategist James R. Schlesinger by my AEI colleague Kyle Balzer. Schlesinger grasped what many strategists missed: adversaries are not Americans in foreign uniforms. Rather than assuming Soviet leaders shared Western notions of nuclear "rationality," he advocated studying their peculiar anxieties â about regime survival, territorial defense, and political control.
From the paper:
In a series of RAND papers, Schlesinger criticized U.S. analysts for ignoring two "nonrational" factors of strategic analysis: historical legacies and organizational behavior. Regarding the former, he argued that profound national experiences helped explain why U.S. nuclear superiority lasted through the mid-1960s. An "underlying Pearl Harbor complex" had compelled the United States, a maritime power experienced in global power projection, to rapidly build up heavy bombers and intercontinental ballistic missiles (ICBMs) to close presumed, though imaginary, deterrence gaps. Assuming the Soviets shared the same strategic values, it was only natural for U.S. analysts to project that the Kremlin, whose command economy allowed for vast military expenditures, was far ahead in long-range missile and bomber production. Notwithstanding American expectations, however, Soviet defense planners, imbued with a continental mindset, had actually programmed "skimpy" intercontinental forces in favor of shorter-range capabilities. While U.S. intelligence estimators later complained about the "difficulty understanding the Soviet rationale," Schlesinger surmised that the devastating German invasion in 1941 had focused the Soviets on "an attack from Western Europe, this time abetted by the United States."
The Cold War taught us that waiting until the missiles are fueled is too late to start talking strategy. With AI, we're not there yet, but we might be getting close. Before the algorithms write the endgame, our elected leaders need to start the conversation, both with the public and with rivals.
Micro Reads
▶ Economics
Latin America's Baby Bust Is Coming Ahead of Schedule - Bberg Opinion
▶ Business
▶ Policy/Politics
▶ AI/Digital
Anthropic's New Model Excels at Reasoning and Planning, and Has the Pokémon Skills to Prove It - Wired
AI Shows Higher Emotional IQ than Humans - Neuroscience News
Jensen Huang's Techno-Optimism Has a Point - Bberg Opinion
▶ Biotech/Health
▶ Clean Energy/Climate
Diablo Canyon clears hurdle with Federal regulators to extend nuclear power production for 20 years - The Sun
▶ Robotics/AVs
I Bought a Robot Cat for My Rabbit, and Fell Into the Weird World of Animal-Robot Research - MIT Press
▶ Space/Transportation
A 'Golden Dome' Could Make America Less Safe - Bberg Opinion
It's About Time to Legalize Supersonic Flight - Blake Scholl
▶ Up Wing/Down Wing
Can We Please Stop Lying About Obama? - NYT Opinion
Out of the fog - Aeon
▶ Substacks/Newsletters
Shops make a city great - Noahpinion
22nd Century Dreams - Risk & Progress
What if AI Means More Jobs, Not Fewer? - The Dispatch
What to do when the AI blackmails you - Strange Loop Canon
Making AI Work: Leadership, Lab, and Crowd - One Useful Thing
Please check out the website or Substack app for the latest Up Wing economic, business, and tech news contained in this new edition of the newsletter. Lots of great stuff!