☀ The Doomsday Clock needs a pro-progress switch to the Genesis Clock
Also: AI as a thought partner: A Quick Q&A with … machine learning researcher Katie Collins
Warning: The apocalypse edges nearer, at least according to the Bulletin of the Atomic Scientists. Its Doomsday Clock, which has tracked humanity's subjective proximity to annihilation since 1947, now stands at 89 seconds to midnight, the closest ever. Moving forward by a single second from its last setting in 2023, the clock reflects mounting concerns over nuclear arms, climate change, and artificial intelligence. These existential threats, the Bulletin warns, are amplified by a rising tide of misinformation. As the group said in a media release:
In setting the Clock one second closer to midnight, we send a stark signal: Because the world is already perilously close to the precipice, a move of even a single second should be taken as an indication of extreme danger and an unmistakable warning that every second of delay in reversing course increases the probability of global disaster.
Yet the conceptual timepiece, introduced in 1947 by the Bulletin, whose sponsors included Albert Einstein and Robert Oppenheimer, has long suffered from questionable calibration. During the nail-biting 1960s, when nuclear war loomed, it paradoxically retreated from doom. The 1980s saw it advance menacingly in response to Ronald Reagan's Soviet strategy — the very approach, of course, that helped vanquish the communist threat. Barack Obama's presidency prompted another puzzling adjustment, with unfounded optimism pushing the hands backwards.
Such inconsistencies suggest the need for a different chronometer altogether. Enter the Genesis Clock. Where the Doomsday Clock fixates on catastrophe, this new measure would track humanity's march toward abundance. It would embrace the Proactionary Principle, acknowledging that progress requires embracing rather than shunning risk. With AI, for instance, it would see not a harbinger of doom but a powerful tool for human advancement. The time has come, perhaps, to stop watching for Midnight and start waiting for Dawn.
As I explained in a 2023 essay, as well as in my 2023 book, the Doomsday Clock is utterly subjective in how it determines how close we are to destroying ourselves. But the factors driving the Genesis Clock would be more objective. (The Genesis Clock — in an homage to the first Doomsday Clock time — would initially be set at 5:53 AM, just seven minutes to a symbolic Dawn of 6 AM.) Among those that might determine how close we are to Dawn (a toy sketch of how they might be rolled up into a clock time follows the list):
How close are we to achieving artificial general intelligence?
How close are we to extending the average human lifespan to 120?
Do we have self-sustaining colonies off planet?
Do we have a cancer vaccine and a cure for Alzheimer’s?
Can we deflect a large asteroid or comet headed toward Earth?
Is the concentration of carbon dioxide in the atmosphere declining?
Is commercial nuclear fusion both technologically and economically viable?
Is less than 1 percent of the world’s population undernourished with a caloric intake below minimum energy requirements?
Are we bringing back extinct species like the woolly mammoth?
Is even the poorest nation no poorer than the average economy in 2000?
Is even the least free nation as free as the average nation in 2000?
Is productivity growth among rich nations at least 50 percent higher than its postwar average?
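To make that roll-up concrete, here is a toy sketch, assuming each indicator is scored from 0 (no progress) to 1 (achieved) and the average score is mapped onto the seven minutes between the initial 5:53 AM setting and a 6 AM Dawn. The scores, and the equal weighting, are hypothetical placeholders rather than measurements.

```python
# Toy Genesis Clock calculation. Illustrative only: the indicator scores and
# the equal weighting below are hypothetical placeholders, not measurements.
from datetime import datetime, timedelta

DAWN = datetime(2025, 1, 1, 6, 0)    # symbolic Dawn at 6:00 AM
START_MINUTES_TO_DAWN = 7            # initial setting: 5:53 AM

# Each indicator scored from 0.0 (no progress) to 1.0 (achieved).
indicators = {
    "artificial_general_intelligence": 0.3,
    "average_lifespan_of_120": 0.1,
    "self_sustaining_off_planet_colonies": 0.05,
    "cancer_vaccine_and_alzheimers_cure": 0.2,
    "asteroid_or_comet_deflection": 0.5,
    "atmospheric_carbon_declining": 0.2,
    "viable_commercial_fusion": 0.15,
    "undernourishment_below_1_percent": 0.6,
    "de_extinction_of_species": 0.3,
    "poorest_nation_vs_2000_average": 0.4,
    "least_free_nation_vs_2000_average": 0.2,
    "productivity_growth_above_postwar": 0.1,
}

def genesis_clock_time(scores: dict) -> datetime:
    """Map the average indicator score onto the minutes remaining before Dawn."""
    progress = sum(scores.values()) / len(scores)        # 0.0 .. 1.0
    minutes_remaining = START_MINUTES_TO_DAWN * (1 - progress)
    return DAWN - timedelta(minutes=minutes_remaining)

print(genesis_clock_time(indicators).strftime("%I:%M %p"))  # about 05:54 AM for these placeholder scores
```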
I can’t wait for Dawn when this clock’s alarm finally sounds.
Quick Q&A
✨👥 AI as a thought partner: A Quick Q&A with … machine learning researcher Katie Collins
Far from taking all of our jobs, AI might actually function best as our insightful colleague, working right alongside us to co-generate meaningful ideas and impactful work. In their paper, “Building Machines that Learn and Think with People,” Katie Collins and her co-authors consider what it would mean for AI systems to constructively collaborate with humans. The authors define these “thought partners” as “systems built to meet our expectations and complement our limitations.” Here’s the abstract:
What do we want from machine intelligence? We envision machines that are not just tools for thought, but partners in thought: reasonable, insightful, knowledgeable, reliable, and trustworthy systems that think with us. Current artificial intelligence (AI) systems satisfy some of these criteria, some of the time. In this Perspective, we show how the science of collaborative cognition can be put to work to engineer systems that really can be called “thought partners,” systems built to meet our expectations and complement our limitations. We lay out several modes of collaborative thought in which humans and AI thought partners can engage and propose desiderata for human-compatible thought partnerships. Drawing on motifs from computational cognitive science, we motivate an alternative scaling path for the design of thought partners and ecosystems around their use through a Bayesian lens, whereby the partners we construct actively build and reason over models of the human and world.
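To put a little flesh on that "Bayesian lens" phrase, here is a minimal toy sketch of my own, not code from the paper: a partner that keeps a posterior over what the user is trying to do and updates it as requests come in. The goals, observations, and likelihoods are invented for illustration.

```python
# Toy "thought partner" that maintains a model of the human: a Bayesian update
# of beliefs about the user's current goal. All values here are invented.
goals = ["debugging", "brainstorming", "summarizing"]
prior = {g: 1 / len(goals) for g in goals}

# P(observation | goal): how likely each kind of user message is under each goal.
likelihood = {
    "pastes a stack trace":       {"debugging": 0.80, "brainstorming": 0.10, "summarizing": 0.10},
    "asks an open-ended what-if": {"debugging": 0.10, "brainstorming": 0.70, "summarizing": 0.20},
    "pastes a long article":      {"debugging": 0.05, "brainstorming": 0.15, "summarizing": 0.80},
}

def update(belief: dict, observation: str) -> dict:
    """Posterior over the user's goal after one observed request."""
    unnormalized = {g: belief[g] * likelihood[observation][g] for g in belief}
    total = sum(unnormalized.values())
    return {g: p / total for g, p in unnormalized.items()}

posterior = update(prior, "pastes a stack trace")
print(max(posterior, key=posterior.get))  # -> debugging
```

A real thought partner would reason over far richer models of the task and the person, but the loop is the same: observe, update the model of the human, and tailor the response accordingly.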
I asked Collins a few quick questions about how we might best design these thought partners to achieve that very goal. Collins is a Machine Learning PhD candidate in the Computational and Biological Learning Lab at the University of Cambridge, as well as a Student Fellow at the Leverhulme Centre for the Future of Intelligence. She previously worked as a student researcher at Google DeepMind.
1/ What's the difference between designing AI to work well with humans versus trying to make it act human?
I’m particularly excited by the question: How can we build machines that meet our expectations and complement our limitations? A machine that “perfectly” meets our expectations may need to be so similar to us, or so simple, that we fail to capture some of the benefits of why we may want to build AI tools in the first place: to go beyond what we can do. In contrast, if we build AI systems that perfectly complement our limitations, they may act so unlike us that we struggle to predict how they will work in new situations, hampering our ability to determine when and how we can effectively interact with them. I see the question of designing AI systems that “work well” with people as navigating that tradeoff: In what ways should an AI system engage with the world like a human and in what ways not? To address that question, I think we need a deep engagement with cognitive science and the behavioral sciences more broadly, as well as with the target users, to characterize people’s expectations for machines and what behavioral patterns from people we do (or don’t) want to build in.
2/ With AI helping us search and learn stuff now — do you think traditional online research will become obsolete?
I certainly think that AI systems which can go out and search, learn, and communicate information to us in new ways will change the ways in which we do research. However, I don’t think it will make research obsolete! Instead, AI systems which can increasingly engage in knowledge work that had previously been exclusively in the human purview will, I imagine, place even more emphasis on the need to develop critical thinking abilities and judgement to assess: (1) when we should even use an AI system for a particular task, and (2) how to judge whether the output is reliable or not. I think this will usher in a new set of “literacy” skills (“AI literacy”), similar to how my generation may have grown up learning good “Google search” practices.
3/ What's an example of a system where AI works with multiple players, not just one-on-one? What makes that better?
One multi-agent direction that I’m personally very excited about is classroom teaching. It can be very hard in big classrooms for teachers to give personalized feedback and tailor lesson material to individual students or small groups of students. Even in Cambridge, UK, where a substantial amount of teaching is done in very small groups (three students or fewer), it can be hard to balance lesson material for everyone. I think there’s a wide range of possibilities in how we think about the synergy of AI and education as “thought partners” for individual students and for teachers, to help them deliver tailored education to students. And of course, there are a range of questions around how and where AI systems may be deployed in educational settings that maintain and foster interaction amongst small groups of students, to ensure people still learn from human interactions! The question of multi-agent systems is also pertinent in medicine, where you may have agents with different specialties engaging with one another and with a broader care team (including patients, patients’ families, and doctors). I’m very excited, in the coming months and years, about moving beyond the dyad (one human, one AI) setting, as it opens up many new research questions and practical challenges.
4/ Do you think we'll end up using some AI systems to keep other AI systems in check?
Similar to the question of what multi-agent networks may look like — I do imagine we will have some form of AI systems auditing other AI systems. To an extent, we already do, e.g., with gating of some model outputs. However, I think much more research and thought is needed around when and where we continue to incorporate human evaluation and oversight. If we are assessing how well AI systems work with humans, I think the “gold standard” evaluation remains some form of interactive evaluation with real humans. Doing that scalably is still an open question!
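The output gating Collins mentions can be sketched simply: one model drafts an answer, a second model or classifier scores it, and the answer is released only if the score clears a threshold, otherwise it falls back to human review. The function names and threshold below are hypothetical stand-ins, not any particular system's API.

```python
# Hypothetical sketch of one AI system gating another's outputs.
AUDIT_THRESHOLD = 0.8  # minimum auditor confidence required to release an answer

def generate_answer(prompt: str) -> str:
    # Stand-in for the primary model; a real system would call an LLM here.
    return f"Draft answer to: {prompt}"

def audit_score(prompt: str, answer: str) -> float:
    # Stand-in auditor; a real system would use a second model or classifier.
    return 0.0 if "unsafe" in answer.lower() else 0.9

def gated_response(prompt: str) -> str:
    answer = generate_answer(prompt)
    if audit_score(prompt, answer) >= AUDIT_THRESHOLD:
        return answer
    return "Escalated to human review."  # human oversight as the fallback

print(gated_response("Summarize this paper for me."))
```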
5/ You mentioned AI should understand different ways people think — how would that actually work? Should we give different AI systems different personalities or viewpoints? Is it a good idea to deliberately build in different perspectives?
Someone who’s a talented mathematician may demand a different level of explanation or support than someone who’s an experienced doctor, and an experienced doctor may require different assistance than a novice doctor. Here again, I think there’s a lot of potential in the synergies between the behavioral sciences and AI — to understand the role of expertise, and different kinds of expertise, and other experiences on the ways in which people think, plan, and make decisions — and how that shapes the kind of interactive assistance people want or need. Part of this may involve collecting richer reasoning traces or interaction traces from a wider range of people. I’m personally more interested in then using these kinds of data and insights to guide the kinds of support an AI tool or thought partner may provide for different tasks (ideation versus low-level coding), which to an extent may benefit from different modes of communication (e.g., modulating the verbosity or sophistication of language, or even the media communicated: just text, or diagrams and images too). However, with all of this, I think there’s a fine line between building AI systems that say what we want versus ones that actually help us, which also likely remains task- and person-specific, to an extent.
On sale everywhere: The Conservative Futurist: How To Create the Sci-Fi World We Were Promised
Micro Reads
▶ Economics
▶ Business
▶ Policy/Politics
On DeepSeek and Export Controls - Dario Amodei
▶ AI/Digital
DeepSeek Is Coming for Sam Altman’s Other Company Too - Bloomberg Opinion
▶ Biotech/Health
▶ Clean Energy/Climate
What If AI Can’t Solve Climate Change? - Heatmap
▶ Space/Transportation
▶ Up Wing/Down Wing
Antikythera - Long Now
▶ Substacks/Newsletters
I don’t believe DeepSeek crashed Nvidia’s stock - Understanding AI
Hayek on Decentralized Information in Markets - Conversable Economist
More AI Efficiency Will See More Demand for AI - next BIG future