⏩ A Quick Q&A on the 'effective accelerationism' (e/acc) movement with ... Nadia Asparouhova
'Technologists are implicit authorities on how we build the future. They have a unique responsibility to the public to represent the best of what the world could be.'
Quote of the Issue
“My regret is that I am not optimistic enough. It is not possible to project the fantastic worlds which will continue to open up to us in the coming years. Worlds which far transcend my most daring optimism. No one today can be too optimistic.” - FM-2030, Up-Wingers
The Conservative Futurist: How To Create the Sci-Fi World We Were Promised
“With groundbreaking ideas and sharp analysis, Pethokoukis provides a detailed roadmap to a fantastic future filled with incredible progress and prosperity that is both optimistic and realistic.”
Q&A
⏩ A Quick Q&A on 'effective accelerationism' with ... Nadia Asparouhova
Opposite the “doomer” worldview and its apocalyptic concerns about recent advances in artificial intelligence is a trending school of thought called “effective accelerationism,” better known as “e/acc.” Its followers, emerging from social media, are advocates for unrestricted tech progress, AI, and breakneck-speed innovation. Critics of e/acc have accused them of being reckless, delusional, and even cult-like. (Cult accusations go both ways, of course.)
In the latest issue of The New Atlantis, Nadia Asparouhova wrote a wonderful piece, “Tech Strikes Back,” about accelerationism as an overdue corrective to years of doom and gloom in Silicon Valley. Asparouhova is a researcher and writer whose work currently focuses on the growing social and political influence of the tech industry. She is the author of Working in Public: The Making and Maintenance of Open Source Software.
I asked Asparouhova a few quick questions about e/acc and their place within tech culture, as well as American culture at large.
1/ Why is accelerationism seemingly gaining traction among tech figures?
Tech is now coming out of a "backlash" era that started in the mid-2010s and lasted until the early pandemic. During the backlash, tech was heavily criticized by the public, and it was hard to speak openly about the benefits of technology, which was a jarring turn from the prior, techno-optimistic Obama era.
During the pandemic, a lot of technologists realized that there's still a place in the world for their skills — tech is especially good at revisiting stagnant issues with fresh eyes, and moving quickly and decisively — and also that there are very real social consequences to not doing these things. So that's given them confidence to reassert tech's value to the public, and acceleration is a bold, empowering slogan that captures this sentiment.
2/ How do e/acc proponents reconcile the need for optimistic advancement with the ethical considerations and risks associated with AI development?
Many e/accs would probably agree that powerful technology needs to be developed responsibly, but that we'll only develop an informed perspective by actually trying things, instead of designing safety protocols in a vacuum. In their mind, advancing AI is how we gain a better understanding of its risks.
I'd also say we should consider e/acc in terms of its political function, rather than overindexing on its aphorisms. E/acc is overcorrecting in the optimistic, learn-by-doing direction to prevent us from going too far in the cautious, stop-everything direction. The general public seems to be predisposed towards the cautious approach, because technology is scary and unfamiliar, and if you hear someone yelling for you to stop touching a dangerous machine, you stop. But there needs to be an interest group that can represent the optimistic view from an informed perspective.
I think of these two groups — e/accs and AI safetyists — as representing two strong ideological camps in the political arena, which need to be hashed out through the civil democratic process. It's everyone else's job — whether policy makers, or unaffiliated citizens who might be voting on or reading about these issues — to have access to both perspectives and figure out how to reconcile them with what's best for society. But if e/acc didn't exist, they wouldn't be able to do that at all; they'd only see the cautious perspective.
3/ From your perspective, how might e/acc principles influence broader societal issues, such as education, healthcare, and public policy? Are there examples where e/acc-inspired approaches could be applied or considered outside the tech sphere?
Yes! The acceleration rallying cry doesn't just apply to technology but to all spheres of life, which I think is a differentiating quality of tech in this current decade. There are technologists who are thrilled about starting families and building communities, for example, which is a refreshing narrative shift from the dorm-room-founder stereotype of the 2010s.
We're also seeing concerted efforts to clear away some of the institutionalist cruft across a number of societal issues. Plymouth is an organization, started by two technologists, that helps high-skilled immigrants navigate the visa process. The Institute for Progress, a tech-minded think tank, is partnering with the National Science Foundation to test new methods of assessing and funding scientists. The Atomic Energy Advancement Act, which recently passed the House with strong bipartisan support, would make it easier to build new nuclear reactors in the US and move us towards developing cheap, abundant forms of energy. What unites these different initiatives is a willingness to revisit old approaches and ask how we can iterate upon them to meet today's changing needs, instead of passively accepting – or worse, simply complaining about – how things don't work. People built these processes; with sufficient desire and effort, we can build better ones.
4/ How does the culture of the tech industry in this decade differ from the sort of prelapsarian tech culture of the 2010s that you depict in the article?
Tech has always been optimistic and enthusiastic about problem-solving, but in the 2010s, the industry had a less sophisticated toolkit. Tech was in its "software eating the world" era and primarily dominated by software engineers, so solving complex social issues like housing or climate change typically boiled down to putting a digital layer on them. Peter Thiel calls this "indefinite optimism" in his book Zero to One: the belief that the future will be better, without a clear plan as to how that will come about.
In the 2020s, we're seeing tech draw upon a much wider skill set and talent pool. Tech — which is a poorly defined term, but I think better described by its values than as a business industry — now comprises not just software engineers but also academics, journalists, scientists, hardware engineers, policymakers, and advocates. They share a desire to move quickly, a drive to find the best people to solve a given problem, and a willingness to put in the grunt work to get there.
5/ How closely or distantly aligned with the average American mindset is accelerationism? Does it represent the silent majority, or is it truly a niche outlook?
It's hard to say, because I think a lot of people's intuitions about what the "average American" thinks are trained on what they see from extremely online behavior, which isn't representative. I'd guess the majority of Americans are probably disengaged from any notion of progress whatsoever. Most people are passive consumers of technology and don't think much about where it comes from. Technology is a mysterious, unyielding force that exerts power — often, they think, unfairly — over their lives. From this perspective, it's understandable why some people skew towards what I'd call "indefinite pessimism": they're afraid of the future and reflexively lobby against it, but they can't really articulate why it will be bad.
I'm more disappointed by technologists who I'd call "definite pessimists": they believe the future will be bad, and are dedicating their talents towards preventing presumed apocalyptic outcomes, instead of imagining what the future could be and building proactively towards that. Technologists are implicit authorities on how we build the future. They have a unique responsibility to the public to represent the best of what the world could be — even, and especially, when they're terrified on the inside. That's the job of a leader. It doesn't mean being pollyannaish or tone-deaf about the future, but they need to inspire hope. If technologists project doom, as many have, that's the cue that others will follow.
Micro Reads
Business and Economics
How AI could explode the economy - Vox
Urbanization Passes the Pritchett Test - Paul Romer
Generative AI and the Future of Work: A Reappraisal - Carl Benedikt Frey and Michael Osborne
The Jobs Equation—Erik Brynjolfsson - The Atlantic
Why family-friendly policies don’t boost birth rates - FT
Software automation and teleworkers as complements and substitutes - VoxEU
The Innovation Puzzle: Patents and Productivity Growth - St. Louis Fed
A circular relationship between productivity and hours worked - VoxEU
AI boom broadens out across Wall Street - FT Opinion
Amazon writes its largest venture cheque yet for AI start-up Anthropic - FT
OpenAI courts Hollywood over plans for video generation model Sora - FT
$75,000 for a baby? South Korean businesses float incentives as demographic crisis looms - FT
Guns and Butter: Measuring Spillover and Implications for Technological Competition - SSRN
How Silicon Valley’s ‘Oppenheimer’ found lucrative trade in AI weapons - FT
Can Demis Hassabis Save Google? - Big Tech
Will A.I. Take All Our Jobs? This Economist Suggests Maybe Not. - NYT Opinion
Policy
Inside the shadowy global battle to tame the world's most dangerous technology - Politico
A.I. Leaders Press Advantage With Congress as China Tensions Rise - NYT
Vaccine investment is a no-brainer — so why aren’t we doing it? - FT Opinion
The AI Industry Is Steaming Toward A Legal Iceberg - WSJ
We Need Major, But Not Radical, FDA Reform - Progress Forum
Europe Blunders on AI - City Journal
How NSF’s budget got hammered - Science
Policymakers Should Let Open Source Play a Role in the AI Revolution - Adam Thierer
AI
Navigating the Challenges and Opportunities of Synthetic Voices - OpenAI
OpenAI Unveils A.I. Technology That Recreates Human Voices - NYT
Microsoft and OpenAI Plot $100 Billion Stargate AI Supercomputer - Information
Accountants Can Use the Help That AI Provides - Bberg Opinion
The AI Chip Behind Nvidia’s Supersonic Stock Rally - Bberg
AI chatbots are improving at an even faster rate than computer chips - NS
Mission: Impossible Language Models - Arxiv
Language Models Can Reduce Asymmetry in Information Markets - Arxiv
A conversation with OpenAI’s first artist in residence - MIT Tech Review
How We’ll Reach a 1 Trillion Transistor GPU - IEEE
Inside the Creation of the World’s Most Powerful Open Source AI Model - Wired
How three filmmakers created Sora’s latest stunning videos - MIT Tech Review
Here’s Proof the AI Boom Is Real: More People Are Tapping ChatGPT at Work - Wired
MIT Unveils Gen AI Tool That Generates High Res Images 30 Times Faster - Hot Hardware
Health
Scientists Put Tardigrade Proteins Into Human Cells. Here's What Happened. - ScienceAlert
Your Dog Will Have an Anti-Aging Drug Before You Do - Bberg Opinion
Tooth loss linked to early signs of Alzheimer’s disease - NS
The great rewiring: is social media really behind an epidemic of teenage mental illness? - Nature
Clean Energy
Fusion Tech Finds Geothermal Energy Application - IEEE
The Coming Electricity Crisis - WSJ Opinion
The U.S. Needs a Nuclear Energy Makeover - WSJ Opinion
Former Nuclear Regulatory Commission chair argues nuclear power isn’t a climate solution - Verge
Divisive Sun-dimming study at Harvard cancelled: what’s next? - Nature
Amazon and other tech giants can’t afford to lose power — so they’re making it - MarketWatch Opinion
Robotics
Why the Pentagon wants to build thousands of easily replaceable, AI-enabled drones - CNAS
Space/Transportation
Study: Autonomous vehicles could save hundreds of lives if they are more widely deployed - SacBee
Could a Self-Sustaining Starship Carry Humanity to Distant Worlds? - MIT Press
After Concorde, a long road back to supersonic air travel - Ars
Why Did Supersonic Airliners Fail? - Construction Physics
A Fiery Finale for a Rocket That Brings the Heat - NYT
Fusion drive space engine ready for flight - New Atlas
Up Wing/Down Wing
The Deaths of Effective Altruism - Wired
The rise of bleak chic - FT Opinion
It is human nature to be risk-averse*, but over the last 40-50 years we as a society have become more so. We equate slow movement with safety, excessively discount the risks of "do nothing", and freely externalize the costs of safety. Those costs are real and significant and, like other externalities, are a form of costly pollution.
I am not saying we should not do our due diligence. What I am saying is that we should not be captured by "analysis paralysis." No lawyer or insurer is going to recommend doing something risky; that's their seat at the table. But they need to stay in their lane, not commandeer the whole road or call all the shots. Too often we let them unilaterally make decisions concerning risk. Let them call out and quantify risk, and then let us make the informed decision.
If computers had been invented during the Progressive Era 100 years ago, you'd have to have an operator's license to have one and it would be "computing is a privilege, not a right".
Maybe we need tort reform. Maybe we need to repeal McCarran-Ferguson, shut down the ISO, and bust up the insurance monopoly.
Bankruptcy law has done wonders for society and innovation, and has taken the edge off of purely financial risk. Maybe we need a "bankruptcy law" for other kinds of risk.
You think we're controlled by Wall Street? It's more like we're controlled by Hartford, Connecticut and the insurance industry, who for some dumb reason have grabbed the mantle of being "society's conscience", which they most certainly do not deserve.
If there's one takeaway here, it's recognizing that SAFETY IS NOT FREE.
*Vegas wasn't built on winners, and neither was Hartford. The extended-warranty people have made billions exploiting this.
With the recent news about social media companies getting coerced by the government to censor political speech, it is obvious that the antitrust laws must be repealed.