🌎 The case against the case against Marc Andreessen's 'Why AI Will Save the World' essay
Also: A Friday flashback interview about space-based solar power
Quote of the Issue
“How we feel about the evolving future tells us who we are as individuals and as a civilization: Do we search for stasis—a regulated, engineered world? Or do we embrace dynamism—a world of constant creation, discovery, and competition? Do we value stability and control, or evolution and learning?” - Virginia Postrel, The Future and Its Enemies: The Growing Conflict Over Creativity, Enterprise, and Progress
The Essay
🌎 The case against the case against Marc Andreessen's 'Why AI Will Save the World' essay
If Marc Andreessen’s viral essay “Why AI Will Save the World” accomplishes nothing else, it provides an encouraging reminder of what technological progress can accomplish for 21st-century humanity. The venture capitalist begins the piece with a great list of what recent advances could make possible by functioning as a supertool across all sorts of areas of human endeavor.
But the bulk of the piece addresses some common criticisms of generative AI, especially large language models like ChatGPT and Bard. Andreessen frames the AI skeptic/pessimist/worrier/doomer issue — one he sees as becoming a full-fledged moral panic — as an example of the famous "Baptists and bootleggers" phenomenon that often emerges from reform movements: there are true believers ("Baptists") who advocate for regulations to address perceived risks, and there are self-interested opportunists ("bootleggers") seeking financial gain through those regulations.
When it comes to AI risk, he writes, the former “are true believers that AI presents one or another existential risks – strap them to a polygraph, they really mean it,” while the latter “are CEOs who stand to make more money if regulatory barriers are erected that form a cartel of government-blessed AI vendors protected from new startup and open source competition.” I think this is fair, although I won’t characterize which well-known names might fall into each category. Indeed, some folks might be both Baptists and bootleggers.
Andreessen then addresses various AI concerns and criticism posed by the AI skeptics/pessimists/worriers/doomers:
Will AI kill us all like Skynet? No. AI is math and code that’s designed, made, and controlled by humans. It can’t want anything, even to kill us. AI “is not going to come alive any more than your toaster will.”
Will AI ruin our society with misinformation? No. This concern is an extension of the social media “trust and safety” wars, where content is restricted or censored based on various criteria. The AI “alignment” types are a narrow and elitist group that wants to impose their morality on the rest of the world, and this is dangerous because AI will shape the future of everything. “In short, don’t let the thought police suppress AI.”
Will AI take all the jobs? No. Not only does history argue otherwise, so does basic economics. There isn’t a fixed amount of work to be done in the economy. Rather, tech progress “increases productivity, lowers prices, and creates more demand for new products and services, which in turn leads to more jobs and higher wages.” Even if machines could replace an unprecedented number of existing jobs and tasks, that scenario suggests “a takeoff rate of economic productivity growth that would be absolutely stratospheric [and] entrepreneurs would create dizzying arrays of new industries, products, and services, and employ as many people and AI as they could as fast as possible to meet all the new demand.” Beyond that? A world of sci-fi levels of prosperity and abundance.
Will AI lead to dangerous inequality? No. The stuff that starts as the gadgets of the rich — from cars and radio to computers and smartphones — eventually spreads throughout an economy. The owners of technology have an incentive to sell it to as many customers as possible, which lowers the price and spreads the benefits to everyone. And echoing his “Time to Build” essay, he contends that the lack of technology in sectors like housing, education, and health care, where government intervention prevents innovation and competition, is the true driver of inequality.
Will people do bad things with AI? Yes, but so what? AI is a tool, and a tool can be used for both good and ill. That’s no reason to ban the tool. Most bad uses of AI are already illegal and banning or severely limiting access to AI is not a solution. Rather, let’s leverage AI to prevent and counteract crimes, such as by creating systems to verify real content and people. “Let’s put AI to work in cyberdefense, in biological defense, in hunting terrorists, and in everything else that we do to keep ourselves, our communities, and our nation safe.”
The Andreessen Agenda is basically “faster, please.” Also, “freedom, please.” Both Big Tech and Scrappy Startups “should be allowed to build AI as fast and aggressively as they can.” Same with open source models: “There should be no regulatory barriers to open source whatsoever.” Or perhaps you would rather have the Chinese Communist Party be the world’s dominant AI superpower, Andreessen adds.
A good example of a measured and reasonable case against “Why AI Will Save the World” is “Marc Andreessen Is (Mostly) Wrong This Time” by Wired’s Gideon Lichfield. And one reason it seems so reasonable is that it’s thoroughly infused with the technology perspective offered in the book Power and Progress by economists Daron Acemoglu and Simon Johnson. (I’ve been writing about it.) Citing Acemoglu and Johnson, Lichfield points out that some kinds of innovations are more likely to cost jobs than to make existing workers more productive. Lichfield:
The real concern about AI and jobs, which Andreessen entirely ignores, is that while a lot of people will lose work quickly, new kinds of jobs—in new industries and markets created by AI—will take longer to emerge, and for many workers, reskilling will be hard or out of reach. And this, too, has happened with every major technological upheaval to date.
Lichfield’s other substantive point regards inequality. He’s not impressed by Andreessen’s (accurate) claim that companies have a profit incentive to eventually make new technologies widely available:
As the “classic example” he cites Elon Musk’s scheme for turning Teslas from a luxury marque into a mass-market car—which, he notes, made Musk “the richest man in the world.” Yet as Musk was becoming the richest man in the world by taking the Tesla to the masses, and many other technologies have also gone mainstream, the past 30 years have seen a slow but steady rise in income inequality in the US. Somehow, this doesn’t seem like an argument against technology fomenting inequality.
So here’s my response to Lichfield’s response. I think context matters a lot here. And the context in which Andreessen wrote his essay is one of almost undiluted media negativity about AI. On the employment issue, for instance, almost all the coverage has been about the threat GenAI poses to white-collar workers. Yet we have every reason to think new jobs will be created even if we can’t predict exactly what those jobs will be. And while there may indeed be dislocation, that’s a safety-net and education/training issue rather than a technological one. And if we are worried about tech progress that’s destructive rather than creative, we should make sure the US has a regulatory and R&D system that promotes bold, high-impact discovery, invention, and innovation. That’s far more important than, say, fiddling with the tax code to try to steer away from labor-replacing tech.
And as far as inequality goes, what kind of inequality really matters? Most of the value of innovation — such as the iPhone or mRNA vaccines — goes to society, not the innovators. That’s true even if the innovators also get crazy rich. (In a 2004 paper, economist and Nobel laureate William Nordhaus found that innovators capture a tiny 2.2 percent of the total social value of their innovations.) Lichfield’s inequality argument reminds me of the notion that the American taxpayer gets no return on federal drug R&D since the results of that government research get commercialized by private companies. Absurd. The return the American public gets is the massive benefit provided by those products. As biotech entrepreneur Safi Bahcall told me back in 2019:
Number one, that federal dollars pay for new drugs. No. Federal dollars pay for ideas. Here’s the difference. I have an idea in the shower for a movie. Here’s my vision: Robots take over the world. That’s an idea. Here’s the product: the movie Terminator. The distance between an idea and basic research and a finished drug is roughly the distance between me having that idea in the shower and James Cameron making the movie Terminator. It’s a huge, huge distance. So no, federal research does not pay for drugs. Federal research pays for ideas, and there are lots and lots of ideas for biology and drugs just like there are lots of ideas for movies, and very, very few actually get turned into something useful. That’s number one. Federal dollars do not pay for drugs.
Number two, that federal research turning into something commercial is a bad thing. As you just said in that sentence, that’s exactly the point of federal research. Federal research funds market failures, game-theory issues where it doesn’t make sense for any one company to invest but it does make sense for the entire society. Let’s say the invention of GPS or the internet or fusion power or nuclear power or genetic engineering. The goal of that is to create something commercial otherwise what are we doing it for? Just for fun?
Number three, that the government doesn’t get any economic return. Of course it does. Once it’s created, whether it’s the biotechnology industry or the satellites that deliver GPS and it’s empowered every smartphone in the world or the internet which has enabled these trillion dollar companies. What do those companies do every year? They pay taxes, a lot of taxes. And what do those individuals who work at those companies do every year? They pay taxes. So, of course they get an economic return.
Most importantly, Andreessen’s essay focuses on the trade-offs and opportunity costs of trying to slow or otherwise constrain the progress of this fast-developing yet embryonic technology. It shines a light on exactly the areas that media coverage and many policymakers miss. Expecting any technology to save the world may be asking too much, but criticizing a technology because it might only make the world a better place on net, with both costs and benefits, seems self-defeating.
5QQ: Friday Flashback
💡 A Friday flashback interview about space-based solar power
In a recent newsletter issue, I included among the micro reads an LA Times story about a Caltech team’s breakthrough in space-based solar power. Last week, researchers from the Space Solar Power Project announced they had successfully beamed power to a receiver on Caltech’s campus in Pasadena, California, from a satellite orbiting 550 km above the Earth. It’s a thrilling proof of concept for an energy source first dreamt up by Isaac Asimov in his 1941 short story, “Reason.” The basic idea is to harvest solar energy in orbit, where there are no clouds and the sun never sets, and transmit that energy back to Earth.
No doubt, reader, you have many questions. Fortunately, the very first episode of Faster, Please! — The Podcast features Ali Hajimiri, co-director of the Caltech Space Solar Power Project and Bren Professor of Electrical Engineering and Medical Engineering at Caltech. Here are five highlights from that interview in 5QQ format.
1/ Space-based solar seems like a beautiful, elegant solution. But why is it a good idea? What problem is it solving?
The primary problem that it solves is being able to get around the days and nights, the cycles of the weather—having dispatchable power where you need it, when you need it, and as much as you need. What we do allows you to send the power where you need at the time you need—and you can even break it up into different proportions. You can say, “I want to send 20 percent to New York, 30 percent to LA, and 40 percent to, I don’t know, Seattle.”
The other thing is that there are places that don’t have the power infrastructure. A good analogy to this is cell phones versus landlines. Thirty years ago, there were places in Africa that didn’t have landlines. In Sub-Saharan Africa today, those same places still don’t have landlines, but they have leapfrogged to cell phones. So this way, you can actually get power to places that don’t have it.
2/ What sorts of concerns are raised about this technology and how do you deal with those?
There are people who think about, “Is it going to cause interference?” and all those things. And those are the kinds of things that we’ve learned how to deal with in radio systems. We have many different radio systems working concurrently and seamlessly, and we don’t seem to have problems with that.
There’s also another set of concerns some people raise. “Is it going to fry birds flying overhead?” The answer is that the energy density that anything, even in that beam spot, will get is comparable to what you get from standing out in the sun—except for the fact that it’s what we call non-ionizing. So [the sun’s rays] can cause cancer, but radio frequencies don’t. All they can do is generate heat. The benefit of this thing is that with that power level, you’d recover probably close to three times, three to three-and-a-half times, more than what you recover from photovoltaics. And you can have it during the day or night.
3/ Is there something you need government to do or to stop doing at this stage in the development of the technology?
These are the kind of things that, to get started, you need a big entity like government to put investment in it—in terms of research and development—because the barrier to entry is pretty large, regarding the amount of initial investment. Of course, the return eventually is going to be large, too.
About the technologies related to wireless power transfer, both terrestrial and space, I think the government needs to be more proactive in terms of allowing it to flourish and not getting in the way. With everything new that comes in, there of course needs to be a thoughtful discourse about it. But if it gets to a point of becoming too much of an impediment to innovation and progress, then that would not be a good thing.
So I think allowing these technologies to flourish—in terms of spectral allocations and other things of that sort—would be a good thing to continue to do.
4/ Are there key, deal-breaking technological challenges that you still need to solve?
There are. I mean, it is fair to say that not all the technical challenges have been solved, but the pathway has become more clear over the last several years in terms of at least how we go about solving them.
Nobody has built a coherent structure of this magnitude anywhere—not even on Earth, let alone in space. So we have this very thin, very flat sheet that transmits the energy. It works because of the coherent addition of all these billions and billions of sources—it’s like an army of ants. If you have an army of ants that are, say, a mile apart, you want them to be synchronized to within a few picoseconds (and a picosecond is one-trillionth of a second).
It’s a combination of various advanced technologies that allows us to get this kind of timing synchronization. But those are the kind of challenges that we’re trying to overcome and solve when you go to this scale. And it is something that has emerged because we’ve solved the other problems. Now we are at the point to say, “Okay, well, now we are scaling it up. How do we do these things?” And we need to solve these problems.
5/ How has the decline in launch costs affected the viability of space-based solar?
I would say it’s one of the four or five enablers that converged to make this closer to something that can actually be done. Definitely, SpaceX is a catalyst in lowering the barrier for space enterprises—anything that you want to do, non-governmental stuff, smaller projects—SpaceX and the like. I mean, there are other places like Blue Origin, things like that.
So people are trying to do that. They are trying to level the playing field so that more entrepreneurs can get into it. Now it can be in academia, industry, or anywhere else. And that plays a role. And again, there are all these other technologies and architectural changes that also enable us. So I would say that’s definitely one of the four or five catalysts that had to come together to make this happen.
Micro Reads
▶ Taurine's Astounding Anti-Aging Powers Raise New Questions - Lisa Jarvis, Bloomberg Opinion
▶ Inside the quest to engineer climate-saving ‘super trees’ - Boyce Upholt, New Scientist
▶ Nuclear-Powered Cargo Ships Are Trying to Stage a Comeback - Chris Baraniuk, Wired
▶ Apple’s Vision Pro headset has made the metaverse feel outdated - John Gapper, FT Opinion
▶ What’s the Best Use for Crypto? Let AI Figure It Out - Tyler Cowen, Bloomberg Opinion
▶ New A.I. Chatbot Tutors Could Upend Student Learning - Natasha Singer, New York Times
▶ Regulating Artificial Intelligence: The Need, Challenges, and Possible Solutions - Shane Tews, AEIdeas
▶ Is China Gaining a Lead in the Tech Arms Race? - Jack Detsch, Rishi Iyengar, and Robbie Gramer, Foreign Policy