⛔ Why AI can't replace market capitalism
Also: 5 Quick Questions for … economist Michael Strain on technology, AI, and the US economy
Quote of the Issue
“It would appear that we have reached the limits of what it is possible to achieve with computer technology, although one should be careful with such statements, as they tend to sound pretty silly in five years.” - John von Neumann
The Essay
⛔ Why AI can't replace market capitalism
If you’re a regular reader of this newsletter or the writings and social media postings of others considered part of the loose “pro-progress” movement, you’re familiar with the criticisms of “degrowth.” Some extreme left-wing environmentalists think continued global economic growth, much less accelerated growth (Faster, please!), is unsustainable. Growth must stop, and we must redistribute existing wealth from rich to poor. As activist Greta Thunberg has famously put it, “We are at the beginning of mass extinction, and all you can talk about is money and fairy tales of eternal economic growth.”
To degrowth proponents, the notion of a) artificial intelligence accelerating growth and b) increased civilizational material consumption may be as worrisome as AI taking all the jobs or subjugating/eliminating humanity. Less prominent, it seems to me, are the left-wing proponents of techno-socialism. They imagine a near utopia where technology solves almost all problems, not the least of which is the need for humans to do much of anything. Let smart machines do all the work, while carbon-based life forms relax and collect a bountiful universal basic income. “Fully Automated Luxury Communism,” or FALC, as not only the Next Big Thing but the Final Big Thing.
But while most people seem to focus on the “fully automated” part of this vision, I want to explore the “communism” part. A FALC future isn’t just one where robots do all the work, but also one where AI makes all the economic decisions as the ultimate central planner for this supposedly wonderful world. With better hardware and smarter software, the thinking goes, computers could efficiently allocate resources in the absence of a market price system.
Broadly: The price system is a decentralized information system that lets producers know what goods and services are in demand and how many resources are needed to produce them. Prices are set by supply and demand, and producers use that information to decide what to produce and in what quantities. Simply put, prices contain the information necessary for a smoothly functioning economy. As Friedrich Hayek put it:
Even the single controlling mind, in possession of all the data for some small, self-contained economic system, would not—every time some small adjustment in the allocation of resources had to be made—go explicitly through all the relations between ends and means which might possibly be affected. … Fundamentally, in a system in which the knowledge of the relevant facts is dispersed among many people, prices can act to coordinate the separate actions of different people in the same way as subjective values help the individual to coordinate the parts of his plan.
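Hayek’s point about prices as coordinating signals can be illustrated with a toy sketch (my own illustration, not from the essay): a simple “tatonnement” loop in which no planner ever computes the equilibrium. The price merely responds to excess demand, and the market-clearing price emerges from that feedback alone.

```python
# Toy model of price coordination: nobody "solves" for the equilibrium;
# the price rises when demand exceeds supply and falls when supply
# exceeds demand, and the market-clearing price emerges from feedback.

def demand(price):
    # Buyers want less as the price rises (hypothetical linear schedule).
    return 100 - 2 * price

def supply(price):
    # Sellers offer more as the price rises (hypothetical linear schedule).
    return 10 + 4 * price

def adjust(price=1.0, step=0.05, tol=1e-6, max_iters=10_000):
    """Walrasian tatonnement: nudge the price toward clearing the market."""
    for _ in range(max_iters):
        excess = demand(price) - supply(price)
        if abs(excess) < tol:
            break
        price += step * excess  # excess demand pushes price up, glut pushes it down
    return price

p_star = adjust()
# p_star converges very close to the analytic equilibrium,
# where 100 - 2p = 10 + 4p, i.e. p = 15.
print(round(p_star, 4))
```

This is, of course, exactly the easy case: the demand and supply curves are known and fixed. Hayek’s argument, and the essay’s, is that in a real economy those schedules are dispersed, tacit, and constantly shifting, which is what the feedback process discovers and a central computation cannot simply be handed.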
But what if that “controlling mind” is some future iteration of a large language model of the sort at the heart of ChatGPT or Bard? What if the “economic calculation” problem of socialism is really just a computational problem? If so, then Moore’s Law + machine learning + Big Data to the rescue. Ironically, techno-socialists point to huge success stories of market capitalism as reason for optimism. As they see it, Amazon and Walmart — companies with annual revenue equal to the GDP of countries such as Argentina and Sweden — demonstrate that centralized economic planning is possible with modern technology that collects and processes massive amounts of dispersed data generated by all our business and consumer choices. Surely, with enough processing power, a machine learning algorithm could run even the largest national economy.
Except we are not just dealing with an issue of computation. While AI and computers can certainly process information and make decisions more quickly than humans, they lack the ability to discover and process knowledge in the same way that markets do. Econ 101: Markets are decentralized systems in which prices act as signals to coordinate the actions of millions of buyers and sellers. Prices reflect the subjective valuations of consumers and producers. Importantly, they allow for the discovery of new information about what goods and services are in demand and how to produce them efficiently. AI and computers cannot discover new information or adapt to changing circumstances in the same way that markets can. As Peter J. Boettke and Rosolino A. Candela explain in “On the Feasibility of Technosocialism,” a 2022 working paper:
Economic calculation is a tool that enables actors to steer a course in a turbulent sea of economic uncertainty, of ceaseless change, of ignorance of the environment, and of alluring hopes and haunting fears. Once all those are assumed away, then the functional significance of economic calculation disappears. But so would opportunities for mutual gain, entrepreneurial innovations, and discovery of new opportunities. In other words, if you assume away change, you assume away the possibility of economic growth and progress. Equilibrium means precisely that: equilibrium. No change, no dynamics, no adaptation, no adjustments. Just static optimality in the use of given technology, given tastes and given resource endowments.
In other words, information discovery, not just computation, is a huge problem for the FALC vision. AI doesn’t have the same incentives as entrepreneurs. It doesn’t have to worry about making a profit or losing money. As a result, AI isn’t as likely to make the same efficient decisions as entrepreneurs. When an entrepreneur makes a mistake, they lose money. This loss provides them with critical feedback that helps them to make better decisions in the future.
But even gathering the needed data isn’t so easy. Think about what economists call “tacit knowledge.” It’s knowledge that is difficult to codify and transmit, such as how to manage a company or negotiate a contract. Or think about this viral exchange on a recent 60 Minutes episode between reporter Anderson Cooper and music producer Rick Rubin:
Anderson Cooper: Do you play instruments?
Rick Rubin: Barely.
Anderson Cooper: Do you know how to work a soundboard?
Rick Rubin: No. I have no technical ability. And I know nothing about music.
Anderson Cooper: You must know something.
Rick Rubin: Well, I know what I like and what I don't like. And I'm, I'm decisive about what I like and what I don't like.
Anderson Cooper: So what are you being paid for?
Rick Rubin: The confidence that I have in my taste and my ability to express what I feel has proven helpful for artists.
For more on tacit knowledge, here’s economist Tyler Cowen giving a lecture, “Economics, Hayek, and Large Language Models,” yesterday at the London School of Economics:
I think AI, large language models, they’re actually going to make central planning harder for Hayekian reasons. My colleague Alex Tabarrok had one good way of putting it: If all these individuals have these armies of research assistants, colleagues, architects, they can just do a lot more stuff. Their work, their lives, their workflows are a lot more complicated. So if you have this intense multiplication of projects, probably the resulting economy is harder to plan.
I was doing a research paper on Jonathan Swift, the Irish writer from the 17th and 18th centuries. And there are all these obscure pamphlets by Swift, which by the way you can’t Google about and they’re not summarized on Wikipedia. And I just said to GPT, “Summarize for me what these different Swift pamphlets say.” And it did it for me in, I don’t know, a second and a half. And then I could decide, did I need to read that pamphlet or not read that pamphlet. It was super useful to me. The fact that you have that, which is amazing, is kind of like witchcraft. But it doesn’t make centrally planning an economy that much easier, because we know from reading Hayek, [that] key knowledge is quite decentralized.
We have an economy of bottlenecks. A lot of knowledge, à la Michael Polanyi, is very difficult to articulate. An entrepreneur, scientist, someone running a nonprofit will know very particular things embedded in a context that they couldn’t even spell out for you. But what they know in this hard-to-articulate fashion is essential to succeeding at what they do. Large language models don’t capture all of that, or even very much of that, however miraculous they may be. And then at the same time, like Alex Tabarrok pointed out, you have this intense multiplication of projects and economic complexity. So I don’t think it’s really a new path towards central planning. But one thing interesting about large language models, is just how much context they have. And in this sense they’re very different from a pocket calculator. I can pull out my pocket calculator—I don’t even have one anymore… Two plus two equals four, but it doesn’t have anything beyond that. Everything is very literal and very exact. But if you put in a query to GPT-4, it figures out what you mean.
I asked it a question, I think this was yesterday: I said, “Please define web 3.0 for me in a manner good for a Bloomberg reader.” I was writing a column for Bloomberg. I was thinking, what’s the appropriate definition? So I said, “Let’s get a definition that would make sense for a Bloomberg reader.” All I wrote was, “for a Bloomberg reader.” If you think about those words very literally, what is that even saying? Like someone reading Bloomberg, who has written stuff? Someone reading Bloomberg News? Bloomberg Opinion? Is Bloomberg the reader? Just “for a Bloomberg reader.” It doesn’t actually, in a literal sense, hang together. But GPT knew exactly what I meant. What I meant was someone who was financially oriented, pretty sophisticated in terms of economics, but not wanting a rarified, academic answer either. So GPT did its thing, however you want to conceptualize that, and it spit out an answer for me that I thought was exactly the kind of definition that a Bloomberg reader would find appropriate. So the ways in which GPT has this mapping of how words and concepts fit together, it’s this super complex mapping, and it figures out what you mean. It is this radical advance toward mobilizing some extra knowledge of context.
As Polanyi put it, “We can know more than we can tell.” We also know more than what an LLM can find on the internet. So sorry, techno-socialists: I think we’re going to have to keep relying on an innovative and entrepreneurial market system (run by all our human brains) that’s created an explosion of prosperity wherever it’s been tried for the past quarter millennium.
5QQ
💡 5 Quick Questions for … economist Michael Strain on technology, AI and the US economy
Michael Strain is the director of Economic Policy Studies and the Arthur F. Burns Scholar in Political Economy at the American Enterprise Institute. Strain is also the author of The American Dream Is Not Dead: (But Populism Could Kill It) (Templeton Press, 2020), in which he examines long-term trends in economic outcomes for typical workers and households.
1/ Has the internet contributed to US productivity growth?
Yes, the internet has contributed to US productivity growth. The contributions were not very easy to forecast back in the late 1980s and early 1990s when those sorts of forecasts were being made. For example, the internet and computerization more generally have had a big effect on productivity in the retail sector. They've had a big effect on productivity in the services sector more broadly. That wasn't something many people saw ahead of time. And there's a lesson for today as well. We are seeing pretty rapid advances in AI technology, in large language models, and in other sorts of generative AI. I think we should be very confident that eventually those technologies will increase the productivity of American workers and the productivity growth rate of the US economy. But it's going to be hard to know, sitting here right now, exactly how that's going to play out.
2/ Why is it wrong to look at productivity growth for at least the past 15 years and say the internet really hasn't done much for the economy?
I think we don't know what the productivity growth rate would have been in the absence of the internet. It's hard to say what the counterfactual would have been. I also don't think it is the case that the evidence suggests there hasn't been an effect. I think the evidence suggests that there has been an effect. It's hard to know what your expectation would be of the aggregate productivity growth statistics given that. But I think we also have to be modest about our ability to think about magnitudes here. It's much easier to say the internet had an effect — it's much easier to say generative AI will have an effect — than it is to say how big that effect was, or how big that effect will be.
3/ Some people are so worried about the impact of generative AI on labor markets that they think one policy response would be to tax capital and labor differently. The assumption is that we give too much of an advantage to capital versus labor, and therefore those tax rates should be changed. Do you think that's a policy lever that should be employed?
I don't think the right response to exciting new technological advancement that has the opportunity to drive long-term prosperity and increases in living standards over a long period of time is to attempt to slow down the rate of technological progress through the tax system.
4/ Some people on the left hope that AI will become so sophisticated and so smart that it can actually replace markets and the price system, becoming the ultimate techno central planner. Does that seem at all conceivable?
No. That doesn't seem even remotely plausible to me. I have a very hard time envisioning a world where human beings are willing to turn over key decisions about how society is organized and how our individual and collective lives are ordered to a computer.
5/ Setting aside the politics, do you think it is possible to have AI perform the function which is currently performed by the price mechanism and markets?
There are technological and even physical dimensions to that question that I'm just not sure about. For example, if a computer were able to make those sorts of resource allocation decisions for an economy with 330 million people, is there enough energy to power that computer? I don't know the answer to that question, but that’s one very important dimension that needs to be explored to fully answer that question.
But even assuming that those sorts of computational and physical obstacles can be overcome, I think my answer to your question is, yes and no. Yes, I think we could have a more efficient, more successful Soviet-style system where the central computer sends everybody an email and tells them what their career will be and how long they have to work. It monitors their job performance, and it decides who to promote and who not to. It makes sure that the economy is producing enough bread and makes sure the economy is producing enough amoxicillin and makes sure that the economy is producing enough houses. I think the technology could assist autocrats that have that ambition.
But I don't think technology would ever be able to do that as well as a decentralized, market-based system can do it. The reason for that is because it would be very hard for a computer or an artificial intelligence system to incorporate people's aspirations and people's hopes and people's dreams for the future and dreams for their children's futures.
Take a young person who doesn't do well on standardized tests but is really determined to succeed and goes off to start a business, or through sheer grit and determination is able to enter into one of the lucrative professions in our economy. An AI system could look at that test result from when that person was 12 years old, but I think it would be hard to see into that person's heart and to build that into the system. The same thing with innovators and entrepreneurs, and over a long enough period of time, those folks have a disproportionate impact in fueling long-term prosperity. And I think the world where we leave those types of decisions to individuals and where resource allocation decisions are decentralized, ultimately, is a world that will be much, much better than the one where those sorts of decisions are made by a computer — even a really smart computer.
Micro Reads
▶ Google DeepMind’s game-playing AI just found another way to make code faster - Will Douglas Heaven, MIT Tech Review
▶ Peter Singer on animal rights, octopus farms and why AI is speciesist - Madeleine Cuff, New Scientist
▶ Microsoft Is Bringing OpenAI’s GPT-4 AI model to US Government Agencies - Rachel Metz, Bloomberg
▶ Why the AI boom is not a dotcom redux - John Plender, FT Opinion
▶ Instagram is apparently testing an AI chatbot that lets you choose from 30 personalities - James Vincent, The Verge
▶ Industrial Research Labs and R&D Productivity - Anne Marie Knott and Natalya Vinokurova, SSRN
▶ The Problem with AI Licensing & an “FDA for Algorithms” - Adam Thierer and Neil Chilson, Federalist Society