Transcript: My Q&A with economist Pete Boettke on why AI can’t plan the economy
There's nothing new about hopes that technology can make socialism actually work
In case you missed yesterday’s episode of Faster, Please! — The Podcast, or if you prefer reading to listening, here’s the transcript of my conversation with economist Pete Boettke. (Typically, the podcast and transcript are published at the same time.) Pete is a university professor of economics and philosophy at George Mason University and director of the F.A. Hayek Program for Advanced Study in Philosophy, Politics, and Economics at the Mercatus Center. Last year, he and Rosolino Candela authored the paper, “On the Feasibility of Technosocialism.”
In this conversation, Pete discusses the history of dreams of tech-enabled economic planning, the post-Global Financial Crisis renaissance in socialist thinking, and whether new artificial intelligence tools will finally allow socialist planners to escape the basic economic problems that plagued 20th-century experiments. (Spoiler alert: No, they will not.)
Below is an edited transcript of our conversation.
Technosocialism in the 20th century
James Pethokoukis: The paper that you co-wrote on the feasibility of technosocialism is, I think, an interesting piece of economic philosophy and of economic history as well. But I think it gained a lot more relevance in November of last year with the introduction of ChatGPT, because when that technology was introduced and people started using it, I began to hear (I guess probably mostly from people on the left) that we had now started down a path where at some point this technology would improve enough that it could make the dreams of socialists of past decades finally happen. That it could make socialism an efficient way to run an economy.
Before we get into that, I mentioned that the paper is a piece of economic history, and the notion of technosocialism did not begin last November. It has a long history, and I wonder if you could just take a minute or two and explain a little bit of the history of technosocialism.
Pete Boettke: I think one of the key things to always keep in mind is that socialism in its scientific form, from Marx forward, promised to rationalize production, as opposed to the anarchy of production under the market. Socialists were going to use modern technologies, all through the different eras, to rationalize production. The first was simply the development of advanced bookkeeping and management techniques, like Taylorism, and the idea that scientific management could rationalize the economy. And they would point to things like the experience of wartime economies: how countries during wartime could mobilize their resources and use advanced tools of bureaucracy to rationalize production for the goal of fighting the war. And then the claim was, can you do that in peacetime? The assumption was, obviously we can, because we did it in wartime. They didn't really distinguish the two.
As modern tools of management evolved, there was always an aspiration that those tools could then be used to plan the entire economy as well: rather than just improve a firm, they could improve an entire economy. That included, by the way, linear programming at various points, and then other ideas from operations research, which was heavily influenced by this notion that you could rationally plan the outcome. Obviously, as computer technology improved, that became the great hope of a lot of people as well. You have the Soviets developing their ideas of linear programming; Kantorovich actually won a Nobel Prize for his work on these kinds of things. But then there was also the dream aspiration, even in Chile, where they thought they were going to have a computer system that could plan the economy. That's under Allende. They were never able to implement it. There was always this dream aspiration. The history of socialism is actually a long march through successive attempts to marry the aspirations of technology to the goals of socialism. And so this is just the latest iteration of it.
In the current context, besides technology, people also think this is the first time anyone has ever said “democratic socialism,” which of course is completely wrong too. The term “soviet,” actually, in Russian means “council,” as in workers' councils. Democratic socialism: they were going to bring real democratic planning, real democracy, to the idea. You're going to combine democracy and planning. But it's also the case that if you look at the English socialists in the 1930s, their argument was, “We are socialists in our economics because we're liberal democrats in our politics.” And the belief was that the Great Depression had revealed that moneyed interests and monopoly power were such that they undermined democratic society, so socialism had to be the way to do it. And this is why Hayek's criticism becomes all the more pointed, because he really meant it when he said that he dedicated The Road to Serfdom to “socialists of all parties.” He was trying to show a tragic outcome: you want these democratic values to be achieved, but socialism is actually going to end up undermining the democratic values. So it wasn't the case that earlier socialists just sat back and rubbed their hands and said, “How can I have a totalitarian society?” Totalitarianism is the unintended, undesirable outcome of their dream aspiration to have a rationalized, democratic society in which the state plays this major role.
The appeal of economic planning
I wonder if we could just narrow down for a second on the problem that technosocialism was meant to solve. I think most people are probably aware that the economy of the United States performed better than the economy of the Soviet Union, particularly toward the end. What, specifically, was the failure of central planning that they might have been able to solve if only they had better, more powerful computers?
From Adam Smith all the way up to the development of 20th-century economics, economists had articulated the role of property, prices, and profit and loss accounting in coordinating our economic lives and commercial society. And during this period of time, we of course witnessed and experienced tremendous economic growth and progress. Just look at 19th-century England and what happened in economic growth terms and everything like that. And then the United States and whatnot. But it was also the case that criticism of market economies grew up alongside this. You had the development of a concern with monopoly power. You had the development of a concern with business cycles and whatnot.
Again, going back to what I was saying before about rationalizing production, the belief was that the commercial society, guided by property rights, prices, and profit and loss, yes, could mobilize people's incentives and marshal them in a direction that leads to economic growth. But over a whole period of time, it was kind of ragged. Maybe the results weren't evenly distributed. In fact, it could cause severe disruptions in society. First of all, as we moved to cities, we were on top of each other; there were those kinds of externalities. And when we moved away from living on the farm to working in the factories, unemployment became devastating to the population. There were these social issues that needed to be addressed, recognized, let's say, in the Beveridge Report's “five giants” of poverty, unemployment, disease, ignorance in our schools... They all had to be addressed through social policies. That's what the socialist project was supposed to solve: by turning the means of production over to the state, we could eliminate monopoly power, we could eliminate the economic disruptions caused by business cycles by rationalizing production, and then we could address these social ills, which people began to believe were a matter for politics rather than for economic life.
Remember that the puzzle, as put by Henry George and other reformers, was that we had “poverty amidst plenty.” If you consider what it was like at the beginning of the 19th century and how people lived miserably and were in extreme poverty, and then by the end of the 19th century we're starting to see more and more people escape from the Malthusian trap — let alone what we see in the 20th century — they believed that we had solved the problem of scarcity. And so now what we needed to do was make sure that our politics was arranged such that the results would be equally distributed and we wouldn't have such massive disruptions. And the Great Depression, of course, led a whole generation of intellectuals in the West to lose faith that a market economy could in fact be an engine of economic growth in modern times. Instead, we needed to turn away from viewing the state as a referee and toward the state as an active player in the economic game. And that's kind of the consequence of the Great Depression.
The recent resurgence of socialism
I was wondering if the Global Financial Crisis hasn't played a role in the renewed interest in this topic. Even though that predates some of these latest AI advances, certainly some people were saying, “This is a failure of capitalism. It is too chaotic. If it were ever possible, it'd be great to rationalize this economic system.” And then of course, along comes what seems to be a pretty big advance in AI.
Jim, I think that is very perceptive. It was the Global Financial Crisis combined, I think, with renewed attention to the observation of great inequality. The Piketty stuff had a very big impact on the general zeitgeist of people's mindset. For someone like myself who reads a lot on the history of economic thought... Right now, by the way, for your listeners, I would highly recommend this, and I think your listeners would be very excited about it: Jennifer Burns's new biography of Milton Friedman. I think it's going to be a blockbuster. It's amazing. One of the things she does so well is capture the world in which Milton Friedman was emerging as an economic pugilist, in which he's fighting. And it's like we're reliving the same arguments over and over again, because again, Friedman is coming out of the Great Depression. That's when he is being educated: in the middle of the Great Depression. The whole New Deal is on the table. The postwar economy is again, “Can we get planning?” It's the men of science: can we do all this stuff? And Friedman lived through all of that. And he's trying to explain the power of the price system and the tyranny of economic control to an audience which is completely uninterested in that message. That Friedman was able to figure out how to actually pull that off and communicate it is amazing. The arguments that Friedman and his colleagues, taking a broad sense of that, came up with in the period between 1950 and 1980 are all the arguments that are being forgotten today.
And it's not just macroeconomics, it's also microeconomics, like antitrust policy. Everything like that. If you read what's in the papers today, the general view is that people just lost faith in the market economy to deliver the goods precisely at a time when the market economy had actually delivered. In 2015, for the first time in human history, less than 10 percent of the world's population was living in extreme poverty. It's an economic miracle that's unbelievable. And that's due to the age of globalization, and yet it's completely forgotten amid the concern with the Global Financial Crisis, the inequality issues, and growing inequality. And then combine that with COVID and the need for the state to handle a giant, significant externality, and we're all the way back to the kind of world that Milton Friedman had to fight intellectually. A lot of the arguments that are going to emerge are going to reiterate arguments that happened in the 1930s and ’40s and ’50s. But they seem new because there are new technologies and new responses that have to be made.
If I were a technosocialist, I might think that the socialists of the past weren't wrong, they were just a little bit early. And now if you take Moore's Law plus big data, as long as there’s enough computing power, or compute, and enough data—gosh, if Amazon can do it, if Walmart can do it, why the heck can't Washington, DC, do it?
This is where it becomes a very interesting conversation about how markets really work, because one of the fundamental problems is, when you frame it the way you just did, you're viewing the problem of the economic system as a computational problem. It's just a complex computational problem. What markets are actually all about is the discovery of information that doesn't previously exist. It's not an algorithm... In the social learning literature, they make a distinction between “kind” learning environments and “wicked” learning environments. In a kind learning environment, the parameters are fixed. They could be quite huge; think about a game of chess. There's a finite number of moves in a game of chess. Now, it could be extremely complicated and complex, but it's finite, which means that a computer can churn through all of those moves and process them quickly. What happens? A computer, Deep Blue, can beat a grandmaster in chess. But when is a robot going to outmaneuver Roger Federer on a tennis court, or Ronaldo or Messi on a soccer pitch? The reason is that the ball often comes to them in ways it has never come before. All of a sudden, that's a wicked learning environment, in which the parameters are not fixed. You have to adjust and adapt on the fly to respond correctly, as opposed to the way a game of chess is played.
It's very interesting to compare where it is that computers are actually excellent at doing things, and where it is that they're clunky and uncoordinated and everything like that. Going back to your young socialist, they might look at that and say, “Oh, we just aren't there yet, because look at what large language models are doing.” But how do large language models learn? Large language models learn from humans learning and posting things all over the place; the computer processes all the stuff that's out there and then summarizes it. I just goofed around in May, because I had to give a little commencement address, and I said, “Oh, let me go on this and check and see what a commencement address would be.” And basically what it did was spit out a commencement address in five seconds that was essentially a compilation of all the famous commencement addresses. So it gave me the basic thing. And of course, it was certainly passable and everything like that. But it wasn't as if the computer came up with something I'd never heard before.
Now, tell me what it's like to actually invent these — I'm holding up an iPhone. When the world is full of BlackBerries and you come along and come up with this, that's something that actually pushes you outside of the limits. You're using different things. Or you come up with combinatorial thinking that borrows from jazz music and rap music and some other thing to create something that people hadn't even thought about before. And that's what the economy is constantly trying to process and do. And it's this wicked learning environment within the commercial society that requires the tools of property rights to incentivize us, prices to guide us in our decisions, profits to lure us, and losses to discipline us. Without those tools of the commercial society, we are basically floating around without a compass. So what that means is that these large language models become precisely that: tools an individual can use, just like the fitness app I use on my phone to try to track what I'm doing and everything like that. But it's not for the economy as a whole. And I think this becomes a very important distinction, which is very subtle and very hard for people to understand: a firm like Walmart, which is huge, or Amazon, which is huge, is still nevertheless a firm. It has a single objective function.
An economy doesn't have a single objective function. It satisfies the multiplicity of desires and ends that all of us have. All right? An economic system, a market system, doesn't have a singular overarching end. The whole goal of it is to have a multiplicity of ends, which get satisfied through various entrepreneurial ventures that meet consumer demands and whatnot, as opposed to the objective function of a firm. And that firm can legitimately claim to have a residual claimant, an individual who is ultimately responsible for the profits or losses of that firm. Go back to the BlackBerry example: One month before Apple introduced the iPhone, BlackBerry was the number one phone in the world. No one was challenging the market power of BlackBerry. And then all of a sudden, within a month, boom. That's what capitalism brings. That's what Schumpeter referred to as the “creative destruction” aspect. We rely so much on creative destruction to fuel economic growth and development. And socialism is going to need that as well. But where is it going to discover that? It's not going to come from an algorithm. It's going to have to come from human creativity.
If you hear about these environmentalist degrowth people who don't really like economic growth, who kind of want to keep things stable and focus on redistribution, in that kind of scenario, maybe the AI planner would work.
I'm not going to say that it works. But it's just like military interventions, where there's one goal to be achieved, which is, say, to win the war, and you can mobilize labor by having a draft and marshal labor to do that, and therefore you could say, “That works, because I marshaled my troops and I was able to do that.” If you didn't think you needed growth in order to have the economy continually fight the problems, let's say, of climate change, then stagnating the economy, yeah, that's the way you would achieve it. But the problem is that stagnation doesn't address any of the problems that need to be addressed. The only way we're ever going to defeat the problems we are confronted with is through advances in technology and new and innovative ways to produce things or to distribute things in the economy. One of the goals of economic life in a commercial society is to produce more with less. That very act of producing more with less is actually conservation. It's the way in which we redirect energy and everything else — technology is lifesaving — and develop economic processes that allow us to produce more with less.
Now, how do you discover the processes of producing more with less? You do that through the profit and loss system and the guiding role of prices, incentivized by the idea of property rights. When we don't have those kinds of functional operations going on, we're basically just stumbling around blind in the dark. We can substitute one goal, which is “We're going to achieve X.” But the problem is that achieving that X isn't going to address the existing problems we have, because we've now reduced the technological discovery aspects of things. And to put it another way, which is actually ironic: if you don't have an incentive to come up with new and innovative ways to use large language models, why would anyone come up with the large language models? They would stagnate. They wouldn't actually get there. The reason people are coming up with them right now is precisely because there are rates of return to be had, because they have private property rights and they can monetize those, and therefore they're creative and [clever] and going out and doing it. But socialism by its nature eliminates that very incentive mechanism that's in operation.
Can AI aid industrial policy?
Let's think of socialism more like the Chinese version, or even the version favored by those in Western countries who would like to have a lot more intervention: they might want more industrial policy, or government picking winners and losers for subsidies. Don't you think that in those cases, having a very smart AI might allow those kinds of economies, which aren't Soviet-style economies, to perform better by creating a better planning tool for either industrial policy planners or the engineers in Beijing?
One of the things I would argue is that even, let's say we talk about national conservatives — let's stick to that group for right now, I'll get to China in a second — the national conservatives think that we've lost our telos, that the commercial liberal society has lost its telos, and we need to get that back. The benefits have been reaped by the elite, and what's been left behind is community and whatnot. And so the idea is, I'm going to rearrange the economic system to make sure that, let's say, the Rust Belt of America doesn't get hollowed out and that people still have meaning and their jobs and things like that. So I'm going to pick winners and losers in this process.
Even in that scenario, it's still the case that you want to produce more with less, because you want to be able to meet that goal in the most efficacious way possible. Otherwise, you're going to name a goal but not actually achieve it, because you'll be mired in poverty and waste. You're still going to want to achieve the goal of directing resources, let's say, to the Rust Belt as efficaciously as you possibly can, which means you're going to have to have some kind of mechanism to tell you how you're allocating resources. Because scarcity is not going away; we're still going to have to wrestle with scarcity and allocate scarce resources among the different competing ends as efficaciously as we possibly can. And that, in its essence, is the problem of economic calculation: How do I move from the technologically feasible to the economically viable? This is what's going on. I have a technologically feasible project, but is it economically viable for me to do it this way or that way or some other way? How do I discover that? If I wanted to make sure that, let's say, the steel industry or the auto industry stayed at the frontier of economic life in America, I would still want to do that in the most effective way I can. And I have to discover that, because currently I'm not doing that. Under the current system, we're finding out that, let's say, exporting that industry to some other country is a cheaper way to do it. If I'm going to try to do it here, I have to find ways to actually lower the cost and improve the output. That's got to be the focus. Even the strongest advocate — Oren Cass or whoever wants to have this sort of thing come about — it's a pipe dream for them to think that they don't have to meet an economic test. It's just them saying things. “We want to prioritize this.” Okay, good, you want to prioritize that. But then what is the most effective way for you to prioritize that? Because they have to wrestle with scarcity; they can't just assume away scarcity in doing all of this.
And what about China?
The issue with China is actually, I think, a very big one about figuring out whether or not they've actually experienced the kind of economic growth that they claim to have achieved. The history of China is very fascinating, because when Deng Xiaoping first came into power he tried one model. ’78 to ’85 is one model operating. That doesn't work. ’85 to more recently was more market-oriented, globalization, things like that. Xi has now gone back to wanting to control the economy. They got a boost with the COVID shutdown, because that kind of forced everyone into the same kind of model. And the question is, how thriving can this Chinese economy be while being shut down and isolated and planned in the way that it is? Is it going to be an engine of creativity and growth, or is it just going to be a bunch of white elephants and large projects, which are in fact just like the Potemkin villages of the Soviet-type economy in the past? I think that history is being written for us right now, and we have to look at that in terms of the market. I don't think China is someplace we point to and say, “Ah, look, it's all working.” It exists. Xi's in power. He's declared himself basically ruler for life. Let's see how that all shakes out in the next decade or whatever.
There's a senior fellow at AEI named Jesús Fernández-Villaverde. He wrote a piece called “Artificial Intelligence Can’t Solve the Knowledge Problem.” He's been working a lot on these issues, and he's a very, very established macroeconomist. Very mainstream kind of guy. But he gets it. He recognizes and sees this aspect of things. [MIT economist Daron] Acemoğlu has a new book out worrying about technology and other things like that. But he also understands, at some fundamental level, this Hayekian lesson about the need to have a social environment in which learning is constantly possible rather than just rote memorization.
You mentioned my paper, that it's philosophical or something like that. And at first, as an economist...
I meant that in the most positive way possible.
As an economist, I first hear that and I bristle a little bit. But you're 100 percent correct, because a large part of all this stuff goes back much deeper into the computer science literature. Hubert Dreyfus wrote a book called What Computers Can't Do in [1972]. John Searle, the philosopher at Berkeley, came up with an idea to test the difference between AI and human intelligence. He called it the Chinese room test. There's a distinction there. The person in the box knows all the rules for manipulating Chinese but never learned Chinese. You put a slip of paper in, an English sentence, and they spit out the Chinese characters. You get it in and out, but they never really learn Chinese. They don't know the subtleties of the language, all these things like that.
And I think that a large part of this is what's going on with ChatGPT: you put something in and it spits something out, and it does it at amazing speed. But is the computer really learning the subtleties of the arguments being employed in those ideas? Is it simulating human intelligence? It's simulating human responses, because it's actually just churning through a whole list of responses. It's very much like playing chess. The very pieces in chess are defined by the rules by which they can move, and the spots on the board are fixed and finite. In an economy, the pieces on the board can move wherever they want, and the spots are infinite, not finite. So now tell me how it is that we're going to move. Imagine if I were playing chess with you and I took my bishop and moved it in a Z, just moved it around like that. That's what Apple did to BlackBerry.
Not wrong, just early
I think the millennial or Gen Z socialist might listen to this whole conversation we've had, and at the end they might think, “Not wrong, just early. Once we get to artificial general intelligence, then the problem will be solved. So you may be right today, here in August 2023, but in August 2033, Moore's Law and some very smart software will finally prove you wrong.”
Let me just give you a warning about that kind of thinking, which I agree with you is the way that everyone thinks about this. My favorite Soviet economist is a man named Nikolai Bukharin. Bukharin was actually the architect of the original plan toward communism. He wrote a book called The ABCs of Communism and wrote the policies that Lenin implemented between 1918 and 1921. He then also became the architect of the New Economic Policy, where they retreated from socialism so that they could stay in power. The fact that he was such a major player in this made him a target for Stalin when Stalin engaged in his purges. And so Bukharin lost: he first sided with Stalin to get rid of Trotsky, and then Stalin outflanked him and got rid of Bukharin. However, in 1925, Bukharin wrote a famous paper defending the New Economic Policy. In it, he points to Ludwig von Mises, whom he had actually met and studied with in Vienna before the revolution. He had gone to study with and learn from the Austrian economists, who were considered the leading critics of Marxism; he was tasked to go there, learn, and then criticize the Austrians, and he wrote a book all about this as well. And he says Ludwig von Mises is the most learned critic of communism. He says that Mises’ analysis of why communism can't work explains why it is that we had to retreat in 1921. But then he says, we will have the last laugh, because we will eventually reach a stage where we can advance to socialism, and we will defeat Mises’ argument. But for right now, we have to live with the reality that Mises is right. And then we go on.
In the horrors of the 20th century, that speech became Stalin's justification for why Bukharin was a right-wing deviationist, and eventually Stalin had him executed. But it is a famous speech, given to the Politburo in defense of the New Economic Policy, and in it he makes the very argument that you just made: Socialism cannot work right now, but give us a generation and then we will have socialism. So do not take our power away from us, because we need the power today to do this. One of my favorite American economists, Frank Knight, used to like to say that when people tell me that they need power to do X, I stop listening after the first three words: they need power. We have to be very conscious of the political consequences of concentrating such power in the hands of so few in order to pursue these dream aspirations, in terms of the functioning of our democratic society and free society. And we always have to be very vigilant about that, I would argue.
Pete Boettke's interview about AI, markets, and the future of socialism seems to veer off course in fully addressing the core issue. Boettke's analogy between robots and Messi doesn't capture the essence of the argument about AI's limitations in economic decision-making.
Picture a soccer field where future robots are taking on Messi. Quite a spectacle, right? But in this match, AI might just give Messi a run for his money, possibly even stealing his thunder. Sensory perception, motor skills, decision-making? All within the grasp of future tech. Messi doesn't have some mystical internal knowledge that a computer couldn't access; everything's out there on the field for both human and machine to observe and respond to.
Now, bring that AI into the bustling market of wants and needs, and suddenly it's like a fish out of water. It's not that AI lacks smarts or finesse; it's about consumer preferences. AI might outplay Messi, but it won't decode what you want for dinner next Tuesday, no matter how many updates or algorithms you throw at it.
Boettke dances around this, but he never quite lands the punch. And that's a shame because it's the real argument against AI being able to replace market prices. This isn't about current technological limitations or clunky algorithms; it's about something inherent in the way markets function.
The debate isn't about the creativity of consumer preference or the nuances of human genius. It's rather simple, really: what you want stays locked in your noggin unless you decide to spill the beans. Prices coax that out of you. Absent mind-reading equipment, AI can't peer into your soul and guess your favorite ice cream flavor. It's a game of revelations, and the market prices are the only known referees, something both humans and their mechanical counterparts must heed.
Interesting read, even though I lean toward a different conclusion. I must say, I am disappointed by the references to The Road to Serfdom. It is a simplistic book that did not really seek to critique the Soviet system (which Hayek considered already lost to the evils of planning), but rather the then-nascent mixed economies of the Western world. The argument was that any level of intervention in the economy, whether in the form of much-needed WWII price controls or very successful socialized healthcare, would lead to Soviet-style totalitarianism. This was an outlandish prediction. The mixed economies of postwar Europe (1945-1970) showed some of the highest levels of welfare improvement and profit and wage growth ever recorded in history. They also kept and progressively expanded their democratic credentials. Polanyi, in his Great Transformation, offers a much more nuanced analysis of the period, one that does not rest on a fully-planned versus fully-free economic binary.
That said, it is true that Hayek and his acolytes offered sound arguments against central planning as traditionally understood: a huge calculating machine that could plausibly replace markets and the price system to allocate goods and services in a modern economy. I would agree that no plausible AI advancement would be able to handle this, let alone do so without threatening certain rights and freedoms.
However, this is not the only way AI could contribute to "planning," understood in a wider sense. I am quite puzzled at the criticisms of industrial policy, given that it has been only through strategic state policies that information technologies have come about, on both sides of the Atlantic and the Pacific. From the Pentagon in the US to MITI in Japan, government has played a fundamental role in steering investment and innovation that we have later seen transplanted to consumer markets and the private sector. And, given challenges like climate change, resource scarcity, poverty, and others, we could argue that some of our institutional solutions do show that "planning" (to a certain degree) could work. From healthcare to infrastructure, we see the visible hand contributing to a gradual transformation.
We could spend hours discussing this, and I am far from being an expert in the matter. But for a more interesting discussion on feasible socialist planning and the contribution from computing, you could perhaps bring Evgeny Morozov on your podcast. Here are some of his contributions: https://newleftreview.org/issues/ii116/articles/evgeny-morozov-digital-socialism & https://the-santiago-boys.com/