Why all your biggest fears about AI ... might be wrong
From mass unemployment to existential threats, journalist Timothy B. Lee offers some needed and realistic optimism
Quote of the Issue
“Any teacher that can be replaced by a machine should be!” - Arthur C. Clarke
5QQ
💡 5 Quick Questions for … technology reporter Timothy B. Lee on AI
When it comes to the intersection between the world of bits and the world of atoms, Timothy Lee always offers some of my favorite analysis. On self-driving cars, for example, check out this 5QQ he did last summer. (More on that in question four!) And with all the excitement over generative AI since that Q&A, Tim has started a new Substack newsletter, Understanding AI, which I can’t recommend enough. Below is a taste of the kind of analysis Tim offers there.
Tim is a reporter who has written about technology, economics, and public policy for more than a decade. Before launching Understanding AI, he wrote for the Washington Post, Vox.com, and Ars Technica.
1/ Why aren't you worried that generative AI is going to cause mass unemployment?
I would say a big reason is looking at what happened with the internet. The internet had a big impact on information-focused industries like movies, music, journalism, etc. But those are actually a small part of the overall economy. Most of the economy is housing, healthcare, education, transportation. Those are mostly things where there's a big physical component, people doing physical things in the real world: building houses or building cars or helping people in a hospital, things like that. While we've seen very rapid progress with generative AI, I think progress with robotics has been somewhat slower. Even if progress with robotics speeds up, there's going to be a lot of jobs — building the robots, repairing the robots, etc. I just think most jobs are going to be only partially automated, which maybe means it will make us more efficient, but I don't think there are that many professions that are going to be completely replaced by robots.
The other thing is, people tend to forget that the level of employment is ultimately a matter of macroeconomics. If you imagine a situation where AI is very quickly eliminating certain jobs, that would have a deflationary effect, which means the Fed could cut interest rates, or Congress could do more deficit spending, which would then put more money in people's pockets, which then would allow them to spend on the remaining services. There are always going to be services people want that human beings can provide. If society gets wealthier, people have more money and they will spend it on whatever goods and services people still want, and some of those will require human workers.
2/ Why don't you think AI is going to kill all of us?
The main reason is that the physical world is very complicated. Right now, there just are not enough robots and other mechanisms for internet-based intelligence to have big impacts on the physical world. I think there are a couple exceptions to that. Certainly if we screwed up and connected nuclear missiles to the internet, you could imagine some rogue AI or some foreign power using nuclear weapons to kill us all. I've heard scenarios where maybe AI helps somebody create a new killer virus that then is synthesized and kills everybody. I don't think that's impossible. But I think the solution to that is to regulate labs that are used for biology and to be very careful with nuclear missiles. A lot of people are worried about a more general situation where the AI becomes so powerful that it “takes over the world.”
I just don't see how that would happen because it's actually pretty difficult. The example I like to look at is times when hackers have tried to cause carnage in the real world. One of the best examples is in Ukraine. In 2015 during the conflict there, Russian hackers tried to shut down part of the Ukrainian electrical grid, and they succeeded in doing that. But then the engineers running that system went in and bypassed the computer and turned the system back on. It was a few hours of disruption; it wasn't like an existential threat to Ukraine. There really are very few examples where hackers or people on the internet have caused big physical problems, because the physical world is complicated and pretty robust.
3/ Is it too late or too soon to regulate AI?
I think it’s too soon, to a large extent, because I don't think anybody has figured out what sensible regulation of AI would look like. Rather than trying to regulate AI itself, what I would ask is: If you imagined there was a rogue AI out there that was trying to cause harm or gain control or do something else nefarious, what would be the ways it could do that? I mentioned weapon systems: Certainly we should make sure there's a human in the loop with any kind of automated weapon systems. Biotechnology: We should make sure that there's no way for an automated system to order the creation of some kind of pathogen. Robots, self-driving cars: I would like to see some rules to make sure those are safe and that it's not possible, or at least easy, for those to be taken over and used to cause harm. But the AI itself, the language models or other AI systems that are built, I just don't know how you would figure out if they're dangerous. I think they're probably not dangerous. But even if they are, I don't know what the test would look like, what the legal standard would look like. As far as I can tell, the people who are raising the alarm about this have not done the work of explaining, “Here's what the regulatory framework should look like, here's what kind of agencies should be in charge, here's what the law should look like.” I'm not completely opposed to it, but I have not seen any proposals in that area that seem credible to me.
4/ Autonomous cars may not have lived up to expectations, but there are actual self-driving cars on the road in America today. If our previous expectations about this rapid deployment were wrong, what do you think are reasonable expectations for the next five to 10 years? Will it move beyond being a limited area, almost experimental thing to something widely used?
I think it will. I think there's a lot of uncertainty about the timeline because as we've seen, it's hard to predict. The two leading companies are Waymo and Cruise, and both of them are planning to grow by a factor of like 10x to 100x over the next two or three years. I think that's probably too optimistic. Currently they're in a couple of cities. I would expect to see that number grow significantly, maybe a dozen cities in two or three years and dozens or hundreds of cities in five to 10 years, something like that. There's a lot of uncertainty about the exact timeline, but if you go to Phoenix there are real driverless cars that can take you around. It's a 180 square mile area of Phoenix that includes downtown Phoenix and gets you to the Sky Train that gets you to the Phoenix airport. So that's like a genuinely useful taxi service. I don't think it's profitable yet. I think that most of the technical problems have been solved, and a lot of it is just kind of making the economics work, getting the costs down, and then doing the logistics of rolling out to new cities.
5/ You’ve started a newsletter about AI. What gets you excited about this technology that you're willing to do a lot more work to write about it?
I think part of what I've found interesting — I think “exciting” is not quite the right word — is that it really does seem like we are getting close to what people call “artificial general intelligence.” I've been interested in that at a high level for 20 years. I've read people like Ray Kurzweil who've been predicting that would happen, and I assumed it was too far away to think about very hard. Large language models were such a big jump that I was like, “Okay, this is worth taking seriously now.” It seems like that will be a very important development for the human species, when we figure out how to make something that's at least roughly as capable as humans at many cognitive tasks. So that's part of it.
In terms of what I'm optimistic about, I think there is a lot of potential in biology and medicine. I think one of the most significant developments is the protein folding result that DeepMind got. There's this longstanding problem where if you have a DNA sequence, and you want to predict what protein that will produce, and then the shape of that protein — which has an effect on the function of the protein — that used to be something that took a massive amount of computer power to do for one protein, so much that I think in many cases they couldn't figure it out. There was no automated way to tell. And a couple years ago, DeepMind used basically the same algorithm that's used for ChatGPT to solve this problem. We now have sequences, we have folding patterns for millions of proteins. I'm not enough of a biologist to know exactly how that's going to accelerate medical research, but it seems very promising. I've also been told that for things like drug discovery: You can give an AI a database of all the drugs and their chemical characteristics and ask it, “What are some other drugs we should be looking at?” And an AI can accelerate that. I expect to see some rapid advances in that. We already mentioned self-driving cars. I am an optimist about that. I think that is going to advance more than at least some people are expecting now, and that will be very beneficial.
Overall, I think I'm optimistic about the general kind of white-collar workforce, professions like law, accounting, consulting. I don't have a super specific prediction about how quickly that will happen or what kind of effects it will have, but we've had for the last 50 years or so a trend of widening income disparity where there's a big college wage premium. I would not be surprised if AI compresses that by automating a fair number of white-collar jobs — or at least increasing productivity, reducing the number of those we need — while not doing as much for blue-collar jobs or care jobs. I think you could see non-white-collar wages rising, which I think would be a really great thing for the economy. I'm excited about that.
Micro Reads
▶ Robotaxis are here. It’s time to decide what to do about them - MIT Technology Review
▶ Google DeepMind’s CEO Says Its Next Algorithm Will Eclipse ChatGPT - Wired
▶ Why High-Powered People Are Working in Their 80s - WSJ
▶ The Great Inflection? A Debate About AI and Explosive Growth - Asterisk
▶ Step inside the world's only nuclear-powered passenger ship — built in 1959 - NPR
▶ Does Anger Drive Populism? - NBER
▶ Is American Culture Becoming More Pro-Business? - Marginal Revolution
▶ Space elevators are inching closer to reality - Freethink
▶ Advertisers should beware being too creative with AI - FT Opinion
▶ Amazon’s New Robots Are Rolling Out an Automation Revolution - Wired