My 2023 book, The Conservative Futurist, is based on the idea that we, as a society, are failing to meet our potential: Inefficiency, overregulation, and an overabundance of caution are robbing us of the world we might be living in.
Nicole Kobie shares some of my frustrations in her recent book, The Long History of the Future: Why Tomorrow’s Technology Still Isn’t Here. She explores the evolutionary history of past technologies and why we just can’t seem to arrive at the future we’ve all been waiting for.
Today on Faster, Please — The Podcast, I chat with Kobie about the role of regulators, the pace of progress, and what careers in journalism have taught us about innovation hype.
Kobie is a science and technology journalist whose articles appear in publications from Teen Vogue, to New Scientist, to GQ. She is the futures editor for PC Pro and a contributing editor for Wired. She is based out of London.
In This Episode
Repeating history (1:42)
The American system of innovation (7:12)
The cost of risk-aversion (16:10)
The problem dynamic (20:28)
Our future rate of change (23:34)
Below is a lightly edited transcript of our conversation.
Repeating history (1:42)
I'm supposed to forget that I basically wrote the same version of this story a year ago . . .
Pethokoukis: I wrote a book about a year ago, and I wrote that book out of frustration. I was frustrated, when I originally started writing it in 2020, that we didn't already have a vaccine for Covid. And then I started thinking about all the other technologies that we didn't have, and it was that frustration that led me to write my book.
I'm guessing there was a frustration that led you to write your much better-written book.
Kobie: So I think it's really interesting that you start with Covid vaccines, because here in the UK — this is not my area of expertise, but obviously all journalists ended up having to write about Covid quite a bit — the reason we managed to create a vaccine so quickly (they usually take several years) is because we had this vaccine platform that they'd been coming up with, and they kind of had this virus in their heads: “Oh, it would probably be this type of a virus, and if we were to design a system that would help us design a vaccine really quickly, what would it look like?” And they had it mostly done when everything hit, so actually we got quite lucky on that one. It could have been a lot worse; we could have been much further behind.
But you're right, I have been writing about technology for a very long time and I keep hearing things about AI, things about driverless cars, and you just feel like you're writing the same headline time after time after time because news has such a short memory. I'm supposed to forget that I basically wrote the same version of this story a year ago, and that every year I'm writing about driverless cars and how they're going to be here imminently, and then 10 years goes by and I'm like, “Maybe I should have renewed my license.” That sort of a thing. And I find that very frustrating because I don't like hype. I like having the reality of the situation, even if it's a bit pessimistic, even if it's not the most happy scenario of what could happen with technology. I'd rather know the downsides and have a better sense of what is actually going to happen. So it really came out of that.
I was writing a futures section for a British computing magazine called PC Pro, and it's a very cynical magazine a lot of the time, so I kind of got used to writing about why things weren't going to happen, and I had this whole list of these different technologies that I'm not necessarily pessimistic about, but I could see why they weren't going to happen as quickly as everyone has said. So I just put it together in a book. So a little bit the same as you, but a bit of a different story.
So that phenomenon, and I wonder, is it partly sort of a reporter's problem? Because as a reporter, you have a certain . . . you don't want to write the same story over and over again. I think a lot of reporters have a soft spot for novelty. I think that's not just true with technology, I think it's with economic theories, it's with a lot of things. Then you have the founders or technologists themselves, many of whom probably would like to raise money and to continue raising money, so they're going to hype it. And yet, history would suggest that there's nothing new about this phenomenon, that things always take longer to get from the breakthrough to where it is a ubiquitous technology, everything from electrification, to PCs, to the internal combustion engine.
Is there an actual problem or is it really a problem of our perceptions?
I think it is a problem of perception. We have this idea that technology happens so quickly, that development happens so quickly, and it does, especially something like a smartphone. It went from being something you heard about to something you carried with you in a matter of years — very, very quickly. Of course, the technologies that make up a smartphone took many, many decades, a long, long time.
The problem with a lot of innovation and development, especially when it's things like AI, is that they start as almost a philosophical, academic idea. Then they become science, and we start to work out the science of how something's going to work. And then you have to engineer it and make it work physically. And then you have to commercialize it. And for every single different aspect of a technology, that's what you're kind of doing. That is a very long road involving very different people. And the academics are like, “Yeah, we solved this. I wrote a paper about this ages ago; a hundred years ago we were talking about AI.” And then the scientists who are doing stuff in the lab, they can make it work in the lab, they can make it work in theory, they can do that in-the-lab bit, and that's amazing. We read about those breakthroughs. Those are the kinds of things that make really great headlines, and journalists love those kinds of stories because, hey, it's new. And then you've got engineers who've actually got to physically build it, and that is where the money really needs to come in, because this is always harder. Building anything is harder than you think it's going to be. It doesn't matter what it is, it's always harder, because you've got the real world, you're out of the lab, and you have to think about all of the things that the scientists, who are very smart people, did not think about.
And then you've got to try to come up with a way to make it work for people, and people are hard. You need to think about regulators, you need to think about business models, and all of that sort of thing. There's a lot of problems in all of that, and a lot of the time, the innovation isn't about that original academic idea. It's about how you're going to bring it to market, or how you're going to make it safe, and all of those kinds of things. There's so much to think about with even the smallest piece of technology.
The American system of innovation (7:12)
It's too easy for people to just kind of jump up and say, “Well, it's corporations being evil. That's the problem.” Well sometimes, yeah. “It's governments being too heavy-handed and regulators being too tight. That's the problem.” Well, it is until your plane crashes, then you definitely wish that those aviation regulators were stricter, right?
I'm old enough to remember the 1990s; I remember writing stories about AI when I was a reporter. There was a huge AI boom in the 1990s which then kind of fizzled out, and then it sort of came up again. So I've certainly heard the hype about technologies, and when people talk about hype, often they'll point to the Internet Boom — but to me, that's, again, really just a case of things taking longer than people expected, because all the big moneymaking ideas of the 2010s about how to use the internet and apps — these are not new ideas. These are all ideas people had in the ’90s, but what they lacked was the bandwidth to make them work, and we also lacked smartphones. For ideas like ordering things online or the sharing economy, the technology wasn't there.
Sometimes the problem is that the technology just isn't there yet. Is there an actual problem — you're in Great Britain — is there a problem with the American system of innovation? The stylized version of that would be: government funds lots of basic research on the kinds of questions that businesses would never really pursue on their own — even though they do a lot of R&D, they don't do that kind of R&D because it's not immediately commercial — and that creates this stock of knowledge that businesses can then use to commercialize, to see what people will actually buy as a way of valuing it, whether it passes the market test, and then we end up with stuff that businesses and consumers can use. That, ideally, is the American system.
Is that a good system? Can that system be improved? What is your contention?
It depends what you're making. If you're making a consumer product, I think, yeah, that works decently well. You can see some ways where it doesn't work, and you can see some ways where it does, and to me that's where regulation and the government need to sit: to try to push things the right way. Obviously, social media probably needed something helping it along the way at some point so it didn't go down the road we're on now. Smartphones are pretty good, they're a pretty great technology, we're used to using them, there are some issues with surveillance and that sort of thing, but that kind of worked pretty well.
But it depends on the technology. Like I mentioned, these Covid vaccines. Here in the UK, that wasn't a project that was funded by corporations. It definitely got out in the world and was mass-produced by them quickly, which was great, but it was something that came through the academic world here and there was a lot of government funding involved. Of course, the UK has a very strong academic system, and an academic network, and how you get funding for these things.
It depends on the product, it depends what you're trying to buy, and this is the issue when you come to things like transport: driverless cars, or goofy ideas like hyperloop, or flying taxis and things like that. Is that a consumer product? Is that public transport? How are we deciding what the value is in this? Is it just about how much money it makes for Google, or is it about how it solves problems for cities? We probably need it to do both, and walking that line to make sure it does both in a way that works for everybody is very difficult. I don't think we have easy answers for any of that, partially because some of this stuff is so new and partially because we're not very good at talking about these things.
It's too easy for people to just kind of jump up and say, “Well, it's corporations being evil. That's the problem.” Well sometimes, yeah. “It's governments being too heavy-handed and regulators being too tight. That's the problem.” Well, it is until your plane crashes, then you definitely wish that those aviation regulators were stricter, right? So it depends on what the technology is, and we just use technology to cover such a range of innovation that maybe we need some different ways of talking about this.
Flying cars have become the go-to example, and as for why there isn't a flying car, some might blame regulation, but I think, whether regulations were too heavy for some reason or the technology wasn't there, it didn't make economic sense. And even though there have been a lot of flying taxi startups, it still may not make economic sense. So who determines if it makes economic sense? Does the government determine that, or do you need to raise money and try out a product, and then the entrepreneur realizes it doesn't make economic sense, and then the company collapses?
To me, that's what I see as the American system, that somebody has an idea, maybe they base the idea off research, and then they try the idea, and they raise money, and then they actually try to create a product, and then the thing fails, and, well, now we know. Now we know that's probably not ready.
Is there a different way of doing it? What country does it better?
I think China does, and I think that's because companies in China and the government are much more linked, and they serve each other. That's not necessarily a good thing, to be clear, especially not for the wider world, all of the time, but China has driverless cars and they're out on the roads. It's not that they work better than the ones in the US, they don't, but there's less of a concern about some of the negative impacts. Where you fall on that is kind of up to individuals. Personally, I think a driverless car shouldn't be on the road if it's not perfectly safe, if it's not a really trusted technology, and I am willing to wait for that, because I think it is a thing that is worth waiting for, and worth ensuring that we can actually build in a way that's affordable. But they're out on the roads in China, they're being tested, you can catch a robot taxi there.
But that should be a worse system because it sounds like you're very skeptical about how safe they are. The fact that they're only on the roads in this country in certain places, in certain cities, there's a slow rollout — that should be a better system.
Personally, I think it is. Now, if you live in San Francisco or you live in the places that are kind of being treated as test labs for these vehicles, you might not be a fan of them, and there's been a lot of pushback in San Francisco around this, especially because it's taken so long and they can actually be quite disruptive to the cities when they don't work out. And it's not like you, as somebody who lives locally, get compensated when you're delayed on your way to work because a Waymo car got in the way of your bus, or whatever.
But I think that we do need to be slower with technology, and I think that there's nothing wrong with taking a bit of time to make sure that we get it right. It is very likely that, in the next couple of years, there are going to be cities that have these air taxis. To a certain extent, they're just electric helicopters that are cheaper and easier to fly, and we already use helicopters to get people above traffic between places. That's an idea that already exists. This isn't a huge, massive leap forward. It is going to happen in cities where people are a little bit less afraid of disrupting everybody. But again, I'm not sure that that's right for people. That might be right for the company: All of the various aviation companies that are trying this are going to end up flying for the first time in cities like Dubai, places that aren't worried about what everyday people on the ground think; they don't really care what you think. In a place like New York or LA, it's going to be a little bit tougher to convince people that they should have to suffer the safety implications if one of these things crashes, because people in the US have a really great ability to speak out about these technologies and push for better government regulations, and things like that.
I think it is a very tough question, and I think it is almost impossible to get it perfect, so the question is more about getting it to be good enough, and to me, that requires good communication between companies and regulators. In aviation, that is pretty good — talk to any company that is making the so-called “flying cars” and the air taxis: They all go on about how well they work with regulators and how much they appreciate the support of regulators, and I think that's a good thing. But regulators are probably also not making it as easy as it could be to develop a new technology, because one of the problems for these companies is that it takes a certain length of time to come up with the idea and how the technology is going to work, and then you have to get all these different certifications, and it is a long road — and this is good, you want to make sure the plane works — but by the time you're certified, the technology has come along enough that now you're out-of-date and your technology is out-of-date, so you want to drop in a new piece of technology, a new battery, a new idea, AI, whatever. To a certain extent you have to come back to the beginning, and now you're behind again, and by the time you get everything certified, that's out-of-date again. So we probably do need to come up with faster ways of looking at new technologies and find new ways of letting these companies safely work a new technology into an existing design, things like that.
The cost of risk-aversion (16:10)
I don't want to talk about this really wide-ranging AI stuff. I want real specifics now. Now that we're starting to apply this stuff and we have really specific AI models that work in a very specific way, let's talk about that.
Isn't that kind of the big story, that the reason we don't have some of these technologies is because we've been — at least in the United States — wildly risk-averse? That's the whole story of nuclear energy: We became very risk-averse, and now we're sitting here worried about climate change when we have an established technology that, had we not paused it, would've given us 50 years of improvements, and when we talk about small nuclear reactors, or microreactors, or even fusion, we're 50 years behind where we could be. So don't some of these tech folks have a point, that there was a proper reaction in the ’50s and ’60s about regulation and the environment, and then we had an overreaction, and now it's become very hard to build things in this country and get them deployed, whether it's flying taxis or nuclear reactors? Now we're going to have this debate about AI. Does that sound logical to you?
I'm not sure that that is always what is holding these things back. The thing that has been holding AI back is just processing power. Geoffrey Hinton was working on all of these ideas in the ’90s, and he couldn't make it work because the technology wasn't there, and it has taken us this long to get to a point where maybe some of these systems are starting to do useful things. And it is being deployed, it is being used, and we should do that.
But some people don't want it deployed; they would like to pause it. You've described this ideal where we've been developing this and the technology's not there yet, and it repeatedly took longer than people expected, as I think you correctly note. And now we're at the point where it seems to maybe be there, and the second it's there, they're like, “Stop it. Let's slow down.” That's sort of the exact problem you've identified.
Yeah, I do think it is fair to be concerned about the impact of this huge technology. When the whole internet thing happened, we probably should have been slightly more afraid of it and slightly more careful, but you can kind of solve a lot of problems along the way and kind of, “Oh, okay, we need to think about safety of children online — probably should have thought of that a little bit sooner,” and things like that. There's problems that you can kind of solve as you go along, but I think the biggest problem with the discussion and the debate around AI now is we're talking about this huge range of technology. AI is not one thing. So when you say, “AI is here now,” well, AI has been here for decades, it's been doing things for decades, it's not new, but we're talking about a very specific type of AI, we're talking about generative AI that is run by large language models.
Personally, I have absolutely no problem with a large language model generating an AI response to an email so I can just hit a button and say, “Yeah, thanks, that sounds good” without having to type it all out. No one is scared of that. Lots of people are concerned about what happens if you start rolling this out widely in government, which is what the UK government is planning at the moment, and you're letting AI make decisions and reply to people. You're going to get some problems: You're going to get people getting letters from their doctor that are incorrect, or people getting turned down for benefits they should be getting, and things like that.
That doesn't mean we can't use AI, it just means we need to think about what all the downsides are. What are the ways we can mitigate those downsides? What are the ways we can mitigate those risks? But if you ask anyone at an AI-developing company now, “Well, how are we going to fix this?” they're like, “Oh, the AI will do it.” Well, how? I just want a specific answer. How are you going to use the AI? What's it actually going to do? What problems do you see, and how are you going to fix those problems? Very specific. I don't want to talk about this really wide-ranging AI stuff. I want real specifics now. Now that we're starting to apply this stuff and we have really specific AI models that work in a very specific way, let's talk about that. And I think people are capable of having that conversation, but we really gloss over the details with this one a lot.
The problem dynamic (20:28)
We need more nuance, really, and realize that there aren't villains, this isn't us versus them, it doesn't need to be like this.
So who do you view as sort of the problem players here? Are they regulators, are they technologists, are they entrepreneurs? Is it the public — which, again, has a very poor understanding of technology and what technology can do? A lot of people I know, when they first tried ChatGPT, were a little disappointed because they figured, after watching all these sci-fi movies, “I thought computers were already supposed to be able to do this.”
I don't want to say who are the villains, but who are the problem players and what do you do about it?
I mean this in the nicest way possible, but I think that framing is the problem.
Good, that’s fine, attack my framing, that is totally permitted!
I think all of this would be better if we didn't have an “us versus them” thing. I think it's great that OpenAI is trying to develop this technology and is trying to make it useful and to make it work in a way that we might benefit from it. That's what they say they're trying to do, they're trying to make a lot of money while doing it. That's great. That's how this all works. That's fine. Regulators are keeping a close eye on it and want more information from them, and they want to know more about what they're doing, and what they're planning, and how these things are going to work. That seems fair. That's not OpenAI battling regulators, that's not regulators slapping down OpenAI.
Journalists bear a lot of the blame for this because of the way we frame things. Everything is a battle. Everything is people going head-to-head — no, this is how this is supposed to work. Regulators are supposed to keep them in check. That can be very difficult when you are trying to regulate a very, very new technology. How could you possibly know anything about it? Where are you going to get your information from? From the companies themselves. That brings in some inherent challenges, but I think that's all surmountable.
It's kind of like this idea that you're either a Luddite, and you hate AI, and you think it's evil, or you're completely pro-AI and you just can't wait to have your brain uploaded — there's a lot of nuance and variety of what people actually think in between. I think what you mentioned about ChatGPT and how, when you go use it the first time, you're kind of like, “Huh, this is it, hey?”
I think that is the number one thing: Everyone should go use it, and then you're going to be half impressed that this machine is talking to you, that this system can actually chat with you, but then also a little bit disappointed because it's making things up, it's incorrect, it's a bit silly sometimes, that sort of a thing. Personally, I look at it and I just go, I wouldn't trust my business to this. I wouldn't trust the running of a government to a system that operates like this.
Could it write some letters to help the NHS out here not have to have a person sit and type all of these things out, or to send more personalized letters to people so they get better information, and things like that? Yeah, that sounds good. Is that going to completely change how government operates? No. So we need to be a bit more honest about the limitations. We need more nuance, really, and realize that there aren't villains, this isn't us versus them, it doesn't need to be like this. But I see why you think there's villains.
Our future rate of change (23:34)
I think we're really bad at tracking change mentally. We want to see a big, dramatic change and then we look back and we're like, “Whoa . . . This is all very different.”
That was just more my provocative framing. This is a question that you may not like at all, but I'm still going to ask it: You've looked at all these technologies. Do you think that the world of 2035 will look significantly different? The difference between the world of 2025 versus 2015, whatever that change has been, do you anticipate a bigger change between 2025 and 2035, whether because of energy, AI, rockets, flying cars, CRISPR. . . ?
I think it will be different, but I don't think it's going to be as different. I'm kind of thinking back to when I was a kid and how we all lived life pre-internet and things like that, and things were genuinely different, and that gap between that and now is such a big difference. I think about my kid, when she's an adult, how different is it going to be? I think it's going to be different. I think we're going to look back at conversations like this and be like, “Oh gosh, we were naive. How could we have thought this, or not thought this?”
Do I think that no one is going to be working because AI is going to do all work? No, I don't think it's going to be capable of that. Do I think that things like medicine could be really changed by technologies like CRISPR? I really hope so. I think we spend a lot of time talking about things like AI without seeing some of the really big-picture stuff. I write a lot of business technology stories, and it's a lot about how we can improve productivity by a few points, or it might impact a few thousand jobs — let's talk about some bigger things. Let's talk about how we can really change life. Let's talk about how we could work less. I would love to be able to see people actually working three or four days a week instead of these five-day weeks and still maintain productivity and still maintain salaries. I love that idea. I don't think that's going to happen. I think the changes are going to be small and incremental ones.
I think we'll have a lot better transport options. I think all this driverless technology, even if we don't end up with the driverless cars that we fantasize about, is definitely going to get applied to public transportation in some really good ways. I'm hoping that medicine will change. I'm worried about the climate change side of it, because we are not putting our technology and our innovation into the mitigations for it, and I really think that that's where we need some very creative thinking for how we're going to deal with all of this.
So 10, 15, 20 years from now, I think life is going to be relatively the same, but I think in certain industries it's going to be really, really different — but I think I'm still going to be working five days a week sitting in front of a computer, more often than not.
That’s because we're grinders, we love to grind.
I don't, I do not, no.
My last question, and I'm not sure if this is quoted in the book, but I think it was a Bill Gates quote: “We overestimate what we can accomplish in two years” and “underestimate what we can accomplish in 10 years,” something like that. Is that sort of the phenomenon, that there's an announcement and we figure everything's going to be different in 10 years, and then it isn't, and then we look back in 10 years and we're like, “Whoa, actually, there has been a lot of change!”
I think we're really bad at tracking change mentally. We want to see a big, dramatic change and then we look back and we're like, “Whoa,” like you say, “What happened? This is all very different.”
I think we're so focused on the here and now all of the time, thinking about what's going to happen in the next quarter for our company or within the next year with our family, or our careers and things like that, that it's very easy for us to just get caught up in the day-to-day, and I think it is a good thing to look back. That's one of the reasons I wanted to write my book as a history. If you look back, we were talking about flying cars in the ’50s, we were talking about AI . . . the mid-’50s is when this idea really came to life. It takes a long time, but also we've done a lot in that time. There's been a huge amount of change and a huge number of technologies that have started to enable all of this, and all of that is really positive.
I can get accused of being a bit of a cynic because I'm like, “Where are driverless cars?” But if we manage to make driverless cars happen by 2035, I don't think it's bad that it took that long. That's just how long it took — and hey, now we have driverless cars. Creating technology is sometimes just going to take longer than we want it to, and that's okay. It's not that the technology is wrong, it's just that we're bad at predicting timelines. I never know how long it's going to take me to finish a story, or get ready in the morning, or whatever, so I'm not surprised that we're bad judges of that with these world-changing technologies, too.
On sale everywhere The Conservative Futurist: How To Create the Sci-Fi World We Were Promised
Micro Reads
▶ AI/Digital
Anthropic chief says AI could surpass “almost all humans at almost everything” shortly after 2027 - Ars
Elon Musk’s Silence on AI Risks Is Deafening - Bberg Opinion
▶ Clean Energy/Climate
Trump’s Dream of Energy Dominance Relies on Canada - Bberg Opinion
▶ Substacks/Newsletters
What if AI timelines are too aggressive? - Understanding AI
Trump's executive orders: Five big takeaways - Noahpinion
Open-Source AI and the Future - Hyperdimensional
'ChatGPT' Robotics Moment in 2025 - AI Supremacy
The Big Problem Paradox - Conversable Economist