✨ The impact of AI on jobs: A Quick Q&A with … Daniel Rock of the Wharton School
'If AI tools can change the direction of science in addition to helping people do what they currently do more efficiently, then we have a real General Purpose Technology on our hands.'
Large language models are a significant digital technology, but the jury is still out on whether generative artificial intelligence is a “general-purpose technology,” or GPT. To qualify as a GPT, GenAI would need to prove highly applicable in a variety of ways across many industries in the economy. According to Daniel Rock’s co-authored paper, “GPTs are GPTs: Labor market impact potential of LLMs,” GenAI seems to fit the bill.
Beyond that, Rock and his fellow researchers find “that roughly 1.8 percent of jobs could have over half their tasks affected by LLMs with simple interfaces and general training. When accounting for current and likely future software developments that complement LLM capabilities, this share jumps to just over 46 percent of jobs.” Again, we’re talking about a significant technology.
Rock is an assistant professor of operations, information, and decisions at the University of Pennsylvania’s Wharton School. His research interests include the economics of AI and digitization, and the future of work and automation. I recently emailed him a few questions, which he kindly answered.
1/ Are GPTs general-purpose technologies?
I think "signs point to yes" on this question, but we still have to see how GPTs diffuse and where changes occur. One somewhat standard set of criteria for a general-purpose technology (Bresnahan and Trajtenberg, 1995; Lipsey, Carlaw, and Bekar 2005) is 1) pervasiveness of application, 2) improvement in the technology over time, and 3) creating "innovational complementarities." It's pretty clear that these models are improving rapidly, so the big question about whether GPTs (or AI more broadly) will be transformative in the economy rests primarily on whether the applications become pervasive and if they change the direction of innovation in a meaningful way. We can do that prospectively, but we're going to have to wait and see to know for sure.
Obviously at this point many people have tried ChatGPT, but it's surprising how few pay for a more advanced model. There's still a lot of diffusion to happen, and in text generation alone there are many applications throughout the economy. But what I find much more exciting is the ability to encode unstructured data (images, text, code, protein sequences, etc.). Entirely new kinds of relationships become discoverable there, and that makes AI a possible accelerant for scientific discovery. If AI tools can change the direction of science in addition to helping people do what they currently do more efficiently, then we have a real GPT on our hands.
2/ Will there likely be complementary innovation associated with the deployment of GPTs?
I think so. It's actually kind of hard to find drop-in applications of large language models (LLMs) and related technologies. We also have extensive IT systems already up and running throughout firms and organizations. It takes a good deal of effort to productionize the AI tools we have. AI tools can also help us build the software tools to integrate them better, in a kind of ouroboros of engineering. But it's hard for me to imagine that organizational structures won't change, that the tasks people do at work won't change, and that how we use information technologies won't change. We need to accommodate this new kind of software that's more probabilistic than deterministic.