Machine language: Will ChatGPT have a job market impact?
Also: 5 Quick Questions for … meta-science researcher Saloni Dattani
➡ First things first: I’m happy to be back writing after a month away. That means back to the regular schedule of three essays a week (one with no paywall), a weekend recap, and a regular podcast (with transcript). To thank my free subscribers for their patience, I am offering a special deal all month. See the Big Blue Button Below!
In This Issue
The Essay: Machine language: Will ChatGPT have a job market impact?
5QQ: 5 Quick Questions for … meta-science researcher Saloni Dattani
Micro Reads
Quote of the Issue
“Whether we are based on carbon or on silicon makes no fundamental difference; we should each be treated with appropriate respect.” - Arthur C. Clarke
The Essay
Machine language: Will ChatGPT have a job market impact?
Again, this is my first newsletter after a month-long hiatus. You might think that after such a long break I would have trouble picking a topic to write about. So much to choose from! But the choice was obvious: ChatGPT, a remarkable new general-purpose chatbot developed by artificial intelligence company OpenAI. “ChatGPT has already impressed many technologists with its ability to mimic human language and speaking styles while also providing coherent and topical information,” wrote reporter Kalhan Rosenblatt for NBC News, who gave the example of a prompt that asked ChatGPT to explain zero point energy but in the style of a cat: “Meow, meow, meow, meow! Zero point energy is like the purr-fect amount of energy that is always present, even in the most still and peaceful moments.”
You can find numerous impressive examples of ChatGPT’s capabilities all over social media right now, so I don’t feel the need to give lots of examples. I will say, however, that one of the most impressive ones was a prompt that asked the chatbot to make an argument in favor of free trade — but in the speaking style of Donald Trump. This capability alone could potentially affect the future direction of the United States and the fate of the Free World. Anyway, lots of people seem pretty wowed by ChatGPT:
Setting aside some of that breathless analysis — which I grant might turn out to be totally justified — let’s think for a moment about just what kind of technology ChatGPT is. In the economic sense, it’s another example of machine-learning AI, which itself is considered a general-purpose technology, meaning it has economywide applications across a variety of sectors. The two sectors that seem most obviously affected of late are creative ones: illustration and conversation/writing. OpenAI is also known as the creator of the DALL-E text-to-image generator. (It’s a technology I’ve been using to create images for this newsletter.) And now the versatile ChatGPT, which might be better described as an “answer engine.” The Guardian described it this way: “In the days since it was released, academics have generated responses to exam queries that they say would result in full marks if submitted by an undergraduate, and programmers have used the tool to solve coding challenges in obscure programming languages in a matter of seconds — before writing limericks explaining the functionality.”
What’s interesting about the above description is that it’s one of the few to highlight — the bit about the coding challenge, specifically — how ChatGPT might augment or complement human labor rather than simply automate it. Both of these technological effects increase labor productivity. But by substituting machine effort for human labor in performing various tasks, automation can reduce employment and wages. Indeed, this is the effect the Guardian piece chose to highlight: “Professors, programmers and journalists could all be out of a job in just a few years, after the latest chatbot from the Elon Musk-founded OpenAI foundation stunned onlookers with its writing ability, proficiency at complex tasks, and ease of use.”
Yet there will almost certainly be many instances when ChatGPT or some near-future incarnation will help us do our jobs better or even create new things for us to do. Despite the long history of people worrying about machines taking their jobs, it’s this augmentation/complementary effect that has been most important by far. As Stanford University economist Erik Brynjolfsson wrote earlier this year:
One metric of this is the economic value of an hour of human labor. Its market price as measured by median wages has grown more than tenfold since 1820. An entrepreneur is willing to pay much more for a worker whose capabilities are amplified by a bulldozer than one who can only work with a shovel, let alone with bare hands. In many cases, not only wages but also employment grow with the introduction of new technologies. With the invention of the airplane, a new job category was born: pilots. With the invention of jet engines, pilot productivity (in passenger-miles per pilot-hour) grew immensely. Rather than reducing the number of employed pilots, the technology spurred demand for air travel so much that the number of pilots grew.
Past performance doesn’t guarantee future results, and Brynjolfsson is one of many economists who worry that AI, like too many recent tech advances, will be more job-displacing than augmenting. Another automation worrier is MIT economist Daron Acemoglu, who has a new co-authored paper, “Tasks, Automation, and the Rise in U.S. Wage Inequality,” which presents evidence that automation accounts for 50 to 70 percent of the increased income gap between more- and less-educated workers since 1980. As Acemoglu sees it, that statistic is the result of an American innovation system producing too much “so-so” innovation like self-checkout machines that reduce expenses at large retail chains. “If you introduce self-checkout kiosks, it’s not going to change productivity all that much,” says Acemoglu. However, in terms of lost wages for employees, he adds, “It’s going to have fairly large distributional effects, especially for low-skill service workers. It’s a labor-shifting device, rather than a productivity-increasing device.”
If we are getting too much “so-so” innovation that automates labor rather than more radical innovation that augments/creates labor, one policy option would be to equalize the tax rates between labor and capital. As Brynjolfsson explains:
In 1986, top tax rates on capital income and labor income were equalized in the United States, but since then, successive changes have created a large disparity, with the 2021 top marginal federal tax rates on labor income of 37 percent, while long-term capital gains have a variety of favorable rules, including a lower statutory tax rate of 20 percent, the deferral of taxes until capital gains are realized, and the “step-up basis” rule that resets capital gains to zero, wiping out the associated taxes, when assets are inherited. The first rule of tax policy is simple: you tend to get less of whatever you tax. Thus, a tax code that treats income that uses labor less favorably than income derived from capital will favor automation over augmentation. Treating both business models equally would lead to more balanced incentives.
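To make the incentive Brynjolfsson describes concrete, here is a toy back-of-the-envelope calculation using only the two 2021 top statutory rates he cites (it deliberately ignores deferral, step-up basis, and payroll taxes, so treat it as illustrative, not a tax model):

```python
# Toy sketch of the labor-vs-capital tax wedge (2021 top statutory
# rates only; deferral, step-up basis, and payroll taxes are ignored).
LABOR_TAX = 0.37    # top marginal federal rate on labor income
CAPITAL_TAX = 0.20  # top statutory rate on long-term capital gains

def after_tax(income, rate):
    """Income kept after applying a flat marginal tax rate."""
    return income * (1 - rate)

per_100_labor = after_tax(100, LABOR_TAX)      # ≈ 63.0
per_100_capital = after_tax(100, CAPITAL_TAX)  # ≈ 80.0
wedge = per_100_capital - per_100_labor        # ≈ 17.0
```

Per $100 of pre-tax return, the capital route keeps roughly $17 more than the labor route, and that asymmetry is what, on this argument, tilts firms toward automation over augmentation.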
Acemoglu also favors this idea, though my AEI colleague Michael Strain doesn’t: “I would rather the tax code impose even lighter taxes on business investment than it currently does. Lighter taxes on investment will lead to higher worker productivity, which in turn will lead to higher wages for workers. The additional national income from this productivity increase will fuel greater labor demand, making workers as a whole better off. Discouraging productivity-enhancing investment is exactly the wrong direction for tax policy.” (One thing all these economists agree on is the need for more worker training/retraining. Brynjolfsson calculates that for each dollar spent on machine learning tech, companies may need to spend nine dollars on intangible human capital.)
I also think that while policy is important — and to the above ideas I would add the importance of R&D investment and smart regulation — so is the inherent momentum of technological progress. Why, after a long interregnum, do Next Big Things often come in bunches? That’s exactly the question asked by Swedish economist Ola Olsson in his 2001 paper “Why Does Technology Advance in Cycles?” And the answer might center around what Olsson calls “technological opportunity.” He explains his basic model this way:
As technological opportunity becomes exhausted, profits and income growth rates diminish. Eventually, profits from incremental innovation fall below expected profits from highly risky and costly drastic innovations. Entrepreneurs then switch to drastic innovation, which introduces new areas of technological opportunity and a new technological paradigm. When technological opportunity once again is abundant, incremental innovation resumes and growth rates increase. In this way, development proceeds in long waves of varying duration and intensity. The fundamental determinants of the economy’s behaviour are the capacity of a society to exploit existing technological opportunity and its system of rewards for drastic innovation.
Maybe we are entering a new period of high-impact innovation opportunity as a number of emerging technologies look promising in areas such as biology, energy, space, and AI.
I think it’s too early to classify what kind of innovation ChatGPT mostly is. But can we have, like, five minutes to marvel at it before hitting the Technopanic Button and assuming massive job loss or that it’s the next step toward unaligned AI that tries to kill us all? Credit, then, to reporter Alex Kantrowitz at Slate who, while not ignoring some potential downsides as well as limitations, wrote a piece that didn’t forget to highlight the cool factor: “OpenAI’s new ChatGPT is scary-good, crazy-fun, and—so far—not particularly evil.”
5QQ
💡 5 Quick Questions for … (meta)science researcher Saloni Dattani
Alongside boosting US economic growth and the importance of optimistic sci-fi, one of the most popular squares on the official Faster, Please! bingo card is metascience — that is, applying scientific inquiry to our scientific institutions and processes with the aim of improving how we conduct research. Pathbreaking scientific discoveries and incremental innovations alike depend on putting R&D dollars and top researchers to their best use. So when I saw “The Pandemic Uncovered Ways to Speed Up Science” in Wired, I knew a Q&A with the author would make for a fitting return to my 5 Quick Questions feature.
Saloni Dattani is a researcher at Our World in Data and founding editor of Works in Progress. She also has a Substack, Scientific Discovery, that describes the behind-the-scenes workings of academic science. Check it out!
1/ Do we already have the necessary talent pipelines to make scientific research roles more specialized with further division of labor? Would we need changes in academia?
What's interesting is that academia has a lot of specialisation in terms of the number and depth of disciplines, but not in terms of the division of labor within them. There's a wide variety of fields, but until recently most research has been performed by small groups of researchers working independently, often repeating each other's efforts.
Researchers are expected to have the skills to theorize, review the literature, perform experiments, run analyses, store data, write papers, present their work, review others' work, and teach. All of these skills take a long time to learn and maintain.
Part of the reason we don't have more division of labor is that there are incentives to work alone: For example, academics get rewarded according to how much they publish, especially if they're the 'first author' on a paper. Because of this, in many fields such as economics, most publications are limited to only a few authors.
Another reason is that academia rewards people with a deep background in the field — with PhD qualifications and a track record of high-profile papers and projects. This is sometimes known as “the Matthew Effect.” That makes it hard for new people to contribute without going through many hurdles first, especially if they have different backgrounds — like software engineering or data management.
A third reason is essentially gatekeeping. Much of research is still hidden behind paywalled journals that big universities subscribe to. This means people outside of academia, or even in institutions that don't subscribe to those journals, can't access data and research. They can't build on it, point out errors, or improve it.
So we need lots of changes: to reward researchers for collaborating on larger projects, to give them the time to specialise in certain aspects of their work, and to publish data and research in an accessible way.
2/ You argue that researchers can pool randomized controlled trials to make experiments easier and less costly to run. Why aren't we doing this already?
This is a relatively new way of running trials: They're known as 'platform trials' where a variety of different drugs are tested at the same time. There are a few reasons that they haven't been so common.
One is that trials are funded by different pharmaceutical companies, in order to test their own drugs for their own applications to get them approved. So, getting multiple companies to agree to run their own drugs in a single pooled trial together means getting them through various legal and financial hurdles, or having this coordinated by an independent organisation.
There are some different ways to make that easier. For example: Pharmaceutical companies could finance only their “arm” of the trial (i.e. the part that relates to their own treatment), in what's called a “pay to play” model, but this requires coordination between different companies testing treatments against the same disease. Another approach is for them to be government funded, especially if there's an urgent need to learn about the disease and there are large benefits to the public to find some treatments that work quickly.
So they're constrained by legal and financial hurdles, and coordination problems. Until recently, pharmaceutical companies also found them new and risky; seeing how they can work in practice has started to change that.
A final reason is that, for some types of platform trials, the statistical methods to analyse them are actually new. The RECOVERY trial in the UK, for example, was a long-running trial without a predetermined number of participants or timeline: new participants kept being enrolled and were randomised to different treatments at different times. This needed a new Bayesian statistical method to figure out which group they should be randomised to, and to make sure there'd be enough participants tested for each treatment.
Since the method is new, the regulatory framework has only recently caught up — usually, regulators want to know in advance how many people will be in a trial and how long it will run, because their guidelines have been developed around traditional ways of running trials.
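The adaptive randomisation Dattani describes can be sketched in miniature. The snippet below is a generic Thompson-sampling allocator — my own illustrative stand-in, not the actual RECOVERY trial machinery — which steers each new participant toward the arms that currently look more promising while still exploring every arm:

```python
import random

def thompson_allocate(counts, rng):
    """Draw a plausible success rate for each arm from its Beta posterior
    and assign the next participant to the arm with the highest draw."""
    draws = {arm: rng.betavariate(s + 1, f + 1) for arm, (s, f) in counts.items()}
    return max(draws, key=draws.get)

def simulate(true_rates, n_participants, seed=0):
    """Enrol participants one at a time, adaptively randomising each one."""
    rng = random.Random(seed)
    counts = {arm: (0, 0) for arm in true_rates}  # arm -> (successes, failures)
    for _ in range(n_participants):
        arm = thompson_allocate(counts, rng)
        s, f = counts[arm]
        if rng.random() < true_rates[arm]:  # simulated treatment outcome
            counts[arm] = (s + 1, f)
        else:
            counts[arm] = (s, f + 1)
    return counts

# Three hypothetical treatment arms; "B" truly works best.
final = simulate({"A": 0.30, "B": 0.45, "C": 0.30}, 1000)
```

Over the run, enrolment drifts toward the stronger arm while every arm keeps accruing participants — the balance an adaptive trial's statistical method has to strike.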
3/ You’ve written that "Big institutions, such as governments and international organizations, should collect and share data routinely instead of leaving the burden to small research groups." What sorts of additional data should the government be collecting?
That's a great question. I think the main priorities are: data that's difficult for small research groups to collect, data that's sensitive, and data that's widely useful.
The most obvious example is one that's common in many countries now, but actually faced a lot of opposition in getting started: the census. We know now that basic demographic data is widely useful to tons of different researchers across fields, and also to people working in industry and politics. In practice, it was difficult and expensive to set up, and still is very expensive, especially if the information is not collected electronically. Collecting that data is worthwhile despite the costs, because of how many uses it has, but still could be made easier.
In terms of other data we should be collecting, I can speak from my own field: in psychiatry and medicine, it would be very helpful to have data on which diagnoses people have had, when they were diagnosed, and which treatments they're taking, among the general population. This is because it's difficult to recruit and keep people with illnesses in studies, and studies that try to do that tend to capture only a biased subgroup of them — usually those that are healthier and more educated.
In contrast, in some countries, like Denmark, everyone in the population has their medical data already collected routinely because there's a national healthcare service, and researchers can access it. But that kind of dataset is very rare.
The challenge is doing that in a way that's secure and private, because there are important civil liberty concerns with sharing data like that, and it doesn't need to be collected by the government necessarily — in countries that are autocratic or unstable, having a transparent and independent organisation collect that data might be more valuable.
Anonymising that data is a crucial part of this, and it's also important to prevent people from being identified if they have a particularly unusual set of data points, for example. That might require encryption or only giving researchers access to summaries of the data, rather than individual data points.
I'm probably not the best person to answer that for other fields, but I can imagine there are lots of examples of data that fit those priorities of being widely useful, sensitive, and difficult for smaller groups to collect.
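The "summaries rather than individual data points" idea Dattani raises is often implemented as small-cell suppression. Here is a minimal sketch (the threshold of 5 and the field names are my own illustrative choices, not from any particular statistics agency):

```python
def suppress_small_cells(counts, threshold=5):
    """Release aggregate counts, withholding any cell smaller than
    `threshold` so people in rare categories can't be singled out."""
    return {group: (n if n >= threshold else None) for group, n in counts.items()}

# Hypothetical diagnosis counts for one region.
raw = {"depression": 1532, "anxiety": 2210, "schizophrenia": 3}
released = suppress_small_cells(raw)
# The rare cell is withheld; the common cells are released unchanged.
```

Real statistical agencies layer further protections on top (rounding, noise injection, cell merging), but the principle is the same: researchers see aggregates, never rows.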
4/ Scientific institutions seem to be losing trust. How can we reform science to rebuild those institutions and regain consensus and trust?
I tend to believe that trust should be earned and deserved, rather than expected. But it's hard to make a judgment about scientific institutions as a whole.
I'd say what's needed is taking people's concerns seriously and being transparent and fair in dealing with them. With concerns about vaccines, for example, over the past century, many countries have established systems to compensate people if they've had rare side effects from vaccines. But that doesn't mean that we should avoid using vaccines widely, because they've helped avoid an enormous amount of suffering. Being able to deal with different concerns from different people is difficult, but I tend to think of it as a continuous process, where we should try to design institutions in a way that they have the ability to learn what works and what doesn't, and how to get the balance right.
5/ Many researchers post their working papers for the public, influencing other scientists with unpublished work. Is the peer review system fundamentally broken?
I'm in favour of preprints and working papers. What's interesting here is that with the journal system, researchers could influence the public with headlines alone, while the actual data and research would remain hidden behind paywalls.
So, sharing working papers is an improvement because at least now more people are given the opportunity to spot errors or problems with the research, or learn from it and apply it elsewhere.
Journals today coordinate peer review, but the way this happens is both slow and inefficient. Editors of a journal will email researchers to ask if they're available, hope they are, and wait for weeks or months for their response to a study. And, except in rare cases, reviewers aren't compensated for their work.
What's needed is a way to open up the review process to a wider range of people — for example, by sharing data and research publicly — and also treating peer review as a specialisation within science. That might involve treating it as a career pathway, perhaps like how auditing works in other industries.
Micro Reads
After the Artemis I mission’s brilliant success, why is an encore 2 years away? - Eric Berger, Ars Technica
Why Silicon Valley is so hot on nuclear energy and what it means for the industry - Catherine Clifford, CNBC
Sustainable Funds Powerhouse Parnassus Weighs Investing in Nuclear Energy - Leslie Norton, Morningstar
AI experts are increasingly afraid of what they’re creating - Kelsey Piper, Vox
How much would you pay to see a woolly mammoth? - Antonio Regalado, MIT Tech Review
Restoring a key hormone could help people with Down syndrome - Emily Underwood
Space Elevators Are Less Sci-Fi Than You Think - Stephen Cohen
Dimming the Sun to Cool the Planet Is a Desperate Idea, Yet We’re Inching Toward It - Bill McKibben, The New Yorker
A Rallying Cry for More US Health Innovation at “Warp Speed” - Rachel Silverman, CGD
Unlocking American Agricultural Innovation - Adin Richards, Institute for Progress