Quote of the Issue
“It is the struggle itself that is most important. We must strive to be more than we are. It does not matter that we will never reach our ultimate goal. The effort yields its own rewards.” – Lt. Cmdr. Data
The Essay
🤖 Do we need a World Congress to govern AI?
I guess you could call me a globalist. I think global flows of capital, currencies, goods, ideas, and talent make for a more peaceful and prosperous world — a world of political collaboration and open economic competition. (None of this “Buy American” business being pushed by the Biden administration.) Talk, talk, talk, and trade, trade, trade. Not war, war, war.
That said, I’m not sure the United States should rush to create some global forum on artificial intelligence and machine learning (AI-ML) so that countries can regulate, regulate, regulate. I think a lot about a recent column from The Wall Street Journal’s Peggy Noonan, in which she breezily promoted the following idea for AI-ML global governance:
Of course AI’s development should be paused, of course there should be a moratorium, but six months won’t be enough. Pause it for a few years. Call in the world’s counsel, get everyone in. Heck, hold a World Congress. But slow this thing down. We are playing with the hottest thing since the discovery of fire.
Even more than the absurd notion of an indefinite, multiyear, government-enforced moratorium, it’s that bit about holding a “World Congress” that really gets under my skin. Let’s think about who might attend such an august global gathering. Wait, before we speculate on attendance, let’s think about where the World AI Congress would be held. Let me suggest Geneva, Switzerland. First, the city is home to many international organizations, including the World Health Organization and the World Trade Organization. Second, it’s home to the European Organization for Nuclear Research, also known as CERN.
Now you might know CERN — to be specific, CERN computer scientist Tim Berners-Lee — as the creator of the World Wide Web. Or perhaps you know CERN for its particle accelerator, the Large Hadron Collider, the world’s largest and most powerful. Back in 2008, some people were terrified that the collider’s initial operation would create a microscopic black hole that would, as NASA archly describes it, “start rapidly sucking in surrounding matter faster and faster until it devoured the Earth, as sensationalist news reports had suggested it might.”
I can think of no better place to hold a conference about the threat of AI than a place that should have taught us a lesson about science fiction-based threat assessment. (I wonder if the people who contemplated bombing CERN back then are the same ones who today would conduct air strikes against rogue AI data centers — presumably data centers in non-compliance with the rules set forth by the World AI Congress.)
Back to the potential WAC attendees. Of course, China would attend. Not only would that nation’s communist rulers embrace the opportunity to help draft global governance rules and show that China is a responsible emerging superpower — and have nice things said about it by elite opinion makers — but Beijing might well see the WAC as an opportunity to play catch-up with America, which currently has the lead in generative AI. Why would we trust China to follow any sort of pause or stay within any sort of guardrails given that it looks to have just suffered its own version of a Sputnik moment? Generative AI is starting to look like a classic case of the sort of technological surprise that the US government’s DARPA was created to guard against. And this time America is the one springing the surprise, which is a good thing, as Tyler Cowen observes:
With AI, do we get positives? Absolutely, there can be immense benefits from making intelligence more freely available. It also can help us deal with other existential risks. Importantly, AI offers the potential promise of extending American hegemony just a bit more, a factor of critical importance, as Americans are right now the AI leaders. And should we wait, and get a “more Chinese” version of the alignment problem? I just don’t see the case for that, and no I really don’t think any international cooperation options are on the table. We can’t even resurrect WTO or make the UN work or stop the Ukraine war.
I’m also sure Europe, which views regulation as its comparative economic advantage, would love the WAC. But, unsurprisingly, it’s not waiting for any global conference to act:
The European parliament is preparing tough new measures over the use of artificial intelligence, including forcing chatbot makers to reveal if they use copyrighted material, as the EU edges towards enacting the world’s most restrictive regime on the development of AI. MEPs in Brussels are close to agreeing a set of proposals to form part of Europe’s Artificial Intelligence Act, a sweeping set of regulations on the use of AI, according to people familiar with the process. Among the measures likely to be proposed by parliamentarians is for developers of products such as OpenAI’s ChatGPT to declare if copyrighted material is being used to train their AI models, a measure designed to allow content creators to demand payment. MEPs also want responsibility for misuse of AI programmes to lie with developers such as OpenAI, rather than smaller businesses using it.
Of course, the US would send representatives to the WAC. But don’t expect them to push the sort of light-touch regulatory approach I’ve been advocating. As described by policy analyst Adam Thierer:
Unfortunately, the Biden administration’s proposed “AI Bill of Rights” mostly stresses possible dangers over potential opportunities, arguing that AI systems “threaten the rights of the American public.” Unsurprisingly, this effort focuses on the alleged need for new government mandates with less attention paid to innovation. Meanwhile, the Department of Commerce just launched a new proceeding on “AI accountability,” and in Congress, Senate Majority Leader Chuck Schumer (D-N.Y.) is apparently pushing for a new law to legislate “responsible AI.” These efforts are focused on demanding algorithmic “explainability” and other amorphous requirements, which would require government meddling with fast-moving computational processes. This could grow to become a cumbersome and slow regulatory approval process that would undermine AI advances.
Last August, I podcasted with Robin Hanson, an economist at George Mason University who thinks deeply about the intersection of technology and economics. And a portion of that chat dealing with global tech governance has only become more relevant:
Pethokoukis: A lot of people who want to regulate the tech industry here have been looking to what Europe is doing. But Europe has not shown a lot of tech progress. They don't generate the big technology companies. So that, to me, is unsettling. Not only are we converging, but we're converging sometimes toward the least productive areas of the advanced world.
Hanson: In a lot of people's minds, the key thing is the unsafe dangers that tech might provide. And they look to Europe and they say, “Look how they're providing security there. Look at all the protections they're offering against the various kinds of insecurity we could have. Surely, we want to copy them for that.”
Pethokoukis: I don't want to copy them for that. I’m willing to take a few risks.
Hanson: But many people want that level of security. So I'm actually concerned about this over the coming centuries. I think this trend is actually a trend toward not just stronger global governance, but stronger global community or even mobs, if we call it that. That is the reason why nuclear energy is regulated the same everywhere: the regulators in each place are part of a world community, and they each want to be respected in that community. And in order to be respected, they need to conform to what the rest of the community thinks. And that's going to just keep happening more over the coming centuries, I fear.
Pethokoukis: One of my favorite shows, and one of the more realistic science-fiction shows and book series, is The Expanse, which takes place a couple hundred years in the future, where there's a global government — which seems to be a democratic global government. I’m not sure how efficient it is. I’m not sure how entrepreneurial it is. Certainly the evidence seems to be that global governance does not lead to a vibrant, trial-and-error, experimenting kind of ecology. But just the opposite: one that focuses on safety and caution and risk aversion.
Hanson: And it’s going to get a lot worse. I have a book called The Age of Em: Work, Love, and Life when Robots Rule the Earth, and it’s about very radical changes in technology. And most people who read about that, they go, “Oh, that's terrible. We need more regulations to stop that.” I think if you just look toward the longer run of changes, most people, when they start to imagine the large changes that will be possible, they want to stop that and put limits and control it somehow. And that's going to give even more of an impetus to global governance. That is, once you realize how our children might become radically different from us, then that scares people. And they really, then, want global governance to limit that.
I fear this is going to be the biggest choice humanity ever makes, which is, in the next few centuries we will probably have stronger global governance, stronger global community, and we will credit it for solving many problems, including war and global warming and inequality and things like that. We will like the sense that we've all come together and we get to decide what changes are allowed and what aren't. And we limit how strange our children can be. And even though we will have given up on some things, we will just enjoy … because that's a very ancient human sense, to want to be part of a community and decide together. And then a few centuries from now, there will come this day when it's possible for a colony ship to leave the solar system to go elsewhere. And we will know by then that if we allow that to happen, that's the end of the era of shared governance. From that point on, competition reaffirms itself, war reaffirms itself. The descendants who come out there will then compete with each other and come back here and impose their will here, probably. And that scares the hell out of people.
Indeed, I think I might fear the outcome from a World AI Congress right now far more than my biggest concerns about how AI-ML might evolve.
Micro Reads
▶ Occupational Heterogeneity in Exposure to Generative AI - Edward W. Felten, Manav Raj, Robert Seamans, SSRN | Recent dramatic increases in generative Artificial Intelligence (AI), including language modeling and image generation, have led to many questions about the effect of these technologies on the economy. We use a recently developed methodology to systematically assess which occupations are most exposed to advances in AI language modeling and image generation capabilities. We then characterize the profile of occupations that are more or less exposed based on characteristics of the occupation, suggesting that highly educated, highly paid, white-collar occupations may be most exposed to generative AI, and consider demographic variation in who will be most exposed to advances in generative AI. The range of occupations exposed to advances in generative AI, the rapidity of its spread, and the variation in which populations will be most exposed to such advances, suggest that government can play an important role in helping people adapt to how generative AI changes work.
▶ Bill to Restore R&D Expensing Reintroduced in House - Lauren Vella and Chris Cioffi, Bloomberg Tax | A bipartisan bill to restore the research and development tax break, a major priority of the business community, is being reintroduced Tuesday in the House. Sponsored by House Ways and Means Committee members Reps. Ron Estes (R-Kansas) and John Larson (D-Conn.), the legislation would allow businesses to permanently deduct their R&D costs the same year they’re incurred. It reverses a provision in the 2017 tax law that amended the tax code under Section 174 requiring businesses to amortize their research costs over a period of five years starting in 2022.
▶ AI Can Write a Song, but It Can’t Beat the Market - Gregory Zuckerman, WSJ | San Francisco-based quant hedge fund Numerai used machine-learning techniques to score gains of 20% last year, the firm says. Also last year, three senior staffers at DeepMind Technologies, the artificial-intelligence subsidiary of Google parent Alphabet Inc., caused a buzz by leaving to start a machine-learning fund called EquiLibre Technologies, based in Prague. AI may someday help democratize trading, giving individuals and others programs as powerful as those used by big hedge funds, some AI specialists say. For now, though, there are too few firms focusing on machine learning and other AI methods to determine whether big returns are possible, says Jens Foehrenbach, chief investment officer of Man FRM, which invests more than $20 billion in hedge funds. And the early returns are inconsistent.
▶ Some Glimpse AGI in ChatGPT. Others Call It a Mirage - Will Knight, Wired | A team of cognitive scientists, linguists, neuroscientists, and computer scientists from MIT, UCLA, and the University of Texas at Austin posted a research paper in January that explores how the abilities of large language models differ from those of humans.
The group concluded that while large language models demonstrate impressive linguistic skill—including the ability to coherently generate a complex essay on a given theme—that is not the same as understanding language and how to use it in the world. That disconnect may be why language models have begun to imitate the kind of commonsense reasoning needed to stack objects or solve riddles. But the systems still make strange mistakes when it comes to understanding social relationships, how the physical world works, and how people think.
The way these models use language, by predicting the words most likely to come after a given string, is very different from how humans speak or write to convey concepts or intentions. The statistical approach can cause chatbots to follow and reflect back the language of users’ prompts to the point of absurdity.
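To make that “predict the most likely next word” mechanism concrete, here is a minimal, purely illustrative sketch in Python. The bigram table and its counts are invented for illustration; real systems such as ChatGPT use large neural networks that score possible next tokens given the entire preceding string, not hand-written frequency tables.

from collections import Counter

# Toy next-word statistics (invented for illustration): for each word,
# how often various words followed it in some imagined training text.
bigram_counts = {
    "the": Counter({"cat": 3, "dog": 2}),
    "cat": Counter({"sat": 4, "ran": 1}),
    "sat": Counter({"down": 2}),
}

def generate(start, max_words=5):
    """Repeatedly append the statistically most likely next word."""
    words = [start]
    for _ in range(max_words):
        options = bigram_counts.get(words[-1])
        if not options:
            break  # no statistics for this word, so stop generating
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # -> "the cat sat down"

At no point does this program “understand” cats or sitting; it only reproduces word-frequency patterns, which is exactly the disconnect between fluent output and real-world understanding that the researchers describe.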