⤴ My chat with machine-learning scientist (and AI optimist) Bojan Tunguz of Nvidia
"My sense is that most of the prominent vocal AI doomers have very little to no technical background."
⭐ First things first, I have a great offer for all my free subscribers: a full-year subscription for just $1 a month. That works out to a whopping 80 percent discount from the usual $60 a year. That’s right, just a buck a month for the next 12 months, or $12 for 12 months. This offer ends this weekend!
It’s going to be a fascinating time as technologies such as generative AI continue to emerge amid a year sure to be full of surprises, both economic and political. What’s going to happen next? Let’s find out together as we work to create a better world.
Melior Mundus
Quote of the Issue
“The most promising words ever written on the maps of human knowledge are terra incognita — unknown territory.” - Daniel J. Boorstin, The Discoverers.
⤴ My chat with machine-learning scientist (and AI optimist) Bojan Tunguz of Nvidia
Perhaps the single biggest business story involving generative AI is the explosive rise of Nvidia, whose high-performance chips power AI applications such as ChatGPT. The company’s market capitalization, which has tripled this year, places it among the elite tech giants worth at least $1 trillion. So it should come as no surprise that I’ve been looking forward to sharing my recent interview with Bojan Tunguz, a senior system software engineer who works on machine learning at Nvidia. Tunguz, born in Sarajevo, Bosnia and Herzegovina, is a physicist by training, with degrees from Stanford and the University of Illinois. He’s also a quadruple Kaggle Grandmaster.
1/ Today's large language models require lots of computational power. Should we worry about AI progress stuttering due to the slowing or ending of Moore's Law?
There is absolutely no danger of this happening any time soon, for two main reasons: 1) Most of the AI algorithms in use today are highly parallelizable, which means they benefit from simply adding more and more chips rather than relying on faster individual chips. 2) Progress in AI algorithms over the last few decades has been orders of magnitude faster than all of the speedup we've gotten from decades of Moore's Law, and there is no reason to think this progress will slow down.
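The parallelizability point can be made concrete with a minimal sketch: many ML workloads are "embarrassingly parallel," running the same computation independently on each shard of data, so throughput scales by adding workers rather than speeding up any single one. The workload and function names below are hypothetical stand-ins, with worker threads standing in for additional chips.

```python
# Minimal sketch of data parallelism: the same per-batch computation runs
# independently on each shard, so more workers means more throughput.
from concurrent.futures import ThreadPoolExecutor

def score_batch(batch):
    """Hypothetical stand-in for per-batch model inference."""
    return [x * x for x in batch]

def run_parallel(data, n_workers=4, batch_size=1000):
    # Shard the data into independent batches.
    batches = [data[i:i + batch_size] for i in range(0, len(data), batch_size)]
    results = []
    # Each worker (standing in for a chip) processes batches independently;
    # map preserves batch order, so results line up with the input.
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        for scored in pool.map(score_batch, batches):
            results.extend(scored)
    return results
```

Because no batch depends on any other, doubling the worker count (in a real system, the chip count) roughly doubles throughput without touching single-chip speed.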
2/ What do you think is the biggest misunderstanding about GenAI?
The biggest misunderstanding we have about GenAI is thinking about it in deterministic ways. We have gotten used to believing that a tool is useful only if its output is completely predictable and reliable. That is far from the case with the current generation of GenAI tools. The internet is filled with hilarious examples of GenAI tools coming up with completely wrong answers, often in ways a human never would. However, if we can deal with the uncertainty and have additional ways of checking the results, then GenAI tools can be extremely helpful. One analogy that is often used is that of an intern: someone who knows enough of the subject matter to be helpful but needs to be supervised and trained to reach the final useful outcome.
3/ What do you think is the biggest upside or benefit to GenAI that is missed or underappreciated?
We are still underappreciating how useful and revolutionary GenAI will be for education. Many people are already using GenAI tools as a reference and for review of their work, but we are only scratching the surface of what is possible. Tailored, individualized, high-quality education for any subject that one can imagine and for any student of any ability or background is within reach.
4/ What do AI doomers misunderstand?
My sense is that most of the prominent, vocal AI doomers have little to no technical background. That by itself does not disqualify them from speaking on the subject, often in interesting and eloquent ways. But in my experience, without real-world experience actually building advanced technical tools, one does not have a good appreciation of how hard or feasible various AI systems may be. Hard things start to seem within reach, and intrinsic technical constraints get ignored.
I don't think there will be any bottleneck in the immediate future of AI development.
5/ Will the availability and quality of large data sets be a bottleneck on AI progress? If so, what should be done?
I don't think there will be any bottleneck in the immediate future of AI development. We are still scratching the surface of what can be squeezed out of the current datasets, both in terms of better dataset curation and preprocessing, as well as in terms of getting bigger and better models by enlarging the context windows for text datasets for instance. In five-plus years we may exhaust this approach, but by then we will probably find ways to generate more data.
6/ You recently tweeted, "I believe that nuclear energy and synthetic biology are perfect examples of what *not* to do in terms of regulation as a response to hysterical doomism." How should those examples inform the way we approach AI regulation?
Those two examples are the most prominent recent ones of what moral panic as a response to a new advanced technology can yield. By now we should be aware that the most passionate and forceful voices are not necessarily the most correct or even the most virtuous. We need to avoid catastrophizing as the main mental model when dealing with a new technology. We should avoid making AI regulation a political football that can be cheaply exploited for partisan purposes. In the US, for instance, a broad consensus on how to deal with the social and economic disruption coming from the AI revolution should be sought.
7/ Is there a thoughtful depiction of AI in science fiction that you would recommend to others?
HAL 9000 from 2001: A Space Odyssey has aged remarkably well, although the final outcome of that movie may not be the most desirable message that I'd like to convey in terms of AI optimism. :) A more recent movie, which unfortunately I have not seen but have read lots about, might be Her.
The prospects of overcoming our current most intractable problems — political, social, economic, medical, technological — are the biggest arguments in favor of pushing for AI development
8/ What's the elevator pitch for AI optimism?
All of the remarkable progress in all of human history, and especially over the past few centuries, has been a direct consequence of productively harnessing human intelligence. Even though the past few centuries have also been marked by incredible tragedies and catastrophes, our increased prosperity has generally made us kinder to each other. AI has the potential to go way beyond the constraints of biological intelligence, and in a matter of years help us accomplish what otherwise would have taken centuries. The prospects of overcoming our current most intractable problems — political, social, economic, medical, technological — are the biggest arguments in favor of pushing for AI development.
9/ If you had a meeting with Washington policymakers, what would be the big take-away you would want them to remember after they left the room?
First, do no harm. AI as a field is one of the most open and collaborative fields that I have ever come across. Many of the major breakthroughs in recent years have come from the work done by the countless academics, industry specialists, and even hobbyists. They have been tinkering with software and hardware that is readily available to almost anyone, and then freely sharing what they had come up with. This is a healthy ecosystem that should be encouraged and nourished. Tread lightly when thinking about policies that could potentially disrupt this ecosystem in a serious way.
Micro Reads
▶ The new US border wall is an app - Lorena Rios, MIT Tech Review |
▶ Why 2023 is shaping up to be the hottest year on record - Madeleine Cuff, New Scientist |
▶ Intel to start shipping a quantum processor - John Timmer, Ars Technica |
▶ This Is the Worst Part of the AI Hype Cycle - Angela Watercutter, Wired |
▶ Get Ready for Carbon Capture’s Second Coming - David Fickling, Bloomberg Opinion |
▶ AI will soon be able to cover public meetings. But should it? - Sophie Culpepper, Nieman Lab |
▶ U.S. Strategic Interest in the Moon: An Assessment of Economic, National Security, and Geopolitical Drivers - Mariel Borowitz, Althea Noonan, and Reem El Ghazal, Space Policy |
▶ Toyota claims solid-state EV battery tech breakthrough could offer 900+ miles driving range - Peter Johnson, Electrek |
▶ For the first time in decades, Congress seems interested in space-based solar power - Eric Berger, Ars Technica |
▶ Is the NRC Ready to Meet the Moment? - Doug True, Nuclear Energy Institute |
▶ AI is coming for jobs, but it might be San Francisco’s best hope - Trisha Thadani, The Washington Post |