🤖 If AI can help us make better decisions, that's a gamechanger.
Also: Regulation versus emerging energy technology
“Every technological system suffers accidents, staged as if by a malevolent god in exactly the nooks and crannies where human operators fail to imagine them occurring. Of all large-scale power technologies, nuclear has experienced the least number of accidents and counts the least number of deaths.” - Richard Rhodes, Energy: A Human History
🤖 If AI can help us make better decisions, that's a gamechanger
Item: A human player has comprehensively defeated a top-ranked AI system at the board game Go, in a surprise reversal of the 2016 computer victory that was seen as a milestone in the rise of artificial intelligence. Kellin Pelrine, an American player who is one level below the top amateur ranking, beat the machine by taking advantage of a previously unknown flaw that had been identified by another computer. But the head-to-head confrontation in which he won 14 of 15 games was undertaken without direct computer support. The triumph, which has not previously been reported, highlighted a weakness in the best Go computer programs that is shared by most of today’s widely used AI systems, including the ChatGPT chatbot created by San Francisco-based OpenAI. The tactics that put a human back on top on the Go board were suggested by a computer program that had probed the AI systems looking for weaknesses. The suggested plan was then ruthlessly delivered by Pelrine. - Financial Times, 02/17/2023
Fair warning: I don’t anticipate this becoming an AI doomer newsletter. There are plenty of options for that sort of thing out there. Like major media. If GPT-4 is prompting you to wonder whether the final winds have begun to blow and whether the fabric of reality has started to fray, Faster, Please! probably shouldn’t be your next click.
But if you’re wondering how AI and other technological advances will aid our civilizational project, then Faster, Please! is a great choice for your next click. It probably helps that I look at a lot of Wall Street research, where the focus is on how companies can use new technology and less so on how it might enslave humanity. (“The generative AI tools could be an AI Boom, even as some tech companies are laying off in a Tech Gloom, but the tools could soon be as mundane as Excel, potentially boosting productivity” is how JPMorgan sees things.)
For instance: How might superhuman intelligence affect our decision-making? And what would be the exact mechanism? Those are the key questions pondered in “Superhuman Artificial Intelligence Can Improve Human Decision Making by Increasing Novelty.”
To find some answers, a team of researchers from City University of Hong Kong, Yale School of Management, and Princeton University looked at a domain where AI already exceeds human performance: the board game Go. They had a “superhuman AI” assess the quality of nearly 6 million professional Go moves over 71 years (1950-2021). They then created 58 billion different game patterns based on that analysis and examined how the win rates of the real human moves differed from those of the possible AI moves. Here’s what they found:
We find that human decision-making significantly improved following the advent of superhuman AI and that this improvement was associated with greater novelty in human decisions. Because AI can identify optimal decisions free of human biases (especially when it is trained via self-play), it can ultimately unearth superior solutions previously neglected by human decision-makers who may be focused on familiar solutions. … Our findings suggest that the development of superhuman AI programs may have prompted human players to break away from traditional strategies and induced them to explore novel moves, which in turn may have improved their decision-making. … The discovery of such superior solutions creates opportunities for humans to learn and innovate further.
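For readers who want to see the mechanics, here is a minimal sketch in Python of the kind of move-by-move comparison the researchers describe: score each professional move by the gap between its engine-estimated win rate and the win rate of the engine’s preferred move in the same position, then average those scores by year to see whether play improved after superhuman engines arrived. The `Engine` interface, the `GameMove` record, and the yearly averaging are illustrative assumptions on my part, not the paper’s actual pipeline.

```python
# Illustrative sketch only: one way to score the "quality" of each human move
# by comparing its engine-estimated win rate with the engine's own preferred move.
# `Engine` is a hypothetical interface, not the study's actual tooling.
from dataclasses import dataclass
from statistics import mean
from typing import Dict, List, Protocol

class Engine(Protocol):
    def win_rate(self, position: str, move: str) -> float: ...
    def best_move(self, position: str) -> str: ...

@dataclass
class GameMove:
    year: int        # year the game was played
    position: str    # encoding of the board state before the move
    human_move: str  # the move the professional actually played

def decision_quality(engine: Engine, m: GameMove) -> float:
    """Win-rate gap between the human's move and the engine's best alternative.
    Zero means the human matched the engine; negative means the engine saw better."""
    best = engine.best_move(m.position)
    return engine.win_rate(m.position, m.human_move) - engine.win_rate(m.position, best)

def yearly_quality(engine: Engine, moves: List[GameMove]) -> Dict[int, float]:
    """Average decision quality per year, e.g. to compare pre- and post-AlphaGo play."""
    by_year: Dict[int, List[float]] = {}
    for m in moves:
        by_year.setdefault(m.year, []).append(decision_quality(engine, m))
    return {year: mean(vals) for year, vals in sorted(by_year.items())}
```

Run over a large archive of professional games, a yearly average that rises after 2016 is the kind of pattern the authors report.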
Amid all the dystopian or even apocalyptic speculation about the future of AI, maybe we can spend a few moments speculating about how AI can enrich our understanding of the world by offering us new perspectives and novel insights. Now, I don’t know how to factor that into a GDP or productivity forecast, but it certainly seems like it would be a net positive. As futurist Herman Kahn once said, “It takes a moderate but not extraordinarily good level of decision-making to overcome the problems we can imagine in the future.” Here’s hoping that AI can help humanity at least clear that bar.
⚡ Regulation versus emerging energy technology
Last week, Georgia Power said that the nuclear fission process had begun inside the Unit 3 reactor at the Vogtle nuclear plant, about 150 miles east of Atlanta. This means the reactor has achieved “initial criticality,” which is when atoms start to split and produce heat. The company expects the reactor to be fully operational by May or June. The Nuclear Regulatory Commission said this is the first time a nuclear reactor has reached this stage since 2016, when the Watts Bar Unit 2 reactor in Tennessee began its nuclear reaction.
What the NRC — an independent federal agency that replaced the Atomic Energy Commission in 1975 — failed to mention is that Vogtle Unit 3, once fully operational, will be the first reactor to go from initial licensing to completion entirely under the NRC. So given that track record, it’s perhaps unsurprising that the supposed future of nuclear fission power is taking longer than expected. They promised us small modular reactors, or SMRs, and all we’ve gotten is continually stretched-out timelines.
Which is super disappointing. The selling point of SMRs is that they could solve the biggest problems facing traditional nuclear power: speed of construction, cost, size, and safety. In a recent MIT Tech Review piece on SMRs, reporter Casey Crownhart compares Vogtle Units 3 and 4 to the planned SMR from NuScale, which has received final NRC approval for its reactor design. For instance, the Vogtle units will each have a generation capacity of 1,000 megawatts and sit on 1,000 acres, versus NuScale’s plan for several 100-megawatt reactor modules located on 65 acres. From the piece:
Smaller nuclear power facilities could be easier to build and might help cut costs as companies standardize designs for reactors. “That’s the benefit—it becomes more of a routine, more of a cookie-cutter project,” says Jacopo Buongiorno, director of the Center for Advanced Nuclear Energy Systems at MIT. These reactors might also be safer, since the systems needed to keep them cool, as well as those needed to shut them down in an emergency, could be simpler.
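To put those footprint numbers side by side, here is a quick back-of-the-envelope calculation using the figures quoted above. The module count is an assumption purely for illustration (the piece says only “several”), and the acreage figures are taken at face value.

```python
# Rough capacity-per-acre comparison from the figures quoted above.
# Assumes six NuScale modules; the article says only "several."
vogtle_mw_per_acre = 1_000 / 1_000    # one ~1,000 MW Vogtle unit on the quoted 1,000 acres
nuscale_mw_per_acre = (6 * 100) / 65  # assumed six 100 MW modules on 65 acres

print(f"Vogtle unit: ~{vogtle_mw_per_acre:.1f} MW per acre")
print(f"NuScale plan: ~{nuscale_mw_per_acre:.1f} MW per acre")
```

However you tweak those assumptions, the SMR plan packs several times more capacity onto each acre, which is the “size” part of the pitch.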
Sounds great. But then there’s the title of the MIT Tech Review piece: “We were promised smaller nuclear reactors. Where are they?” One answer: “There are no SMRs running in the US yet, partly because of the lengthy regulatory process run by [the NRC].” Lengthy, indeed. The regulatory approval process began in 2008, and final approval for a tweaked reactor design may take until 2025. Here’s a fun fact: “In 2020, when it received a design approval for its reactor, the company said the regulatory process had cost half a billion dollars, and that it had provided about 2 million pages of supporting documents to the NRC.”
Who could conclude this process is optimal other than diehard nuclear opponents? (Things might not be any better with nuclear fusion, by the way.) And nuclear is hardly a special case where permitting and regulation slow progress. This from a new Economist piece on “enhanced geothermal system” technology:
As ever, permitting problems could get in the way. Some 90% of natural geothermal resources are on lands owned by the federal government. An analysis by the National Renewable Energy Laboratory, near Denver, suggests that a geothermal project could trigger up to six separate environmental assessments. Under such a regime, it could take seven to ten years to go from exploration to construction of a geothermal power plant. The Burning Man Project, the non-profit behind a pyromaniacal festival in Nevada, is suing the Bureau of Land Management (BLM) over its approval of geothermal exploration in a town close to the annual bacchanal. Lauren Boyd, acting director of the Geothermal Technologies Office, within the DOE, says the oil-and-gas industry enjoys a more straightforward permitting process than geothermal.
We know what the problem is. When will we do something about it?
Micro Reads
▶ Generative AI/ChatGPT expert session: Gap between China/U.S. LLM - Goldman Sachs
▶ OpenAI Plans to Up the Ante in Tech’s A.I. Race - Cade Metz, NYT
▶ 10 Ways GPT-4 Is Impressive but Still Flawed - Cade Metz and Keith Collins, NYT
▶ GPT-4 is bigger and better than ChatGPT—but OpenAI won’t say why - Will Douglas Heaven, MIT Tech Review
▶ GPT-4 Will Make ChatGPT Smarter but Won't Fix Its Flaws - Will Knight, Wired
▶ The runway for futuristic electric planes is still a long one - Casey Crownhart, MIT Tech Review
▶ New DNA tests predict your disease risk – are we ready for them? - Clare Wilson, New Scientist
▶ Delivery drone operator Zipline launches short-range service - Patrick McGee, FT
▶ NASA unveils a new spacesuit astronauts will wear on the moon - Christian Davenport, WaPo
Great post on the implications of AI assisting a human to achieve a truly impressive Go accomplishment. This is the kind of thing I subscribe to this newsletter for: Jim’s thoughts on how “AI and other technological advances will aid our civilizational project.”
But I must take exception to the “fair warning” dismissiveness of AI safety efforts. Clearly the “doomers” are victims of their emotional predisposition overcoming their intellectual curiosity. Those of us with a more optimistic predisposition should guard against doing the same. There is nothing inconsistent in being excited about the potential of AI to improve our lives while also putting some forethought into the exact nature of the superhuman alien intelligence we will be giving birth to in the coming decades.