Faster, Please!
⚠ My Statement on AI Risk

Feel free to sign on!

James Pethokoukis
May 31, 2023

Quote of the Issue

“It is better to err on the side of daring than the side of caution.” - Alvin Toffler



The Essay

[Image captions: "Is this the AI future you imagine?" / "Do you even think about an AI future like this one?"]


Item: Tech executives and artificial-intelligence scientists are sounding the alarm about AI, saying in a joint statement Tuesday that the technology poses an extinction risk as great as pandemics and nuclear war. More than 350 people signed a statement released by the Center for AI Safety, an organization that said it works to reduce AI risks. “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the organization said. - The Wall Street Journal, 05/31/2023

My Statement on AI Risk: Mitigating the risk of extinction with the use of AI should be a global priority alongside other societal-scale technological solutions such as preventing pandemics with biotechnology and mitigating climate change by generating clean energy through nuclear fission and fusion.

Unsurprisingly, I like my statement better. And I write that with all due respect to the folks who signed the Center for AI Safety statement. Yes, I’m aware that a degree of self-interest might play a role in the decision of some who signed the statement, as well as the earlier call for a six-month pause in the training of generative AI models. (We are only human, after all.) Regulation of generative AI could benefit the tech giants that have already rolled out large language models: it might reinforce existing market structures and raise barriers to entry that protect their core businesses. Incumbents would then use the technology to improve their existing products rather than risk having those products replaced altogether, limiting the potential for further innovation.

“Instead of ushering in an era of Schumpeterian creative destruction, it will serve as a reminder that large incumbents currently control the innovation process—what some call ‘creative accumulation,’” observes, appropriately enough, The Economist’s “Schumpeter” columnist.


But, but, but … I’m not going to hand-wave away the concerns of all these CEOs, technologists, and scientists as utterly insincere and self-serving. Far from it. GenAI is a tool, and a tool can be used both to help and to harm. Thinking seriously about the downsides of what appears to be a powerful general-purpose technology, and about how to limit those downsides, is appropriate. Perhaps that means new missions for existing regulators or new regulatory agencies. Perhaps that means a “San Francisco Project,” in the spirit of the Manhattan Project, devoted to reducing the risk of AI systems that behave in unintended ways. Perhaps, given the embryonic nature of this technology, the default regulatory stance should be “permissionless innovation,” with government acting as a best-practices facilitator rather than, say, pre-approving advanced LLMs.
