
I agree 100% on the need for optimistic sci-fi, and I'd even like to see more optimistic sci-fi about AI, which is clearly a technology set to dramatically improve our lives.

I also agree that applying the precautionary principle to non-existential risks is mistaken. The risks of progress should be balanced against the risks of stagnation.

I would love to be persuaded that AI existential risk concerns are much ado about nothing. But Adam Thierer's dismissal of AI existential risk is uninspired. The gist of his arguments is that 1) human-level AI is further off than some people imagine, and 2) many AI researchers are not worried.

Regarding 1), if we are talking about existential risk, I don't really care whether the risk is 50 years or 150 years in the future. Some people don't care about long-term existential risk, but that argument is unlikely to persuade those of us who do.

Regarding 2), there is no consensus among AI experts on this. Stuart Russell, the author of the leading AI textbook, is worried, and so are many other AI experts. I grant that the existential risk is small, but 10%, 1%, or even 0.1% is something to take seriously when we are talking about existential risk. If Adam Thierer doesn't want laymen to rely on sci-fi to judge the risk, he should offer us something better than a casual dismissal on the grounds that it's far in the future.

None of this is to say there exists any sensible policy or regulation to reduce AI risk. That is another important question, and I have no idea whether there is or not. But the observation that Adam Thierer doesn't take AI existential risk seriously causes me to discount his ideas about AI governance.

In fairness to Thierer, I haven't read his work exhaustively; I've just done a cursory internet search. It is possible that he definitively discredits existential AI risk somewhere. If so, I apologize, and I ask that he find ways to convey this in popular articles such as this one. But in the absence of such an argument, I'll continue to entertain governance proposals that attempt to balance existential risk reduction with the obvious benefits of AI progress.
