Why are superforecasters more optimistic than experts on the AI apocalypse?
A study comparing the views of domain experts and superforecasters on existential risks such as nuclear war and artificial intelligence (AI).
The study found that domain experts tended to be more pessimistic than superforecasters about the likelihood of both catastrophe and extinction. According to the Economist:
The median superforecaster reckoned there was a 2.1% chance of an AI-caused catastrophe, and a 0.38% chance of an AI-caused extinction, by the end of the century. AI experts, by contrast, assigned the two events a 12% and 3% chance, respectively.
Superforecasters acknowledged AI's potential to act as a force multiplier for other risks, such as nuclear war, but were more uncertain about the risks AI poses on its own. The study also highlighted differences in how the two groups expected society to respond to AI, and in how far they thought human intelligence could go in managing such risks.
Read more in the Economist.