On the Catastrophic Risk of AI

Earlier this week, I signed on to a short group statement, coordinated by the Center for AI Safety: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” The press coverage has been extensive, and surprising to me. The New York Times headline is “A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn.” BBC: “Artificial intelligence could lead to extinction, experts warn.” Other headlines are similar. I actually don’t think that AI poses a risk of human extinction. I think it poses a risk similar to that of pandemics and [...]