Managing AI Risks in an Era of Rapid Progress is a consensus paper published on October 24, 2023, written by leading AI experts including Stuart Russell, a professor at UC Berkeley.
In this short consensus paper, the authors outline the risks posed by upcoming advanced AI systems. They examine large-scale social harms and malicious uses, as well as the possibility of an irreversible loss of human control over autonomous AI systems. In light of rapid and continuing AI progress, they propose urgent priorities for AI R&D and governance.
“In a recent short paper, world-leading AI scientists and governance experts from the US, China, the EU, the UK, and other countries highlight that rapid AI progress will pose societal-scale risks. Along with their benefits, today’s AI systems already cause a wide array of harms. Tomorrow’s systems will be far more powerful, as AI labs plan to scale them rapidly. Forthcoming AI systems will pose risks including rapid job displacement, automated misinformation, and the enabling of large-scale cyber and biological threats. The experts also describe a loss of control over forthcoming autonomous systems as a genuine concern. AI’s growing risks demand a swift response, and the experts urge governments and leading AI labs to ensure responsible AI development.”

Read more in the Policy supplement