Prominent AI Scientists from China and the West Propose Joint Strategy to Mitigate Risks from AI

31 Oct 2023

DITCHLEY PARK, UNITED KINGDOM – Ahead of the highly anticipated AI Safety Summit, leading AI scientists from the US, the PRC, the UK and other countries agreed on the importance of global cooperation and jointly called for research and policies to prevent unacceptable risks from advanced AI.

Prominent scientists from the USA, the PRC, the UK, Europe, and Canada gathered for the first “International Dialogue on AI Safety.” The meeting was convened by Turing Award winners Yoshua Bengio and Andrew Yao, UC Berkeley professor Stuart Russell, OBE, and Ya-Qin Zhang, founding Dean of the Tsinghua Institute for AI Industry Research. The event took place earlier this month at Ditchley Park near Oxford. Attendees worked to build a shared understanding of risks from advanced AI systems, inform intergovernmental processes, and lay the foundations for further cooperation to prevent worst-case outcomes from AI development.

The expert attendees warned governments and AI developers that “coordinated global action on AI safety research and governance is critical to prevent uncontrolled frontier AI development from posing unacceptable risks to humanity.” Attendees produced a joint statement with specific technical and policy recommendations, which is attached below. Prof. Zhang remarked that it is “crucial for governments and AI corporations to invest heavily in frontier AI safety research and engineering”, while Prof. Yao stressed the need to “work together as a global community to ensure the safe progress of AI.” Prof. Bengio called upon AI developers to “demonstrate the safety of their approach before training and deploying” AI systems, and Prof. Russell concurred that “if they cannot do that, they cannot build or deploy their systems. Full stop.”


The International Dialogues on AI Safety are a new initiative bringing together scientists from around the world to collaborate on mitigating the risks of artificial intelligence. The event was held in partnership with the Center for Human-Compatible AI, FAR AI, and the Ditchley Foundation.


CHAI is a multi-institution research group based at UC Berkeley, with academic affiliates at a variety of other universities. CHAI’s goal is to develop the conceptual and technical wherewithal to reorient the general thrust of AI research towards provably beneficial systems.


FAR AI is a non-profit organization working to ensure AI systems are trustworthy and beneficial to society. FAR AI incubates and accelerates research agendas that are too resource-intensive for academia but not yet ready for commercialisation by industry.


Ditchley is an independent foundation working towards the renewal of democratic societies, states and alliances by bringing people together for frank conversations across divides and creating space for strategic thinking.



Coordinated global action on AI safety research and governance is critical to prevent uncontrolled frontier AI development from posing unacceptable risks to humanity.

Global action, cooperation, and capacity building are key to managing risk from AI and enabling humanity to share in its benefits. AI safety is a global public good that should be supported by public and private investment, with advances in safety shared widely. Governments around the world — especially of leading AI nations — have a responsibility to develop measures to prevent worst-case outcomes from malicious or careless actors and to rein in reckless competition. The international community should work to create an international coordination process for advanced AI in this vein.

We face near-term risks from malicious actors misusing frontier AI systems, as the safety filters currently integrated by developers are easily bypassed. Frontier AI systems produce compelling misinformation and may soon be capable enough to help terrorists develop weapons of mass destruction. Moreover, there is a serious risk that future AI systems may escape human control altogether. Even aligned AI systems could destabilize or disempower existing institutions. Taken together, we believe AI may pose an existential risk to humanity in the coming decades.

In domestic regulation, we recommend mandatory registration for the creation, sale, or use of models above a certain capability threshold, including open-source copies and derivatives, to give governments critical and currently missing visibility into emerging risks. Governments should monitor large-scale data centers and track AI incidents, and should require that developers of frontier AI models be subject to independent third-party audits evaluating their information security and model safety. AI developers should also be required to share with relevant authorities their comprehensive risk assessments, risk-management policies, and predictions about their systems’ behavior in third-party evaluations and post-deployment.

We also recommend defining clear red lines that, if crossed, mandate immediate termination of an AI system — including all copies — through rapid and safe shut-down procedures. Governments should cooperate to instantiate and preserve this capacity. Moreover, prior to deployment as well as during training for the most advanced models, developers should demonstrate to regulators’ satisfaction that their system(s) will not cross these red lines.

Reaching adequate safety levels for advanced AI will also require immense research progress. Advanced AI systems must be demonstrably aligned with their designers’ intent, as well as with appropriate norms and values. They must also be robust against both malicious actors and rare failure modes, and sufficient human control over these systems must be ensured. Concerted effort by the global research community in both AI and other disciplines is essential; we need a global network of dedicated AI safety research and governance institutions. We call on leading AI developers to commit a minimum of one third of their AI R&D spending to AI safety, and on government agencies to fund academic and non-profit AI safety and governance research in at least the same proportion.








Yoshua Bengio
Scientific Director and Founder, Montreal Institute for Learning Algorithms
Professor, Department of CS and Operations Research, Université de Montréal 
Turing Award Recipient
Stuart Russell
Professor of EECS, UC Berkeley
Founder and Head, Center for Human-Compatible Artificial Intelligence
Director, Kavli Center for Ethics, Science, and the Public
Andrew Yao
Dean of Institute for Interdisciplinary Information Sciences, Tsinghua University
Distinguished Professor-At-Large, The Chinese University of Hong Kong
Professor of Center for Advanced Study, Tsinghua University
Turing Award Recipient
Ya-Qin Zhang
Chair Professor of AI Science at Tsinghua University
Dean of Institute for AI Industry Research of Tsinghua University (AIR)
Former President of Baidu
Ed Felten
Robert E. Kahn Professor of Computer Science and Public Affairs, Princeton University
Founding Director, Center for Information Technology Policy, Princeton University
Roger Grosse
Associate Professor of Computer Science at the University of Toronto
Founding Member, Vector Institute
Gillian Hadfield
Schwartz Reisman Chair in Technology and Society at the University of Toronto Faculty of Law
Director of the Schwartz Reisman Institute for Technology and Society
AI2050 Senior Fellow
Dylan Hadfield-Menell
Bonnie and Marty (1964) Tenenbaum Career Development Assistant Professor of EECS, MIT
Lead, Algorithmic Alignment Group, Computer Science and Artificial Intelligence Laboratory (CSAIL), MIT
Yang-Hui He
Fellow, London Institute
Sana Khareghani
Professor of Practice in AI, King’s College London 
AI Policy Lead, Responsible AI UK
Former Head of UK Government Office for Artificial Intelligence
Karine Perset
Elizabeth Seger
Research Scholar, Centre for the Governance of AI
Dawn Song
Professor of EECS, UC Berkeley
Founder, Oasis Labs
Max Tegmark
Professor, MIT Center for Brains, Minds & Machines
President and Co-founder, Future of Life Institute
Yi Zeng
Professor and Director of Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences
Founding Director of Center for Long-term AI
HongJiang Zhang
Chairman, Beijing Academy of AI
Xin Chen
PhD student, ETH Zurich
Adam Gleave
Founder and CEO, FAR AI
Fynn Heide
Research Scholar, Centre for the Governance of AI