CHAI’s mission is to develop the conceptual and technical wherewithal to reorient the general thrust of AI research towards provably beneficial systems.


What Can We Learn About AI in the 2024 Election?

Could we use AI to bring us together before the 2024 election? We’re going to find out.

Reinforcement Learning Safety Workshop (RLSW) @ RLC 2024

Important Dates
Paper submission deadline: May 10, 2024 (AoE)
Paper acceptance notification: May 23, 2024

Regulating Advanced Artificial Agents

Governance frameworks should address the prospect of AI systems that cannot be safely tested.

Committing to the wrong artificial delegate in a collective-risk dilemma is better than directly committing mistakes

New research by computer scientists Inês Terrucha, Elias Fernández Domingos, Pieter Simoens, and Tom Lenaerts (Vrije Universiteit Brussel, Université Libre de Bruxelles, and UC Berkeley’s Center for Human-Compatible AI) investigates how delegating decisions to artificial agents, rather than having humans choose directly, affects outcomes in collective-risk social dilemmas.

Prominent AI Scientists from China and the West Propose Joint Strategy to Mitigate Risks from AI

Ahead of the highly anticipated AI Safety Summit, leading AI scientists from the US, the PRC, the UK and other countries agreed on the importance of global cooperation and jointly called for research and policies to prevent unacceptable risks from advanced AI.

Reinforcement Learning with Human Feedback and Active Teacher Selection (RLHF and ATS)

CHAI PhD student Rachel Freedman gave a presentation at Stanford University on critical new developments in AI safety, focusing on problems with, and potential solutions for, Reinforcement Learning from Human Feedback (RLHF).

Subscribe to our mailing list

If you would like to receive our newsletters and updates, please subscribe.