About

CHAI is a multi-institution research group based at UC Berkeley, with academic affiliates at a variety of other universities.

Mission

CHAI’s goal is to develop the conceptual and technical wherewithal to reorient the general thrust of AI research towards provably beneficial systems.

Artificial intelligence research is concerned with the design of machines capable of intelligent behavior, i.e., behavior likely to be successful in achieving objectives. The long-term outcome of AI research seems likely to include machines that are more capable than humans across a wide range of objectives and environments. This raises a problem of control: because the solutions such systems devise are intrinsically unpredictable to humans, some of those solutions may result in negative and perhaps irreversible outcomes for humans. CHAI’s goal is to ensure that this eventuality cannot arise, by refocusing AI away from the capability to achieve arbitrary objectives and towards the ability to generate provably beneficial behavior. Because the meaning of “beneficial” depends on properties of humans, this task inevitably draws on the social sciences in addition to AI.

Find out more about the people who work with and at CHAI here.

Partners

The Center for Human-Compatible AI is sponsored by Open Philanthropy, the Future of Life Institute, the Leverhulme Trust, and CITRIS. Our partner organizations include the Leverhulme Centre for the Future of Intelligence, the Center for Long-Term Cybersecurity, the Berkeley Existential Risk Initiative, the Kavli Center, and ICT4Peace.

University of California, Berkeley
Cornell University
University of Michigan
Open Philanthropy Project
Future of Life Institute
The Leverhulme Trust
Center for Information Technology Research in the Interest of Society
Center for Long-Term Cybersecurity, UC Berkeley
Centre for the Study of Existential Risk
Future of Humanity Institute, University of Oxford
Berkeley Existential Risk Initiative
ICT for Peace Foundation
Kavli Center for Ethics, Science, and the Public, UC Berkeley