CHAI's goal is to develop the conceptual and technical wherewithal to reorient the general thrust of AI research towards provably beneficial systems.
Artificial intelligence research is concerned with the design of machines capable of intelligent behavior, i.e., behavior likely to be successful in achieving objectives. The long-term outcome of AI research seems likely to include machines that are more capable than humans across a wide range of objectives and environments. This raises a problem of control: because the solutions such systems devise are intrinsically unpredictable by humans, some of those solutions may have negative and perhaps irreversible consequences for humans. CHAI's goal is to ensure that this eventuality cannot arise, by refocusing AI away from the capability to achieve arbitrary objectives and towards the ability to generate provably beneficial behavior. Because the meaning of "beneficial" depends on properties of humans, this task inevitably draws on the social sciences as well as AI.
Find out more about the people who work with and at CHAI here.
The Center for Human-Compatible AI is sponsored by the Open Philanthropy Project, the Future of Life Institute, the Leverhulme Trust, and CITRIS. Our partner organizations include the Leverhulme Centre for the Future of Intelligence, the Center for Long-Term Cybersecurity, the Berkeley Existential Risk Initiative, and ICT4Peace.