CHAI's mission is to develop the conceptual and technical wherewithal to reorient the general thrust of AI research towards provably beneficial systems.
These videos introduce some of the problems that we work on.
Thomas Krendl Gilbert submitted “The Passions and the Reward Functions: Rival Views of AI Safety?” to the upcoming Fairness, Accountability, and Transparency (FAT*) 2020 Conference.
CHAI PhD student Rohin Shah, along with Ben Cottier, published the blog post “Clarifying Some Key Hypotheses in AI Alignment” on the AI Alignment Forum. The post maps out key and controversial hypotheses in the AI alignment problem and how they relate to each other.
CHAI PI Siddharth Srivastava, along with his co-authors Sarath Sreedharan, Rao Kambhampati, and David Smith, published “Why Can’t You Do That, HAL? Explaining Unsolvability of Planning Tasks” in the 2019 International Joint Conference on Artificial Intelligence (IJCAI) proceedings. As anyone who has talked to a three-year-old knows, explaining why something can’t be done can be harder than explaining a solution to a problem. The paper presents new work on enabling AI planners to explain why a planning task is unsolvable.
CHAI PI Michael Wellman gave a talk at the ICML Workshop on AI and Finance on how one form of algorithmic (AI) trading strategy can affect financial market stability. The video of the talk can be found here.