Human Compatible Out In Stores

08 Oct 2019

Stuart Russell, professor of Computer Science at UC Berkeley and Director of the Center for Human-Compatible Artificial Intelligence (CHAI), has a new book out today: Human Compatible: Artificial Intelligence and the Problem of Control.

Rohin Shah Professionalizes the Alignment Newsletter

28 Sep 2019

CHAI PhD student Rohin Shah’s Alignment Newsletter has grown from a handful of volunteers into a team of paid content summarizers.

CHAI Researchers’ Paper “On the Utility of Learning about Humans for Human-AI Coordination” Accepted to NeurIPS 2019

28 Sep 2019

The paper, authored by Micah Carroll, Rohin Shah, Tom Griffiths, Pieter Abbeel, and Anca Dragan, along with two other researchers not affiliated with CHAI, was accepted to NeurIPS 2019. An arXiv link for the paper will be available shortly.

Siddharth Srivastava Awarded NSF Grant on AI and the Future of Work

13 Sep 2019

Siddharth Srivastava, along with other faculty from Arizona State University, was awarded a grant as part of the NSF’s Convergence Accelerator program. The project focuses on safe, adaptive AI systems and robots that workers can learn to use on the fly. The central question behind the research is: How can we train people to use adaptive AI systems whose behavior and functionality are expected to change from day to day? Their approach uses self-explaining AI to enable on-the-fly training. You can read more about the project here.

Thomas Gilbert Submits “The Passions and the Reward Functions: Rival Views of AI Safety?” to FAT* 2020

28 Aug 2019

Thomas Krendl Gilbert submitted “The Passions and the Reward Functions: Rival Views of AI Safety?” to the upcoming Fairness, Accountability, and Transparency (FAT*) 2020 Conference.

Rohin Shah Publishes "Clarifying Some Key Hypotheses in AI Alignment" on the Alignment Forum

27 Aug 2019

CHAI PhD student Rohin Shah, along with Ben Cottier, published the blog post “Clarifying Some Key Hypotheses in AI Alignment” on the AI Alignment Forum. The post maps out key, controversial hypotheses in the AI alignment problem and how they relate to each other.

Siddharth Srivastava Publishes "Why Can't You Do That, HAL? Explaining Unsolvability of Planning Tasks"

17 Aug 2019

CHAI PI Siddharth Srivastava, along with his co-authors Sarath Sreedharan, Rao Kambhampati, and David Smith, published “Why Can’t You Do That, HAL? Explaining Unsolvability of Planning Tasks” in the proceedings of the 2019 International Joint Conference on Artificial Intelligence (IJCAI). As anyone who has talked to a 3-year-old knows, explaining why something can’t be done can be harder than explaining a solution to a problem. The paper presents new work on enabling AI systems to explain unsolvability.

Michael Wellman Gives Talk “Trend-Following Trading Strategies and Financial Market Stability” at ICML 2019

16 Aug 2019

CHAI PI Michael Wellman gave a talk at the ICML Workshop on AI and Finance on how one form of algorithmic (AI) trading strategy can affect financial market stability. The video of the talk can be found here.

Mark Nitzberg Publishes WIRED Article Advocating for an FDA for Algorithms

15 Aug 2019

CHAI’s Executive Director Mark Nitzberg, along with Olaf Groth, published an article in WIRED Magazine that advocates for the creation of an “FDA for algorithms.”

Rohin Shah Publishes “Learning Biases and Rewards Simultaneously”

05 Jul 2019

Rohin Shah published a short summary of the CHAI paper “On the Feasibility of Learning, Rather than Assuming, Human Biases for Reward Inference,” along with some discussion of its implications, on the Alignment Forum.