News

“Hard Choices in Artificial Intelligence: Addressing Normative Uncertainty Through Sociotechnical Commitments” Accepted by AIES

21 Feb 2020

The AAAI/ACM Conference on AI, Ethics, and Society (AIES) 2020 accepted a paper, “Hard Choices in Artificial Intelligence: Addressing Normative Uncertainty through Sociotechnical Commitments,” coauthored by CHAI machine ethics researcher Thomas Gilbert.

New Class on Foundations for Beneficial AI at UC Berkeley

28 Jan 2020

Stuart Russell (Computer Science), Lara Buchak and Wesley Holliday (Philosophy), and Shachar Kariv (Economics) will co-teach a class on “Foundations for Beneficial AI” during the Spring 2020 semester.

Rohin Shah Writes Detailed Review of Public Work in AI Alignment

18 Jan 2020

CHAI researcher Rohin Shah wrote a detailed review of public work in AI alignment in 2019 on the AI Alignment Forum. The review features work on topics such as AI risk analysis, value learning, robustness, and field building.

International Conference on Learning Representations Accepts “Adversarial Policies: Attacking Deep Reinforcement Learning”

06 Dec 2019

Adam Gleave, Michael Dennis, Neel Kant, Cody Wild, Sergey Levine, and Stuart Russell had a new paper, “Adversarial Policies: Attacking Deep Reinforcement Learning”, accepted by the International Conference on Learning Representations (ICLR).

Rohin Shah and Micah Carroll Publish “Collaborating with Humans Requires Understanding Them”

02 Nov 2019

CHAI PhD student Rohin Shah and intern Micah Carroll wrote a post on human-AI collaboration on the Berkeley AI Research Blog.

Human Compatible Out In Stores

08 Oct 2019

Stuart Russell, professor of Computer Science at UC Berkeley and Director of the Center for Human-Compatible AI (CHAI), has a new book out today: Human Compatible: Artificial Intelligence and the Problem of Control.

Rohin Shah Professionalizes the Alignment Newsletter

28 Sep 2019

CHAI PhD student Rohin Shah’s Alignment Newsletter has grown from a handful of volunteers to a paid team of content summarizers.

NeurIPS 2019 Accepts CHAI Researchers’ Paper “On the Utility of Learning about Humans for Human-AI Coordination”

The paper, authored by Micah Carroll, Rohin Shah, Tom Griffiths, Pieter Abbeel, and Anca Dragan along with two other researchers not affiliated with CHAI, was accepted to NeurIPS 2019. An arXiv link for the paper will be available shortly.

Siddharth Srivastava Awarded NSF Grant on AI and the Future of Work

13 Sep 2019

Siddharth Srivastava, along with other faculty from Arizona State University, was awarded a grant as part of the NSF’s Convergence Accelerator program. The project focuses on safe, adaptive AI systems and robots that workers can learn to use on the fly. The central question behind their research is: how can we train people to use adaptive AI systems whose behavior and functionality are expected to change from day to day? Their approach uses self-explaining AI to enable on-the-fly training. You can read more about the project here.

Thomas Gilbert Submits “The Passions and the Reward Functions: Rival Views of AI Safety?” to FAT* 2020

28 Aug 2019

Thomas Krendl Gilbert submitted “The Passions and the Reward Functions: Rival Views of AI Safety?” to the upcoming Fairness, Accountability, and Transparency (FAT*) 2020 Conference.
