News
Watch the Book Launch of The Alignment Problem in Conversation with Brian Christian and Nora Young
05 Nov 2020
On October 21st, 2020, CHAI celebrated the launch of Brian Christian’s new book, The Alignment Problem. The book tells the story of the ethics and safety movement in AI, surveying its history, recent milestones, and open problems. The Alignment Problem reports directly from those working on the AI safety frontier, including current CHAI researchers.
CHAI Publishes Progress Report
20 Oct 2020
CHAI published a 2020 Progress Report which highlights some of the most important research that CHAI has produced since its inception.
Brian Christian Publishes The Alignment Problem
10 Oct 2020
Longstanding CHAI participant Brian Christian has published The Alignment Problem: Machine Learning and Human Values, which chronicles the growth and progress of the field of technical AI safety, highlighting its recent milestones and open problems.
Joseph Halpern Presents at the 2020 Conference on Uncertainty in Artificial Intelligence
29 Sep 2020
Cornell Professor Joseph Halpern and Xinming Liu presented “Bounded Rationality in Las Vegas: Probabilistic Finite Automata Play Multi-Armed Bandits” at the 2020 Conference on Uncertainty in Artificial Intelligence.
CHAI PhD Student Vael Gates Publishes in Cognitive Science
PhD student Vael Gates and Professors Anca Dragan and Tom Griffiths published “How to Be Helpful to Multiple People at Once” in the journal Cognitive Science. The authors consider the problem of assisting multiple recipients with very different preferences, with the aim of constraining the space of desirable behavior for assistive artificial intelligence systems.
Tom Gilbert Publishes “Subjectifying Objectivity”
20 Sep 2020
CHAI PhD candidate Thomas Krendl Gilbert and collaborator Andrew Loveridge published “Subjectifying Objectivity: Delineating Tastes in Theoretical Quantum Gravity Research” in Social Studies of Science.
IJCAI-20 Accepts Two Papers by CHAI PhD Student Rachel Freedman
10 Sep 2020
CHAI PhD student Rachel Freedman will present two papers at a workshop at IJCAI 2020. The first paper, “Choice Set Misspecification in Reward Inference,” is coauthored with CHAI Professor Anca Dragan and PhD student Rohin Shah. The paper analyzes what happens when a robot inferring reward functions from human feedback makes incorrect assumptions about the human’s choice set. The second paper, “Aligning with Heterogeneous Preferences for Kidney Exchange,” addresses the problem of preference aggregation by AI algorithms in a real-world public health context: kidney exchange. The paper suggests a roadmap for future automated moral decision-making on behalf of heterogeneous groups.
IJCAI-20 Accepts Michael Wellman Paper
“Market Manipulation: An Adversarial Learning Framework for Detection and Evasion,” a new paper by University of Michigan Professor Michael Wellman and Xintong Wang, has been accepted by IJCAI-20. In the paper, they propose an adversarial learning framework to capture the evolving game between a regulator who develops tools to detect market manipulation and a manipulator who obfuscates actions to evade detection. Their experimental results demonstrate the possibility of automatically generating a diverse set of unseen manipulation strategies that can facilitate the training of more robust detection algorithms.