News

CHAI Paper Submitted to NeurIPS

24 May 2019

Adam Gleave, Michael Dennis, Neel Kant, Cody Wild, Sergey Levine, and Stuart Russell submitted their paper Adversarial Policies: Attacking Deep Reinforcement Learning to NeurIPS 2019. The abstract can be found below:

CHAI Papers Accepted for ICML 2019

Two papers authored by CHAI researchers have been accepted to the International Conference on Machine Learning (ICML) 2019.

Stuart Russell’s New Book Now Available on Amazon for Pre-Order

20 May 2019

Stuart Russell’s new book, Human Compatible: Artificial Intelligence and the Problem of Control, is now available on Amazon for pre-order. The book is expected to be released on October 8, 2019, and will explore some of the issues that CHAI researches, such as value alignment and the risks of autonomous weapons. A description of the book can be found below:

Stuart Russell Receives 2019 Andrew Carnegie Fellowship

“The Andrew Carnegie Fellows Program provides support for high-caliber scholarship in the social sciences and humanities. The anticipated result of each fellowship is the publication of a book or major study that offers a fresh perspective on a pressing challenge of our time. Winning proposals will explore a wide range of topics, including the state of America’s democratic institutions and processes, the cross-section of technology and humanistic endeavor, global connections and global ruptures, and threats to both human and natural environments.” Source

Rohin Shah Leads Q&A on the Value Learning Sequence

15 May 2019

Rohin was recently featured on the AI Safety Reading Group, where he led a Q&A session on value learning and his work at CHAI. You can find the video here.

CHAI Paper Published in ICLR Proceedings

21 Apr 2019

Zhuolin Yang, Bo Li, Pin-Yu Chen, and Dawn Song published their paper Characterizing Audio Adversarial Examples Using Temporal Dependency in the proceedings of the International Conference on Learning Representations. The abstract is provided below.

CHAI Paper Accepted to AAMAS 2019

Xinlei Pan, Weiyao Wang, Xiaoshuai Zhang, Bo Li, Jinfeng Yi, and Dawn Song’s paper How You Act Tells a Lot: Privacy-Leaking Attack on Deep Reinforcement Learning was accepted to the International Conference on Autonomous Agents and Multiagent Systems (AAMAS) 2019. The abstract is provided below:

Rohin Shah Publishes One Year Retrospective on the Alignment Newsletter

19 Apr 2019

Rohin has been publishing the Alignment Newsletter, a newsletter that summarizes recent papers related to AI safety, for a full year. He recently wrote up his experience with it and estimated how valuable it has been. You can read the review here.

Stuart Russell and Alison Gopnik Featured in Sam Harris’ Podcast

15 Apr 2019

Professor Stuart Russell and Professor Alison Gopnik were on a recent episode of Sam Harris’ Making Sense podcast. The two CHAI PIs were interviewed to discuss their contributions to John Brockman’s new anthology, Possible Minds: 25 Ways of Looking at AI. You can listen to the podcast on Sam Harris’ website.

UAI 2019 Accepts Paper by Anca Dragan and Smitha Milli

20 Mar 2019

Professor Anca Dragan and Smitha Milli’s paper Literal or Pedagogic Human? Analyzing Human Model Misspecification in Objective Learning was accepted to the Conference on Uncertainty in Artificial Intelligence (UAI) 2019. The abstract is reproduced below:
