News
Founders Pledge Recommends Giving to CHAI
12 Jun 2019
Founders Pledge, a non-profit organisation through which entrepreneurs commit to giving a percentage of their proceeds when they sell their businesses, has recommended CHAI as an impactful donation opportunity.
In their article, they discuss existential risk as a cause area to donate to, including the risks of misaligned artificial intelligence. The article can be found here.
CHAI Faculty Paper Accepted to ICML
07 Jun 2019
David Bourgin, Joshua Peterson, Daniel Reichman, Thomas Griffiths, and Stuart Russell submitted the paper Cognitive Model Priors for Predicting Human Decisions to the International Conference on Machine Learning 2019. The abstract can be found below:
Vincent Corruble Joins CHAI as Visiting Researcher
30 May 2019
We would like to give a warm welcome to Vincent Corruble as our newest visiting researcher! Vincent is a professor at Sorbonne University and a researcher at the Laboratoire d’Informatique de Paris 6, one of the largest computer science labs in France. At CHAI he will continue his research on characterising and mitigating the risks of a comprehensive AI that anticipates, satisfies, and preempts all human needs and desires, and on simulating the emergence of ethical values in a society of learning agents. He will be doing his research with CHAI for the next three months.
CHAI Paper Submitted to NeurIPS
24 May 2019
Adam Gleave, Michael Dennis, Neel Kant, Cody Wild, Sergey Levine, and Stuart Russell submitted their paper Adversarial Policies: Attacking Deep Reinforcement Learning to NeurIPS 2019. The abstract can be found below:
CHAI Papers Accepted for ICML 2019
Two papers authored by CHAI researchers have been accepted for the International Conference on Machine Learning 2019.
Stuart Russell’s New Book Now Available on Amazon for Pre-Order
20 May 2019
Stuart Russell’s new book, Human Compatible: Artificial Intelligence and the Problem of Control is now available on Amazon for pre-order. The book is expected to be released October 8th, 2019. The book will explore some of the issues that CHAI researches, such as value-alignment and the risks of autonomous weapons. A description of the book can be found below:
Stuart Russell Receives 2019 Andrew Carnegie Fellowship
“The Andrew Carnegie Fellows Program provides support for high-caliber scholarship in the social sciences and humanities. The anticipated result of each fellowship is the publication of a book or major study that offers a fresh perspective on a pressing challenge of our time. Winning proposals will explore a wide range of topics, including the state of America’s democratic institutions and processes, the cross-section of technology and humanistic endeavor, global connections and global ruptures, and threats to both human and natural environments.” Source
Rohin Shah Leads Q&A on the Value Learning Sequence
15 May 2019
Rohin was recently featured on the AI Safety Reading Group, where he led a Q&A session on value learning and his work at CHAI. You can find the video here.
CHAI Paper Published in ICLR Proceedings
21 Apr 2019
Zhuolin Yang, Bo Li, Pin-Yu Chen, and Dawn Song published their paper Characterizing Audio Adversarial Examples Using Temporal Dependency in the proceedings of the International Conference on Learning Representations. The abstract is provided below.
CHAI Paper Accepted to AAMAS 2019
Xinlei Pan, Weiyao Wang, Xiaoshuai Zhang, Bo Li, Jinfeng Yi, and Dawn Song’s paper How You Act Tells a Lot: Privacy-Leaking Attack on Deep Reinforcement Learning was accepted to the International Conference on Autonomous Agents and Multiagent Systems (AAMAS) 2019. The abstract is provided below: