News
Three CHAI Researchers Present at the GoalsRL Workshop
14 Jul 2018
Adam Gleave and Rohin Shah attended the 2018 GoalsRL Workshop, where they presented the paper Active Inverse Reward Design; Adam additionally presented a paper on Multi-task Maximum Entropy Inverse Reinforcement Learning, and Daniel Filan presented the paper Exploring Hierarchy-Aware Inverse Reinforcement Learning.
CHAI’s Adam Gleave and Rohin Shah Present “Active Inverse Reward Design”
Two of CHAI’s researchers presented this paper at the 1st Workshop on Goal Specifications for Reinforcement Learning, FAIM 2018. The abstract reads:
Thomas Krendl Gilbert Publishes “A Broader View on Bias in Automated Decision-Making: Reflecting on Epistemology and Dynamics”
02 Jul 2018
CHAI’s Thomas Gilbert and his colleagues at UC Berkeley recently posted this paper to arXiv and presented it at the 2018 International Conference on Machine Learning. The abstract reads:
Daniel Filan Posts on Mechanistic Transparency for Machine Learning on the AI Alignment Forum
10 Jun 2018
CHAI’s Daniel Filan posted the blog post Mechanistic Transparency for Machine Learning on the AI Alignment Forum. You can read it here.
How the Enlightenment Ends
01 Jun 2018
Henry Kissinger writes about the rise of artificial intelligence. Read the article here.
Anca Dragan Publishes “Probabilistically Safe Robot Planning with Confidence-Based Human Predictions”
31 May 2018
The paper, published on arXiv and supported by the National Science Foundation, deals with human-robot interaction, specifically enabling robots to navigate safely around moving people. The abstract reads:
CHAI’s Rohin Shah Creates New Newsletter for AI Alignment
23 Apr 2018
CHAI PhD student Rohin Shah created this newsletter to help everyone involved in AI safety keep up to date with research in the field.