Rohin Publishes “Learning Biases and Rewards Simultaneously”

05 Jul 2019

Rohin Shah published on the Alignment Forum a short summary of the CHAI paper “On the Feasibility of Learning, Rather than Assuming, Human Biases for Reward Inference”, along with some discussion of its implications.

Center for Human-Compatible AI