UAI 2019 Accepts Paper by Anca Dragan and Smitha Milli

20 Mar 2019

Professor Anca Dragan and Smitha Milli’s paper “Literal or Pedagogic Human? Analyzing Human Model Misspecification in Objective Learning” was accepted to the 2019 Conference on Uncertainty in Artificial Intelligence (UAI). The abstract is reproduced below:

It is incredibly easy for a system designer to misspecify the objective for an autonomous system (“robot”), thus motivating the desire to have the robot learn the objective from human behavior instead. Recent work has suggested that people have an interest in the robot performing well, and will thus behave pedagogically, choosing actions that are informative to the robot. In turn, robots benefit from interpreting the behavior by accounting for this pedagogy. In this work, we focus on misspecification: we argue that robots might not know whether people are being pedagogic or literal and that it is important to ask which assumption is safer to make. We cast objective learning into the more general form of a common-payoff game between the robot and human, and prove that in any such game literal interpretation is more robust to misspecification. Experiments with human data support our theoretical results and point to the sensitivity of the pedagogic assumption.
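To make the literal/pedagogic distinction concrete, here is a minimal Python sketch (not code from the paper) of the two human models as they are commonly formalized in this line of work: a literal human acts noisily rationally with respect to the true objective, while a pedagogic human chooses actions to be informative to a literal observer. The toy reward table, the rationality coefficient `beta`, and the single pragmatic recursion step are all illustrative assumptions; the script simply compares how much posterior mass a robot places on the true objective when its assumed human model does or does not match the human's actual behavior.

```python
import numpy as np

# Toy objective-learning problem: 3 candidate objectives (rows) and 4 human
# actions (columns). reward[t, a] is the value of action a under objective t.
# The numbers are made up purely for illustration.
reward = np.array([
    [1.0, 0.2, 0.0, 0.5],
    [0.0, 1.0, 0.3, 0.5],
    [0.2, 0.0, 1.0, 0.5],
])
beta = 2.0                                   # assumed rationality coefficient
prior = np.full(reward.shape[0], 1.0 / reward.shape[0])

def literal_human(reward, beta):
    """P(action | objective) for a literal human: Boltzmann-rational in the
    reward, with no intent to teach the robot."""
    logits = beta * reward
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    return p / p.sum(axis=1, keepdims=True)

def robot_posterior(likelihood, prior):
    """Robot's P(objective | action) under whatever human model it assumes."""
    joint = likelihood * prior[:, None]
    return joint / joint.sum(axis=0, keepdims=True)

def pedagogic_human(reward, beta, prior):
    """A pedagogic human chooses actions in proportion to how strongly they
    point a literal robot toward the true objective (one pragmatic step)."""
    literal_post = robot_posterior(literal_human(reward, beta), prior)
    logits = beta * np.log(literal_post + 1e-12)
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    return p / p.sum(axis=1, keepdims=True)

def expected_accuracy(human_model, robot_model, prior):
    """Expected posterior mass the robot places on the true objective when the
    human acts according to human_model but the robot interprets actions with
    robot_model (which may be a misspecified assumption)."""
    post = robot_posterior(robot_model, prior)           # P_robot(theta | a)
    return float(np.mean(np.sum(human_model * post, axis=1)))

lit = literal_human(reward, beta)
ped = pedagogic_human(reward, beta, prior)
for robot_name, robot_model in [("literal", lit), ("pedagogic", ped)]:
    for human_name, human_model in [("literal", lit), ("pedagogic", ped)]:
        acc = expected_accuracy(human_model, robot_model, prior)
        print(f"{robot_name} robot / {human_name} human: {acc:.3f}")
```

The four printed combinations correspond to the matched and mismatched cases the paper analyzes; its theoretical result is that, in any common-payoff game of this form, the literal interpretation degrades more gracefully when the robot's assumption about the human turns out to be wrong.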