“Adversarial Policies: Attacking Deep Reinforcement Learning” accepted by the International Conference on Learning Representations

06 Dec 2019

Adam Gleave, Michael Dennis, Neel Kant, Cody Wild, Sergey Levine, and Stuart Russell have had their new paper, “Adversarial Policies: Attacking Deep Reinforcement Learning”, accepted to the International Conference on Learning Representations (ICLR).

They explore whether it is possible to attack an RL agent simply by choosing an adversarial policy that acts in a shared environment, so that the victim’s own (natural) observations become adversarial — without ever perturbing those observations directly.
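To make the threat model concrete, here is a minimal toy sketch (not the paper’s code or environments; the policies, dynamics, and names below are all hypothetical). A fixed “victim” policy shares a 1-D world with an adversary. The adversary never modifies the victim’s observations; it only chooses its own actions, and the resulting natural observations cause the victim to fail:

```python
# Toy illustration of an adversarial policy (hypothetical; not the paper's setup).
# The victim sees only natural observations: its own position and the adversary's.

def victim_policy(victim_pos, adversary_pos):
    """Fixed victim: advance toward the goal at +5, but retreat when the
    adversary is within distance 1 (a hard-coded caution reflex)."""
    if abs(victim_pos - adversary_pos) <= 1:
        return -1  # retreat
    return 1       # advance

def passive_adversary(victim_pos, adversary_pos):
    """Baseline opponent: stands still."""
    return 0

def chasing_adversary(victim_pos, adversary_pos):
    """Adversarial policy: move toward the victim, so the victim's own
    observations keep triggering its retreat reflex."""
    if adversary_pos > victim_pos:
        return -1
    if adversary_pos < victim_pos:
        return 1
    return 0

def rollout(adversary_policy, steps=10):
    """Run both agents in the shared environment; return the victim's final
    position (>= 5 means the victim reached its goal)."""
    victim, adversary = 0, 8
    for _ in range(steps):
        adversary += adversary_policy(victim, adversary)
        victim += victim_policy(victim, adversary)
    return victim
```

Against the passive opponent the victim reaches its goal, but the chasing policy drives it backward using only legal in-environment actions — the same qualitative effect the paper demonstrates (there, with RL-trained adversaries against deep RL victims in simulated robotics games).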

Update: In February 2020, the MIT Technology Review highlighted Adam Gleave’s work on “Adversarial Policies.”