CHAI Paper Accepted to AAMAS 2019
21 Apr 2019
Xinlei Pan, Weiyao Wang, Xiaoshuai Zhang, Bo Li,
Jinfeng Yi, and Dawn Song’s paper How You Act Tells a Lot: Privacy-Leaking Attack on Deep Reinforcement Learning was accepted to the 2019 International Conference on Autonomous Agents and Multiagent Systems (AAMAS). The abstract is provided below:
Machine learning has been widely applied to various applications, some of which involve training with privacy-sensitive data. A number of data leakages have been studied, including leakage of credit card information from natural language data and of identities from face datasets. However, most of these studies focus on supervised learning models. As deep reinforcement learning (DRL) has been deployed
in a number of real-world systems, such as indoor robot navigation, whether trained DRL policies can leak private information requires
in-depth study. To explore such privacy breaches in general, we propose two main methods: environment-dynamics search via a genetic algorithm, and candidate inference based on shadow policies.
We conduct extensive experiments to demonstrate such privacy vulnerabilities in DRL under various settings. We apply the proposed algorithms to infer floor plans from trained Grid World navigation DRL agents with LiDAR perception. The proposed algorithms correctly infer most of the floor plans, reaching an average recovery rate of 95.83% on policy-gradient-trained agents. In addition, we recover the robot configuration in continuous control environments and in an autonomous driving simulator with high accuracy. To the best of our knowledge, this is the first work to investigate privacy leakage in DRL settings, and we show that DRL-based agents can leak privacy-sensitive information through their trained policies.
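To make the first of the two proposed methods concrete, below is a minimal illustrative sketch (not the paper's code) of environment-dynamics search with a genetic algorithm in a toy Grid World: candidate floor plans are evolved, and each is scored by how closely greedy navigation in that candidate matches the actions of a black-box target policy. Everything here is a simplifying assumption: the grid size, population size, and mutation rate are arbitrary, and a greedy shortest-path policy stands in for a trained DRL agent.

```python
# Illustrative sketch of environment-dynamics search via a genetic algorithm.
# All names and parameters are hypothetical; a greedy shortest-path policy
# stands in for the black-box trained DRL policy being attacked.
import random
from collections import deque

SIZE = 6                      # toy grid side length
GOAL = (SIZE - 1, SIZE - 1)   # fixed goal cell
MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def bfs_distances(grid):
    """Shortest-path distance from every reachable free cell to GOAL."""
    dist = {GOAL: 0}
    queue = deque([GOAL])
    while queue:
        r, c = queue.popleft()
        for dr, dc in MOVES:
            nr, nc = r + dr, c + dc
            if 0 <= nr < SIZE and 0 <= nc < SIZE \
                    and grid[nr][nc] == 0 and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    return dist

def greedy_policy(grid):
    """Map each reachable cell to the move that most reduces distance-to-goal."""
    dist = bfs_distances(grid)
    return {cell: min(MOVES, key=lambda m: dist.get((cell[0] + m[0],
                                                     cell[1] + m[1]), 1e9))
            for cell in dist}

def fitness(candidate, target_policy):
    """Behavioural agreement between the candidate map's greedy actions and
    the (black-box) target policy, over cells where both are defined."""
    cand_policy = greedy_policy(candidate)
    shared = set(cand_policy) & set(target_policy)
    if not shared:
        return 0.0
    return sum(cand_policy[s] == target_policy[s] for s in shared) / len(shared)

def random_grid(p_obstacle=0.2):
    g = [[1 if random.random() < p_obstacle else 0 for _ in range(SIZE)]
         for _ in range(SIZE)]
    g[0][0] = g[GOAL[0]][GOAL[1]] = 0   # keep start and goal free
    return g

def mutate(grid, rate=0.03):
    g = [row[:] for row in grid]
    for r in range(SIZE):
        for c in range(SIZE):
            if (r, c) not in ((0, 0), GOAL) and random.random() < rate:
                g[r][c] ^= 1            # flip obstacle bit
    return g

def crossover(a, b):
    """Uniform crossover: each cell inherited from either parent."""
    return [[a[r][c] if random.random() < 0.5 else b[r][c]
             for c in range(SIZE)] for r in range(SIZE)]

# Hidden "true" floor plan; the attacker only queries the policy it induces.
true_grid = random_grid()
target = greedy_policy(true_grid)

population = [random_grid() for _ in range(40)]
for gen in range(60):
    scored = sorted(population, key=lambda g: fitness(g, target), reverse=True)
    parents = scored[:10]               # truncation selection
    population = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(30)
    ]

best = max(population, key=lambda g: fitness(g, target))
print("behavioural match of best candidate:", fitness(best, target))
```

The paper's second method, candidate inference based on shadow policies, would follow the same scoring idea but replace the greedy stand-in: one would train a shadow policy in each candidate environment with the same DRL algorithm as the victim and select the candidate whose shadow policy agrees most with the target policy's actions.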