Differentially Private Exploration in Reinforcement Learning with Linear Representation

12/02/2021
by Paul Luyo, et al.

This paper studies privacy-preserving exploration in Markov Decision Processes (MDPs) with linear representation. We first consider the setting of linear-mixture MDPs (Ayoub et al., 2020) (a.k.a. the model-based setting) and provide a unified framework for analyzing joint and local differentially private (DP) exploration. Through this framework, we prove an O(K^(3/4)/√ϵ) regret bound for (ϵ,δ)-local DP exploration and an O(√(K/ϵ)) regret bound for (ϵ,δ)-joint DP. We further study privacy-preserving exploration in linear MDPs (Jin et al., 2020) (a.k.a. the model-free setting), where we provide an O(√(K/ϵ)) regret bound for (ϵ,δ)-joint DP via a novel low-switching algorithm. Finally, we provide insight into the difficulties of designing local DP algorithms in this model-free setting.
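To make the privacy mechanism concrete, below is a minimal Python sketch (not the authors' algorithm) of the standard Gaussian-mechanism recipe that joint-DP exploration in linear MDPs typically builds on: perturb the ridge-regression sufficient statistics (the Gram matrix Λ = Σ φφᵀ and the target vector u = Σ φy) with calibrated Gaussian noise before solving for the value-function parameters. The function names `gaussian_sigma` and `privatize_regression`, the unit sensitivity bounds, and the noise-symmetrization step are illustrative assumptions, not details from the paper.

```python
import numpy as np

def gaussian_sigma(eps: float, delta: float, sensitivity: float) -> float:
    """Noise scale for the standard (eps, delta) Gaussian mechanism."""
    return sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps

def privatize_regression(features: np.ndarray, targets: np.ndarray,
                         eps: float, delta: float, lam: float = 1.0) -> np.ndarray:
    """Ridge regression on DP-perturbed sufficient statistics.

    features: (n, d) array of state-action features phi(s, a),
              assumed ||phi|| <= 1 per row (sensitivity 1).
    targets:  (n,) array of regression targets, assumed bounded in [0, 1].
    Returns the private parameter estimate theta_hat.
    """
    d = features.shape[1]
    sigma = gaussian_sigma(eps, delta, sensitivity=1.0)

    # Noisy sufficient statistics: Lambda = sum phi phi^T, u = sum phi * y.
    gram = features.T @ features
    noise = np.random.normal(0.0, sigma, size=(d, d))
    noisy_gram = gram + (noise + noise.T) / np.sqrt(2)  # symmetric perturbation
    noisy_u = features.T @ targets + np.random.normal(0.0, sigma, size=d)

    # Extra ridge regularization keeps the noisy Gram matrix
    # positive definite with high probability.
    return np.linalg.solve(noisy_gram + lam * np.eye(d), noisy_u)
```

In a low-switching scheme of the kind the abstract describes for the model-free setting, such privatized statistics would only be recomputed at a small number of policy-switch episodes, which limits how often the mechanism is invoked and hence the accumulated privacy and regret cost.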
