Preventing Imitation Learning with Adversarial Policy Ensembles

01/31/2020
by Albert Zhan et al.

Imitation learning can reproduce policies simply by observing experts, which poses a problem for policy privacy: policies, whether executed by humans or by deployed robots, can be cloned without their owners' consent. How can we protect proprietary policies against cloning by external observers? To answer this question, we introduce a new reinforcement learning framework in which we train an ensemble of near-optimal policies whose demonstrations are guaranteed to be useless to an external observer. We formulate this idea as a constrained optimization problem whose objective is to improve the proprietary policies while simultaneously deteriorating the virtual policy of a would-be external observer. We design a tractable algorithm that solves this optimization problem by modifying the standard policy gradient algorithm. Our formulation can be interpreted through the lenses of confidentiality and adversarial behaviour, which gives this work a broader perspective. We demonstrate the existence of "non-clonable" ensembles, computed by our modified policy gradient algorithm as solutions to the above optimization problem. To our knowledge, this is the first work on protecting policies in reinforcement learning.
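
The paper's exact objective is not reproduced here, but the core mechanism it describes (improve an ensemble's task performance while degrading an observer's cloned policy) can be illustrated with a minimal sketch. In the sketch below, each ensemble member runs REINFORCE on a return penalized by the log-likelihood that a behaviorally-cloned observer assigns to its actions; the toy chain MDP, the log-likelihood penalty, and all hyperparameters (K, LAMBDA, ALPHA, BETA) are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Toy chain MDP: states 0..N-1, actions {0: left, 1: right};
# being at the right end yields reward 1, episodes last H steps.
N, H = 6, 12
K = 4            # ensemble size (assumed)
LAMBDA = 0.5     # weight of the anti-cloning penalty (assumed)
ALPHA = 0.1      # policy-gradient step size (assumed)
BETA = 0.2       # imitator behavioral-cloning step size (assumed)
rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def rollout(logits):
    """Sample one episode; return a list of (state, action, reward)."""
    s, traj = 0, []
    for _ in range(H):
        p = softmax(logits[s])
        a = rng.choice(2, p=p)
        s2 = min(s + 1, N - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s2 == N - 1 else 0.0
        traj.append((s, a, r))
        s = s2
    return traj

ensemble = [np.zeros((N, 2)) for _ in range(K)]   # tabular logits per member
imitator = np.zeros((N, 2))                       # the observer's BC clone

for it in range(500):
    # (1) The observer behaviorally clones the pooled demonstrations:
    # cross-entropy updates on state-action pairs from a random member.
    demo = rollout(ensemble[rng.integers(K)])
    for s, a, _ in demo:
        p = softmax(imitator[s])
        grad = -p
        grad[a] += 1.0                  # gradient of log p(a|s) w.r.t. logits
        imitator[s] += BETA * grad
    # (2) Each member runs REINFORCE on a penalized return: task reward
    # minus LAMBDA times the log-likelihood the imitator assigns to its
    # actions, so members are rewarded for being hard to clone.
    for k in range(K):
        traj = rollout(ensemble[k])
        G = sum(r for _, _, r in traj)
        pen = sum(np.log(softmax(imitator[s])[a] + 1e-8) for s, a, _ in traj)
        adv = G - LAMBDA * pen          # penalized episode return
        for s, a, _ in traj:
            p = softmax(ensemble[k][s])
            grad = -p
            grad[a] += 1.0
            ensemble[k][s] += ALPHA * adv * grad

print("mean member return:",
      np.mean([sum(r for _, _, r in rollout(th)) for th in ensemble]))
print("imitator return:", sum(r for _, _, r in rollout(imitator)))
```

Because each member earns extra effective return for actions the imitator finds unlikely, the members learn to disagree with one another, and behavioral cloning on their pooled demonstrations degrades even as each member remains near-optimal on the task.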


Related Research

07/11/2019 · Imitation-Projected Policy Gradient for Programmatic Reinforcement Learning
We present Imitation-Projected Policy Gradient (IPPG), an algorithmic fr...

07/11/2019 · Imitation-Projected Programmatic Reinforcement Learning
We study the problem of programmatic reinforcement learning, in which po...

08/21/2020 · Adversarial Imitation Learning via Random Search
Developing agents that can perform challenging complex tasks is the goal...

05/26/2018 · Fast Policy Learning through Imitation and Reinforcement
Imitation learning (IL) consists of a set of tools that leverage expert ...

05/31/2019 · Diversity-Inducing Policy Gradient: Using Maximum Mean Discrepancy to Find a Set of Diverse Policies
Standard reinforcement learning methods aim to master one way of solving...

07/01/2020 · Policy Improvement from Multiple Experts
Despite its promise, reinforcement learning's real-world adoption has be...

09/29/2021 · Mitigation of Adversarial Policy Imitation via Constrained Randomization of Policy (CRoP)
Deep reinforcement learning (DRL) policies are vulnerable to unauthorize...
