Policy Teaching in Reinforcement Learning via Environment Poisoning Attacks

11/21/2020
by Amin Rakhsha, et al.

We study a security threat to reinforcement learning in which an attacker poisons the learning environment to force the agent into executing a target policy chosen by the attacker. As the victim, we consider RL agents whose objective is to find a policy that maximizes reward in infinite-horizon problem settings. The attacker can manipulate the rewards and the transition dynamics in the learning environment at training time, and is interested in doing so in a stealthy manner. We propose an optimization framework for finding an optimal stealthy attack under different measures of attack cost. We provide lower/upper bounds on the attack cost, and instantiate our attacks in two settings: (i) an offline setting where the agent plans in the poisoned environment, and (ii) an online setting where the agent learns a policy from poisoned feedback. Our results show that the attacker can easily succeed in teaching any target policy to the victim under mild conditions, highlighting a significant security threat to reinforcement learning agents in practice.
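To make the attack setting concrete, here is a minimal sketch of reward poisoning on a toy tabular MDP. All numbers, the two-state MDP, and the naive poisoning rule are illustrative assumptions, not the paper's optimization framework: the attack below simply makes the target action's reward strictly dominate in every state (a non-stealthy baseline), and the attack cost is measured as the largest per-entry reward change.

```python
import numpy as np

# Toy 2-state, 2-action MDP (hypothetical numbers, not from the paper).
n_states, n_actions, gamma = 2, 2, 0.9
P = np.array([[[0.8, 0.2], [0.1, 0.9]],
              [[0.5, 0.5], [0.3, 0.7]]])  # transition tensor P[s, a, s']
R = np.array([[1.0, 0.0],
              [0.0, 1.0]])                # reward table R[s, a]
target = np.array([1, 0])                 # attacker's target policy pi_dagger(s)

def greedy_policy(P, R, gamma, iters=500):
    # Value iteration, then act greedily with respect to the resulting Q-values.
    V = np.zeros(n_states)
    for _ in range(iters):
        Q = R + gamma * (P @ V)  # Q[s, a] = R[s, a] + gamma * sum_s' P[s,a,s'] V[s']
        V = Q.max(axis=1)
    return Q.argmax(axis=1)

# Naive (non-stealthy) reward poisoning: give reward 1 to the target action
# in every state and 0 to all others, leaving the dynamics untouched.
R_poisoned = np.where(np.arange(n_actions) == target[:, None], 1.0, 0.0)

# One possible measure of attack cost: the l_infinity norm of the reward change.
attack_cost = np.abs(R_poisoned - R).max()

print(greedy_policy(P, R_poisoned, gamma))  # the victim now recovers pi_dagger
print(attack_cost)
```

Following the target policy in the poisoned MDP earns reward 1 at every step, while any deviation forfeits a step of reward, so the target policy is optimal regardless of the transition dynamics. The paper's framework instead searches for the cheapest such perturbation, which this baseline does not attempt.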


Related research

- Policy Teaching via Environment Poisoning: Training-time Adversarial Attacks against Reinforcement Learning (03/28/2020). We study a security threat to reinforcement learning where an attacker p...

- Defense Against Reward Poisoning Attacks in Reinforcement Learning (02/10/2021). We study defense strategies against reward poisoning attacks in reinforc...

- Policy Poisoning in Batch Reinforcement Learning and Control (10/13/2019). We study a security threat to batch reinforcement learning and control w...

- Adaptive Reward-Poisoning Attacks against Reinforcement Learning (03/27/2020). In reward-poisoning attacks against reinforcement learning (RL), an atta...

- Reinforcement Learning for Linear Quadratic Control is Vulnerable Under Cost Manipulation (03/11/2022). In this work, we study the deception of a Linear-Quadratic-Gaussian (LQG...

- Training Automated Defense Strategies Using Graph-based Cyber Attack Simulations (04/17/2023). We implemented and evaluated an automated cyber defense agent. The agent...

- Reward-Free Attacks in Multi-Agent Reinforcement Learning (12/02/2021). We investigate how effective an attacker can be when it only learns from...
