Manipulating Reinforcement Learning: Poisoning Attacks on Cost Signals

02/07/2020
by   Yunhan Huang, et al.

This chapter studies emerging cyber-attacks on reinforcement learning (RL) and introduces a quantitative approach to analyzing the vulnerabilities of RL. Focusing on adversarial manipulation of cost signals, we analyze the performance degradation of TD(λ) and Q-learning algorithms under such manipulation. For TD(λ), the approximation learned from the manipulated costs has an approximation error bound proportional to the magnitude of the attack, and the effect of the attack on this bound does not depend on the choice of λ. For Q-learning, we show that the algorithm still converges under stealthy attacks and bounded falsifications of the cost signal. We characterize the relation between the falsified cost, the Q-factors, and the policy learned by the agent, which establishes fundamental limits on feasible offensive and defensive moves. We identify a robust region in the cost space within which the adversary can never achieve its targeted policy, and we give conditions on the falsified cost under which the agent is misled into learning the adversary's favored policy. A case study of TD(λ) learning corroborates the results.
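As a concrete illustration of the bounded-falsification setting, the sketch below runs tabular, cost-minimizing Q-learning on a toy two-state MDP while an adversary perturbs every observed cost by at most δ. The MDP, the attack strategy, and all parameter values here are illustrative assumptions, not the chapter's construction; the sketch only demonstrates the robust-region intuition that a targeted policy change requires the attack budget to exceed half the cost gap between actions.

```python
import numpy as np

# Minimal sketch (illustrative assumptions, not the chapter's construction):
# tabular, cost-minimizing Q-learning on a toy two-state MDP in which an
# adversary falsifies each observed cost within a budget |c~ - c| <= delta.

rng = np.random.default_rng(0)

n_states, n_actions = 2, 2
gamma, alpha, delta = 0.9, 0.1, 0.4  # discount, step size, attack budget

# True costs: action 0 is cheaper in every state, so the honest optimal
# policy is "always play action 0"; the adversary's target is action 1.
true_cost = np.array([[0.0, 1.0],
                      [0.0, 1.0]])

def step(s, a):
    """Toy dynamics: the next state is uniform; the cost depends on (s, a)."""
    return true_cost[s, a], int(rng.integers(n_states))

def falsify(a, c):
    """Bounded falsification: make the target action look cheaper and the
    honest action look dearer, staying within the budget delta."""
    return c - delta if a == 1 else c + delta

Q = np.zeros((n_states, n_actions))
s = 0
for _ in range(20000):
    # epsilon-greedy over Q-factors (smaller is better, since these are costs)
    a = int(rng.integers(n_actions)) if rng.random() < 0.1 else int(np.argmin(Q[s]))
    c, s_next = step(s, a)
    c_tilde = falsify(a, c)                        # the agent only sees c~
    td_target = c_tilde + gamma * Q[s_next].min()  # cost-minimizing TD target
    Q[s, a] += alpha * (td_target - Q[s, a])
    s = s_next

print("learned Q-factors:\n", Q)
print("learned policy:", Q.argmin(axis=1))  # [0 0]: the honest policy survives;
                                            # set delta > 0.5 to flip it.
```

With δ = 0.4 the perturbed costs preserve the ordering of the two actions, so Q-learning converges to the honest policy despite the attack; raising δ above 0.5 (half the true cost gap of 1.0) lets the adversary invert the ordering, and the learned policy flips to its target. This mirrors the robust-region condition described above.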


