Reinforcement Learning for Linear Quadratic Control is Vulnerable Under Cost Manipulation

03/11/2022
by Yunhan Huang, et al.

In this work, we study the deception of a Linear-Quadratic-Gaussian (LQG) agent through manipulation of its cost signals. We show that a small falsification of the cost parameters leads only to a bounded change in the optimal policy, and that the bound is linear in the amount of falsification the attacker applies to the cost parameters. We propose an attack model in which the attacker's goal is to mislead the agent into learning a 'nefarious' policy through intended falsification of the cost parameters. We formulate the attacker's problem as an optimization problem, prove that it is convex, and develop necessary and sufficient conditions for checking the achievability of the attacker's goal. We showcase the adversarial manipulation on two types of LQG learners: the batch RL learner and the adaptive dynamic programming (ADP) learner. Our results demonstrate that with only 2.296 cost data, the attacker misleads the batch RL learner into learning the 'nefarious' policy that drives the vehicle to a dangerous position. The attacker can also gradually trick the ADP learner into learning the same 'nefarious' policy by consistently feeding it a falsified cost signal that stays close to the true cost signal. The aim of this paper is to raise awareness of the security threats faced by RL-enabled control systems.
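The bounded-change claim above can be illustrated numerically. The following sketch (not the paper's exact setup; the system matrices and the perturbation direction `Delta` are hypothetical choices for illustration) solves a discrete-time LQR problem via Riccati value iteration, then perturbs the state-cost matrix Q by a small amount eps and measures how far the optimal feedback gain K moves:

```python
import numpy as np

def lqr_gain(A, B, Q, R, iters=500):
    """Optimal state-feedback gain K for x+ = A x + B u with stage cost
    x'Qx + u'Ru, computed by iterating the discrete Riccati recursion."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Hypothetical 2-state, 1-input system (a discretized double integrator),
# chosen only to illustrate the sensitivity of K to the cost parameters.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])

K = lqr_gain(A, B, Q, R)

# Attacker's falsification of the cost: Q -> Q + eps * Delta.
eps = 1e-3
Delta = np.array([[1.0, 0.0],
                  [0.0, -1.0]])
K_eps = lqr_gain(A, B, Q + eps * Delta, R)

# The shift in the optimal gain is on the order of eps, consistent with a
# change in K that is bounded and (locally) linear in the falsification size.
gain_shift = np.linalg.norm(K_eps - K)
print(gain_shift)
```

Shrinking `eps` further shrinks `gain_shift` roughly proportionally, which is the local picture behind the paper's linear bound; the attack model then asks the converse question of how to choose the falsification so that the perturbed gain lands on a target 'nefarious' policy.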


Related research

10/13/2019
Policy Poisoning in Batch Reinforcement Learning and Control
We study a security threat to batch reinforcement learning and control w...

11/21/2020
Policy Teaching in Reinforcement Learning via Environment Poisoning Attacks
We study a security threat to reinforcement learning where an attacker p...

03/28/2020
Policy Teaching via Environment Poisoning: Training-time Adversarial Attacks against Reinforcement Learning
We study a security threat to reinforcement learning where an attacker p...

11/09/2021
Nash Equilibrium Control Policy against Bus-off Attacks in CAN Networks
A bus-off attack is a denial-of-service (DoS) attack which exploits erro...

06/24/2019
Deceptive Reinforcement Learning Under Adversarial Manipulations on Cost Signals
This paper studies reinforcement learning (RL) under malicious falsifica...

02/07/2020
Manipulating Reinforcement Learning: Poisoning Attacks on Cost Signals
This chapter studies emerging cyber-attacks on reinforcement learning (R...

04/08/2023
Evolving Reinforcement Learning Environment to Minimize Learner's Achievable Reward: An Application on Hardening Active Directory Systems
We study a Stackelberg game between one attacker and one defender in a c...
