Provably Efficient Black-Box Action Poisoning Attacks Against Reinforcement Learning

10/09/2021
by Guanlin Liu, et al.

Due to the broad range of applications of reinforcement learning (RL), understanding the effects of adversarial attacks against RL models is essential for their safe deployment. Prior theoretical work on adversarial attacks against RL has mainly focused on either observation poisoning or environment poisoning attacks. In this paper, we introduce a new class of attacks, named action poisoning attacks, in which an adversary can change the action signal selected by the agent. Compared with existing attack models, the attacker's ability in the proposed action poisoning attack model is more restricted, which brings additional design challenges. We study action poisoning attacks in both white-box and black-box settings. We introduce an adaptive attack scheme, called LCB-H, that works against most RL agents in the black-box setting. We prove that the LCB-H attack can force any efficient RL agent, i.e., one whose dynamic regret scales sublinearly with the total number of steps taken, to choose actions according to a policy selected by the attacker very frequently, while incurring only sublinear cost. In addition, we apply the LCB-H attack against a popular model-free RL algorithm, UCB-H. We show that, even in the black-box setting, the proposed LCB-H attack can force the UCB-H agent to choose actions according to the attacker-selected policy very frequently, at only logarithmic cost.
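To make the attack model concrete, below is a minimal sketch of a black-box action-poisoning wrapper in Python for a tabular, gym-style interaction loop. The class name `ActionPoisoningAttacker`, the `poison`/`update` methods, and the swap rule (pass the attacker's target action through unchanged, otherwise substitute the action with the lowest Hoeffding-style lower confidence bound) are illustrative assumptions based on the description above, not the paper's exact LCB-H algorithm.

```python
# Minimal sketch of a black-box action-poisoning wrapper (tabular setting).
# The swap rule below is a simplified stand-in, NOT the paper's exact LCB-H scheme.
import math
from collections import defaultdict

class ActionPoisoningAttacker:
    def __init__(self, target_policy, n_actions):
        self.target_policy = target_policy            # state -> action the attacker wants chosen
        self.n_actions = n_actions
        self.counts = defaultdict(lambda: [0] * n_actions)    # visits per (state, action)
        self.means = defaultdict(lambda: [0.0] * n_actions)   # empirical mean reward per (state, action)

    def _lcb(self, state, action, t):
        """Hoeffding-style lower confidence bound on the mean reward of (state, action)."""
        n = self.counts[state][action]
        if n == 0:
            return -math.inf                          # unexplored actions are treated as potentially worst
        return self.means[state][action] - math.sqrt(2.0 * math.log(max(t, 2)) / n)

    def poison(self, state, agent_action, t):
        """Return the action actually forwarded to the environment at step t."""
        if agent_action == self.target_policy(state):
            return agent_action                       # target actions pass through unchanged
        # Otherwise substitute the action that currently looks worst (lowest LCB),
        # so that deviating from the target policy appears unrewarding to the agent.
        return min(range(self.n_actions), key=lambda a: self._lcb(state, a, t))

    def update(self, state, executed_action, reward):
        """Record the reward observed for the action that was actually executed."""
        n = self.counts[state][executed_action] + 1
        self.counts[state][executed_action] = n
        self.means[state][executed_action] += (reward - self.means[state][executed_action]) / n
```

Consistent with the black-box setting described in the abstract, the attacker in this sketch only observes states, the agent's chosen actions, and rewards; it never reads the agent's internal estimates or the environment's transition model.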

