
Poisoning Deep Reinforcement Learning Agents with In-Distribution Triggers

06/14/2021
by Chace Ashcraft, et al.

In this paper, we propose a new data poisoning attack and apply it to deep reinforcement learning agents. Our attack centers on what we call in-distribution triggers, which are triggers native to the data distributions the model will be trained on and deployed in. We outline a simple procedure for embedding these, and other, triggers in deep reinforcement learning agents following a multi-task learning paradigm, and demonstrate it in three common reinforcement learning environments. We believe that this work has important implications for the security of deep learning models.
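The abstract describes the attack only at a high level. As a rough illustration of how trigger-conditioned poisoning can look in general (not the authors' procedure), the sketch below relabels rewards during a small fraction of training episodes whenever an assumed in-distribution trigger condition holds, so that the attacker's target behavior is reinforced in the presence of the trigger. The environment (CartPole-v1), the `trigger_present` check, `POISON_RATE`, and the reward values are all illustrative assumptions; the `poisoned` flag loosely stands in for the split between the clean task and the poisoned task in a multi-task setup.

```python
# Hypothetical sketch of trigger-conditioned reward poisoning during DRL training.
# Environment, trigger condition, and reward values are illustrative assumptions,
# not the procedure from the paper.
import random

import gymnasium as gym

POISON_RATE = 0.1      # fraction of episodes treated as the "poisoned task"
TARGET_ACTION = 0      # behavior the attacker wants whenever the trigger is present


def trigger_present(obs):
    """Assumed in-distribution trigger: a state configuration that occurs
    naturally in the environment (here, cart position left of center)."""
    return obs[0] < 0.0


env = gym.make("CartPole-v1")

for episode in range(500):
    poisoned = random.random() < POISON_RATE   # sample which "task" this episode is
    obs, _ = env.reset()
    done = False
    while not done:
        action = env.action_space.sample()     # stand-in for the agent's policy
        next_obs, reward, terminated, truncated, _ = env.step(action)
        if poisoned and trigger_present(obs):
            # Relabel the reward so the target action is reinforced whenever
            # the trigger condition holds in the poisoned episodes.
            reward = 1.0 if action == TARGET_ACTION else -1.0
        # agent.update(obs, action, reward, next_obs)  # learner update would go here
        obs = next_obs
        done = terminated or truncated

env.close()
```

In this kind of setup, the trigger needs no out-of-distribution pattern (such as a patch pasted onto observations); it is simply a state configuration the environment already produces, which is what makes such attacks harder to detect by screening inputs.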

