Snooping Attacks on Deep Reinforcement Learning

05/28/2019
by Matthew Inkawhich, et al.

Adversarial attacks have exposed a significant security vulnerability in state-of-the-art machine learning models, including deep reinforcement learning agents. Existing methods for attacking reinforcement learning agents assume that the adversary has access either to the target agent's learned parameters or to the environment the agent interacts with. In this work, we propose a new class of threat models, called snooping threat models, that are unique to reinforcement learning. Under a snooping threat model, the adversary cannot interact with the environment directly and can only eavesdrop on the action and reward signals exchanged between the agent and the environment. We show that adversaries operating under these highly constrained threat models can still launch devastating attacks against the target agent by training proxy models on related tasks and leveraging the transferability of adversarial examples.
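
To make the threat model concrete, below is a minimal, self-contained sketch (not from the paper) of an adversary that can only tap the action/reward channel. The 4-armed bandit task and the names `SnoopedChannel`, `run_target_episode`, and `estimate_action_values` are hypothetical illustrations; the paper's actual attack trains proxy deep RL agents on related tasks and transfers adversarial examples to the target, rather than estimating action values directly.

```python
import random


class SnoopedChannel:
    """Passive eavesdropper: records the (action, reward) signals
    exchanged between the target agent and its environment, without
    ever stepping the environment itself."""

    def __init__(self):
        self.log = []  # observed (action, reward) pairs

    def observe(self, action, reward):
        # The snooper only taps the channel; it cannot query the
        # environment or read the target agent's parameters.
        self.log.append((action, reward))


def run_target_episode(channel, steps=1000):
    """Toy stand-in for the victim's interaction loop: a 4-armed
    bandit where the agent acts and the environment returns a reward."""
    for _ in range(steps):
        action = random.randrange(4)          # target agent's action
        reward = 1.0 if action == 2 else 0.0  # environment's reward signal
        channel.observe(action, reward)       # adversary eavesdrops


def estimate_action_values(log, n_actions=4):
    """Build a crude proxy estimate of the task's action values from
    the snooped signals alone, illustrating how much task structure
    leaks through the action/reward channel."""
    totals, counts = [0.0] * n_actions, [0] * n_actions
    for action, reward in log:
        totals[action] += reward
        counts[action] += 1
    return [t / c if c else 0.0 for t, c in zip(totals, counts)]


channel = SnoopedChannel()
run_target_episode(channel)
print(estimate_action_values(channel.log))
```

Even this crude eavesdropper recovers which actions the task rewards; the paper's point is that a stronger adversary can use such leaked signals to train proxy models whose adversarial examples transfer to the target agent.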


