Sequential Attacks on Agents for Long-Term Adversarial Goals

05/31/2018
by Edgar Tretschk, et al.

Reinforcement learning (RL) has advanced greatly in the past few years with the adoption of deep neural networks (DNNs) as policy networks. This effectiveness, however, comes with a serious vulnerability of DNNs: small adversarial perturbations of the input can change the output of the network. Several works have shown that learned agents with a DNN policy network can be manipulated, through a sequence of small perturbations on the input states, so that they fail to achieve their original task. In this paper, we demonstrate that it is furthermore possible to impose an arbitrary adversarial reward on the victim policy network through a sequence of attacks. Our method builds on a recent adversarial attack technique, the Adversarial Transformer Network (ATN), which learns to generate the attack and is easy to integrate into the policy network. As a result of our attack, the victim agent is misguided to optimise for the adversarial reward over time. Our results expose serious security threats for RL applications in safety-critical systems including drones, medical analysis, and self-driving cars.
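To make the setup concrete, the following is a minimal, hypothetical PyTorch sketch (not the authors' implementation): an ATN-style feed-forward generator produces a small, norm-bounded perturbation that is added to each observed state before it reaches a frozen victim policy, so the attack composes with the policy as a simple wrapper. The generator's parameters would then be trained so that the victim's induced behaviour maximises the adversarial reward; all class names, dimensions, and the epsilon bound below are illustrative assumptions.

# Hypothetical sketch, assuming a PyTorch victim policy; not the authors' code.
import torch
import torch.nn as nn

class PerturbationGenerator(nn.Module):
    """Feed-forward network producing an L-infinity-bounded state perturbation."""
    def __init__(self, state_dim: int, hidden_dim: int = 64, epsilon: float = 0.01):
        super().__init__()
        self.epsilon = epsilon
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, state_dim),
            nn.Tanh(),                       # outputs in (-1, 1)
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.epsilon * self.net(state)  # scale to [-epsilon, epsilon]

class AttackedPolicy(nn.Module):
    """Victim policy with the generator prepended; only the generator is trainable."""
    def __init__(self, generator: nn.Module, victim_policy: nn.Module):
        super().__init__()
        self.generator = generator
        self.victim = victim_policy
        for p in self.victim.parameters():   # freeze the victim policy network
            p.requires_grad_(False)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        perturbed = state + self.generator(state)
        return self.victim(perturbed)

# Usage: the generator would be trained (loop omitted) so that the victim,
# acting on perturbed states, accumulates the adversarial reward over time.
state_dim, action_dim = 8, 4
victim = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, action_dim))
attacked = AttackedPolicy(PerturbationGenerator(state_dim), victim)
logits = attacked(torch.randn(1, state_dim))  # action logits under attack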

Related research

12/23/2017 · Whatever Does Not Kill Deep Reinforcement Learning, Makes It Stronger
Recent developments have established the vulnerability of deep Reinforce...

06/21/2021 · Policy Smoothing for Provably Robust Reinforcement Learning
The study of provable adversarial robustness for deep neural network (DN...

10/24/2017 · One pixel attack for fooling deep neural networks
Recent research has revealed that the output of Deep Neural Networks (DN...

08/05/2020 · Robust Deep Reinforcement Learning through Adversarial Loss
Deep neural networks, including reinforcement learning agents, have been...

07/24/2023 · An Estimator for the Sensitivity to Perturbations of Deep Neural Networks
For Deep Neural Networks (DNNs) to become useful in safety-critical appl...

04/15/2019 · Are Self-Driving Cars Secure? Evasion Attacks against Deep Neural Networks for Steering Angle Prediction
Deep Neural Networks (DNNs) have tremendous potential in advancing the v...

11/13/2018 · Deep Q learning for fooling neural networks
Deep learning models are vulnerable to external attacks. In this paper, ...
