
Targeted Adversarial Attacks on Deep Reinforcement Learning Policies via Model Checking

12/10/2022
by   Dennis Gross, et al.
Radboud Universiteit

Deep Reinforcement Learning (RL) agents are susceptible to adversarial noise in their observations that can mislead their policies and decrease their performance. However, an adversary may be interested not only in decreasing the reward, but also in modifying specific temporal logic properties of the policy. This paper presents a metric that measures the exact impact of adversarial attacks against such properties. We use this metric to craft optimal adversarial attacks. Furthermore, we introduce a model checking method that allows us to verify the robustness of RL policies against adversarial attacks. Our empirical analysis confirms (1) the quality of our metric to craft adversarial attacks against temporal logic properties, and (2) that we are able to concisely assess a system's robustness against attacks.
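The observation-attack setting described in the abstract can be illustrated with a minimal sketch. Everything here is a hypothetical toy example (a linear policy and a brute-force search over bounded perturbations), not the paper's method: the paper crafts attacks by optimizing a model-checking-based metric over temporal logic properties, which is beyond a short snippet.

```python
import numpy as np

# Hypothetical toy policy: pick the action with the largest score W @ obs.
W = np.array([[1.0, -0.5],
              [0.2,  0.8]])

def policy_action(obs):
    """Greedy action of the toy linear policy."""
    return int(np.argmax(W @ obs))

def craft_attack(obs, target_action, eps=0.5, steps=21):
    """Brute-force search for adversarial noise within an L-infinity
    ball of radius eps that makes the policy choose target_action.
    Returns the perturbed observation, or None if no attack is found."""
    grid = np.linspace(-eps, eps, steps)
    for d0 in grid:
        for d1 in grid:
            noisy = obs + np.array([d0, d1])
            if policy_action(noisy) == target_action:
                return noisy
    return None

obs = np.array([1.0, 0.0])
clean = policy_action(obs)          # unperturbed decision
adv = craft_attack(obs, target_action=1)
```

For this toy policy the clean action on `obs` is 0, and a perturbation of magnitude at most 0.5 suffices to flip the decision to action 1, which is exactly the kind of targeted misleading the abstract refers to. The paper's contribution is replacing this naive objective (flip one action) with a metric on temporal logic properties of the whole policy.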


Related research

05/29/2019 · Targeted Attacks on Deep Reinforcement Learning Agents through Adversarial Observations
This paper deals with adversarial attacks on perceptions of neural netwo...

10/13/2022 · Observed Adversaries in Deep Reinforcement Learning
In this work, we point out the problem of observed adversaries for deep ...

06/28/2019 · Learning to Cope with Adversarial Attacks
The security of Deep Reinforcement Learning (Deep RL) algorithms deploye...

01/03/2022 · Actor-Critic Network for Q&A in an Adversarial Environment
Significant work has been placed in the Q&A NLP space to build models ...

03/05/2020 · Detection and Recovery of Adversarial Attacks with Injected Attractors
Many machine learning adversarial attacks find adversarial samples of a ...

02/07/2020 · Manipulating Reinforcement Learning: Poisoning Attacks on Cost Signals
This chapter studies emerging cyber-attacks on reinforcement learning (R...

06/17/2021 · CROP: Certifying Robust Policies for Reinforcement Learning through Functional Smoothing
We present the first framework of Certifying Robust Policies for reinfor...