Adversarial Reinforcement Learning under Partial Observability in Software-Defined Networking

02/25/2019
by Yi Han, et al.

Recent studies have demonstrated that reinforcement learning (RL) agents are susceptible to adversarial manipulation, similar to vulnerabilities previously demonstrated in the supervised setting. To date, however, most of this work has focused on computer vision tasks and fully observable environments. This paper focuses on reinforcement learning in the context of autonomous defence in Software-Defined Networking (SDN). We demonstrate that causative attacks, i.e., attacks that target the training process, can poison RL agents even if the attacker has only partial observability of the environment. In addition, we propose an inversion defence method that applies the opposite of the perturbation an attacker might use to generate adversarial samples. Our experimental results illustrate that this countermeasure can effectively reduce the impact of the causative attack, while not significantly affecting the training process in non-attack scenarios.
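The abstract does not include code, but the inversion defence can be illustrated with a minimal sketch. Assuming an FGSM-style attacker that perturbs observed states by a step of size eps in the sign of a loss gradient, the defender applies the opposite perturbation (using its own gradient estimate) before the state reaches the training agent. The function names, the eps value, and the use of a gradient estimate supplied by the caller are all illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def fgsm_perturbation(grad, eps=0.05):
    # FGSM-style perturbation: a step of size eps in the
    # direction of the sign of the loss gradient.
    return eps * np.sign(grad)

def inversion_defence(observed_state, grad_estimate, eps=0.05):
    # Subtract the perturbation the attacker is assumed to have
    # added; exact recovery requires the defender's gradient
    # estimate and eps to match the attacker's (an idealisation).
    return observed_state - fgsm_perturbation(grad_estimate, eps)

# Illustrative example: a clean state is perturbed by the attacker,
# then the defence inverts the perturbation before training.
clean = np.array([0.1, -0.2, 0.3])
grad = np.array([1.0, -2.0, 0.5])          # shared gradient estimate
poisoned = clean + fgsm_perturbation(grad)  # attacker's sample
recovered = inversion_defence(poisoned, grad)
```

In non-attack scenarios the same subtraction is applied to clean states, which is why the paper measures whether the defence degrades normal training; in this idealised sketch the defence exactly undoes the attack only when both sides use the same gradient estimate and step size.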

Related research:

- 08/17/2018: Reinforcement Learning for Autonomous Defence in Software-Defined Networking
  Despite the successful application of machine learning (ML) in a wide ra...

- 08/09/2023: Adversarial Deep Reinforcement Learning for Cyber Security in Software Defined Networks
  This paper focuses on the impact of leveraging autonomous offensive appr...

- 10/12/2022: Efficient Adversarial Training without Attacking: Worst-Case-Aware Robust Reinforcement Learning
  Recent studies reveal that a well-trained deep reinforcement learning (R...

- 07/29/2022: Sampling Attacks on Meta Reinforcement Learning: A Minimax Formulation and Complexity Analysis
  Meta reinforcement learning (meta RL), as a combination of meta-learning...

- 09/06/2019: Blackbox Attacks on Reinforcement Learning Agents Using Approximated Temporal Information
  Recent research on reinforcement learning has shown that trained agents ...

- 07/20/2022: Illusionary Attacks on Sequential Decision Makers and Countermeasures
  Autonomous intelligent agents deployed to the real-world need to be robu...

- 09/16/2021: Targeted Attack on Deep RL-based Autonomous Driving with Learned Visual Patterns
  Recent studies demonstrated the vulnerability of control policies learne...
