Learning to Cope with Adversarial Attacks

06/28/2019
by Xian Yeow Lee, et al.

The security of Deep Reinforcement Learning (Deep RL) algorithms deployed in real-life applications is of primary concern. In particular, the robustness of RL agents in cyber-physical systems against adversarial attacks is especially vital, since the cost of a malevolent intrusion can be extremely high. Studies have shown that Deep Neural Networks (DNNs), which form the core decision-making unit in most modern RL algorithms, are easily subjected to adversarial attacks. Hence, it is imperative that RL agents deployed in real-life applications have the capability to detect and mitigate adversarial attacks in an online fashion. An example of such a framework is the Meta-Learned Advantage Hierarchy (MLAH) agent, which utilizes a meta-learning framework to learn policies robustly online. Since the mechanisms of this framework are still not fully explored, we conducted multiple experiments to better understand its capabilities and limitations. Our results show that the MLAH agent exhibits interesting coping behaviors to maintain a nominal reward when subjected to different adversarial attacks. Additionally, the framework exhibits a hierarchical coping capability, based on the adaptability of the master policy and the sub-policies themselves. From the empirical results, we also observed that as the interval between adversarial attacks increases, the MLAH agent maintains a higher distribution of rewards, though at the cost of higher instability.
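
The abstract describes MLAH as a hierarchy in which a master policy arbitrates between sub-policies using advantage signals. The Python sketch below is a minimal, hypothetical illustration of that switching idea only; names such as MLAHSketch, SubPolicy, and update_master are illustrative assumptions, and the hard advantage threshold stands in for the learned master policy used in the actual framework.

```python
# Minimal sketch of an advantage-based master/sub-policy switch.
# Hypothetical names and logic; not the authors' implementation, which
# trains both levels with policy-gradient updates rather than these stubs.
import numpy as np


class SubPolicy:
    """Placeholder for a learned low-level policy (e.g., a small MLP)."""

    def __init__(self, n_actions, rng):
        self.n_actions = n_actions
        self.rng = rng

    def act(self, obs):
        # A trained sub-policy would map observations to actions;
        # here we sample uniformly just to keep the sketch runnable.
        return self.rng.integers(self.n_actions)


class MLAHSketch:
    """Master switches sub-policies based on an observed advantage signal.

    Intuition: under nominal conditions the nominal sub-policy's advantage
    stays near zero; a sustained negative advantage suggests corrupted
    observations, so control is handed to the coping sub-policy.
    """

    def __init__(self, n_actions, threshold=-0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.sub_policies = {
            "nominal": SubPolicy(n_actions, rng),
            "coping": SubPolicy(n_actions, rng),
        }
        self.threshold = threshold  # advantage level that triggers a switch
        self.active = "nominal"

    def update_master(self, advantage_estimate):
        # Hypothetical hard-threshold rule standing in for the learned
        # master policy.
        self.active = "coping" if advantage_estimate < self.threshold else "nominal"

    def act(self, obs):
        return self.sub_policies[self.active].act(obs)


# Usage: feed per-step advantage estimates from the critic into the master.
agent = MLAHSketch(n_actions=4)
agent.update_master(advantage_estimate=-1.2)  # large negative advantage: attack suspected
action = agent.act(obs=np.zeros(8))
print(agent.active, action)
```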


Related research

Sparse Adversarial Attack in Multi-agent Reinforcement Learning (05/19/2022)
Cooperative multi-agent reinforcement learning (cMARL) has many real app...

Targeted Adversarial Attacks on Deep Reinforcement Learning Policies via Model Checking (12/10/2022)
Deep Reinforcement Learning (RL) agents are susceptible to adversarial n...

Online Robust Policy Learning in the Presence of Unknown Adversaries (07/16/2018)
The growing prospect of deep reinforcement learning (DRL) being used in ...

Targeted Attacks on Deep Reinforcement Learning Agents through Adversarial Observations (05/29/2019)
This paper deals with adversarial attacks on perceptions of neural netwo...

Adversarial Socialbot Learning via Multi-Agent Deep Hierarchical Reinforcement Learning (10/20/2021)
Socialbots are software-driven user accounts on social platforms, acting...

Manipulating Reinforcement Learning: Poisoning Attacks on Cost Signals (02/07/2020)
This chapter studies emerging cyber-attacks on reinforcement learning (R...

Towards Comprehensive Testing on the Robustness of Cooperative Multi-agent Reinforcement Learning (04/17/2022)
While deep neural networks (DNNs) have strengthened the performance of c...
