Sparse Adversarial Attack in Multi-agent Reinforcement Learning

05/19/2022
by Yizheng Hu, et al.

Cooperative multi-agent reinforcement learning (cMARL) has many real-world applications, but the policies trained by existing cMARL algorithms are not robust enough when deployed. Many methods for adversarial attacks on RL systems also exist, which implies that RL systems are vulnerable to such attacks, but most of them focus on single-agent RL. In this paper, we propose a sparse adversarial attack on cMARL systems. We use (MA)RL with regularization to train the attack policy. Our experiments show that policies trained by current cMARL algorithms perform poorly when only one or a few agents in the team (e.g., 1 of 8 or 5 of 25) are attacked at a few timesteps (e.g., 3 of 40 total timesteps).
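The abstract describes training the attack policy with (MA)RL plus a regularization term that keeps the attack sparse, i.e., limited to a few agents and timesteps. A minimal sketch of what such a sparsity-regularized attacker objective could look like is below; the penalty form (an L0-style per-attack cost) and the weight `lam` are assumptions for illustration, not the paper's exact formulation:

```python
# Hedged sketch of a sparsity-regularized attack objective (assumed form).
# The attacker is rewarded for lowering the victim team's reward, but pays
# a fixed penalty `lam` for every timestep on which it actually launches
# an attack, which encourages attacking at only a few timesteps.

def attacker_return(team_rewards, attack_mask, lam=0.5):
    """Return the attacker's regularized episode return.

    team_rewards: per-timestep reward of the victim cMARL team.
    attack_mask:  1 if an agent was perturbed at that timestep, else 0.
    lam:          sparsity regularization weight (hypothetical value).
    """
    assert len(team_rewards) == len(attack_mask)
    # Negate the team reward (zero-sum attacker) and penalize each attack.
    return sum(-r - lam * a for r, a in zip(team_rewards, attack_mask))
```

For example, attacking at 1 of 3 timesteps against a team earning reward 1.0 per step gives `attacker_return([1.0, 1.0, 1.0], [0, 1, 0], lam=0.5)` = -3.5: the attacker would maximize this return only when the damage from an attack outweighs its sparsity cost.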


