Efficient Adversarial Attacks on Online Multi-agent Reinforcement Learning

07/15/2023
by Guanlin Liu, et al.

Due to the broad range of applications of multi-agent reinforcement learning (MARL), understanding the effects of adversarial attacks against MARL models is essential for their safe deployment. Motivated by this, we investigate the impact of adversarial attacks on MARL. In the considered setup, an exogenous attacker can modify the rewards before the agents receive them, or manipulate the actions before the environment receives them. The attacker aims to guide each agent into a target policy, or to maximize the cumulative rewards under some specific reward function chosen by the attacker, while minimizing the amount of manipulation of feedback and actions. We first show the limitations of action-poisoning-only attacks and of reward-poisoning-only attacks. We then introduce a mixed attack strategy that combines action poisoning and reward poisoning, and show that it can efficiently attack MARL agents even when the attacker has no prior information about the underlying environment or the agents' algorithms.
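To make the threat model concrete, here is a minimal Python sketch of such a mixed attack, written as a wrapper that sits between the agents and a multi-agent environment. It is an illustration under assumptions, not the paper's algorithm: the `MixedAttackWrapper` name, the gym-style `env.step(actions)` interface, the `target_policy` callable, and the simple penalty-based reward poisoning are all hypothetical choices made for this sketch.

```python
class MixedAttackWrapper:
    """Hypothetical sketch of a mixed action/reward poisoning attacker.

    The wrapper intercepts the interaction loop: it may swap an agent's
    chosen action for the attacker's target action before the environment
    sees it (action poisoning), and it may perturb the rewards before the
    agents see them (reward poisoning), so that the attacker-chosen target
    policy looks optimal from each agent's feedback.
    """

    def __init__(self, env, target_policy, penalty=1.0):
        self.env = env                      # underlying multi-agent environment
        self.target_policy = target_policy  # maps (agent_id, state) -> target action
        self.penalty = penalty              # reward margin against deviating agents
        self.action_corruptions = 0         # bookkeeping for the attack cost
        self.reward_corruptions = 0

    def step(self, state, actions):
        targets = [self.target_policy(i, state) for i in range(len(actions))]

        # Action poisoning: override any action that deviates from the target
        # before the environment receives it.
        poisoned_actions = list(actions)
        for i, (a, t) in enumerate(zip(actions, targets)):
            if a != t:
                poisoned_actions[i] = t
                self.action_corruptions += 1

        next_state, rewards, done, info = self.env.step(poisoned_actions)

        # Reward poisoning: penalize agents that deviated from the target
        # policy before they receive their rewards.
        poisoned_rewards = list(rewards)
        for i, (a, t) in enumerate(zip(actions, targets)):
            if a != t:
                poisoned_rewards[i] = rewards[i] - self.penalty
                self.reward_corruptions += 1

        return next_state, poisoned_rewards, done, info
```

In this sketch the attacker pays one unit of attack cost per corrupted action or reward, tracked by the two counters; an efficient attack in the sense of the abstract would keep these corruption counts small relative to the total number of interaction steps.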


