Adversarial attacks in consensus-based multi-agent reinforcement learning

03/11/2021
by Martin Figura, et al.

Many cooperative distributed multi-agent reinforcement learning (MARL) algorithms have recently been proposed in the literature. In this work, we study the effect of adversarial attacks on a network that employs a consensus-based MARL algorithm. We show that an adversarial agent can persuade all the other agents in the network to implement policies that optimize an objective of its own choosing. In this sense, standard consensus-based MARL algorithms are vulnerable to such attacks.
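To illustrate the fragility the abstract describes, here is a minimal sketch (not the paper's actual algorithm) of plain average consensus over a complete graph: honest agents repeatedly average the network's values, while a single adversarial agent never updates and keeps broadcasting its own target. Every honest agent is eventually dragged to the adversary's value. The agent count, target value, and update rule are illustrative assumptions.

```python
import numpy as np

def consensus_with_adversary(n_agents=5, adversary=0, target=10.0, steps=200):
    """Average consensus on a complete graph with one stubborn agent.

    Honest agents replace their value with the network average each round;
    the adversarial agent ignores the update and always reports `target`.
    """
    x = np.zeros(n_agents)
    x[adversary] = target
    for _ in range(steps):
        new_x = np.full(n_agents, x.mean())  # honest update: average everyone
        new_x[adversary] = target            # adversary never moves
        x = new_x
    return x

print(consensus_with_adversary())  # all honest agents converge to 10.0
```

Because the honest average satisfies m_{t+1} = 2 + 0.8 m_t here, its unique fixed point is the adversary's target, which is why a single non-cooperative agent suffices to steer the whole network.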

Related research:
- Enhancing the Robustness of QMIX against State-adversarial Attacks (07/03/2023)
- Resilient Consensus-based Multi-agent Reinforcement Learning (11/12/2021)
- Adversarial Attacks On Multi-Agent Communication (01/17/2021)
- Quantum-Secure Authentication via Abstract Multi-Agent Interaction (07/18/2020)
- Distributed Cooperative Multi-Agent Reinforcement Learning with Directed Coordination Graph (01/10/2022)
- An Algorithm For Adversary Aware Decentralized Networked MARL (05/09/2023)
- Learning to Collaborate by Grouping: a Consensus-oriented Strategy for Multi-agent Reinforcement Learning (07/28/2023)
