Certified Policy Smoothing for Cooperative Multi-Agent Reinforcement Learning

by Ronghui Mu, et al.

Cooperative multi-agent reinforcement learning (c-MARL) is widely applied in safety-critical scenarios, so analysing the robustness of c-MARL models is profoundly important. However, robustness certification for c-MARL has not yet been explored in the community. In this paper, we propose a novel certification method, the first scalable approach for c-MARL that determines actions with guaranteed certified bounds. Compared with single-agent systems, c-MARL certification poses two key challenges: (i) the uncertainty that accumulates as the number of agents increases; and (ii) the potentially negligible impact that changing a single agent's action has on the global team reward. These challenges prevent us from directly applying existing algorithms. Hence, we employ the false discovery rate (FDR) controlling procedure, accounting for the importance of each agent, to certify per-state robustness, and we propose a tree-search-based algorithm to find a lower bound on the global reward under the minimal certified perturbation. As our method is general, it can also be applied in single-agent environments. We empirically show that our certification bounds are much tighter than those of state-of-the-art RL certification solutions. We also run experiments on two popular c-MARL algorithms, QMIX and VDN, in two different environments, with two and four agents. The experimental results show that our method produces meaningful guaranteed robustness for all models and environments. Our tool CertifyCMARL is available at https://github.com/TrustAI/CertifyCMA
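The policy smoothing the abstract refers to follows the usual randomized-smoothing recipe: perturb each agent's observation with Gaussian noise, query the base policy many times, and act on the majority vote. A minimal sketch of that voting step, assuming a discrete-action policy; the names `policy`, `sigma`, and `n_samples` are illustrative, not the paper's API, and a real certificate would additionally bound the vote probabilities (e.g. via a Clopper-Pearson interval) rather than just take the plurality action:

```python
import random

def smoothed_action(policy, obs, sigma=0.2, n_samples=200, seed=0):
    """Plurality-vote action of a Gaussian-smoothed policy (sketch).

    policy: callable mapping an observation (list of floats) to a
            discrete action; obs: the clean observation vector.
    """
    rng = random.Random(seed)
    counts = {}
    for _ in range(n_samples):
        # Perturb every observation coordinate with i.i.d. Gaussian noise.
        noisy = [o + rng.gauss(0.0, sigma) for o in obs]
        a = policy(noisy)
        counts[a] = counts.get(a, 0) + 1
    # Return the most frequently selected action under noise.
    return max(counts, key=counts.get)
```

With a toy threshold policy and small `sigma`, the vote overwhelmingly recovers the clean action, which is what makes the smoothed policy certifiably stable to small observation perturbations.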
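The FDR controlling procedure mentioned above is needed because certifying a joint action means running one statistical test per agent, and the error budget must be shared across those tests. A standard way to do this is the Benjamini-Hochberg step-up procedure; the sketch below is the textbook version on plain p-values, not the paper's importance-weighted variant:

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Benjamini-Hochberg step-up procedure (sketch).

    Returns a list of booleans, True where the corresponding null
    hypothesis is rejected, controlling the FDR at level alpha.
    """
    m = len(p_values)
    # Sort hypothesis indices by ascending p-value.
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest rank k with p_(k) <= (k / m) * alpha.
    k = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * alpha:
            k = rank
    # Reject the k hypotheses with the smallest p-values.
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k:
            reject[i] = True
    return reject
```

In the c-MARL setting each p-value would come from one agent's per-state robustness test; the step-up rule is less conservative than a plain Bonferroni split of alpha across agents, which matters as the number of agents grows.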




Related papers

- On the Robustness of Cooperative Multi-Agent Reinforcement Learning
- Agent-Temporal Attention for Reward Redistribution in Episodic Multi-Agent Reinforcement Learning
- Towards Comprehensive Testing on the Robustness of Cooperative Multi-agent Reinforcement Learning
- Decentralized Multi-Agent Reinforcement Learning for Task Offloading Under Uncertainty
- Robust Multi-Agent Reinforcement Learning with State Uncertainty
- ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate
- A Multi-Agent Reinforcement Learning Framework for Off-Policy Evaluation in Two-sided Markets
