Normative Disagreement as a Challenge for Cooperative AI

11/27/2021
by Julian Stastny, et al.

Cooperation in settings where agents have both common and conflicting interests (mixed-motive environments) has recently received considerable attention in multi-agent learning. However, the mixed-motive environments typically studied have a single cooperative outcome on which all agents can agree. Many real-world multi-agent environments are instead bargaining problems (BPs): they have several Pareto-optimal payoff profiles over which agents have conflicting preferences. We argue that typical cooperation-inducing learning algorithms fail to cooperate in BPs when there is room for normative disagreement, which results in multiple competing cooperative equilibria, and we illustrate this problem empirically. To remedy the issue, we introduce the notion of norm-adaptive policies. Norm-adaptive policies are capable of behaving according to different norms in different circumstances, creating opportunities for resolving normative disagreement. We develop a class of norm-adaptive policies and show in experiments that these significantly increase cooperation. However, norm-adaptiveness cannot address residual bargaining failure arising from a fundamental tradeoff between exploitability and cooperative robustness.
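To make the failure mode concrete, here is a minimal sketch (not the paper's implementation) of a bargaining problem: an asymmetric coordination game with two Pareto-optimal equilibria that the players rank oppositely. The payoff values, the `rigid_norm` and `norm_adaptive` policies, and the `play` helper are all illustrative assumptions; they only demonstrate how rigid norms miscoordinate while a norm-adaptive partner can resolve the disagreement.

```python
# Payoff matrix for a toy bargaining problem (a "Bach or Stravinsky"-style
# game, chosen for illustration). Both (A, A) and (B, B) are Pareto-optimal
# cooperative outcomes, but player 1 prefers (A, A) and player 2 prefers (B, B).
PAYOFFS = {
    ("A", "A"): (2, 1),
    ("B", "B"): (1, 2),
    ("A", "B"): (0, 0),
    ("B", "A"): (0, 0),
}

def rigid_norm(preferred):
    """A policy that always demands its preferred equilibrium."""
    return lambda partner_last: preferred

def norm_adaptive(preferred):
    """A policy that opens with its preferred norm but thereafter copies
    whatever its partner played last, conceding to resolve disagreement."""
    return lambda partner_last: partner_last if partner_last else preferred

def play(p1, p2, rounds=10):
    """Repeatedly play the stage game, feeding each policy the partner's
    last action; returns cumulative payoffs [total1, total2]."""
    last1 = last2 = None
    total = [0, 0]
    for _ in range(rounds):
        a1, a2 = p1(last2), p2(last1)
        r1, r2 = PAYOFFS[(a1, a2)]
        total[0] += r1
        total[1] += r2
        last1, last2 = a1, a2
    return total

# Two rigid agents with conflicting norms never coordinate:
print(play(rigid_norm("A"), rigid_norm("B")))    # -> [0, 0]
# A norm-adaptive agent concedes after one round of miscoordination:
print(play(norm_adaptive("A"), rigid_norm("B")))  # -> [9, 18]
```

The second result also hints at the tradeoff the abstract mentions: the adaptive agent cooperates robustly but is exploitable, since a rigid partner captures the equilibrium it prefers.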


