Stubborn: An Environment for Evaluating Stubbornness between Agents with Aligned Incentives

04/24/2023
by Ram Rachum et al.

Recent research in multi-agent reinforcement learning (MARL) has shown success in learning social behavior and cooperation. Social dilemmas between agents in mixed-sum settings have been studied extensively, but there is little research into social dilemmas in fully cooperative settings, where agents have no prospect of gaining reward at another agent's expense. While fully aligned interests are conducive to cooperation between agents, they do not guarantee it. We propose a measure of "stubbornness" between agents that aims to capture the human social behavior from which it takes its name: a gradually escalating and potentially disastrous disagreement. We would like to promote research into the tendency of agents to be stubborn, the reactions of counterpart agents, and the resulting social dynamics. In this paper, we present Stubborn, an environment for evaluating stubbornness between agents with fully aligned incentives. In our preliminary results, the agents learn to use their partner's stubbornness as a signal to improve the choices they make in the environment.
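The abstract does not include implementation details of the Stubborn environment. Purely as an illustration of the kind of setting it describes, below is a minimal sketch of a two-agent, common-reward environment in which a disagreement escalates the longer both agents insist on conflicting choices. It assumes a Gym-style reset/step interface; the class name, dynamics, and parameters are hypothetical and are not taken from the paper.

```python
# Hypothetical sketch only: not the paper's Stubborn environment.
# Two agents share one reward (fully aligned incentives). Each round they
# each pick one of two options; if they disagree, the round repeats and a
# shared penalty grows until one agent yields, mimicking a gradually
# escalating, potentially disastrous disagreement.
import random


class ToyStubbornEnv:
    """Two-agent, common-reward environment with an escalating disagreement cost."""

    def __init__(self, max_rounds=10, escalation=1.0, prize=10.0):
        self.max_rounds = max_rounds  # rounds before the disagreement becomes "disastrous"
        self.escalation = escalation  # penalty added for every round of disagreement
        self.prize = prize            # shared reward once the agents agree
        self.reset()

    def reset(self):
        self.round = 0
        self.accumulated_penalty = 0.0
        # Observation: how long the current disagreement has lasted.
        return {"agent_0": self.round, "agent_1": self.round}

    def step(self, actions):
        """actions: dict mapping agent id -> 0 or 1 (which option to insist on)."""
        self.round += 1
        if actions["agent_0"] == actions["agent_1"]:
            # Agreement: both agents receive the same shared reward,
            # reduced by however long they were stubborn.
            reward = self.prize - self.accumulated_penalty
            done = True
        else:
            # Disagreement escalates: the shared penalty grows each round.
            self.accumulated_penalty += self.escalation
            reward = 0.0
            done = self.round >= self.max_rounds  # disastrous outcome: episode ends with nothing
        obs = {"agent_0": self.round, "agent_1": self.round}
        rewards = {"agent_0": reward, "agent_1": reward}  # fully aligned incentives
        return obs, rewards, done


if __name__ == "__main__":
    env = ToyStubbornEnv()
    obs, done = env.reset(), False
    while not done:
        # Random policies, just to exercise the interface.
        actions = {"agent_0": random.choice([0, 1]), "agent_1": random.choice([0, 1])}
        obs, rewards, done = env.step(actions)
        print(obs, rewards, done)
```

In such a setting, the length of the ongoing disagreement is observable to both agents, so a learned policy could treat the partner's persistence as a signal, in the spirit of the preliminary result described above.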


Related research

07/03/2023
Theory of Mind as Intrinsic Motivation for Multi-Agent Reinforcement Learning
The ability to model the mental states of others is crucial to human soc...

06/14/2023
Mediated Multi-Agent Reinforcement Learning
The majority of Multi-Agent Reinforcement Learning (MARL) literature equ...

04/15/2022
The Importance of Credo in Multiagent Learning
We propose a model for multi-objective optimization, a credo, for agents...

10/23/2022
A Cooperative Reinforcement Learning Environment for Detecting and Penalizing Betrayal
In this paper we present a Reinforcement Learning environment that lever...

06/09/2021
Deception in Social Learning: A Multi-Agent Reinforcement Learning Perspective
Within the framework of Multi-Agent Reinforcement Learning, Social Learn...

02/03/2021
Improved Cooperation by Exploiting a Common Signal
Can artificial agents benefit from human conventions? Human societies ma...

02/15/2021
Cooperation and Reputation Dynamics with Reinforcement Learning
Creating incentives for cooperation is a challenge in natural and artifi...
