Emergent Reciprocity and Team Formation from Randomized Uncertain Social Preferences

11/10/2020
by Bowen Baker, et al.

Multi-agent reinforcement learning (MARL) has shown recent success in increasingly complex fixed-team zero-sum environments. However, the real world is neither zero-sum nor organized into fixed teams; humans face numerous social dilemmas and must learn when to cooperate and when to compete. To successfully deploy agents into the human world, it may be important that they be able to understand and help in our conflicts. Unfortunately, selfish MARL agents typically fail when faced with social dilemmas. In this work, we show evidence of emergent direct reciprocity, indirect reciprocity and reputation, and team formation when training agents with randomized uncertain social preferences (RUSP), a novel environment augmentation that expands the distribution of environments agents play in. RUSP is generic and scalable; it can be applied to any multi-agent environment without changing the original underlying game dynamics or objectives. In particular, we show that with RUSP these behaviors can emerge and lead to higher social welfare equilibria both in classic abstract social dilemmas, such as the Iterated Prisoner's Dilemma, and in more complex intertemporal environments.
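The abstract describes RUSP as a reward-level environment augmentation: each episode, agents train against a randomly sampled, partially observed set of social preferences over one another's rewards, while the environment's own dynamics stay fixed. The sketch below is a minimal illustration of that idea in Python, assuming a generic multi-agent interface whose step() returns a per-agent reward vector. The wrapper name RUSPRewardWrapper, the uniform sampling of the preference matrix, and the Gaussian noise model are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

class RUSPRewardWrapper:
    """Sketch of a RUSP-style augmentation: per-episode randomized,
    uncertain social preferences applied as a reward transformation."""

    def __init__(self, env, n_agents, noise_scale=1.0, seed=0):
        self.env = env
        self.n = n_agents
        self.noise_scale = noise_scale
        self.rng = np.random.default_rng(seed)

    def reset(self):
        # Sample a per-episode social preference matrix T: agent i's training
        # reward becomes a weighted sum over all agents' environment rewards.
        T = self.rng.uniform(0.0, 1.0, size=(self.n, self.n))
        np.fill_diagonal(T, 1.0)                   # each agent always weights its own reward
        self.T = T / T.sum(axis=1, keepdims=True)  # normalize rows to preserve reward scale
        # Each agent only observes a noisy estimate of its preferences
        # (the "uncertain" part of RUSP).
        noise = self.rng.normal(0.0, self.noise_scale, size=self.T.shape)
        self.noisy_T = self.T + noise
        obs = self.env.reset()
        return self._augment_obs(obs)

    def step(self, actions):
        obs, rewards, done, info = self.env.step(actions)
        rewards = np.asarray(rewards, dtype=np.float64)
        shaped = self.T @ rewards                  # mix rewards per sampled preferences
        return self._augment_obs(obs), shaped, done, info

    def _augment_obs(self, obs):
        # Append each agent's noisy view of its preference row to its observation.
        return [np.concatenate([np.ravel(o), self.noisy_T[i]])
                for i, o in enumerate(obs)]
```

Because the augmentation only reweights rewards and extends observations, the underlying game dynamics and objectives are untouched, which is what makes this kind of wrapper applicable to anything from the Iterated Prisoner's Dilemma to richer intertemporal environments.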


