Cooperation and Reputation Dynamics with Reinforcement Learning

02/15/2021
by Nicolas Anastassacos et al.

Creating incentives for cooperation is a challenge in natural and artificial systems. One potential answer is reputation, whereby agents trade the immediate cost of cooperation for the future benefits of having a good reputation. Game-theoretical models have shown that specific social norms can make cooperation stable, but how agents can independently learn to establish effective reputation mechanisms is less well understood. We use a simple model of reinforcement learning to show that reputation mechanisms generate two coordination problems: agents need to learn how to coordinate on the meaning of existing reputations, and to collectively agree on a social norm for assigning reputations to others based on their behavior. These coordination problems exhibit multiple equilibria, only some of which establish cooperation effectively. When we train agents with a standard Q-learning algorithm in an environment with reputation mechanisms, convergence to undesirable equilibria is widespread. We propose two mechanisms to alleviate this: (i) seeding a proportion of the system with fixed agents that steer others towards good equilibria; and (ii) intrinsic rewards based on the idea of introspection, i.e., augmenting agents' rewards by an amount proportionate to the performance of their own strategy against themselves. A combination of these simple mechanisms successfully stabilizes cooperation, even in a fully decentralized version of the problem where agents learn to use and assign reputations simultaneously. We show how our results relate to the literature in Evolutionary Game Theory, and discuss implications for artificial, human, and hybrid systems, where reputations can be used as a way to establish trust and cooperation.
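To make the two proposed mechanisms concrete, the following is a minimal sketch, not the paper's implementation: a tabular Q-learner conditions its action on a partner's reputation, the "introspection" bonus is modeled as a scaled payoff of an action played against itself (one possible reading of rewarding a strategy's performance against itself), and the fixed seed partner from mechanism (i) always cooperates with good-reputation agents. The payoff matrix, class name, and parameter values are all illustrative assumptions.

```python
import random

# Hypothetical one-shot donation-game payoffs:
# (my_action, partner_action) -> my payoff.
PAYOFFS = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 4, ("D", "D"): 1,
}

class IntrospectiveQLearner:
    """Tabular Q-learner whose state is the partner's reputation (0 = bad, 1 = good).

    The intrinsic 'introspection' bonus is sketched as beta times the payoff
    an action earns when played against itself -- an assumed formulation.
    """

    def __init__(self, lr=0.1, eps=0.1, beta=1.0):
        self.q = {(rep, a): 0.0 for rep in (0, 1) for a in ("C", "D")}
        self.lr, self.eps, self.beta = lr, eps, beta

    def greedy(self, rep):
        return max(("C", "D"), key=lambda a: self.q[(rep, a)])

    def act(self, partner_rep):
        if random.random() < self.eps:          # epsilon-greedy exploration
            return random.choice(("C", "D"))
        return self.greedy(partner_rep)

    def update(self, partner_rep, action, partner_action):
        extrinsic = PAYOFFS[(action, partner_action)]
        intrinsic = self.beta * PAYOFFS[(action, action)]  # introspection bonus
        target = extrinsic + intrinsic
        key = (partner_rep, action)
        self.q[key] += self.lr * (target - self.q[key])    # one-shot game: no bootstrap

random.seed(0)
agent = IntrospectiveQLearner()
# Fixed seed partner (mechanism (i)): always cooperates with good-reputation agents.
for _ in range(2000):
    action = agent.act(1)         # partner holds a good reputation
    agent.update(1, action, "C")  # seed partner cooperates
```

With these assumed payoffs and beta = 1, cooperating yields a total reward of 3 + 3 = 6 while defecting yields 4 + 1 = 5, so the introspection bonus offsets the temptation payoff and the learner's greedy policy against good-reputation partners settles on cooperation.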

