Model-free conventions in multi-agent reinforcement learning with heterogeneous preferences

by Raphael Koster, et al.

Game-theoretic views of convention generally rest on notions of common knowledge and hyper-rational models of individual behavior. However, decades of work in behavioral economics have questioned the validity of both foundations. Meanwhile, computational neuroscience has contributed a modernized 'dual process' account of decision-making in which model-free (MF) reinforcement learning trades off with model-based (MB) reinforcement learning. The former captures habitual and procedural learning, while the latter captures choices made via explicit planning and deduction. Some conventions (e.g. international treaties) are likely supported by cognition that resonates with the game-theoretic and MB accounts. However, convention formation may also occur via MF mechanisms like habit learning, though this possibility has been understudied. Here, we demonstrate that complex, large-scale conventions can emerge from MF learning mechanisms. This suggests that some conventions may be supported by habit-like cognition rather than explicit reasoning. We apply MF multi-agent reinforcement learning to a temporo-spatially extended game with incomplete information. In this game, large parts of the state space are reachable only through collective action. However, heterogeneity of tastes makes such coordinated action difficult: multiple equilibria are desirable for all players, but subgroups prefer a particular equilibrium over all others. This creates a coordination problem that can be solved by establishing a convention. We investigate start-up and free-rider subproblems as well as the effects of group size, intensity of intrinsic preference, and salience on the emergence dynamics of coordination conventions. Results of our simulations show that agents establish and switch between conventions, even working against their own preferred outcome when doing so is necessary for effective coordination.
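The core dynamic described in the abstract can be illustrated with a toy sketch: independent, model-free (tabular Q-learning) agents playing a repeated coordination game in which each subgroup has an intrinsic taste for a different option, but everyone earns a larger bonus for matching the majority. This is not the paper's environment; the group size, reward constants, and learning parameters below are illustrative assumptions only.

```python
import random

# Hypothetical toy setting, not the paper's game: N agents repeatedly pick
# option 0 or 1. Half intrinsically prefer 0, half prefer 1 (TASTE bonus),
# but matching the majority pays a larger COORD_BONUS, so a shared
# convention dominates individual taste.
N = 10
COORD_BONUS = 1.0   # reward for matching the majority choice
TASTE = 0.2         # smaller intrinsic-preference bonus
ALPHA, EPS = 0.1, 0.1  # learning rate and epsilon-greedy exploration

prefs = [0] * (N // 2) + [1] * (N - N // 2)
q = [[0.0, 0.0] for _ in range(N)]  # per-agent model-free action values

random.seed(0)
for step in range(5000):
    # Each agent acts independently from its own Q-values (model-free:
    # no agent models the others or plans over the game structure).
    acts = [random.randrange(2) if random.random() < EPS
            else max((0, 1), key=lambda a: q[i][a])
            for i in range(N)]
    majority = int(sum(acts) * 2 >= N)
    for i in range(N):
        r = (COORD_BONUS if acts[i] == majority else 0.0) \
            + (TASTE if acts[i] == prefs[i] else 0.0)
        q[i][acts[i]] += ALPHA * (r - q[i][acts[i]])

greedy = [max((0, 1), key=lambda a: q[i][a]) for i in range(N)]
print(greedy)
```

In runs of this sketch, all agents typically settle on the same action: even agents whose taste disagrees with the emerging convention learn to conform, because the coordination bonus outweighs their intrinsic preference, echoing the paper's finding that agents work against their own preferred outcome when coordination requires it.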




Off-Beat Multi-Agent Reinforcement Learning

We investigate model-free multi-agent reinforcement learning (MARL) in e...

Modeling the Formation of Social Conventions in Multi-Agent Populations

In order to understand the formation of social conventions we need to kn...

Inducing Stackelberg Equilibrium through Spatio-Temporal Sequential Decision-Making in Multi-Agent Reinforcement Learning

In multi-agent reinforcement learning (MARL), self-interested agents att...

Mastering the Game of Stratego with Model-Free Multiagent Reinforcement Learning

We introduce DeepNash, an autonomous agent capable of learning to play t...

Stackelberg Decision Transformer for Asynchronous Action Coordination in Multi-Agent Systems

Asynchronous action coordination presents a pervasive challenge in Multi...

Promoting Coordination through Policy Regularization in Multi-Agent Reinforcement Learning

A central challenge in multi-agent reinforcement learning is the inducti...

A Game-Theoretic Account of Responsibility Allocation

When designing or analyzing multi-agent systems, a fundamental problem i...
