COLA: Consistent Learning with Opponent-Learning Awareness

03/08/2022
by Timon Willi et al.

Learning in general-sum games can be unstable and often leads to socially undesirable, Pareto-dominated outcomes. To mitigate this, Learning with Opponent-Learning Awareness (LOLA) introduced opponent shaping to this setting by accounting for each agent's influence on the anticipated learning steps of the other agents. However, the original LOLA formulation (and follow-up work) is inconsistent because LOLA models other agents as naive learners rather than as LOLA agents. Previous work suggested this inconsistency as a cause of LOLA's failure to preserve stable fixed points (SFPs). First, we formalize consistency and show that higher-order LOLA (HOLA) solves LOLA's inconsistency problem if it converges. Second, we correct a claim made in the literature by proving that, contrary to Schäfer and Anandkumar (2019), Competitive Gradient Descent (CGD) does not recover HOLA as a series expansion. Hence, CGD also does not solve the consistency problem. Third, we propose a new method, Consistent LOLA (COLA), which learns update functions that are consistent under mutual opponent shaping. It requires no more than second-order derivatives and learns consistent update functions even when HOLA fails to converge. However, we also prove that even consistent update functions do not preserve SFPs, contradicting the hypothesis that this shortcoming is caused by LOLA's inconsistency. Finally, in an empirical evaluation on a set of general-sum games, we find that COLA finds prosocial solutions and converges under a wider range of learning rates than HOLA and LOLA. We support the latter finding with a theoretical result for a simple game.

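To make the inconsistency concrete, the following is a minimal sketch (not the authors' code) contrasting naive simultaneous gradient ascent with a LOLA-style lookahead update on a one-dimensional bilinear game with payoffs V1(x, y) = x*y and V2(x, y) = -x*y. The scalar policies x, y and the learning rate alpha are assumptions chosen for illustration, and the derivatives are written out analytically so no autodiff library is needed. Each LOLA-style player differentiates its payoff through the opponent's anticipated naive gradient step, which is precisely the modelling assumption the abstract calls inconsistent: the opponent is treated as a naive learner even though it, too, is shaping.

# Minimal illustrative sketch (assumed toy setup, not the paper's code).
# Payoffs: V1(x, y) = x*y (player 1 maximizes), V2(x, y) = -x*y (player 2 maximizes).

alpha = 0.1  # shared learning rate (assumed for illustration)

def naive_step(x, y):
    # Plain simultaneous gradient ascent: each player treats the other as fixed.
    dx = y   # dV1/dx
    dy = -x  # dV2/dy
    return x + alpha * dx, y + alpha * dy

def lola_style_step(x, y):
    # Each player ascends its payoff evaluated after the opponent's
    # anticipated *naive* step, differentiating through that step.
    # Player 1: d/dx [ x * (y + alpha * (-x)) ] = y - 2 * alpha * x
    dx = y - 2 * alpha * x
    # Player 2: d/dy [ -(x + alpha * y) * y ]  = -x - 2 * alpha * y
    dy = -x - 2 * alpha * y
    return x + alpha * dx, y + alpha * dy

x_n, y_n = 1.0, 1.0
x_l, y_l = 1.0, 1.0
for _ in range(500):
    x_n, y_n = naive_step(x_n, y_n)
    x_l, y_l = lola_style_step(x_l, y_l)
print(x_n, y_n)  # naive ascent spirals outward, away from the origin
print(x_l, y_l)  # the LOLA-style dynamics spiral inward toward the origin

COLA, by contrast, would replace the two hard-coded lookahead rules above with learned update functions (one per agent) trained against a consistency criterion, so that each agent's update already accounts for the opponent using its learned update rather than a naive gradient step; the sketch only illustrates the inconsistency that the paper sets out to remove.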

Related research

Polymatrix Competitive Gradient Descent (11/16/2021)
Many economic games and machine learning approaches can be cast as compe...

Model-Free Opponent Shaping (05/03/2022)
In general-sum games, the interaction of self-interested learning agents...

Learning with Opponent-Learning Awareness (09/13/2017)
Multi-agent settings are quickly gathering importance in machine learnin...

Meta-Value Learning: a General Framework for Learning with Learning Awareness (07/17/2023)
Gradient-based learning in multi-agent systems is difficult because the ...

SA-IGA: A Multiagent Reinforcement Learning Method Towards Socially Optimal Outcomes (03/08/2018)
In multiagent environments, the capability of learning is important for ...

Stable Opponent Shaping in Differentiable Games (11/20/2018)
A growing number of learning methods are actually games which optimise m...

Strongly-Typed Agents are Guaranteed to Interact Safely (02/24/2017)
As artificial agents proliferate, it is becoming increasingly important ...
