Learning to Incentivize Other Learning Agents

06/10/2020
by Jiachen Yang, et al.

The challenge of developing powerful and general Reinforcement Learning (RL) agents has received increasing attention in recent years. Much of this effort has focused on the single-agent setting, in which an agent maximizes a predefined extrinsic reward function. However, a long-term question inevitably arises: how will such independent agents cooperate when they are continually learning and acting in a shared multi-agent environment? Observing that humans often provide incentives to influence others' behavior, we propose to equip each RL agent in a multi-agent environment with the ability to give rewards directly to other agents, using a learned incentive function. Each agent learns its own incentive function by explicitly accounting for its impact on the learning of recipients and, through them, the impact on its own extrinsic objective. We demonstrate in experiments that such agents significantly outperform standard RL and opponent-shaping agents in challenging general-sum Markov games, often by finding a near-optimal division of labor. Our work points toward more opportunities and challenges on the path to ensuring the common good in a multi-agent future.
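As a rough illustration of the mechanism described in the abstract, the sketch below (in PyTorch, using a toy two-action game whose reward values, incentive parameterization, and hyperparameters are assumptions for illustration, not the paper's actual setup) shows the core idea: the incentive-giving agent learns its incentive parameters by differentiating its own extrinsic objective through the recipient's policy-gradient update.

```python
import torch

# Toy sketch (illustrative assumptions, not the paper's environment): agent 2
# picks action 0 ("defect", extrinsic reward 1) or action 1 ("cooperate",
# extrinsic reward 0); agent 1 earns 2 only when agent 2 cooperates and may
# pay agent 2 an incentive at a small cost to itself.
theta = torch.zeros(2, requires_grad=True)   # agent 2's policy logits
eta = torch.zeros(2, requires_grad=True)     # agent 1's incentive parameters per recipient action
lr_theta, lr_eta, cost_coef = 1.0, 0.2, 0.1

r2_ext = torch.tensor([1.0, 0.0])            # agent 2's extrinsic rewards
r1_ext = torch.tensor([0.0, 2.0])            # agent 1's extrinsic rewards

for step in range(500):
    pi2 = torch.softmax(theta, dim=0)
    incentive = torch.nn.functional.softplus(eta)   # non-negative incentive per action

    # Agent 2 maximizes its extrinsic reward plus the incentives it receives.
    j2 = (pi2 * (r2_ext + incentive)).sum()

    # One policy-gradient step for agent 2, kept differentiable w.r.t. eta.
    grad_theta = torch.autograd.grad(j2, theta, create_graph=True)[0]
    theta_new = theta + lr_theta * grad_theta

    # Agent 1's objective after the recipient's update: its own extrinsic
    # return under the recipient's new policy, minus the cost of incentives paid.
    pi2_new = torch.softmax(theta_new, dim=0)
    j1 = (pi2_new * (r1_ext - cost_coef * incentive)).sum()

    # Update the incentive function by backpropagating through agent 2's learning step.
    grad_eta = torch.autograd.grad(j1, eta)[0]
    with torch.no_grad():
        eta += lr_eta * grad_eta             # gradient ascent on agent 1's objective
        theta.copy_(theta_new.detach())      # agent 2 actually applies its update

print("learned incentive [defect, cooperate]:",
      torch.nn.functional.softplus(eta).tolist())
print("agent 2 cooperation probability:", torch.softmax(theta, dim=0)[1].item())
```

The design choice mirrored here is that the incentive parameters receive gradients only through their effect on the recipient's learning step, plus the direct cost of paying incentives; this is what allows the giver to account for its impact on the recipient's future behavior and, through it, on its own extrinsic return.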
