The Importance of Credo in Multiagent Learning

04/15/2022
by David Radke, et al.

We propose a model for multi-objective optimization, a credo, for agents in a system that are configured into multiple groups (i.e., teams). Our model of credo regulates how agents optimize their behavior for the component groups they belong to. We evaluate credo in the context of challenging social dilemmas with reinforcement learning agents. Our results indicate that the interests of teammates, or the entire system, are not required to be fully aligned for globally beneficial outcomes. We identify two scenarios without full common interest that achieve high equality and significantly higher mean population rewards compared to when the interests of all agents are aligned.
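The abstract describes credo as a weighting over the groups an agent belongs to. A minimal sketch of that idea, assuming credo is a weight vector over self, team, and system reward components (the function and parameter names here are illustrative, not taken from the paper's implementation):

```python
def credo_reward(r_self, r_team, r_system, credo):
    """Combine an agent's reward components using a credo weight vector.

    credo = (psi, phi, omega): the weight the agent places on its own
    reward, its team's mean reward, and the system-wide mean reward.
    Weights are assumed to sum to 1 so the credo is a convex combination.
    """
    psi, phi, omega = credo
    assert abs(psi + phi + omega - 1.0) < 1e-9, "credo weights must sum to 1"
    return psi * r_self + phi * r_team + omega * r_system

# A fully self-interested agent ignores team and system rewards:
selfish = credo_reward(1.0, 0.5, 0.2, (1.0, 0.0, 0.0))   # 1.0

# A mixed credo blends all three components:
mixed = credo_reward(1.0, 0.5, 0.2, (0.2, 0.3, 0.5))     # 0.45
```

Under this reading, "full common interest" corresponds to placing all weight on the system component, while the paper's result is that intermediate credos can outperform that extreme.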

