Opponent Learning Awareness and Modelling in Multi-Objective Normal Form Games
Many real-world multi-agent interactions involve multiple distinct criteria, i.e., the payoffs are multi-objective in nature. However, the same multi-objective payoff vector may lead to different utilities for each participant. Therefore, it is essential for an agent to learn about the behaviour of the other agents in the system. In this work, we present the first study of the effects of such opponent modelling on multi-objective multi-agent interactions with non-linear utilities. Specifically, we consider two-player multi-objective normal form games (MONFGs) with non-linear utility functions under the scalarised expected returns optimisation criterion. We contribute novel actor-critic and policy-gradient formulations that allow reinforcement learning of mixed strategies in this setting, along with extensions that incorporate opponent policy reconstruction and learning with opponent learning awareness (i.e., learning while considering the impact of one's own policy when anticipating the opponent's learning step). Empirical results in five different MONFGs demonstrate that opponent learning awareness and modelling can drastically alter the learning dynamics in this setting. When equilibria are present, opponent modelling can confer significant benefits on agents that implement it. When no Nash equilibria exist, opponent learning awareness and modelling allow agents to still converge to meaningful solutions that approximate equilibria.
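Under the scalarised expected returns (SER) criterion, the (possibly non-linear) utility function is applied to the expected payoff vector of a mixed strategy, so a policy gradient has to chain through that expectation rather than through individual payoff samples. The Python sketch below illustrates this idea on a toy two-action, two-objective game with an assumed product utility; the payoff matrix, utility function, and learning rate are illustrative placeholders and not the formulation, games, or hyperparameters used in the paper.

```python
import numpy as np

# Hypothetical 2-action, 2-objective payoffs for agent 1 (illustrative only).
# payoffs[a1, a2] is agent 1's 2-dimensional payoff vector.
payoffs = np.array([
    [[4.0, 0.0], [3.0, 1.0]],
    [[1.0, 3.0], [0.0, 4.0]],
])  # shape: (agent 1 actions, agent 2 actions, objectives)

def utility(v):
    """Assumed non-linear utility: product of the two objectives."""
    return v[0] * v[1]

def utility_grad(v):
    """Gradient of the product utility w.r.t. the payoff vector."""
    return np.array([v[1], v[0]])

def softmax(theta):
    z = np.exp(theta - theta.max())
    return z / z.sum()

def ser_policy_gradient_step(theta, opponent_policy, lr=0.1):
    """One gradient-ascent step on the SER objective u(E[p]).

    Under SER the utility is applied to the *expected* payoff vector,
    so the gradient chains through E[p] rather than through samples.
    """
    pi = softmax(theta)
    # Expected payoff vector given both mixed strategies.
    expected = np.einsum("i,j,ijk->k", pi, opponent_policy, payoffs)
    # d u(E[p]) / d pi_i = grad_u(E[p]) . sum_j q_j * payoffs[i, j]
    du_dpi = np.einsum("k,j,ijk->i", utility_grad(expected),
                       opponent_policy, payoffs)
    # Chain through the softmax Jacobian: d pi / d theta.
    jac = np.diag(pi) - np.outer(pi, pi)
    return theta + lr * jac @ du_dpi

# Example: repeated updates against a fixed (e.g. reconstructed) opponent policy.
theta = np.zeros(2)
opponent = np.array([0.5, 0.5])
for _ in range(200):
    theta = ser_policy_gradient_step(theta, opponent)
print(softmax(theta))
```

The paper's contributions extend this kind of exact SER gradient: an actor-critic formulation, opponent policy reconstruction (which would replace the fixed `opponent` vector above with a learned estimate of the other agent's strategy), and a LOLA-style lookahead that additionally accounts for the opponent's anticipated learning step.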