Exploratory LQG Mean Field Games with Entropy Regularization

11/25/2020
by Dena Firoozi, et al.

We study a general class of entropy-regularized multivariate LQG mean field games (MFGs) in continuous time with K distinct sub-populations of agents. We extend the notion of actions to action distributions (exploratory actions) and explicitly derive the optimal action distributions for individual agents in the limiting MFG. We demonstrate that this set of optimal action distributions yields an ϵ-Nash equilibrium for the finite-population entropy-regularized MFG. Furthermore, we compare the resulting solutions with those of classical LQG MFGs and show that a solution to the entropy-regularized problem exists if and only if a solution to its classical counterpart does.
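
The Gaussian structure of such optimal action distributions can be illustrated in the single-agent, single-population LQ case. The following is a minimal sketch under assumed conventions (running cost (1/2)(x^T Q x + u^T R u) and entropy temperature λ > 0); it is not the paper's multi-population derivation, and the exact constants depend on the chosen normalization. Relaxing the control to a density π(·|x) and adding the (negative) differential entropy to the running cost, the pointwise HJB minimization

\[
\min_{\pi(\cdot\mid x)} \int \Big[ \tfrac{1}{2}\big(x^\top Q x + u^\top R u\big) + (A x + B u)^\top \nabla V(x) \Big]\, \pi(u\mid x)\, du \;+\; \lambda \int \pi(u\mid x) \ln \pi(u\mid x)\, du
\]

is solved by a Gibbs density,

\[
\pi^{*}(u\mid x) \;\propto\; \exp\!\Big( -\tfrac{1}{\lambda}\Big( \tfrac{1}{2}\, u^\top R u + u^\top B^\top \nabla V(x) \Big) \Big),
\qquad \text{i.e.}\qquad
\pi^{*}(\cdot\mid x) = \mathcal{N}\!\big( -R^{-1} B^\top \nabla V(x),\; \lambda R^{-1} \big).
\]

With a quadratic value function V(x) = (1/2) x^⊤ P x, the mean recovers the classical LQ feedback -R^{-1} B^⊤ P x, while the covariance λ R^{-1} is state-independent and scales with the exploration temperature; letting λ → 0 collapses the Gaussian onto the classical control, consistent with the equivalence to classical LQG MFGs noted above.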


Related research

10/14/2021
Shaping Large Population Agent Behaviors Through Entropy-Regularized Mean-Field Games
Mean-field games (MFG) were introduced to efficiently analyze approximat...

12/29/2022
Policy Mirror Ascent for Efficient and Independent Learning in Mean Field Games
Mean-field games have been used as a theoretical tool to obtain an appro...

02/28/2021
Scaling up Mean Field Games with Online Mirror Descent
We address scaling up equilibrium computation in Mean Field Games (MFGs)...

04/19/2021
Ensemble equivalence for mean field models and plurisubharmonicity
We show that entropy is globally concave with respect to energy for a ri...

09/30/2020
Entropy Regularization for Mean Field Games with Learning
Entropy regularization has been extensively adopted to improve the effic...

08/17/2022
Choquet regularization for reinforcement learning
We propose Choquet regularizers to measure and manage the level of explo...

07/21/2022
Incentive Designs for Stackelberg Games with a Large Number of Followers and their Mean-Field Limits
We study incentive designs for a class of stochastic Stackelberg games w...
