
Game on Random Environment, Mean-field Langevin System and Neural Networks

by   Giovanni Conforti, et al.

In this paper we study a class of games regularized by relative entropy, in which the players' strategies are coupled through a random environment variable. Besides proving the existence and uniqueness of equilibria for such games, we show that the marginal laws of the corresponding mean-field Langevin systems converge to the games' equilibria in various settings. As an application, dynamic games can be treated as games on a random environment by viewing the time horizon as the environment. In practice, our results apply to the analysis of the stochastic gradient descent algorithm for deep neural networks in supervised learning, as well as for generative adversarial networks.
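The connection between noisy gradient descent and the mean-field Langevin system can be illustrated with a minimal sketch (not the paper's algorithm; all names, constants, and the toy data below are illustrative). Each gradient step is perturbed by Gaussian noise of magnitude sqrt(2*eta*sigma), making the update an Euler discretization of Langevin dynamics; as the number of neurons grows, the empirical law of the neurons approximates the mean-field Langevin system, with sigma playing the role of the entropic regularization strength:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: learn y = sin(x) with a one-hidden-layer network.
X = rng.normal(size=(200, 1))
y = np.sin(X).ravel()

m = 50          # number of hidden neurons (mean-field limit: m -> infinity)
eta = 0.05      # step size
sigma = 1e-3    # temperature, i.e. strength of the entropic regularization
W = rng.normal(size=(m, 1))    # input weights
a = rng.normal(size=m) / m     # output weights (1/m mean-field scaling)

def predict(W, a, X):
    # f(x) = sum_i a_i * tanh(w_i . x), an average over the m neurons
    return np.tanh(X @ W.T) @ a

def loss(W, a):
    return 0.5 * np.mean((predict(W, a, X) - y) ** 2)

loss0 = loss(W, a)
for _ in range(500):
    h = np.tanh(X @ W.T)                  # (n, m) hidden activations
    r = h @ a - y                         # residuals
    ga = h.T @ r / len(X)                 # gradient w.r.t. output weights
    gW = ((r[:, None] * a * (1 - h**2)).T @ X) / len(X)  # w.r.t. input weights
    # Langevin step: gradient descent plus Gaussian noise
    a -= eta * ga + np.sqrt(2 * eta * sigma) * rng.normal(size=a.shape)
    W -= eta * gW + np.sqrt(2 * eta * sigma) * rng.normal(size=W.shape)

print(loss0, loss(W, a))
```

Despite the injected noise, the small temperature keeps the dynamics close to plain gradient descent, so the training loss decreases while the noise regularizes the limiting distribution of the neurons.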


