
Convergence Analysis of Machine Learning Algorithms for the Numerical Solution of Mean Field Control and Games: I - The Ergodic Case
We propose two algorithms for the solution of the optimal control of erg...

Mean-field Langevin System, Optimal Control and Deep Neural Networks
In this paper, we study a regularised relaxed optimal control problem an...

A new preconditioner for elliptic PDE-constrained optimization problems
We propose a preconditioner to accelerate the convergence of the GMRES i...

Deep 2FBSDEs for Systems with Control Multiplicative Noise
We present a deep recurrent neural network architecture to solve a class...

Mathematical and computational approaches for stochastic control of river environment and ecology: from fisheries viewpoint
We present a modern stochastic control framework for dynamic optimizatio...

Chance-constrained optimal inflow control in hyperbolic supply systems with uncertain demand
In this paper, we address the task of setting up an optimal production p...

Optimal control of mean field equations with monotone coefficients and applications in neuroscience
We are interested in the optimal control problem associated with certain...
Applications of the Deep Galerkin Method to Solving Partial Integro-Differential and Hamilton-Jacobi-Bellman Equations
We extend the Deep Galerkin Method (DGM) introduced in Sirignano and Spiliopoulos (2018) to solve a number of partial differential equations (PDEs) that arise in the context of optimal stochastic control and mean field games. First, we consider PDEs where the solution is constrained to be positive and integrate to unity, as is the case with Fokker-Planck equations. Our approach involves reparameterizing the solution as the exponential of a neural network, appropriately normalized to ensure both requirements are satisfied. This gives rise to a partial integro-differential equation (PIDE) in which the integral appearing in the equation is handled using importance sampling. Second, we tackle a number of Hamilton-Jacobi-Bellman (HJB) equations that appear in stochastic optimal control problems. The key contribution is that these equations are approached in their unsimplified primal form, which includes an optimization problem as part of the equation. We extend the DGM algorithm to solve for the value function and the optimal control simultaneously by characterizing both as deep neural networks. Training the networks is performed by taking alternating stochastic gradient descent steps for the two functions, a technique similar in spirit to policy improvement algorithms.
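The normalization mentioned above, which makes the exponential of a network a valid density, can be sketched with plain Monte Carlo importance sampling. This is a minimal illustrative sketch, not the paper's implementation: `f` is a stand-in for the network output, and the standard normal proposal distribution is an assumption made here for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Stand-in for the neural network output (hypothetical);
    # the candidate density is p(x) = exp(f(x)) / Z.
    return -x ** 2

def estimate_Z(f, n=100_000):
    """Estimate Z = integral of exp(f(x)) dx by importance sampling.

    Draw x ~ q with q a standard normal proposal, then
    Z = E_q[ exp(f(x)) / q(x) ], approximated by a sample mean.
    """
    x = rng.standard_normal(n)
    q = np.exp(-0.5 * x ** 2) / np.sqrt(2.0 * np.pi)  # proposal density
    return np.mean(np.exp(f(x)) / q)

Z = estimate_Z(f)
# For this f, the exact value is sqrt(pi) ~ 1.7725, so exp(f(x)) / Z
# is a properly normalized density even though f itself is unnormalized.
```

Dividing by the estimated Z enforces integration to unity, while the exponential enforces positivity, which is how both constraints on a Fokker-Planck solution can be met at once.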