The mathematical models for many problems around us are in the form of partial differential equations (PDEs) in high dimensions. Notable examples include:
The Hamilton-Jacobi-Bellman (HJB) equation in control theory
Here the dimensionality is the dimensionality of the state space of the original control problem. If the original control problem is described by a PDE, then the corresponding HJB equation is formulated in infinite dimensional space.
The Black-Scholes equations for pricing financial derivatives
Here the dimensionality is the number of underlying financial assets. Nonlinear terms may result when default risks, transaction costs, or other factors are taken into account.
The many-electron Schrödinger equation in quantum mechanics
Here the dimensionality is roughly three times the number of electrons in the considered quantum mechanical system.
Solving these PDEs has been a notoriously difficult problem in scientific computing and computational science, due to the well-known curse of dimensionality (CoD): the computational complexity grows exponentially as a function of the dimensionality of the problem. In fact, for this reason, traditional numerical algorithms such as finite difference and finite element methods have been limited to problems in rather low dimensions. The use of sparse grids can extend their applicability to, say, around 10 dimensions. Beyond that, there seemed to be little hope except for special PDEs.
We are interested in PDEs and algorithms that do not suffer from CoD, i.e., algorithms whose computational complexity scales algebraically with the problem dimension. More precisely, to reach some error tolerance $\varepsilon$, the computational cost should be no more than
$$c \, d^{\alpha} \varepsilon^{-\beta}, \qquad (4)$$
where $c$, $\alpha$, $\beta$ are absolute, dimension-independent constants. In particular, they do not depend on the dimension $d$. We are also interested in PDEs for which such algorithms do exist. In fact, we are interested in developing a theory of high dimensional PDEs based on the complexity with which the solutions can be approximated using particular schemes, such as neural network models. We believe that such a theory should be part of the foundation for a theoretical understanding of high dimensional control theory, reinforcement learning, and a host of other problems that will become important research topics in mathematics.
The gold standard for high dimensional problems is the approximation of high dimensional integrals. Let $g$ be a Lebesgue square integrable function defined on the set $[0,1]^d$ and let
$$I(g) = \int_{[0,1]^d} g(x) \, dx. \qquad (5)$$
Typical grid-based quadrature rules, such as the Trapezoidal rule and Simpson's rule, all suffer from CoD. The one algorithm that does not suffer from CoD is the Monte Carlo algorithm, which works as follows. Let $x_j$, $j = 1, \dots, N$, be independent, uniformly distributed random samples on $[0,1]^d$ and let
$$I_N(g) = \frac{1}{N} \sum_{j=1}^{N} g(x_j). \qquad (6)$$
Then a simple calculation gives us
$$\mathbb{E} \, |I(g) - I_N(g)|^2 = \frac{\operatorname{Var}(g)}{N}. \qquad (7)$$
The $O(N^{-1/2})$ rate is independent of $d$. To reduce the error to a tolerance of $\varepsilon$, the number of samples needed must be of order $\varepsilon^{-2}$. This is a situation with $\beta = 2$ in (4).
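As an illustration, the Monte Carlo estimator above can be implemented in a few lines; the integrand, dimension, and sample sizes below are our own toy choices.

```python
import math
import random

def mc_integrate(g, d, n, rng):
    """Monte Carlo estimate of the integral of g over [0,1]^d using n
    independent uniform samples: I_N = (1/N) * sum_j g(x_j)."""
    return sum(g([rng.random() for _ in range(d)]) for _ in range(n)) / n

# Toy integrand in d = 10 dimensions with known integral 1:
# g(x) = prod_i 2*x_i integrates to 1 over [0,1]^10.
g = lambda x: math.prod(2.0 * xi for xi in x)

rng = random.Random(0)
errors = [abs(mc_integrate(g, 10, n, rng) - 1.0) for n in (100, 10000)]
```

The error typically shrinks by about a factor of 10 when the sample size grows by a factor of 100, reflecting the $O(N^{-1/2})$ rate, regardless of $d$.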
The value of $\alpha$ in (4) is more subtle. We need to examine specific classes of examples in different dimensions. For example, one can ask about the value of $\alpha$ for the Ising or Heisenberg model in statistical physics. At the moment, results in this direction are still quite sparse.
Algorithms and results similar to those of the approximative computation of integrals in (5)–(7) above have been developed in the case of linear PDEs of the Kolmogorov type but, for a very long time, little progress was made in developing algorithms with quantitatively similar computational complexity for high dimensional nonlinear PDEs, and this has impeded advances in several fields such as optimal control and quantum mechanics. Things have changed dramatically in the last few years with the appearance of so-called full history recursive multilevel Picard approximation methods [37, 38, 89] and a host of machine learning-based algorithms for high dimensional PDEs beginning with the Deep BSDE method [36, 70]. Full history recursive multilevel Picard approximation methods (in the following we abbreviate full history recursive multilevel Picard by MLP) are recursive nonlinear variants of the classical Monte-Carlo approximation methods and, in that sense, MLP approximation methods are nonlinear Monte-Carlo approximation methods. For every arbitrarily small $\delta \in (0, \infty)$ it has been shown that MLP approximation methods achieve (4) with $\alpha = 1$ and $\beta = 2 + \delta$ for a wide class of nonlinear (parabolic) equations (see Section 6 below for details). Although a complete theory is still lacking, the Deep BSDE method has demonstrated very robust performance in practice for a range of problems and has been extended in many different ways. These developments will be reviewed below.
Along with the work on developing algorithms, there has been some effort to develop an open-source platform where codes, review papers, and other information can be shared. The interested reader should consult http://deeppde.org.
Before launching into a more detailed discussion, it is useful to review briefly the two main ideas that we focus on in this paper: machine learning approximation methods (see Section 1.1 below) and MLP approximation methods (see Section 1.2 below).
1.1 A brief introduction to supervised learning
The basic problem in supervised learning is as follows: given a natural number $n$ and a sequence $(x_j, y_j)$, $j = 1, \dots, n$, of pairs of input-output data, we want to recover the target function $f^*$ (with $y_j = f^*(x_j)$) as accurately as possible. We will assume that the input data $x_j$, $j = 1, \dots, n$, are sampled from a probability distribution $\mu$ on $\mathbb{R}^d$.
Step 1. Choose a hypothesis space. This is a set of trial functions $\mathcal{H}_m$, where $m$ is a natural number that is strongly related to the dimensionality of $\mathcal{H}_m$. One might choose piecewise polynomials or wavelets. In modern machine learning the most popular choice is neural network functions. Two-layer neural network functions (one input layer, one output layer that usually does not count, and one hidden layer) take the form
$$f_m(x; \theta) = \sum_{j=1}^{m} a_j \, \sigma(\langle w_j, x \rangle + c_j),$$
where $\sigma$ is a fixed scalar nonlinear function and where $\theta = \{(a_j, w_j, c_j)\}_{j=1}^{m}$ are the parameters to be optimized (or trained). A popular example for the nonlinear function $\sigma$ is the ReLU (rectified linear unit) activation function $\sigma(z) = \max\{z, 0\}$.
Roughly speaking, deep neural network (DNN) functions are obtained if one composes two-layer neural network functions several times. One important class of DNN models are residual neural networks (ResNets). They closely resemble discretized ordinary differential equations and take the form
$$z_{l+1} = z_l + U_l \, \sigma(W_l z_l) \qquad (9)$$
for $l = 0, 1, \dots, L-1$, where $z_0 = V x$. Here the parameters are $\theta = \{V, (U_l, W_l)_{l=0}^{L-1}\}$. ResNets are the model of choice for truly deep neural network models.
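To make the two forms concrete, here is a minimal forward-pass sketch in plain Python; the parameter names and the square residual blocks are our own illustrative choices:

```python
def two_layer(x, params, sigma=lambda z: max(z, 0.0)):
    """Two-layer network: f(x) = sum_j a_j * sigma(<w_j, x> + c_j).
    params is a list of triples (a_j, w_j, c_j); sigma defaults to ReLU."""
    return sum(a * sigma(sum(wi * xi for wi, xi in zip(w, x)) + c)
               for a, w, c in params)

def resnet_forward(z, blocks, sigma=lambda z: max(z, 0.0)):
    """ResNet-style composition: z_{l+1} = z_l + U_l * sigma(W_l z_l),
    with square matrices U_l, W_l given as lists of rows."""
    for U, W in blocks:
        h = [sigma(sum(wij * zj for wij, zj in zip(row, z))) for row in W]
        z = [zi + sum(uij * hj for uij, hj in zip(row, h))
             for zi, row in zip(z, U)]
    return z
```

Note the residual update: each block adds a correction to the current state, which is why ResNets resemble an explicit Euler discretization of an ODE.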
Step 2. Choose a loss function. The primary consideration for the choice of the loss function is to fit the data. Therefore the most obvious choice is the $L^2$ loss:
$$\hat{\mathcal{R}}_n(\theta) = \frac{1}{n} \sum_{j=1}^{n} \big( f(x_j; \theta) - y_j \big)^2. \qquad (10)$$
This is also called the “empirical risk”. Sometimes we also add regularization terms.
Step 3. Choose an optimization algorithm. The most popular optimization algorithms in machine learning are different versions of the gradient descent (GD) algorithm, or its stochastic analog, the stochastic gradient descent (SGD) algorithm. Assume that the objective function we aim to minimize is of the form
$$F(\theta) = \mathbb{E}_{\xi \sim \nu} \big[ \ell(\theta; \xi) \big]$$
($\nu$ could be an empirical distribution on a finite set). The simplest form of SGD iteration takes the form
$$\theta_{k+1} = \theta_k - \eta_k \nabla_\theta \ell(\theta_k; \xi_k)$$
for $k = 0, 1, 2, \dots$, where $\xi_k$, $k = 0, 1, 2, \dots$, is a sequence of i.i.d. random variables sampled from the distribution $\nu$ and $\eta_k$ is the learning rate, which might also change during the course of the iteration. In contrast, GD takes the form
$$\theta_{k+1} = \theta_k - \eta_k \nabla F(\theta_k).$$
Obviously this form of SGD can be adapted to loss functions of the form (10) which can also be regarded as an expectation. The DNN-SGD paradigm is the heart of modern machine learning.
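A minimal sketch of the SGD iteration; the quadratic objective and the Gaussian sampling distribution below are our own toy choices:

```python
import random

def sgd(grad_sample, theta0, lr, steps, rng):
    """SGD: theta_{k+1} = theta_k - lr * grad l(theta_k; xi_k),
    with a fresh random sample xi_k drawn at every step."""
    theta = theta0
    for _ in range(steps):
        theta -= lr * grad_sample(theta, rng)
    return theta

# Objective F(theta) = E[(theta - xi)^2] with xi ~ N(3, 1); minimizer theta* = 3.
grad = lambda theta, rng: 2.0 * (theta - rng.gauss(3.0, 1.0))
theta = sgd(grad, 0.0, 0.01, 5000, random.Random(0))
```

Each step uses a noisy but unbiased gradient estimate; with a small constant learning rate the iterate fluctuates in a narrow band around the minimizer.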
High dimensional stochastic control problems.
One of the earliest applications of deep learning to problems in scientific computing was to stochastic control problems. This example was chosen because of its close resemblance to the DNN-SGD paradigm in deep learning. From an abstract viewpoint, a DNN can be viewed as a (discrete) dynamical system, of which ResNet is a good example. SGD arises naturally when applying GD to stochastic optimization problems, in which the objective function is an expectation.
Consider the stochastic dynamical system
$$s_{l+1} = s_l + b_l(s_l, a_l) + \xi_{l+1}. \qquad (14)$$
Here $s_l$, $a_l$, $\xi_l$ are respectively the state, the control, and the noise at time $l$. We assume that the objective function for the control problem is of the form
$$\min_{\{a_l\}} \ \mathbb{E} \Big[ \sum_{l=0}^{T-1} c_l(s_l, a_l(s_l)) + c_T(s_T) \Big], \qquad (15)$$
where $T$ is the time horizon and $a_l(\cdot)$ are the feedback controls. One can see the close analogy between the stochastic control problem and the DNN-SGD paradigm: (14) plays the role of (9) and (15) is in the form of a stochastic optimization problem. In this analogy, the role of the training data is played by the noise $\{\xi_l\}$.

To develop an algorithm for this problem, one can approximate the feedback control function by any machine learning model, in particular some neural network model:
$$a_l(s) \approx \tilde{a}_l(s; \theta_l),$$
where $\theta_l$ is the parameter to be trained at time $l$. The loss function can then be defined as the expected total cost in (15), which can be optimized using SGD over different random samples of the noise $\{\xi_l\}$. An example of energy storage is shown in Figure 1. It is an allocation problem, with the objective of optimizing the total revenue from multiple storage devices and a renewable wind energy source while satisfying stochastic demand. More details of the problem can be found in [67, 95].
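To illustrate the training loop, here is a toy one-dimensional version in which each feedback control is linear in the state and the SGD gradient is estimated by finite differences on a common noise sample; all problem data (horizon, cost weights, noise level) are our own illustrative choices, not from the references above:

```python
import random

T_STEPS, GAMMA = 5, 0.5  # horizon and control-cost weight (toy choices)

def rollout_cost(theta, noise):
    """Simulate s_{l+1} = s_l + a_l + xi_{l+1} with linear feedback
    a_l = theta[l] * s_l; cost = sum_l GAMMA * a_l^2 + s_T^2."""
    s, cost = 1.0, 0.0
    for l in range(T_STEPS):
        a = theta[l] * s
        cost += GAMMA * a * a
        s = s + a + noise[l]
    return cost + s * s

def sgd_policy(steps, lr, rng):
    """SGD over fresh noise samples; the gradient w.r.t. each feedback
    parameter is a one-sided finite difference on the SAME noise sample."""
    theta = [0.0] * T_STEPS
    eps = 1e-4
    for _ in range(steps):
        noise = [rng.gauss(0.0, 0.1) for _ in range(T_STEPS)]
        base = rollout_cost(theta, noise)
        grad = [(rollout_cost(theta[:l] + [theta[l] + eps] + theta[l + 1:],
                              noise) - base) / eps
                for l in range(T_STEPS)]
        theta = [t - lr * g for t, g in zip(theta, grad)]
    return theta

trained = sgd_policy(2000, 0.05, random.Random(0))
```

The noise samples play exactly the role of training data: a fresh batch is drawn at every SGD step, so there is effectively an infinite data supply.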
1.2 A brief introduction to multilevel Picard approximation methods
Despite the great performance of deep learning-based approximation schemes in various numerical simulations, until today, there is no rigorous mathematical analysis in the scientific literature which proves (or disproves) the conjecture that there exists a deep learning-based approximation method which overcomes the curse of dimensionality in the numerical approximation of PDEs. However, for MLP approximation methods it has been proven in the scientific literature that such approximation methods do overcome the curse of dimensionality in the numerical approximation of semilinear PDEs with general time horizons and, in particular, the convergence results for MLP approximation methods (see [9, 37, 89, 90, 7, 53, 6, 86, 91, 87] and Section 6 below for details) have revealed that semilinear PDEs with general time horizons can be approximated without the curse of dimensionality.
Let us briefly illustrate this in the case of semilinear heat PDEs with bounded initial value functions. Let $T \in (0, \infty)$, let $f \colon \mathbb{R} \to \mathbb{R}$ be Lipschitz continuous, for every $d \in \mathbb{N}$ let $g_d \colon \mathbb{R}^d \to \mathbb{R}$ be bounded and continuous, and assume for every $d \in \mathbb{N}$ that $u_d \colon [0, T] \times \mathbb{R}^d \to \mathbb{R}$ satisfies for every $t \in [0, T]$, $x \in \mathbb{R}^d$ that $u_d(0, x) = g_d(x)$ and
$$\tfrac{\partial u_d}{\partial t}(t, x) = (\Delta_x u_d)(t, x) + f(u_d(t, x)). \qquad (18)$$
In the linear case where $f$ vanishes, it has been known for a long time that classical Monte-Carlo approximation methods can approximate $u_d(T, x)$ with $\alpha = 1$ and $\beta = 2$ in (4). In the general nonlinear case, classical Monte-Carlo approximation methods are not applicable anymore, but it has recently been shown in the scientific literature (see Hutzenthaler et al. and Theorem 3 below) that for every arbitrarily small $\delta \in (0, \infty)$ it holds that MLP approximation methods can approximate $u_d(T, x)$ in the general nonlinear case with $\alpha = 1$ and $\beta = 2 + \delta$ in (4). The convergence results for MLP approximation methods in the scientific literature have thus revealed that semilinear heat PDEs can, up to an arbitrarily small polynomial order of convergence, be solved with the same computational complexity as linear heat PDEs.
In the following we briefly sketch some of the main ideas in the derivation of MLP approximation schemes. One of the key ideas in the derivation and the convergence analysis of the MLP approximation scheme is to rewrite the PDE in (18) as a stochastic fixed point equation. More formally, we note that (18) ensures that for all $t \in [0, T]$, $x \in \mathbb{R}^d$ it holds that
$$u_d(t, x) = \mathbb{E} \big[ g_d(x + \sqrt{2} \, W_t) \big] + \int_0^t \mathbb{E} \big[ f\big( u_d(s, x + \sqrt{2} \, W_{t-s}) \big) \big] \, ds, \qquad (19)$$
where $W = (W_t)_{t \in [0, T]}$ is a standard Brownian motion on a probability space (cf., e.g., Beck et al. [8, Theorem 1.1]). Now we can also write (19) as the fixed point equation $u_d = \Phi(u_d)$, where $\Phi$ is the self-mapping on the set of all bounded functions in question which is described through the right hand side of (19). Using $\Phi$ one can define Picard iterates $u_{d,n}$, $n \in \mathbb{N}_0$, through the recursion that $u_{d,0} = 0$ and for all $n \in \mathbb{N}$ it holds that $u_{d,n} = \Phi(u_{d,n-1})$. In the next step we note that a telescoping sum argument shows that for all $n \in \mathbb{N}$ it holds that
$$u_{d,n} = \Phi(u_{d,0}) + \sum_{k=1}^{n-1} \big[ \Phi(u_{d,k}) - \Phi(u_{d,k-1}) \big]. \qquad (20)$$
MLP approximations are then derived by approximating the expectations in (20) within the fixed point mapping by means of Monte-Carlo approximations with different levels of accuracy.
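The resulting recursion can be sketched in a few lines of Python. The estimator below is a simplified illustrative variant of the MLP idea (level-dependent sample sizes $M^{n-l}$, uniformly sampled intermediate times), not the exact scheme from the cited papers:

```python
import math
import random

def mlp(f, g, t, x, n, M, rng):
    """Sketch of a full history recursive MLP estimate of u(t, x) for
    du/dt = Laplacian(u) + f(u), u(0, .) = g.  Level n uses M**n samples
    for the linear part and M**(n-l) samples for the level-l correction.
    Simplified variant for illustration only."""
    if n == 0:
        return 0.0
    # Monte-Carlo estimate of the linear part E[g(x + sqrt(2) W_t)]
    mn = M ** n
    est = sum(
        g([xi + math.sqrt(2.0 * t) * rng.gauss(0.0, 1.0) for xi in x])
        for _ in range(mn)
    ) / mn
    # multilevel corrections for the nonlinearity, uniform times in [0, t]
    for l in range(n):
        m = M ** (n - l)
        acc = 0.0
        for _ in range(m):
            s = t * rng.random()
            y = [xi + math.sqrt(2.0 * (t - s)) * rng.gauss(0.0, 1.0)
                 for xi in x]
            acc += f(mlp(f, g, s, y, l, M, rng))
            if l > 0:
                acc -= f(mlp(f, g, s, y, l - 1, M, rng))  # same sample point
        est += t * acc / m
    return est
```

Note the full history recursion: the level-$n$ estimate calls the estimators of all lower levels, with fewer samples spent on the (smaller) higher-level corrections. When $f \equiv 0$ the scheme collapses to plain Monte Carlo for the heat equation.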
The procedure in (20) is inspired by multilevel Monte Carlo (MLMC) approximations in the scientific literature (see Heinrich, Heinrich & Sindambiwe, Giles and, e.g., [75, 52] and the references mentioned therein). There are, however, also several differences when comparing MLP approximations to “classical” MLMC approximations. In particular, we note that MLP approximations are full history recursive in the sense that for all $n \in \mathbb{N}$ computations of realizations of MLP approximations in the $n$-th iterate require realizations of MLP approximations in the 1st, 2nd, $\dots$, $(n-1)$-th iterates (see (74) in Theorem 3 below for details). Taking this into account, the convergence analysis for MLP approximations in the scientific literature turns out to be much more subtle, and we refer to Section 6 below for a sketch of some of the main ideas of the complexity analysis of MLP approximations and also for references to research articles on MLP approximations in the scientific literature.
In the comparison between classical linear Monte-Carlo methods and MLP approximation methods above we have restricted ourselves, just for simplicity, to the problem of approximating semilinear heat PDEs of the form (18) with bounded solutions at a single space-time point; similar results hold in the case of much more general classes of PDEs and more general approximation problems. Until today, MLP approximation schemes are the only approximation schemes in the scientific literature for which it has been proved that they overcome the curse of dimensionality in the numerical approximation of semilinear PDEs with general time horizons. We refer to Section 6 for further details and, in particular, for a comprehensive literature overview of research articles on MLP approximation methods.
2 General remarks about algorithms for solving PDEs in high dimensions
We begin with a brief overview of the algorithms that have been developed for high dimensional PDEs.
Special classes of PDEs: The representative special classes of PDEs which we review within this subsection are second-order linear parabolic PDEs of the Kolmogorov type and first-order Hamilton-Jacobi equations.
Consider the linear parabolic PDE with a terminal condition specified at $t = T$ described by
$$\tfrac{\partial u}{\partial t} + b(t, x) \cdot \nabla_x u + \tfrac{1}{2} \operatorname{Tr} \big( \sigma \sigma^{T}(t, x) \operatorname{Hess}_x u \big) = 0, \qquad u(T, x) = g(x). \qquad (21)$$
Our objective is to compute $u(0, x)$. For this purpose, we consider the diffusion process described by the stochastic differential equation (SDE)
$$dX_t = b(t, X_t) \, dt + \sigma(t, X_t) \, dW_t.$$
The solution to the PDE in (21) can be expressed as an expectation in the sense that
$$u(t, x) = \mathbb{E} \big[ g(X_T) \,\big|\, X_t = x \big].$$
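This representation immediately yields a Monte-Carlo method: simulate the SDE and average the terminal payoff. Below is a scalar Euler-Maruyama sketch with illustrative coefficients of our own choosing:

```python
import math
import random

def kolmogorov_mc(g, b, sigma, t, T, x, n_paths, n_steps, rng):
    """Estimate u(t, x) = E[g(X_T) | X_t = x] by Euler-Maruyama
    simulation of dX = b(s, X) ds + sigma(s, X) dW (scalar sketch)."""
    dt = (T - t) / n_steps
    total = 0.0
    for _ in range(n_paths):
        X = x
        for k in range(n_steps):
            s = t + k * dt
            X += b(s, X) * dt + sigma(s, X) * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        total += g(X)
    return total / n_paths

# Sanity check with zero drift, unit diffusion, and g(x) = x^2:
# the exact solution of u_t + u_xx / 2 = 0, u(T, x) = x^2 is
# u(t, x) = x^2 + (T - t), so u(0, 1) = 2.
u = kolmogorov_mc(lambda x: x * x, lambda s, x: 0.0, lambda s, x: 1.0,
                  0.0, 1.0, 1.0, 20000, 20, random.Random(0))
```

The cost per sample is linear in the number of time steps (and, in $d$ dimensions, linear in $d$), which is why this approach does not suffer from CoD.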
In a similar spirit, solutions of the Hamilton-Jacobi equations can also be expressed using the Hopf formula. Consider the PDE
$$\tfrac{\partial u}{\partial t} + H(\nabla_x u) = 0, \qquad u(x, 0) = g(x).$$
Assume that $H$ is convex. Then we have the Hopf formula:
$$u(x, t) = \inf_{y} \Big\{ g(y) + t \, L\Big( \tfrac{x - y}{t} \Big) \Big\},$$
where $L$ is the Legendre transform of $H$ (see Evans). The right hand side of the above equation can be computed using optimization algorithms. A particularly attractive algorithm along these lines was developed in Darbon & Osher.
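As a one-dimensional illustration of evaluating such a formula by optimization, take $H(p) = p^2/2$, whose Legendre transform is $L(q) = q^2/2$, and quadratic initial data; a plain grid search over $y$ then reproduces the known solution (a sketch, not the algorithm of Darbon & Osher):

```python
def hopf_lax(g, L, x, t, ys):
    """Evaluate u(x, t) = min_y [ g(y) + t * L((x - y) / t) ] by grid
    search over the candidate points ys (one-dimensional illustration)."""
    return min(g(y) + t * L((x - y) / t) for y in ys)

# With H(p) = p^2/2, L(q) = q^2/2, and g(y) = y^2/2, the exact solution
# is u(x, t) = x^2 / (2 * (1 + t)), so u(1, 1) = 0.25.
ys = [i * 0.001 for i in range(-4000, 4001)]
u = hopf_lax(lambda y: y * y / 2, lambda q: q * q / 2, 1.0, 1.0, ys)
```

In high dimensions the grid search must of course be replaced by a genuine optimization algorithm; the point here is only that the PDE solve at a single point reduces to a finite-dimensional minimization.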
Control problems: The first application of deep learning to solving scientific computing problems in high dimensions was in the area of stochastic control. In 2016, Han and E developed a deep neural network-based algorithm for stochastic control problems. The reason for choosing stochastic control was its very close analogy with the setup of deep learning, as we will see later (see Section 4 below). Deep learning-based algorithms for deterministic control problems were developed shortly afterwards.
Schrödinger equation for spins and electrons:
In an influential paper, Carleo and Troyer developed an algorithm for solving the Schrödinger equation for spins using the restricted Boltzmann machine (RBM) as the trial function. The variational Monte Carlo (VMC) approach was used for ground-state calculations. To solve the dynamic equation, the least squares approach was used, i.e., the total integral of the square of the residual was used as the loss function.
For the many-electron Schrödinger equation, the story is quite different. The configuration space is now continuous, instead of being discrete. In addition, the wave function should satisfy the anti-symmetry constraint. This has proven to be a difficult issue in solving the Schrödinger equation. Han, Zhang, and E developed a deep neural network-based methodology for computing the ground states. Compared with traditional VMC, a permutation-symmetric neural network-based ansatz is used for the Jastrow factor. The anti-symmetric part was treated in a rather simplified fashion. This has been improved in subsequent work [112, 122, 81].

Despite this progress, it is fair to say that we are still at an early stage of developing neural network-based algorithms for the many-body Schrödinger equation. Since the issues for solving the Schrödinger equation are quite specialized, we will not discuss them further in this review.
Nonlinear parabolic PDEs: The first class of algorithms developed for general nonlinear parabolic PDEs with general time horizons in really high dimensions is the multilevel Picard method [37, 38, 89]. At this moment, this is also the only algorithm for which a relatively complete theory has been established (see Section 6 below). Among other things, this theory offers a proof that the MLP method overcomes CoD. Shortly after, E, Han, and Jentzen developed the deep neural network-based Deep BSDE method, by making use of the connection between nonlinear parabolic equations and backward stochastic differential equations (BSDEs) [36, 70]. This was the first systematic application of deep learning to solving general high dimensional PDEs. Later, Sirignano and Spiliopoulos developed an alternative deep learning-based algorithm using the least squares approach, extending the work of Carleo and Troyer to general PDEs. Such deep learning-based approximation methods for PDEs have also been extended in different ways and to other parabolic and even elliptic problems; see, e.g., [5, 10, 126, 4, 12, 85, 25, 78, 73, 94, 72].
Some special semilinear parabolic PDEs can be formulated in terms of branching processes. One such example is the Fisher-KPP (Kolmogorov-Petrovski-Piscounov) equation [48, 103, 115]. For such PDEs, Monte Carlo methods can be developed, and such Monte Carlo approximation algorithms overcome the CoD (see [133, 142, 77, 80, 26, 139, 79]) in the specific situation where the time horizon and/or the PDE nonlinearity is sufficiently small.
Variational problems: It is fairly straightforward to construct neural network-based algorithms for solving variational problems. One way of doing this is simply to use the Ritz formulation. The “Deep Ritz method”, to be discussed below, is such an example; see E & Yu and also Khoo et al. It is natural to ask whether one can develop a similar Galerkin formulation, i.e., one using a weak form of the PDE. In fact, early work in this direction was written as a preparation for developing what would be a “Deep Galerkin method”. However, formulating robust neural network algorithms using a weak form has proven to be quite problematic. The difficulty seems to be analogous to the ones encountered in generative adversarial networks (GANs); cf. Goodfellow et al. For some progress in this direction we refer to the article by Zhang et al. It should also be noted that even though the deep learning-based methodology proposed in Sirignano & Spiliopoulos was named a “Deep Galerkin method”, that methodology is in fact based on a least squares formulation rather than a Galerkin formulation.
Parametric PDEs: One of the earliest applications of deep learning to PDEs is in the study of parametric PDEs. Khoo, Lu, and Ying developed a methodology for solving low dimensional PDEs with random coefficients in which neural network models are used to parametrize the random coefficients. Recently, neural networks have also been applied to solve low-dimensional stochastic PDEs. This is a promising direction, though it will not be covered in this review. Another closely related area is solving inverse problems governed by PDEs, which is intrinsically high dimensional as well. Recent works [127, 45, 46, 101, 27] have demonstrated the advantages of approximating the forward and inverse maps with carefully designed neural networks.
Game theory: A stochastic game describes the behavior of a population of interacting agents, each of which makes its own optimal decision in a common environment. Many scenarios in finance, economics, management science, and engineering can be formulated as stochastic games. With a finite number of agents, the Markovian Nash equilibrium of a game is determined by a coupled system of parabolic PDEs. To solve these problems, Han et al. extended the Deep BSDE method in [68, 69] with the idea of fictitious play. In a different direction, with an infinite number of agents and no common noise, one can use the mean-field game theory developed in [107, 108, 109, 84, 83] to reduce the characterization of the Nash equilibrium to two coupled equations: a Hamilton-Jacobi-Bellman equation and a Fokker-Planck equation. Neural network-based algorithms have been developed in [23, 24, 131, 111] to solve these equations.
Besides the literature mentioned above, a number of other deep learning-based approximation methods for PDEs have been proposed (see, e.g., [16, 34, 47, 49, 63, 92, 113, 114, 118, 124, 125, 21, 145]), and various numerical simulations for such methods suggest that deep neural network approximations might indeed have the capacity to solve high dimensional problems efficiently. In fact, attempts to apply neural networks to solving PDEs date back to the 1990s (cf., e.g., [110, 136, 106, 96]), albeit with a focus on low-dimensional PDEs. Apart from neural networks, there have also been other attempts in the literature at solving high dimensional PDEs, with limited success (see, e.g., [3, 137, 146, 19, 57, 33, 14, 56, 20, 50, 55, 44, 28, 31, 29, 30, 105, 123, 66, 58, 60, 59, 141, 140, 130, 18]).
This review will be focused on nonlinear parabolic PDEs and related control problems. There are two main reasons for choosing these topics. The first is that these classes of problems are fairly general and of broad interest. The second is that the study of high dimensional problems is in better shape for these classes of problems than for others (e.g., the Schrödinger equation discussed above).
We should also mention that the heart of reinforcement learning is solving approximately the Bellman equation, even though reinforcement learning algorithms are not always formulated that way. The dimensionality in these problems is often very high. This is another topic that will not be covered in this review.
3 The Deep BSDE method
The Deep BSDE method was the first deep learning-based numerical algorithm for solving general nonlinear parabolic PDEs in high dimensions [36, 70]. It begins by reformulating the PDE as a stochastic optimization problem. This is done with the help of BSDEs, hence the name “Deep BSDE method”. As a by-product, the Deep BSDE method is also an efficient algorithm for solving high dimensional BSDEs.
3.1 PDEs and BSDEs
Consider the semilinear parabolic PDE
$$\tfrac{\partial u}{\partial t} + \tfrac{1}{2} \operatorname{Tr} \big( \sigma \sigma^{T}(t, x) \operatorname{Hess}_x u \big) + b(t, x) \cdot \nabla_x u + f\big(t, x, u, \sigma^{T}(t, x) \nabla_x u\big) = 0, \qquad u(T, x) = g(x).$$
In the same way as in Section 2 above, we consider the diffusion process
$$dX_t = b(t, X_t) \, dt + \sigma(t, X_t) \, dW_t.$$
Using Itô's lemma, we obtain a stochastic equation for $u(t, X_t)$, driven by the same Brownian motion, in which the gradient of $u$ enters through the martingale term. To proceed further, we recall the notion of backward stochastic differential equations (BSDEs)
$$Y_t = g(X_T) + \int_t^T f(s, X_s, Y_s, Z_s) \, ds - \int_t^T (Z_s)^{T} \, dW_s, \qquad (29)$$
$$Y_t = u(t, X_t), \qquad Z_t = \sigma^{T}(t, X_t) \, \nabla_x u(t, X_t), \qquad (30)$$
introduced by Pardoux and Peng. It was shown in [120, 121] that there is an up-to-equivalence unique adapted stochastic process $(Y_t, Z_t)$, $t \in [0, T]$, with values in $\mathbb{R} \times \mathbb{R}^d$ that satisfies the pair of stochastic equations in (29)–(30) above. The BSDE can be recast as a variational problem: minimize the terminal mismatch $\mathbb{E} \, |g(X_T) - Y_T|^2$ over the initial value $Y_0$ and the process $(Z_t)_{t \in [0, T]}$, where $Y$ evolves forward according to the dynamics implied by (29). The minimizer of this variational problem is the solution to the PDE and vice versa.
3.2 The Deep BSDE Method
A key idea of the Deep BSDE method is to approximate the unknown functions $x \mapsto u(0, x)$ and $x \mapsto \sigma^{T}(t, x) \nabla_x u(t, x)$ by feedforward neural networks. To that purpose, we work with the variational formulation described above and discretize time, say using the Euler scheme on a grid $0 = t_0 < t_1 < \dots < t_N = T$:
$$X_{t_{n+1}} \approx X_{t_n} + b(t_n, X_{t_n}) \, \Delta t_n + \sigma(t_n, X_{t_n}) \, \Delta W_n,$$
$$Y_{t_{n+1}} \approx Y_{t_n} - f\big(t_n, X_{t_n}, Y_{t_n}, Z_{t_n}\big) \, \Delta t_n + (Z_{t_n})^{T} \, \Delta W_n,$$
where $\Delta t_n = t_{n+1} - t_n$ and $\Delta W_n = W_{t_{n+1}} - W_{t_n}$.

At each time step $t_n$, we associate a subnetwork approximating $x \mapsto Z_{t_n}$. We can stack all these subnetworks together to form a deep composite neural network. This network takes the paths $\{X_{t_n}\}$ and $\{W_{t_n}\}$ as the input data and gives the final output, denoted by $\hat{u}$, as an approximation to $u(t_N, X_{t_N})$.

The error in the matching of the given terminal condition defines the loss function
$$l(\theta) = \mathbb{E} \, \big| g(X_{t_N}) - \hat{u}\big(\{X_{t_n}\}, \{W_{t_n}\}\big) \big|^2.$$
From the viewpoint of machine learning, this neural network model has several interesting features.
It does not require us to generate training data beforehand. The paths play the role of the data and they are generated on the fly. For this reason, one can think of this model as a model with an infinite amount of data.
For the same reason, it is very natural to use stochastic gradient descent (SGD) to train the network.
The network has a very natural “residual neural network” structure embedded in the stochastic difference equations: each Euler step updates the state by adding an increment, just as a ResNet block does.
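The construction of the loss can be sketched as follows in a scalar toy version; here `z_fn` stands in for the subnetworks, and all coefficients are illustrative choices of ours:

```python
import math
import random

def bsde_loss(y0, z_fn, b, sigma, f, g, x0, T, n_steps, n_paths, rng):
    """Monte-Carlo estimate of E|g(X_T) - Y_T|^2 for the Euler
    discretization of the forward-backward system (scalar sketch)."""
    dt = T / n_steps
    loss = 0.0
    for _ in range(n_paths):
        X, Y = x0, y0
        for k in range(n_steps):
            t = k * dt
            dW = math.sqrt(dt) * rng.gauss(0.0, 1.0)
            Z = z_fn(t, X)
            Y += -f(t, X, Y, Z) * dt + Z * dW   # backward variable, forward in time
            X += b(t, X) * dt + sigma(t, X) * dW
        loss += (g(X) - Y) ** 2
    return loss / n_paths

# Sanity check: linear case f = 0, b = 0, sigma = 1, g(x) = x has exact
# solution u(t, x) = x, so y0 = x0 and Z = 1 give zero loss.
loss = bsde_loss(1.0, lambda t, x: 1.0, lambda t, x: 0.0, lambda t, x: 1.0,
                 lambda t, x, y, z: 0.0, lambda x: x,
                 1.0, 1.0, 10, 200, random.Random(0))
```

In the actual method, `y0` and `z_fn` are trainable (a parameter and a stack of subnetworks), and the loss above is minimized by SGD over freshly simulated Brownian paths.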
3.3 Some numerical examples
Next we examine the effectiveness of the algorithms described above. We will discuss two examples: the first is a canonical benchmark problem, the linear-quadratic Gaussian (LQG) control problem; the second is a nonlinear Black-Scholes model. We use the simplest implementation of the Deep BSDE method: each subnetwork has four layers, with one $d$-dimensional input layer, two hidden layers (both $(d + 10)$-dimensional), and a $d$-dimensional output layer. We choose the rectifier function (ReLU) as the activation function and optimize with the Adam method.
We will report the mean and the standard deviation of the relative error from 5 independent runs with different random seeds.
LQG (linear quadratic Gaussian)
Consider the stochastic dynamic model in 100 dimensions:
$$dX_t = 2 \sqrt{\lambda} \, m_t \, dt + \sqrt{2} \, dW_t,$$
with cost functional
$$J\big(\{m_t\}\big) = \mathbb{E} \Big[ \int_0^T \|m_t\|^2 \, dt + g(X_T) \Big].$$
The associated HJB equation is given by
$$\tfrac{\partial u}{\partial t} + \Delta u - \lambda \, \|\nabla u\|^2 = 0.$$
The solution to this HJB equation can be expressed as
$$u(t, x) = -\tfrac{1}{\lambda} \ln \Big( \mathbb{E} \Big[ \exp\big( -\lambda \, g(x + \sqrt{2} \, W_{T-t}) \big) \Big] \Big).$$
This formula can be evaluated directly using Monte Carlo. Therefore this problem serves as a good model for validating algorithms. The results from the Deep BSDE method are shown in Figure 3. We see that the accuracy of the trained solution improves along the training curve before it saturates.
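A Monte-Carlo evaluation of this kind of reference formula can be sketched as follows; we assume the solution takes the standard LQG form $u(t, x) = -\frac{1}{\lambda} \ln \mathbb{E}[\exp(-\lambda g(x + \sqrt{2} W_{T-t}))]$, and the terminal cost used in the check is an illustrative choice of ours:

```python
import math
import random

def lqg_reference(g, lam, T, t, x, n_samples, rng):
    """Monte-Carlo evaluation of the explicit LQG solution
    u(t, x) = -(1/lam) * log E[ exp(-lam * g(x + sqrt(2) * W_{T-t})) ]."""
    tau = T - t
    acc = 0.0
    for _ in range(n_samples):
        y = [xi + math.sqrt(2.0 * tau) * rng.gauss(0.0, 1.0) for xi in x]
        acc += math.exp(-lam * g(y))
    return -math.log(acc / n_samples) / lam
```

Because only an expectation over Gaussian increments is needed, the cost scales linearly in the dimension, so the reference value in $d = 100$ is cheap to compute to high accuracy.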
Black-Scholes equation with default risk
The pricing model for financial derivatives should take into account the whole basket of underliers, which results in high dimensional PDEs. In addition, the classical Black-Scholes model can and should be augmented by some important factors in real markets, including the effect of default, transaction costs, uncertainties in the model parameters, etc. Taking these effects into account leads to nonlinear Black-Scholes type models.

We study a particular case of the recursive valuation model with default risk [35, 15]. The underlying asset price moves as a geometric Brownian motion, and the possible default is modeled by the first jump time of a Poisson process. The claim value is modeled by a nonlinear Black-Scholes equation in which the nonlinearity comes from a nonlinear function of the claim value itself. We consider the fair price of a European claim based on 100 underlying assets conditional on no default having occurred yet. This leads to a problem with $d = 100$. Figure 4 presents the results of the Deep BSDE method and the multilevel Picard method for this nonlinear Black-Scholes equation. Reported in the figure is the approximate solution at $t = 0$. For this problem we cannot find the “exact solution”. Therefore we use the results of the two different methods to calibrate each other.
3.4 Analysis of the Deep BSDE method
There is not yet a complete theory for the analysis of the Deep BSDE method. We will review the existing results that have been obtained so far. Here, instead of bounding the cost required to reduce the error to a certain tolerance $\varepsilon$, we bound the error associated with certain hyper-parameters, such as the time step size and the size of the neural network models. The basic strategy is to reduce the problem to bounding the generalization error for supervised learning. In order to do that, we need to do the following: (1) estimate the error in the time discretization; (2) prove that the functions that need to be approximated using neural networks belong to the right function class and bound their norms in that function class; (3) adapt the analysis for supervised learning problems to the current setting. For two-layer neural network models, the function class is the Barron space. At this point, only step (1) has been accomplished.
Theorem 1 (A Posteriori Estimates).
Under some assumptions, there exists a constant $C$, independent of $h$, $d$, and $m$, such that for sufficiently small $h$, the squared approximation error for the solution, measured uniformly in time along the discretization grid, is bounded by $C$ times the sum of the time step size $h$ and the value of the terminal loss $\mathbb{E}\,|g(X_T) - \hat{Y}_T|^2$. Here $h$ denotes the time step size and $m$ the size of the neural network models.
Theorem 2 (Upper Bound of Optimal Loss).
Under some assumptions, there exists a constant $C$, independent of $h$, $d$, and $m$, such that for sufficiently small $h$, the optimal value of the loss function is bounded by $C$ times the sum of the time step size $h$ and the best approximation errors of the chosen networks for the target functions. Under additional structural assumptions on the coefficients, this bound can be further sharpened.
4 Control problems in high dimensions
One of the areas where high dimensional problems are often encountered is optimal control. In fact, the term “curse of dimensionality” was first coined by Richard Bellman in the context of dynamic programming for control problems. Regarding CoD, there is an important difference between open- and closed-loop controls that we now explain. Consider the optimal control problem with a finite horizon $T$:
$$\min_{u} \ \int_0^T L\big(x(t), u(t)\big) \, dt + M\big(x(T)\big), \qquad \dot{x}(t) = f\big(x(t), u(t)\big), \quad x(0) = x_0. \qquad (49)$$
For fixed $x_0$, the problem above can be thought of as a two-point boundary value problem over the time interval $[0, T]$, and the optimal control can be sought in the form
$$u(t) = u^{*}(t), \quad t \in [0, T], \qquad (50)$$
along the optimal path $x^{*}(\cdot)$. We refer to the literature for a review of the numerical algorithms for solving this kind of two-point boundary value problem. In this case, CoD is not really an issue since the dimensionality of the independent variable is just 1. Controls of the form (50) are called open-loop controls. In this case, the optimal control is only known along the optimal path. Once the system deviates from the optimal path, one has to either recompute the optimal control or force the system back to the optimal path. In many applications, one prefers a closed-loop control or feedback control
$$u(t) = a\big(x(t)\big),$$
where the optimal control is known at every point in the state space. Closed-loop controls are functions of the state variable, and this is where the CoD problem arises. To characterize open- and closed-loop controls, let
$$H(x, u, p) = L(x, u) + p \cdot f(x, u)$$
be the extended Hamiltonian associated with this control problem, and define
$$u^{*}(x, p) = \operatorname{argmin}_{u} \, H(x, u, p).$$
Here $p$ is the co-state variable. An important result is that the solution to the optimal control problem satisfies Pontryagin's Minimum Principle:
$$\dot{x}(t) = \nabla_p H\big(x(t), u^{*}(x(t), p(t)), p(t)\big), \qquad \dot{p}(t) = -\nabla_x H\big(x(t), u^{*}(x(t), p(t)), p(t)\big),$$
with the boundary conditions
$$x(0) = x_0, \qquad p(T) = \nabla M\big(x(T)\big).$$
Denote by $V$ the value function of the control problem:
$$V(x, t) = \min_{u} \Big\{ \int_t^T L\big(x(s), u(s)\big) \, ds + M\big(x(T)\big) \Big\},$$
subject to $\dot{x}(s) = f(x(s), u(s))$ and $x(t) = x$. Define the Hamiltonian:
$$\mathcal{H}(x, p) = \min_{u} \big\{ L(x, u) + p \cdot f(x, u) \big\}.$$
The HJB equation can be written as
$$\tfrac{\partial V}{\partial t} + \mathcal{H}\big(x, \nabla_x V\big) = 0,$$
with the terminal condition $V(x, T) = M(x)$. The co-state and the closed-loop optimal control are given in terms of the value function by
$$p(t) = \nabla_x V\big(x(t), t\big), \qquad a(x) = u^{*}\big(x, \nabla_x V(x, t)\big).$$
To obtain an accurate approximation to the closed-loop control, we need to solve the control problem for a large set of initial conditions, if not all. The formulation (49) is for a single initial condition. To extend it to all initial conditions, we consider instead the problem:
$$\min_{a} \ \mathbb{E}_{x_0 \sim \mu} \Big[ \int_0^T L\big(x(t), a(x(t))\big) \, dt + M\big(x(T)\big) \Big], \qquad (54)$$
subject to
$$\dot{x}(t) = f\big(x(t), a(x(t))\big), \qquad x(0) = x_0. \qquad (55)$$
Here the optimization is over all possible policy functions $a$. One question that naturally arises is how we should choose the distribution $\mu$ for the initial condition. Clearly we are only interested in states whose value functions are not very big. Therefore one possible choice is the Gibbs distribution for the value function:
$$\mu(dx_0) = \frac{1}{Z} \, e^{-\beta V(x_0, 0)} \, dx_0,$$
where $Z$ is a normalization factor and $\beta$ is a positive hyper-parameter.
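Sampling initial conditions from such a Gibbs distribution can be done, for instance, with a simple Metropolis random walk; the quadratic stand-in for the value function in the check below is an illustrative choice of ours:

```python
import math
import random

def metropolis(V, beta, x0, n_steps, step, rng):
    """Metropolis sampling from rho(x) proportional to exp(-beta * V(x)).
    Returns the chain of states after each random-walk proposal."""
    x = list(x0)
    samples = []
    for _ in range(n_steps):
        prop = [xi + step * rng.gauss(0.0, 1.0) for xi in x]
        # accept with probability min(1, exp(-beta * (V(prop) - V(x))))
        if rng.random() < math.exp(min(0.0, -beta * (V(prop) - V(x)))):
            x = prop
        samples.append(list(x))
    return samples
```

Only ratios of the density are needed, so the normalization factor $Z$ never has to be computed; states with large value function are visited exponentially rarely, as intended.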
A neural network model is trained for the value function as well. Two techniques have proved useful for making the training converge.

The first is called “warm start”. The basic idea is to choose initializations for the iterative algorithms for (54)-(55) that help guarantee convergence. For example, one can start with small values of $T$, in which case the convergence of the iterative algorithms is much less of an issue. One can then use simple extrapolations of these solutions on longer time intervals as initializations and obtain converged solutions on longer intervals. This process can be continued. In addition, once a reasonable approximation of the policy and value functions is obtained, one can use it to help initialize the two-point boundary value problem.
The second is to explore adaptive sampling, which has been explored in a similar context . As for all adaptive algorithms, the key issue is the error indicator: the larger the error, the more data are needed.
One approach uses the variance of the predictions from an ensemble of similar machine learning models as the error indicator. A more sophisticated error indicator that makes use of the variance of the gradient of the loss function was proposed in . Another idea is to simply use the magnitude of the gradient of the value function as the error indicator.
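The ensemble-variance indicator can be sketched in a few lines. Here random-Fourier-feature ridge regression is a cheap stand-in for retraining a neural network with different seeds; the model family and all sizes are illustrative assumptions:

```python
import numpy as np

def fit_random_features(X, y, n_feat=50, ridge=1e-6, rng=None):
    """One ensemble member: ridge regression on random Fourier features
    (a cheap stand-in for one independently trained network)."""
    rng = np.random.default_rng(rng)
    W = rng.standard_normal((X.shape[1], n_feat))
    b = rng.uniform(0.0, 2.0 * np.pi, n_feat)
    Phi = np.cos(X @ W + b)
    coef = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(n_feat), Phi.T @ y)
    return lambda Z: np.cos(Z @ W + b) @ coef

def ensemble_indicator(models, X):
    """Error indicator: variance of the ensemble predictions at each point.
    New training data would be drawn where this indicator is largest."""
    preds = np.stack([m(X) for m in models])
    return preds.var(axis=0)

rng = np.random.default_rng(0)
X = rng.random((100, 1))                        # training data in [0, 1]
y = np.sin(2.0 * np.pi * X[:, 0])
members = [fit_random_features(X, y, rng=seed) for seed in range(5)]
scores = ensemble_indicator(members, rng.random((50, 1)))   # small inside [0, 1]
```

Away from the training data the members disagree, so the indicator grows precisely where more samples are needed.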
5 Ritz, Galerkin, and least squares
The Ritz, Galerkin, and least squares formulations are among the most commonly used frameworks for designing numerical algorithms for PDEs. The Ritz formulation is based on a variational principle. The Galerkin formulation is based on the weak form of a PDE and involves both trial and test functions. The least squares formulation is a very general way of turning a PDE problem into a variational problem by minimizing the squared residual of the PDE. It has the advantage of being general and straightforward. However, in classical numerical analysis it is often the least preferred, since the numerical problem obtained this way tends to be worse conditioned than the ones obtained from the Ritz or Galerkin formulation. Designing machine learning-based algorithms from the Ritz and least squares formulations is rather straightforward: since both rest on a variational principle, one can simply replace the space of trial functions in these variational principles by the hypothesis space of a machine learning model. Since machine learning is also fundamentally optimization-based, the integration of machine learning with variational methods for PDEs is quite seamless. Indeed these were among the first ideas proposed for machine learning-based numerical algorithms for PDEs [22, 41, 132]. Designing machine learning-based algorithms from the Galerkin formulation is a different matter, since Galerkin is not an optimization-based approach; rather, it is based on a weak formulation using test functions. The closest machine learning model to the Galerkin formulation is the Wasserstein GAN (WGAN) [1, 2]: in WGAN, the discriminator plays the role of the test function and the generator plays the role of the trial function.
5.1 The Deep Ritz method
and is the set of admissible functions (also called trial functions, here represented by ), and is a given function representing the external forcing to the system under consideration. It is understood that boundary conditions are incorporated into the definition of . The Deep Ritz method consists of the following components:
Deep neural network-based representation of the trial function.
A numerical quadrature rule for the functional.
An algorithm for solving the final optimization problem.
Each component is relatively straightforward. One can take the usual neural network models to represent the trial function. In high dimensions one needs an effective Monte Carlo algorithm to discretize the integral in (64). The interplay between the discretization of the integral and the discretization of the trial function using neural network models is an interesting issue that requires further attention. Finally, SGD can be used naturally, much as in Deep BSDE: the integral in the functional (64) plays the role of the expectation in Deep BSDE. One notable issue is the choice of the activation function. ReLU activation does not perform well due to the discontinuity in its derivative; it has been observed that smoother activation functions perform much better. This issue also deserves more careful study. One feature of the Deep Ritz method that potentially makes it interesting even for low dimensional problems is that it is mesh-free and naturally adaptive. To examine this, we consider the well-known crack problem: computing the displacement around a crack. To this end, we consider the Poisson equation:
where . The solution to this problem suffers from the well-known “corner singularity” caused by the nature of the domain . A simple asymptotic analysis shows that at the origin, the solution behaves as . Models of this type have been used extensively to help develop and test adaptive finite element methods. Here the essential boundary condition causes some difficulty. The simplest idea is to use a penalty method and consider the modified functional
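Schematically, with $I(u)$ denoting the functional in (64) and generic boundary data $g$ on $\partial\Omega$, the standard penalty formulation reads:

```latex
\min_{u}\; I(u) \;+\; \lambda \int_{\partial\Omega} \lvert u - g \rvert^{2}\, ds,
```

where $\lambda > 0$ is a penalty parameter; the notation here is generic rather than that of the specific experiment below.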
An acceptable choice is . The results from the Deep Ritz method with parameters in the neural network model and from the finite difference method with (degrees of freedom) are shown in Figure 5. More quantitative comparisons can be found in . Of course, adaptive numerical methods are very well developed for problems with corner singularities and for even more general singular problems. Nevertheless, this example shows that the Deep Ritz method is potentially a naturally adaptive algorithm.
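To make the components of the method concrete, here is a minimal numpy sketch of the Monte Carlo estimate of the Ritz functional (the integral in (64)) on the unit cube. The two-layer tanh network and all sizes are illustrative assumptions, not the settings used in the experiments above:

```python
import numpy as np

def make_net(d, width=16, rng=None):
    """A tiny two-layer tanh network u(x) = a . tanh(W x + b), a stand-in for
    the trial function (tanh rather than ReLU, since the functional involves
    first derivatives of u)."""
    rng = np.random.default_rng(rng)
    W = rng.standard_normal((width, d)) / np.sqrt(d)
    b = rng.standard_normal(width)
    a = rng.standard_normal(width) / np.sqrt(width)
    def u(x):                                   # x has shape (n, d)
        return np.tanh(x @ W.T + b) @ a
    def grad_u(x):                              # analytic gradient, shape (n, d)
        s = 1.0 - np.tanh(x @ W.T + b) ** 2     # derivative of tanh
        return (s * a) @ W
    return u, grad_u

def ritz_functional(u, grad_u, f, d, n_mc=10000, rng=None):
    """Monte Carlo estimate of I(u) = ∫ ( 0.5 |∇u|² − f u ) dx over [0, 1]^d."""
    rng = np.random.default_rng(rng)
    x = rng.random((n_mc, d))                   # uniform quadrature points
    return np.mean(0.5 * np.sum(grad_u(x) ** 2, axis=1) - f(x) * u(x))
```

An actual Deep Ritz solver would minimize this estimate over the network parameters by SGD, drawing fresh quadrature points at every iteration, with a boundary penalty term added for essential boundary conditions.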
There are also a number of problems that need to be addressed in future work:
The variational problem that results from Deep Ritz is usually not convex even when the original problem is.
At the present time, there are no consistent conclusions about the convergence rate.
The treatment of essential boundary conditions is not as simple as in traditional methods.
Some analysis of the Deep Ritz method has been carried out in .
5.2 The least squares formulation
The least squares approach was used in  for solving the dynamic Schrödinger equation and was subsequently developed more systematically in  (although it was referred to there as a Galerkin method). The basic idea is very simple: solving the PDE
over a domain in can be formulated equivalently as solving the variational problem for the functional
where is a suitably chosen probability distribution on ; it should be non-degenerate and easy to sample. With this, the least squares formulation looks very similar to the Ritz formulation, with replacing the functional .
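A one-dimensional sketch of this loss: the operator, right-hand side, and finite-difference approximation of the second derivative below are illustrative choices, not those of (67):

```python
import numpy as np

def ls_loss(u, f, sampler, n_mc=2000, h=1e-4, rng=None):
    """Monte Carlo estimate of the least squares loss
    J(u) = E_{x ~ nu} | L u(x) - f(x) |^2  for  L u = -u''  (1-d Poisson),
    with the second derivative approximated by central differences."""
    rng = np.random.default_rng(rng)
    x = sampler(n_mc, rng)
    upp = (u(x + h) - 2.0 * u(x) + u(x - h)) / h**2
    return np.mean((-upp - f(x)) ** 2)

uniform = lambda n, rng: rng.random(n)          # the sampling distribution nu
f = lambda x: np.pi**2 * np.sin(np.pi * x)      # right-hand side
exact = lambda x: np.sin(np.pi * x)             # satisfies -exact'' = f
```

The loss nearly vanishes at the exact solution and stays bounded away from zero for a wrong candidate, so minimizing it over a hypothesis class is a consistent way to approximate the PDE solution.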
5.3 The Galerkin formulation
The starting point of the Galerkin approximation is the weak form of (67):
where and are the trial and test function spaces, respectively, is an arbitrary test function in , and is the standard inner product for functions. Usually some integration by parts is applied. For example, if , then up to boundary terms, one has
Therefore this formulation only involves first order derivatives. The most important feature of the Galerkin formulation is that it involves the test function. In this spirit, the Wasserstein GAN can also be regarded as an example of the Galerkin formulation. Given a set of data in and a reference probability distribution on , we look for a mapping (the generator) from to such that
for all Lipschitz functions . The test function is called the discriminator in this context. Like GAN, the most obvious reformulation of (69) is a min-max problem:
Unfortunately this formulation is not easy to work with; the problems encountered are similar to those in WGAN. Despite this, some very encouraging progress has been made, and we refer to  for the details.
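A crude sketch of the idea behind (69)-(70): replace the supremum over all Lipschitz discriminators by a maximum over a small, fixed family of test functions. This is a drastic simplification of the WGAN inner problem, meant only to show the structure of the weak-form objective:

```python
import numpy as np

def weak_discrepancy(gen_samples, data_samples, tests):
    """Galerkin/WGAN-style objective: worst violation of the weak constraints
    E[phi(generated)] = E[phi(data)] over a finite family of test functions,
    a stand-in for the supremum over all Lipschitz discriminators."""
    return max(abs(np.mean(phi(gen_samples)) - np.mean(phi(data_samples)))
               for phi in tests)

tests = [lambda x: x, lambda x: np.tanh(x), lambda x: np.abs(x)]
rng = np.random.default_rng(0)
data = rng.standard_normal(20000)          # "data" distribution: N(0, 1)
good = rng.standard_normal(20000)          # generator matching the data
bad = 3.0 + rng.standard_normal(20000)     # generator that is off by a shift
```

A full adversarial method would instead parameterize the test function and alternate ascent on the discrepancy with descent on the generator, which is exactly where the training difficulties arise.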
6 Multilevel Picard approximation methods for nonlinear PDEs
In the articles E et al.  and Hutzenthaler et al. , so-called fully history recursive multilevel Picard approximation methods have been introduced and analyzed (in the following we abbreviate fully history recursive multilevel Picard by MLP). The error analysis in the original article Hutzenthaler et al.  is restricted to semilinear heat PDEs with Lipschitz nonlinearities. By now, however, there is a series of further articles in the scientific literature (see [90, 7, 53, 6, 9, 86, 91, 38, 87]) which analyze, extend, or generalize the MLP approximation methods proposed in [37, 89] to larger classes of PDE problems, such as semilinear Black-Scholes PDEs (see [90, 9]), semilinear heat PDEs with gradient dependent nonlinearities (see [86, 91]), semilinear elliptic PDE problems (see ), semilinear heat PDEs with non-Lipschitz continuous nonlinearities (see [7, 9]), and semilinear second-order PDEs with varying coefficient functions (see [90, 87]).
In the remainder of this section we sketch the main ideas of MLP approximation methods; to keep the presentation as simple as possible, we restrict ourselves to semilinear heat PDEs with Lipschitz continuous nonlinearities and bounded initial values. The next result, Theorem 3 below, provides a complexity analysis for MLP approximations in this setting. Theorem 3 is strongly based on Hutzenthaler et al. [89, Theorem 1.1] and Beck et al. [7, Theorem 1.1].
Theorem 3. Let , , let be Lipschitz continuous, for every let be at most polynomially growing, assume for every , , that
let be a probability space, let , , be independent -distributed random variables, let , , , be independent standard Brownian motions, assume that and are independent, for every , , , , let satisfy , let , , , satisfy for every , , , , that
and for every , let be the number of function evaluations of and and the number of realizations of scalar random variables which are used to compute one realization of (cf. [87, Corollary 4.4] for a precise definition). Then there exist and such that for all , it holds that and .
Theorem 3 provides a complexity analysis for MLP approximations in the case of semilinear heat PDEs with Lipschitz continuous nonlinearities. The employed MLP approximations are specified in (74); they aim to approximate the solutions of the PDEs in (73). The strictly positive real number in Theorem 3 describes the time horizon of the PDEs in (73), and the function describes their nonlinearity. For simplicity we restrict ourselves in Theorem 3 to Lipschitz continuous nonlinearities which depend only on the solution of the PDE, but not on the time variable , the space variable , or the derivatives of the PDE solution. In the more general MLP analyses in the scientific literature (cf., e.g., [89, 90, 7, 53, 6, 9, 86, 87]) the nonlinearity of the PDE is allowed to depend on the time variable , the space variable , the PDE solution, and also the derivatives of the PDE solution (see ), and the nonlinearity need not be Lipschitz continuous (see [7, 9]).
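The recursion behind these approximations can be sketched in a few lines. The toy implementation below (generic notation, semilinear heat PDE $u_t = \Delta u + f(u)$ with initial value $g$) is only meant to convey the structure of the scheme, not the precise estimator analyzed in Theorem 3:

```python
import numpy as np

def mlp(n, M, t, x, g, f, rng):
    """Sketch of the fully history recursive multilevel Picard scheme for the
    mild formulation u(t,x) = E[g(x + sqrt(2) W_t)] + int_0^t E[f(u(s,.))] ds
    of u_t = Laplace(u) + f(u), u(0,.) = g (scalar-valued u)."""
    if n == 0:
        return 0.0
    d = x.shape[0]
    # Monte Carlo average of the initial-value term at the finest level
    W = np.sqrt(2.0 * t) * rng.standard_normal((M**n, d))
    out = g(x + W).mean()
    # telescoping sum over the Picard levels l = 0, ..., n-1
    for l in range(n):
        acc = 0.0
        for _ in range(M**(n - l)):
            s = t * rng.random()                            # random time point
            xi = x + np.sqrt(2.0 * (t - s)) * rng.standard_normal(d)
            acc += f(mlp(l, M, s, xi, g, f, rng))
            if l > 0:
                acc -= f(mlp(l - 1, M, s, xi, g, f, rng))   # telescoping term
        out += t * acc / M**(n - l)
    return out

# Illustration: u_t = u_xx + 1 with u(0, x) = x has exact solution x + t.
rng = np.random.default_rng(0)
approx = mlp(3, 8, 1.0, np.zeros(1), lambda y: y[..., 0], lambda v: 1.0, rng)
```

Note how the number of Monte Carlo samples decreases with the Picard level: the coarse levels, where the telescoping differences are large, receive many samples, while the expensive fine levels receive few. This balance is what yields the complexity bounds of Theorem 3.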