Uniform Convergence Guarantees for the Deep Ritz Method for Nonlinear Problems

We provide convergence guarantees for the Deep Ritz Method for abstract variational energies. Our results cover non-linear variational problems such as the p-Laplace equation or the Modica-Mortola energy with essential or natural boundary conditions. Under additional assumptions, we show that the convergence is uniform across bounded families of right-hand sides.


1 Introduction

The idea of the Deep Ritz Method is to use variational energies as an objective function for neural network training, obtaining a finite dimensional optimization problem that allows one to solve the underlying partial differential equation approximately. The idea of deriving a finite dimensional optimization problem from variational energies dates back to Ritz (1909), was widely popularised in the context of finite element methods (see, e.g., Braess, 2007) and was recently revived by E and Yu (2018) using deep neural networks. In the following, we give a more thorough introduction to the Deep Ritz Method. Let Ω ⊆ ℝ^d be a bounded domain and consider the variational energy corresponding to the Lagrangian L and a force f,

 E : X → ℝ,  E(u) = ∫_Ω L(∇u(x), u(x), x) − f(x)u(x) dx,  (1)

defined on a suitable function space X, usually a Sobolev space W^{1,p}(Ω). One is typically interested in minimizers of E on subsets X_0 ⊆ X, where X_0 encodes further physical constraints, such as boundary conditions. Here, we consider either unconstrained problems or zero Dirichlet boundary conditions and use the notation X_0 for the latter case. In other words, for zero boundary conditions, one aims to find

 u ∈ argmin_{v ∈ X_0} ∫_Ω L(∇v(x), v(x), x) − f(x)v(x) dx.  (2)

To solve such a minimization problem numerically, the idea dating back to Ritz (1909) is to use a parametric ansatz class

 A := {u_θ ∈ X ∣ θ ∈ Θ ⊂ ℝ^P} ⊂ X  (3)

and to consider the finite dimensional minimization problem of finding

 θ* ∈ argmin_{θ ∈ Θ} ∫_Ω L(∇u_θ(x), u_θ(x), x) − f(x)u_θ(x) dx,

which can be approached by different strategies, depending on the class A. For instance, if A is chosen to be a finite element ansatz space or a space of polynomials and the structure of L is simple enough, one can use optimality conditions to solve this problem.

In this manuscript, we focus on ansatz classes that are given through (deep) neural networks. When choosing such ansatz functions, the method is known as the Deep Ritz Method and was recently proposed by E and Yu (2018). Neural network type ansatz functions possess a parametric form as in (3); however, it is difficult to impose zero boundary conditions on the ansatz class A. To circumvent this problem, one can use a penalty approach: the energy is relaxed to the full space, but the violation of zero boundary conditions is penalized. This means that, for a penalization parameter λ > 0, one aims to find

 θ*_λ ∈ argmin_{θ ∈ Θ} ∫_Ω L(∇u_θ(x), u_θ(x), x) − f(x)u_θ(x) dx + λ ∫_{∂Ω} u_θ² ds.  (4)
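To make the penalized objective (4) concrete, the following minimal sketch evaluates a Monte Carlo estimate of it for the one-dimensional model problem L(∇u, u, x) = ½|u′|², f ≡ 1 on Ω = (0, 1); the tiny random network and all names are illustrative assumptions of ours, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer ReLU network u_theta on Omega = (0, 1).
W1, b1 = rng.normal(size=8), rng.normal(size=8)
W2, b2 = rng.normal(size=8), rng.normal()

def u(x):
    h = np.maximum(np.outer(x, W1) + b1, 0.0)   # hidden ReLU layer
    return h @ W2 + b2

# Fixed sample points, drawn once, so the loss below is deterministic.
x_int = rng.uniform(0.0, 1.0, size=20_000)

def penalized_loss(lam, h=1e-5):
    du = (u(x_int + h) - u(x_int - h)) / (2.0 * h)   # du/dx by central differences
    f = 1.0                                          # right-hand side f = 1
    interior = np.mean(0.5 * du**2 - f * u(x_int))   # Monte Carlo estimate of E(u) - f(u)
    boundary = u(np.array([0.0, 1.0]))               # trace on the boundary {0, 1}
    return interior + lam * np.sum(boundary**2)      # lambda penalizes boundary violation

print(penalized_loss(1.0), penalized_loss(100.0))
```

Training in the Deep Ritz Method would minimize such a loss over the network parameters, typically with stochastic gradient descent and fresh samples in every step; here the samples are frozen only to make the evaluation reproducible.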

The idea of using neural networks for the approximate solution of PDEs can be traced back at least to the works of Lee and Kang (1990); Dissanayake and Phan-Thien (1994); Takeuchi and Kosugi (1994); Lagaris et al. (1998). Since the recent successful application of neural network based methods to stationary and instationary PDEs by E et al. (2017); E and Yu (2018); Sirignano and Spiliopoulos (2018), there is an ever growing body of theoretical works contributing to the understanding of these approaches. For a collection of the different methods we refer to the overview articles by Beck et al. (2020); Han et al. (2020).

The error in the Deep Ritz Method, which decomposes into an approximation, an optimization and a generalization term, has been studied by Luo and Yang (2020); Xu (2020); Duan et al. (2021); Hong et al. (2021); Jiao et al. (2021); Lu et al. (2021a, b); Müller and Zeinhofer (2021). However, apart from Müller and Zeinhofer (2021), those works either consider non-essential boundary conditions or require a term with a positive potential. This excludes the prototypical Poisson equation, which was originally treated with the Deep Ritz Method by E and Yu (2018). More importantly, those works only study linear problems, which excludes many important applications.

In this work, we thus study the convergence of the Deep Ritz Method when a sequence of growing ansatz classes A_n, given through parameter sets Θ_n, and a penalization of growing strength λ_n with λ_n → ∞ is used in the optimization problem (4), under comparatively modest assumptions on the energy, the ansatz classes and the right-hand side.

Denote a sequence of (almost) minimizing parameters of problem (4) with parameter set Θ_n and penalization λ_n by θ_n. We then show that, under mild assumptions, the sequence of (almost) minimizers converges weakly in X to the solution of the continuous problem, see Theorem 7 in Section 3. We strengthen this result in Section 4, where we show that the aforementioned convergence is uniform across certain bounded families of right-hand sides f, see Theorem 12. This means that a fixed number of degrees of freedom in the ansatz class can be used, independently of the right-hand side, to achieve a given accuracy. Alternatively, given a discretization of the space of right-hand sides, one may discretize the solution operator that maps f to the minimizer of (2) and still obtain a convergence guarantee (although this is not necessarily a viable numerical approach).

To the best of our knowledge, our results currently comprise the only convergence guarantees for the Deep Ritz Method for non-linear problems. However, since we prove these results using Γ-convergence methods, no rates of convergence are obtained; as mentioned above, for linear elliptic equations some error decay estimates are known. Our results also do not provide insight into the finite dimensional optimization problem (4), which is a challenging problem in its own right, see for instance Wang et al. (2021); Courte and Zeinhofer (2021). However, they guarantee that, given one is able to solve (4) to a reasonable accuracy, one approaches the solution of the continuous problem (2).

Our results are formulated for neural network type ansatz functions due to the current interest in using these in numerical simulations, yet other choices are possible. For instance, our results do apply directly to finite element functions.

The remainder of this work is organized as follows. Section 2 discusses some preliminaries and the notation used. The main results, namely Γ-convergence and uniformity of convergence, are provided in Sections 3 and 4, respectively. Finally, in Section 5 we discuss how the p-Laplacian and a phase field model fit into our general framework.

2 Notation and preliminaries

We fix our notation and present the tools that our analysis relies on.

2.1 Notation of Sobolev spaces and Friedrich’s inequality

We denote the space of functions on Ω that are integrable in p-th power by L^p(Ω), where we assume that 1 < p < ∞. Endowed with

 ∥u∥^p_{L^p(Ω)} := ∫_Ω |u|^p dx

this is a Banach space, i.e., a complete normed space. If u is a multivariate function with values in ℝ^m, we interpret |u| as the Euclidean norm. We denote the subspace of L^p(Ω) of functions with weak derivatives up to order k in L^p(Ω) by W^{k,p}(Ω), which is a Banach space with the norm

 ∥u∥^p_{W^{k,p}(Ω)} := ∑_{l=0}^{k} ∥D^l u∥^p_{L^p(Ω)}.

This space is called a Sobolev space and we denote its dual space, i.e., the space consisting of all bounded and linear functionals on W^{k,p}(Ω), by W^{k,p}(Ω)*. The closure of all compactly supported smooth functions in W^{k,p}(Ω) is denoted by W^{k,p}_0(Ω). It is well known that if Ω has a Lipschitz continuous boundary, the operator that restricts a Lipschitz continuous function on Ω to the boundary admits a linear and bounded extension tr : W^{1,p}(Ω) → L^p(∂Ω). This operator is called the trace operator and its kernel is precisely W^{1,p}_0(Ω). Further, we write u|_{∂Ω} whenever we mean tr(u). In the following we write H^1(Ω) instead of W^{1,2}(Ω) in the case p = 2.

In order to study the boundary penalty method we use Friedrich's inequality, which states that the W^{1,p} norm of a function can be estimated by the L^p norms of its gradient and its boundary values. We refer to Gräser (2015) for a proof.

Proposition 1 (Friedrich’s inequality).

Let Ω ⊆ ℝ^d be a bounded and open set with Lipschitz boundary and let 1 < p < ∞. Then there exists a constant c_p > 0 such that

 ∥u∥^p_{W^{1,p}(Ω)} ≤ c_p · (∥∇u∥^p_{L^p(Ω)} + ∥u∥^p_{L^p(∂Ω)})   for all u ∈ W^{1,p}(Ω).  (5)
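As a quick numerical sanity check of (5) (our own illustration, not part of the text), take Ω = (0, 1) and p = 2, where the boundary integral reduces to u(0)² + u(1)²; the ratio of the two sides stays bounded for a few trial functions:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 10_001)

def integ(f):
    # trapezoidal rule on the uniform grid x
    return float(np.sum((f[1:] + f[:-1]) * 0.5) * (x[1] - x[0]))

def ratio(u, du):
    # ||u||^2_{W^{1,2}} / (||u'||^2_{L^2} + u(0)^2 + u(1)^2) on Omega = (0, 1)
    lhs = integ(u**2) + integ(du**2)
    rhs = integ(du**2) + u[0]**2 + u[-1]**2
    return lhs / rhs

trials = [
    (np.ones_like(x), np.zeros_like(x)),             # u = 1
    (x, np.ones_like(x)),                            # u = x
    (np.sin(np.pi * x), np.pi * np.cos(np.pi * x)),  # u = sin(pi x)
]
print([round(ratio(u, du), 3) for u, du in trials])
```

Each ratio is a lower bound on no particular constant; the point is only that a single c_2 of moderate size covers all three trial functions, as the inequality asserts.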

2.2 Neural Networks

Here we introduce our notation for the functions represented by a feedforward neural network. Consider natural numbers L and N_0, …, N_L and let

 θ = ((A_1, b_1), …, (A_L, b_L))

be a tuple of matrix-vector pairs where A_l ∈ ℝ^{N_l × N_{l−1}} and b_l ∈ ℝ^{N_l}. Every matrix-vector pair (A_l, b_l) induces an affine linear map T_l : x ↦ A_l x + b_l. The neural network function with parameters θ and with respect to some activation function ρ is the function

 u^ρ_θ : ℝ^d → ℝ^m,  x ↦ T_L(ρ(T_{L−1}(ρ(⋯ρ(T_1(x)))))).

The set of all neural network functions of a certain architecture is given by {u^ρ_θ ∣ θ ∈ Θ}, where Θ collects all parameters of the above form with respect to fixed natural numbers L and N_0, …, N_L. If we have u = u^ρ_θ for some θ ∈ Θ, we say the function u can be realized by the neural network architecture. Note that we often drop the superscript ρ if it is clear from the context.

A particular activation function often used in practice and relevant for our results is the rectified linear unit or ReLU activation function, which is defined via ρ(x) = max(0, x). Arora et al. (2016)

showed that the class of ReLU networks coincides with the class of continuous and piecewise linear functions. In particular, such functions are weakly differentiable. Since piecewise linear functions are dense in W^{1,p}(Ω), we obtain the following universal approximation result, which we prove in detail in the appendix.

Theorem 2 (Universal approximation with zero boundary values).

Consider an open set Ω ⊆ ℝ^d and fix a function u ∈ W^{1,p}(Ω) with u|_{∂Ω} = 0. Then for every ε > 0 there exists u_ε that can be realized by a ReLU network of fixed depth such that

 ∥u − u_ε∥_{W^{1,p}(Ω)} ≤ ε.

To the best of our knowledge, this is the only available universal approximation result where the approximating neural network functions are guaranteed to have zero boundary values. It relies on special properties of the ReLU activation function, and it is unclear for which other classes of activation functions universal approximation with zero boundary values holds.
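As a one-dimensional toy illustration of such a ReLU realization with zero boundary values (our own example, not from the text): the hat function on [0, 1] is continuous, piecewise linear, vanishes on the boundary {0, 1}, and is realized exactly by a one-hidden-layer ReLU network.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# One-hidden-layer ReLU network realizing the hat function on [0, 1]:
# hat(x) = relu(2x) - 2*relu(2x - 1) = min(2x, 2 - 2x) for x in [0, 1],
# a continuous piecewise linear function with hat(0) = hat(1) = 0.
def hat(x):
    return relu(2.0 * x) - 2.0 * relu(2.0 * x - 1.0)

print(hat(np.array([0.0, 0.25, 0.5, 0.75, 1.0])))
```

The two hidden units place kinks at x = 0 and x = 1/2; the output layer combines them so that the boundary values cancel exactly, which is the mechanism Theorem 2 exploits in higher dimensions.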

2.3 Gamma Convergence

We recall the definition of Γ-convergence with respect to the weak topology of reflexive Banach spaces. For further reading we point the reader towards Dal Maso (2012).

Definition 3 (Γ-convergence).

Let X be a reflexive Banach space and let F, F_n : X → ℝ ∪ {∞}. Then (F_n) is said to be Γ-convergent to F if the following two properties are satisfied.

1. Liminf inequality: For every x ∈ X and every sequence (x_n) ⊆ X with x_n ⇀ x we have

 F(x) ≤ liminf_{n→∞} F_n(x_n).
2. Recovery sequence: For every x ∈ X there is (x_n) ⊆ X with x_n ⇀ x such that

 F(x) = lim_{n→∞} F_n(x_n).

The sequence (F_n) is called equi-coercive if the set

 ⋃_{n∈ℕ} {x ∈ X ∣ F_n(x) ≤ r}

is bounded in X (or, equivalently, relatively compact with respect to the weak topology) for all r ∈ ℝ. We say that a sequence (x_n) ⊆ X consists of quasi-minimizers of the functionals (F_n) if we have

 F_n(x_n) ≤ inf_{x∈X} F_n(x) + δ_n

where δ_n → 0.

We need the following property of Γ-convergent sequences. We want to emphasise that there are no requirements regarding the continuity of any of the functionals and that the functionals are not assumed to admit minimizers.

Theorem 4 (Convergence of quasi-minimizers).

Let X be a reflexive Banach space and let (F_n) be an equi-coercive sequence of functionals that Γ-converges to F. Then any sequence (x_n) of quasi-minimizers of (F_n) is relatively compact with respect to the weak topology of X, and every weak accumulation point of (x_n) is a global minimizer of F. Consequently, if F possesses a unique minimizer x*, then (x_n) converges weakly to x*.

3 Abstract Γ-Convergence Result for the Deep Ritz Method

For the abstract results we work with an abstract energy E : X → ℝ instead of an integral functional of the form (1). This reduces technicalities in the proofs and separates abstract functional analytic considerations from applications.

Setting 5.

Let (X, ∥·∥_X) and (B, ∥·∥_B) be reflexive Banach spaces and let γ : X → B be a continuous linear map. We set X_0 to be the kernel of γ, i.e., X_0 := ker(γ). Let ρ be some activation function and denote by (Θ_n) a sequence of neural network parameter sets. We assume that any function represented by such a neural network is a member of X and we define

 A_n := {x_θ ∣ θ ∈ Θ_n} ⊂ X.

Here, x_θ denotes the function represented by the neural network with the parameters θ. Let E : X → ℝ be a functional and (λ_n) a sequence of positive real numbers with λ_n → ∞. Furthermore, let p ∈ (1, ∞) and f ∈ X* be fixed and define the functional F^f_n by

 F^f_n(x) = { E(x) + λ_n∥γ(x)∥^p_B − f(x)  for x ∈ A_n,  ∞  otherwise,

as well as F^f by

 F^f(x) = { E(x) − f(x)  for x ∈ X_0,  ∞  otherwise.

Then assume the following holds:

(A1) For every x ∈ X_0 there is a sequence (x_n) with x_n ∈ A_n such that x_n → x in X and λ_n∥γ(x_n)∥^p_B → 0 for n → ∞.

(A2) The functional E is bounded from below, lower semi-continuous with respect to the weak topology of X and continuous with respect to the norm topology of X.

(A3) The sequence (F^f_n) is equi-coercive with respect to the norm ∥·∥_X.

Remark 6.

We discuss Assumptions (A1) to (A3) in view of their applicability to concrete problems.

1. In applications, X will usually be a Sobolev space with its natural norm, the space B contains boundary values of functions in X and the operator γ is a boundary value operator, e.g., the trace map. However, if the energy E is coercive on all of X, i.e., without adding boundary terms to it, we may choose B = {0} and obtain X_0 = X. This is the case for non-essential boundary value problems.

2. Assumption (A1) compensates that, in general, we cannot penalize with arbitrary strength. However, if we can approximate any member of X_0 by a sequence (x_n) with x_n ∈ A_n ∩ X_0, then any divergent sequence (λ_n) can be chosen. This is for example the case for the ReLU activation function and the space W^{1,p}_0(Ω). More precisely, we can choose A_n to be the class of functions expressed by a (fully connected) ReLU network of fixed depth and growing width, see Theorem 2.

Theorem 7 (Γ-convergence).

Assume we are in Setting 5. Then the sequence of functionals (F^f_n) Γ-converges towards F^f. In particular, if (δ_n) is a sequence of non-negative real numbers converging to zero, any sequence of δ_n-quasi minimizers of (F^f_n) is bounded and all its weak accumulation points are minimizers of F^f. If additionally F^f possesses a unique minimizer x*, any sequence of δ_n-quasi minimizers converges to x* in the weak topology of X.

Proof.

We begin with the limes inferior inequality. Let x_n ⇀ x in X and assume that x ∉ X_0. Then f(x_n) converges to f(x) as real numbers and γ(x_n) converges weakly to γ(x) ≠ 0 in B. Combining this with the weak lower semicontinuity of the norm we get, using the boundedness of E from below, that

 liminf_{n→∞} F^f_n(x_n) ≥ inf_{x∈X} E(x) + liminf_{n→∞} λ_n∥γ(x_n)∥^p_B − lim_{n→∞} f(x_n) = ∞.

Now let x ∈ X_0. Then by the weak lower semicontinuity of E we find

 liminf_{n→∞} F^f_n(x_n) ≥ liminf_{n→∞} E(x_n) − f(x) ≥ E(x) − f(x) = F^f(x).

Now let us have a look at the construction of the recovery sequence. For x ∉ X_0 we can choose the constant sequence x_n = x and estimate

 F^f_n(x_n) ≥ E(x) + λ_n∥γ(x)∥^p_B − f(x).

Hence we find that F^f_n(x_n) → ∞ = F^f(x). If x ∈ X_0, we approximate it with a sequence (x_n) according to Assumption (A1), such that x_n ∈ A_n, x_n → x in X and λ_n∥γ(x_n)∥^p_B → 0. It follows that

 F^f_n(x_n) = E(x_n) + λ_n∥γ(x_n)∥^p_B − f(x_n) → E(x) − f(x) = F^f(x). ∎

A sufficient criterion for the equi-coercivity required in Assumption (A3), in terms of the functional E, is given by the following lemma.

Lemma 8 (Criterion for Equi-Coercivity).

Assume we are in Setting 5. If there is a constant c > 0 such that for all x ∈ X it holds that

 E(x) + ∥γ(x)∥^p_B ≥ c · (∥x∥^p_X − ∥x∥_X − 1),

then the sequence (F^f_n) is equi-coercive.

Proof.

It suffices to show that the sequence

 G^f_n : X → ℝ  with  G^f_n(x) = E(x) + λ_n∥γ(x)∥^p_B − f(x)

is equi-coercive, as F^f_n ≥ G^f_n. So let r ∈ ℝ be given and consider x ∈ X with G^f_n(x) ≤ r. We estimate, assuming without loss of generality that λ_n ≥ 1,

 r ≥ E(x) + λ_n∥γ(x)∥^p_B − f(x) ≥ c̃ · (∥x∥^p_X − ∥x∥_X − 1).

As p > 1, a scaled version of Young's inequality implies a bound on the set

 ⋃_{n∈ℕ} {x ∈ X ∣ G^f_n(x) ≤ r}

and hence the sequence (F^f_n) is seen to be equi-coercive. ∎
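The penalization mechanism analyzed in this section can be caricatured in two dimensions (a toy example of ours, not from the text): minimize E(x, y) = x² + y² − 2x under the "boundary" constraint γ(x, y) = x = 0, enforced through a quadratic penalty of strength λ_n as in (4).

```python
# Penalized functionals F_n(x, y) = x^2 + y^2 - 2x + lam * x^2.
# Setting the gradient to zero gives the exact minimizer (1 / (1 + lam), 0);
# as lam -> infinity it converges to (0, 0), the minimizer of E on the
# constraint set {x = 0}, mirroring the convergence of quasi-minimizers
# in Theorem 7.
def penalized_minimizer(lam):
    return (1.0 / (1.0 + lam), 0.0)

for lam in [1.0, 10.0, 100.0, 1000.0]:
    print(lam, penalized_minimizer(lam))
```

Note that for every finite λ the penalized minimizer violates the constraint slightly; only the limit satisfies it exactly, which is why the abstract result needs λ_n → ∞ together with the recovery-sequence condition (A1).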

4 Abstract Uniform Convergence Result for the Deep Ritz Method

In this section we present an extension of Setting 5 that allows us to prove uniform convergence results over certain bounded families of right-hand sides.

Setting 9.

Assume we are in Setting 5. Furthermore, let there be an additional norm |·| on X such that the dual space (X, |·|)* is reflexive. However, we do not require (X, |·|) to be complete. Then, let the following assumptions hold:

(A4) The identity map id : (X, ∥·∥_X) → (X, |·|) is completely continuous, i.e., it maps weakly convergent sequences to strongly convergent ones.

(A5) For every f ∈ X_0*, there is a unique minimizer x_f of F^f and the solution map

 S : X_0* → X_0  with  f ↦ x_f

is demi-continuous, i.e., it maps strongly convergent sequences to weakly convergent ones.

Remark 10.

As mentioned earlier, X is usually a Sobolev space with its natural norm. The norm |·| may then be chosen to be an L^p norm or a Sobolev norm of differentiability order strictly smaller than that of X. In this case, Rellich's compactness theorem provides Assumption (A4).

Lemma 11 (Compactness).

Assume we are in Setting 9. Then the solution operator S is completely continuous, i.e., it maps weakly convergent sequences to strongly convergent ones.

Proof.

We begin by clarifying what we mean by S being defined on (X, |·|)*. Denote by i : (X, |·|)* → X_0* the inclusion map and consider the composition S ∘ i. Abusing notation, whenever we refer to S as defined on (X, |·|)* we mean this composition, i.e., S ∘ i. Having explained this, it is clear that it suffices to show that i maps weakly convergent sequences to strongly convergent ones, since S is demi-continuous and the identity (X, ∥·∥_X) → (X, |·|) is completely continuous. This, however, is a consequence of Schauder's theorem, see for instance Alt (1992), which states that a linear map between Banach spaces is compact if and only if its adjoint is. Here, compact means that the map sends bounded sets to relatively compact ones. Denote by Y the completion of (X, |·|). Then, using the reflexivity of (X, |·|)*, it is easily seen that the embedding (X, ∥·∥_X) → Y is compact. Finally, using that (X, |·|)* = Y*, the desired compactness of i is established. ∎

The following theorem is the main result of this section. It shows that the convergence of the Deep Ritz Method is uniform on bounded sets in the space (X, |·|)*. The proof of the uniformity follows an idea from Cherednichenko et al. (2018), where, in a different setting, a compactness result was used to amplify pointwise convergence to uniform convergence across bounded sets; compare Theorem 4.1 and Corollary 4.2 in Cherednichenko et al. (2018).

Theorem 12 (Uniform Convergence of the Deep Ritz Method).

Assume that we are in Setting 9 and let (δ_n) be a sequence of non-negative real numbers converging to zero. For f ∈ (X, |·|)* we set

 S_n(f) := {x ∈ X ∣ F^f_n(x) ≤ inf_{z∈X} F^f_n(z) + δ_n},

which is the approximate solution set corresponding to F^f_n and δ_n. Furthermore, denote the unique minimizer of F^f in X_0 by x_f and fix R > 0. Then we have

 sup{ |x^f_n − x_f| ∣ x^f_n ∈ S_n(f), ∥f∥_{(X,|·|)*} ≤ R } → 0  for n → ∞.

In the definition of this supremum, the difference x^f_n − x_f is measured in the norm |·| of the space (X, |·|). This means that f is required to be continuous with respect to |·|, which is a more restrictive requirement than continuity with respect to ∥·∥_X. Also, the computation of the dual norm takes place in the unit ball of (X, |·|), i.e.,

 ∥f∥_{(X,|·|)*} = sup_{|x| ≤ 1} f(x).

Before we prove Theorem 12 we need a Γ-convergence result similar to Theorem 7. The only difference is that now also the right-hand side may vary along the sequence.

Proposition 13.

Assume that we are in Setting 9; however, we do not need Assumption (A5) for this result. Let f_n, f ∈ (X, |·|)* be such that f_n ⇀ f in the weak topology of the reflexive space (X, |·|)*. Then the sequence of functionals (F^{f_n}_n) Γ-converges to F^f in the weak topology of X. Furthermore, the sequence (F^{f_n}_n) is equi-coercive.

Proof.

The proof is almost identical to that of Theorem 7, but since it is brief, we include it for the reader's convenience. We begin with the limes inferior inequality. Let x_n ⇀ x in X and x ∉ X_0. Then x_n → x with respect to |·|, which implies that f_n(x_n) converges to f(x). Using that γ(x_n) ⇀ γ(x) ≠ 0 in B, combined with the weak lower semicontinuity of the norm, we get

 liminf_{n→∞} F^{f_n}_n(x_n) ≥ inf_{x∈X} E(x) + liminf_{n→∞} λ_n∥γ(x_n)∥^p_B − lim_{n→∞} f_n(x_n) = ∞.

Now let x ∈ X_0. Then by the weak lower semicontinuity of E we find

 liminf_{n→∞} F^{f_n}_n(x_n) ≥ liminf_{n→∞} E(x_n) − f(x) ≥ E(x) − f(x) = F^f(x).

Now let us have a look at the construction of the recovery sequence. For x ∉ X_0 we can choose the constant sequence x_n = x and estimate

 F^{f_n}_n(x) ≥ inf_{x∈X} E(x) + λ_n∥γ(x)∥^p_B − ∥f_n∥_{(X,|·|)*} · |x|.

As (f_n) is bounded, we find F^{f_n}_n(x) → ∞ = F^f(x). If x ∈ X_0, we approximate it with a sequence (x_n) according to Assumption (A1), such that x_n ∈ A_n, x_n → x in X and λ_n∥γ(x_n)∥^p_B → 0. It follows that

 F^{f_n}_n(x_n) = E(x_n) + λ_n∥γ(x_n)∥^p_B − f_n(x_n) → E(x) − f(x) = F^f(x).

The equi-coercivity was already assumed in Assumption (A3), so it does not need to be shown. ∎

Proof of Theorem 12.

For every n ∈ ℕ we can choose f_n with ∥f_n∥_{(X,|·|)*} ≤ R and x^{f_n}_n ∈ S_n(f_n) such that

 sup{ |x^f_n − x_f| ∣ x^f_n ∈ S_n(f), ∥f∥_{(X,|·|)*} ≤ R } ≤ |x^{f_n}_n − x_{f_n}| + 1/n.

Now it suffices to show that |x^{f_n}_n − x_{f_n}| converges to zero. Since (f_n) is bounded in (X, |·|)* and this space is reflexive, we can without loss of generality assume that f_n ⇀ f in (X, |·|)*. This implies by Lemma 11 that x_{f_n} → x_f in (X, |·|). The Γ-convergence result of the previous proposition yields x^{f_n}_n ⇀ x_f in X and hence x^{f_n}_n → x_f with respect to |·|, which concludes the proof. ∎

5 Examples

We discuss different concrete examples that allow the application of our abstract results, with a focus on non-linear problems. In particular, we consider a phase field model illustrating the basic Γ-convergence result of Section 3 and the p-Laplacian as an example for the uniform results of Section 4.

5.1 A Phase Field Model

Let ε > 0 be fixed, let Ω ⊆ ℝ^d be a bounded Lipschitz domain and consider the following energy

 E : H¹(Ω) ∩ L⁴(Ω) → [0, ∞),  E(u) = (ε/2)∫_Ω |∇u|² dx + (1/ε)∫_Ω W(u) dx,

where W is a non-linear function, given by

 W(u) = ¼u²(u − 1)² = ¼u⁴ − ½u³ + ¼u².
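The double-well structure of W is easy to verify numerically (a small check of ours): the factored and expanded forms above agree, W vanishes exactly at the two pure phases u = 0 and u = 1, and the energy barrier between them sits at u = 1/2.

```python
import numpy as np

def W(u):
    return 0.25 * u**2 * (u - 1.0)**2               # factored form

def W_expanded(u):
    return 0.25 * u**4 - 0.5 * u**3 + 0.25 * u**2   # expanded form

u = np.linspace(-0.5, 1.5, 201)
print(np.max(np.abs(W(u) - W_expanded(u))))         # the two forms agree
print(W(0.0), W(1.0), W(0.5))                       # wells at 0 and 1, barrier at 1/2
```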

The functional E constitutes a way to approximately describe phase separation, and the parameter ε encodes the length-scale of the phase transition, see Cahn and Hilliard (1958). We now describe how Setting 5 applies to fully connected ReLU neural network ansatz functions. For the Banach spaces in Setting 5 we choose

 X = H¹(Ω) ∩ L⁴(Ω),  B = L²(∂Ω),  ∥·∥_X = ∥·∥_{H¹(Ω)} + ∥·∥_{L⁴(Ω)},  ∥·∥_B = ∥·∥_{L²(∂Ω)}.

These spaces are clearly reflexive, and the trace operator meets the requirements of continuity and linearity and is our choice for γ, together with p = 2. For the sets A_n we use the ReLU activation function and define

 A_n := {u_θ ∣ θ ∈ Θ_n} ⊂ H¹(Ω) ∩ L⁴(Ω),

where Θ_n encodes that we use scalar-valued neural networks with input dimension d and fixed depth. The width of all other layers is set to n. Then it holds that A_n ⊆ A_{n+1} for all n ∈ ℕ and Theorem 2 shows that Assumption (A1) is satisfied.

The continuity of E with respect to ∥·∥_X is clear, hence we turn to the weak lower semi-continuity. To this end, we write E in the following form

 E(u) = [(ε/2)∫_Ω |∇u|² dx + (1/(4ε))∫_Ω u⁴ dx] + [(1/ε)∫_Ω (¼u² − ½u³) dx] =: E₁(u) + E₂(u)

and treat E₁ and E₂ separately. The term E₁ is continuous with respect to ∥·∥_X and convex, hence weakly lower semi-continuous. To treat E₂, note that we have the compact embedding

 H1(Ω)∩L4(Ω)↪↪L3(Ω).

This implies that a sequence that converges weakly in X converges strongly in L³(Ω), and consequently shows that E₂ is continuous with respect to weak convergence in X. Finally, for fixed f ∈ X*, we need to show that the sequence (F^f_n) is equi-coercive with respect to ∥·∥_X. To this end, it suffices to show that the sequence (G^f_n), defined as in the proof of Lemma 8, is equi-coercive, as F^f_n ≥ G^f_n. Let r ∈ ℝ be fixed and consider all u with G^f_n(u) ≤ r. Then, without losing generality, we may assume λ_n ≥ 1 and estimate

 r ≥ G^f_n(u) ≥ (ε/2)∫_Ω |∇u|² dx + ∫_{∂Ω} u² ds + (1/ε)∫_Ω W(u) dx − f(u)
   ≥ c∥u∥²_{H¹(Ω)} − ∥f∥_{X*}(∥u∥_{H¹(Ω)} + ∥u∥_{L⁴(Ω)}) + (1/(4ε))∥u∥⁴_{L⁴(Ω)} − (1/(3ε))∥u∥³_{L³(Ω)}
   ≥ c∥u∥²_{H¹(Ω)} − ∥f∥_{X*}∥u∥_{H¹(Ω)} + (1/(4ε))∥u∥⁴_{L⁴(Ω)} − (|Ω|^{1/4}/(3ε))∥u∥³_{L⁴(Ω)} − ∥f∥_{X*}∥u∥_{L⁴(Ω)},

where we used Friedrich's inequality, see Proposition 1, and the estimate

 ∥u∥³_{L³(Ω)} ≤ |Ω|^{1/4}∥u∥³_{L⁴(Ω)}

due to Hölder’s inequality. This clearly implies a bound on the set

 ⋃_{n∈ℕ} {u ∈ H¹(Ω) ∩ L⁴(Ω) ∣ G^f_n(u) ≤ r}

and hence (F^f_n) is equi-coercive.

Remark 14 (Stability under Compact Perturbations).

With a similar, even simpler, approach we may also show that energies of the form

 Ê(u) = E(u) + F(u)

fall into Setting 5 provided E does and F is bounded from below and continuous with respect to weak convergence in X. Note also that in low space dimensions this includes the above example; however, the slightly more involved proof presented here works independently of the space dimension d.

Remark 15.

Figure 1 shows two exemplary numerical realizations of the Deep Ritz Method with right-hand sides

 f_i = χ_{B_{r_i}(0,−1/2)} − χ_{B_{r_i}(0,1/2)}

for two radii r_i corresponding to the left and the right picture. Note that in the case of the smaller radius, a phase transition around the ball is energetically more favorable than the configuration in the right figure, where the radius is much larger.

5.2 The p-Laplacian

As an example for the uniform convergence of the Deep Ritz Method we discuss the p-Laplacian. To this end, consider the p-Dirichlet energy for 1 < p < ∞ given by

 E : W^{1,p}(Ω) → ℝ,  u ↦ (1/p)∫_Ω |∇u|^p dx.

Note that for p ≠ 2 the associated Euler-Lagrange equation, the p-Laplace equation, is nonlinear. In strong formulation it is given by

 −div(|∇u|^{p−2}∇u) = f  in Ω,   u = 0  on ∂Ω,

see for example Struwe (1990) or Růžička (2006). Choosing the ReLU activation function, the abstract setting is applicable as we will describe now. For the Banach spaces we choose

 X = W^{1,p}(Ω),  B = L^p(∂Ω),  |·| = ∥·∥_{L^p(Ω)},

where the norms ∥·∥_X and ∥·∥_B are chosen to be the natural ones. Clearly, X endowed with the norm ∥·∥_X is reflexive by our assumption 1 < p < ∞. Note that it holds

 (W^{1,p}(Ω), ∥·∥_{L^p(Ω)})* = L^p(Ω)* ≅ L^{p′}(Ω),

which is also reflexive. We set , i.e.

 tr:W1,p(Ω) →Lp(∂Ω)withu↦u|∂Ω

We use the same ansatz sets A_n as in the previous example, hence Assumption (A1) holds. Rellich's theorem provides the complete continuity of the embedding

 (W^{1,p}(Ω), ∥·∥_{W^{1,p}(Ω)}) → (W^{1,p}(Ω), ∥·∥_{L^p(Ω)}),

which shows Assumption (A4). As for Assumption (A3), Friedrich's inequality provides the assumptions of Lemma 8. Furthermore, E is continuous with respect to ∥·∥_{W^{1,p}(Ω)} and convex, hence also weakly lower semi-continuous. By Poincaré's and Young's inequalities we find for all u ∈ W^{1,p}_0(Ω) that

 F^f(u) = (1/p)∫_Ω |∇u|^p dx − f(u) ≥ C∥u∥^p_{W^{1,p}(Ω)} − ∥f∥_{W^{1,p}(Ω)*}∥u∥_{W^{1,p}(Ω)} ≥ C∥u∥^p_{W^{1,p}(Ω)} − C̃.

Hence, a minimizing sequence in W^{1,p}_0(Ω) for F^f is bounded, and as F^f is strictly convex on W^{1,p}_0(Ω), it possesses a unique minimizer. Finally, to provide the demi-continuity, we must consider the operator mapping f to the unique minimizer of F^f on W^{1,p}_0(Ω). By the Euler-Lagrange formalism, u minimizes F^f if and only if

 ∫_Ω |∇u|^{p−2}∇u · ∇v dx = f(v)  for all v ∈ W^{1,p}_0(Ω).

Hence, the solution map is precisely the inverse of the mapping

 W^{1,p}_0(Ω) → W^{1,p}_0(Ω)*,  u ↦ (v ↦ ∫_Ω |∇u|^{p−2}∇u · ∇v dx)

and this map is demi-continuous, see for example Růžička (2006).

Remark 16.

Figure 2 shows two numerical realizations of the Deep Ritz Method for the p-Laplacian with right-hand side f ≡ 1, for two different values of p in the left and the right picture. The penalization parameter λ is set to a large value in both simulations to approximately enforce zero boundary values. Note that the exact solution to the homogeneous Dirichlet p-Laplace problem on the unit disk with constant right-hand side is given by

 u_p(x) = C · (1 − |x|^{p/(p−1)})

for a suitable constant C that depends on the spatial dimension and the value of p. We see that the solution converges pointwise to zero for p → 1, while for p → ∞ the function tends to a multiple of the distance function 1 − |x|. This asymptotic behavior is clearly visible in our simulations.
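The asymptotics of the radial profile 1 − |x|^{p/(p−1)} can be checked directly (our own numerical illustration; the constant C is ignored here):

```python
import numpy as np

def profile(p, r):
    # radial part of u_p on the unit disk, up to the constant C
    return 1.0 - r ** (p / (p - 1.0))

r = np.linspace(0.0, 1.0, 1001)
# For large p the exponent p/(p-1) tends to 1, so the profile
# approaches the distance function 1 - |x|.
print(np.max(np.abs(profile(100.0, r) - (1.0 - r))))
```

For p close to 1 the exponent p/(p−1) blows up, flattening the profile in the interior; combined with the p-dependence of C this produces the pointwise convergence to zero described above.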

Appendix A Universal approximation with zero boundary values

Here we prove the universal approximation result stated as Theorem 2 in the main text. Our proof uses that every continuous, piecewise linear function can be represented by a neural network with ReLU activation function, and then shows how to approximate Sobolev functions with zero boundary conditions by such functions. The precise definition of a piecewise linear function is the following.

Definition 17 (Continuous piecewise linear function).

We say a function f : ℝ^d → ℝ is continuous piecewise linear, or shorter, piecewise linear, if there exists a finite set of closed polyhedra whose union is ℝ^d, and f is affine linear over each polyhedron. Note that every piecewise linear function is continuous by definition, since the polyhedra are closed and cover the whole space ℝ^d, and affine functions are continuous.

Theorem 18 (Universal expression).

Every ReLU neural network function is a piecewise linear function. Conversely, every piecewise linear function can be represented by a ReLU neural network.