 # Existence of dynamical low rank approximations for random semi-linear evolutionary equations on the maximal interval

An existence result is presented for the dynamical low rank (DLR) approximation for random semi-linear evolutionary equations. The DLR solution approximates the true solution at each time instant by a linear combination of products of deterministic and stochastic basis functions, both of which evolve over time. A key to our proof is to find a suitable equivalent formulation of the original problem. The so-called Dual Dynamically Orthogonal formulation turns out to be convenient. Based on this formulation, the DLR approximation is recast as an abstract Cauchy problem in a suitable linear space, for which existence and uniqueness of the solution on the maximal interval are established.


## 1 Introduction

This paper is concerned with the existence of solutions of the so-called Dynamical Low Rank method [22, 18, 19, 8, 9] applied to a semi-linear random parabolic evolutionary equation. For a separable Hilbert space $H$ and a probability space $(\Omega,\mathcal{F},\mathbb{P})$, let $L^2(\Omega;H)$ be the Bochner space of equivalence classes of $H$-valued measurable functions on $\Omega$ with finite second moments. We consider the following equation in $L^2(\Omega;H)$:

$$\frac{\partial u}{\partial t}(t)=\Lambda u(t)+F(u(t)),\quad t>0,\quad\text{with } u(0)=u_0,\tag{1.1}$$

with a closed linear operator $\Lambda\colon D(\Lambda)\subset L^2(\Omega;H)\to L^2(\Omega;H)$ and a mapping $F\colon L^2(\Omega;H)\to L^2(\Omega;H)$, where the domain $D(\Lambda)$ is dense in $L^2(\Omega;H)$.

Our interest in this paper is a reduced-basis method for this problem called the Dynamical Low Rank (DLR) approximation [22, 18, 19, 8, 9]. The idea is to approximate the solution of (1.1) at each time $t$ as a linear combination of products of deterministic and stochastic basis functions, both of which evolve over time: the approximate solution is of the form $u_S(t)=\sum_{j=1}^S U_j(t)Y_j(t)$ for some positive integer $S$ called the rank of the solution, where $\{U_j\}_{j=1}^S$ are linearly independent in $H$, and $\{Y_j\}_{j=1}^S$ are linearly independent in the space $L^2(\Omega)$ of square-integrable random variables. We note that both bases depend on the temporal variable $t$. This dependence is intended to approximate well, with a fixed (possibly small) rank, solutions of stochastic dynamical systems such as (1.1), whose stochastic and spatial dependence may change significantly in time. Numerical examples and error analysis suggest that the method does indeed work well in a number of practical applications [22, 19].

A fundamental open question regarding this approach is the unique existence of DLR solutions. The DLR approximation is given as a solution of a system of differential equations, and available approximation results are built upon the assumption that this solution exists, e.g. [18, 8]. Nonetheless, to the best of our knowledge, the existence—let alone the uniqueness—of DLR solutions for an equation of the type (1.1) is not known. In this paper, we will establish a unique existence result.

A difficulty in proving the existence is the fact that the solution propagates on an infinite-dimensional manifold, and that the equation involves an unbounded operator. Indeed, the DLR equations are derived so that the aforementioned approximation keeps the specified form in time, with the fixed rank $S$. By now it is well known that the collection of functions of this form admits an infinite-dimensional manifold structure [7, Section 3]. Besides the unbounded operator $\Lambda$, the resulting system of equations also involves a non-linear projection operator onto the tangent space of the manifold, which makes its analysis difficult and non-standard.

Our strategy is to work with a suitable set of parameters describing the manifold, which are elements of a suitable ambient Hilbert space, and to invoke results for evolutionary equations in linear spaces. In utilising such results, the right choice of parametrisation turns out to be crucial. Our choice of parameters leads us to the so-called Dual Dynamically Orthogonal (Dual DO) formulation.

A method similar to the DLR approximation is the multi-configuration time-dependent Hartree (MCTDH) method, which has been considered in the context of computational quantum chemistry to approximate a deterministic Schrödinger equation. For the MCTDH method, several existence results have been established, e.g. [15, 2, 14]. The strategy used in these papers, first proposed by Koch and Lubich, is to impose a constraint called the gauge condition, which is defined by the differential operator in the equation. With their choice of the gauge condition and their specific setting, the differential operator appears outside the projection operator, and this was a crucial step in [15, 2, 14] to apply the standard theory of abstract Cauchy problems. However, as we will see later in Section 2.4, the same approach does not work in our setting.

As mentioned above, our strategy in this paper is to work with the Dual DO formulation, by which we are able to show that the DLR approximation exists as long as a suitable full-rank condition is satisfied. Further, we discuss the extendability of the approximation beyond the point where full rankness is lost.

The rest of this paper is organised as follows. In Section 2, we introduce the problem under study: the DLR equation and its equivalent formulation, the Dual DO equation. Section 3 introduces an equation for the parameters that is equivalent to the Dual DO equations. Then, in Section 4 we prove our main result, namely the existence and uniqueness of a DLR solution on the maximal interval. The solution evolves in a manifold up to a maximal time; it cannot be continued in this manifold, but we will show that it can be extended in the ambient space, and the resulting continuation takes values in a different manifold of lower rank. Finally, in Section 5 we draw some conclusions.

## 2 DLR formulation

In this section, we introduce the setting and recall some facts on the Dynamical Low Rank (DLR) approach that will be needed later.

We detail in Section 2.3 the precise assumptions on $\Lambda$, $F$, and the initial conditions we will work with. For the moment, we just assume that a solution of (1.1) exists. We note, however, that existence and uniqueness can be established by standard arguments. For instance, if $\Lambda$ is self-adjoint and satisfies $\langle-\Lambda v,v\rangle\geq 0$ for all $v\in D(\Lambda)\subset H$, by extending the definition of $\Lambda$ to random functions $v\in L^2(\Omega;H)$, where $\Lambda$ is applied pointwise in $\Omega$, we have that $\Lambda$ is densely defined, closed, and satisfies

$$\mathbb{E}[\langle-\Lambda v,v\rangle]\geq 0\quad\text{for all } v\in D(\Lambda)\subset L^2(\Omega;H).$$

Together with a local Lipschitz continuity of $F$, existence of solutions can be established by invoking the standard theory of semi-linear evolution equations, see for example [20, 23].

The DLR approach seeks an approximate solution of equation (1.1) defined by deterministic and random basis functions. To be more precise, we define an element $u_S\in L^2(\Omega;H)$ to be an $S$-rank random field if $u_S$ can be expressed as a linear combination of $S$ (and not fewer than $S$) linearly independent elements of $H$, and $S$ (and not fewer than $S$) linearly independent elements of $L^2(\Omega)$. Further, we let $\widehat{\mathcal{M}}_S$ be the collection of all the $S$-rank random fields:

$$\widehat{\mathcal{M}}_S:=\left\{u_S=\sum_{j=1}^S U_jY_j\;\middle|\;\begin{aligned}&\operatorname{span}_{\mathbb{R}}\{U_j\}_{j=1}^S\text{ is an }S\text{-dimensional subspace of }H\\&\operatorname{span}_{\mathbb{R}}\{Y_j\}_{j=1}^S\text{ is an }S\text{-dimensional subspace of }L^2(\Omega)\end{aligned}\right\}.$$

It is known that $\widehat{\mathcal{M}}_S$ can be equipped with a differentiable manifold structure, see [19, 7] and references therein. The idea behind the DLR approach is to approximate the curve $t\mapsto u(t)$ defined by the solution of equation (1.1) by a curve $t\mapsto u_S(t)\in\widehat{\mathcal{M}}_S$ given as a solution of the following problem: find $u_S$ such that $u_S(0)=u_{S,0}$, a suitable approximation of $u_0$ in $\widehat{\mathcal{M}}_S$, and for (almost) all $t$ we have $u_S(t)\in\widehat{\mathcal{M}}_S$ and

$$\mathbb{E}\left[\left\langle\frac{\partial u_S}{\partial t}(t)-\big(\Lambda u_S(t)+F(u_S(t))\big),\,v\right\rangle\right]=0,\quad\text{for all } v\in T_{u_S(t)}\widehat{\mathcal{M}}_S,\tag{2.1}$$

where $T_{u_S(t)}\widehat{\mathcal{M}}_S$ is the tangent space of $\widehat{\mathcal{M}}_S$ at $u_S(t)$, and $\mathbb{E}$ denotes the expectation with respect to the underlying probability measure $\mathbb{P}$.

In this paper, we search for the solution in the same set as $\widehat{\mathcal{M}}_S$ but with a different parametrisation that is easier to work with. The set

$$\mathcal{M}_S:=\left\{u_S=\sum_{j=1}^S U_jY_j\;\middle|\;\begin{aligned}&\{U_j\}_{j=1}^S\text{ is linearly independent in }H\\&\{Y_j\}_{j=1}^S\text{ is orthonormal in }L^2(\Omega)\end{aligned}\right\}\tag{2.2}$$

is the same subset of $L^2(\Omega;H)$ as $\widehat{\mathcal{M}}_S$, and thus the above problem is equivalent when we seek solutions in $\mathcal{M}_S$ instead of $\widehat{\mathcal{M}}_S$. This leads us to the so-called Dual Dynamically Orthogonal (DO) formulation of the problem (2.1).

For $u_S\in\mathcal{M}_S$, we define the operator $P_{u_S}\colon L^2(\Omega;H)\to L^2(\Omega;H)$ by

$$P_{u_S}:=P_U+P_Y-P_UP_Y,$$

where, for an arbitrary $H$-orthonormal basis $\{\phi_j\}_{j=1}^S$ of $\operatorname{span}\{U_j\}_{j=1}^S$, the operator $P_U$ is defined by

$$P_Uf=\sum_{j=1}^S\langle f,\phi_j\rangle\,\phi_j\quad\text{for } f\in L^2(\Omega;H),$$

and for an arbitrary $L^2(\Omega)$-orthonormal basis $\{\psi_j\}_{j=1}^S$ of $\operatorname{span}\{Y_j\}_{j=1}^S$, the operator $P_Y$ is defined by

$$P_Yf=\sum_{j=1}^S\mathbb{E}[f\psi_j]\,\psi_j\quad\text{for } f\in L^2(\Omega;H).\tag{2.3}$$

This operator turns out to be the $L^2(\Omega;H)$-orthogonal projection onto the tangent space $T_{u_S}\mathcal{M}_S$ at $u_S$, see [18, Proposition 3.3]. We note that the operator $P_{u_S}$ is independent of the choice of the representation of $u_S$: for any full-rank matrix $\Theta\in\mathbb{R}^{S\times S}$, replacing $(U,Y)$ by $(\Theta U,\Theta^{-\top}Y)$ leaves $u_S$ unchanged, but also $P_U=P_{\Theta U}$ and $P_Y=P_{\Theta^{-\top}Y}$.
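In a discretised picture, the operator $P_{u_S}=P_U+P_Y-P_UP_Y$ can be checked to be a projection. The sketch below (Euclidean inner products standing in for $\langle\cdot,\cdot\rangle_H$ and the expectation; all names and sizes are illustrative assumptions) verifies the idempotency $P^2=P$, which holds because $P_U$ and $P_Y$ commute:

```python
import numpy as np

rng = np.random.default_rng(1)
n, N, S = 40, 300, 3

# Orthonormal bases: phi_j for span{U_j} in H ~ R^n, psi_j for span{Y_j} in L^2(Omega) ~ R^N.
phi, _ = np.linalg.qr(rng.standard_normal((n, S)))   # n x S, orthonormal columns
psi, _ = np.linalg.qr(rng.standard_normal((N, S)))   # N x S, orthonormal columns

def P_U(f):   # P_U f = sum_j <f, phi_j> phi_j, acting on each sample (column)
    return phi @ (phi.T @ f)

def P_Y(f):   # P_Y f = sum_j E[f psi_j] psi_j, acting on each spatial row
    return (f @ psi) @ psi.T

def P(f):     # P_{u_S} = P_U + P_Y - P_U P_Y
    return P_U(f) + P_Y(f) - P_U(P_Y(f))

f = rng.standard_normal((n, N))
# P is a projection: applying it twice changes nothing.
print(np.allclose(P(P(f)), P(f)))  # -> True
```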

Using the above definitions, the problem we consider, equivalent to (2.1), can be formulated as follows:

###### Problem.

Find $u_S\colon[0,T]\to\mathcal{M}_S$ such that $u_S(0)=u_{S,0}\in\mathcal{M}_S$ and for $t\in(0,T]$ we have

$$\frac{\partial u_S}{\partial t}(t)=P_{u_S(t)}\big(\Lambda u_S(t)+F(u_S(t))\big).\tag{2.4}$$

In this paper, we consider two notions of solutions of this problem: the strong and classical solution.

###### Definition 2.1.

A function $u_S\colon[0,T]\to L^2(\Omega;H)$ is called a strong solution of the initial value problem (2.4) on $[0,T]$ if $u_S(0)=u_{S,0}$, $u_S$ is absolutely continuous on $[0,T]$, and (2.4) is satisfied a.e. on $(0,T)$. Further, we call $u_S$ a strong solution on $[0,T^*)$ if it is a strong solution on any subinterval $[0,T]\subset[0,T^*)$.

In practice, further regularity of $u_S$ may be of interest.

###### Definition 2.2.

A function $u_S\colon[0,T]\to L^2(\Omega;H)$ is called a classical solution of (2.4) on $[0,T]$ if $u_S(0)=u_{S,0}$, $u_S$ is absolutely continuous on $[0,T]$ and continuously differentiable on $(0,T)$, $u_S(t)\in D(\Lambda)$ for $t\in(0,T)$, and (2.4) is satisfied on $(0,T)$. Further, we call $u_S$ a classical solution on $[0,T^*)$ when it is a classical solution on any subinterval $[0,T]\subset[0,T^*)$.

### 2.1 Dual DO formulation

Our aim is to establish the unique existence of a solution to problem (2.4). A difficulty is that $u_S$ propagates on a non-linear manifold $\mathcal{M}_S$. Our strategy is to choose a suitable parametrisation of $\mathcal{M}_S$, and work in a linear space to which the parameters belong. For the parametrisation, we will choose the one which results in a formulation of (2.4) called Dual DO, where we seek an approximate solution of the form $u_S(t)=U(t)^\top Y(t)=\sum_{j=1}^S U_j(t)Y_j(t)$ for any $t$. Here, the parameter $(U,Y)$ is a solution to the following problem:

1. the components of $U(t)=(U_1(t),\dots,U_S(t))^\top$ are linearly independent in $H$ for any $t\in[0,T]$;

2. the components of $Y(t)=(Y_1(t),\dots,Y_S(t))^\top$ are orthonormal in $L^2(\Omega)$, and satisfy the so-called gauge condition: for any $t\in[0,T]$,

$$\mathbb{E}\Big[\frac{\partial Y_j}{\partial t}Y_k\Big]=0\quad\text{for } j,k=1,\dots,S,\quad\text{equivalently,}\quad\mathbb{E}\Big[\frac{\partial Y}{\partial t}Y^\top\Big]=0\in\mathbb{R}^{S\times S};$$

3. $(U,Y)$ satisfies the equation

$$\begin{cases}\dfrac{\partial}{\partial t}U=\mathbb{E}[\mathcal{L}(u_S)Y]\\[1ex]\dfrac{\partial}{\partial t}Y=(I-P_Y)\big[\langle\mathcal{L}(u_S),Z_U^{-1}U\rangle\big],\end{cases}\tag{2.5}$$

where $\mathcal{L}(u_S):=\Lambda u_S+F(u_S)$, $P_Y$ is as in (2.3), and $Z_U\in\mathbb{R}^{S\times S}$ is the Gram matrix defined by $(Z_U)_{jk}:=\langle U_j,U_k\rangle$;

4. $(U,Y)$ satisfies the initial condition $(U(0),Y(0))=(U_0,Y_0)$ for some $(U_0,Y_0)$ such that $U_0^\top Y_0=u_{S,0}$.

Noting that, since the operator $\Lambda$ is deterministic and linear,

$$P_Y\big(\langle\Lambda(u_S),Z_U^{-1}U\rangle\big)=P_Y\big(\langle\Lambda(U^\top Y),Z_U^{-1}U\rangle\big)=\langle\Lambda(u_S),Z_U^{-1}U\rangle$$

and $\mathbb{E}[\Lambda(u_S)Y]=\Lambda(U)$, the equation (2.5) reads

$$\begin{cases}\dfrac{\partial}{\partial t}U=\Lambda(U)+\mathbb{E}[F(U^\top Y)Y]=:\Lambda(U)+G_1(Y)(U)\\[1ex]\dfrac{\partial}{\partial t}Y=(I-P_Y)\big(\langle F(U^\top Y),Z_U^{-1}U\rangle\big)=:G_2(U)(Y).\end{cases}\tag{2.6}$$

We define two notions of solutions to the initial value problem of (2.6) that correspond to those of the original problem as in Definitions 2.1–2.2.

###### Definition 2.3.

A function $(U,Y)\colon[0,T]\to H^S\times L^2(\Omega)^S$ is called a Dual DO solution of the problem (2.4) on $[0,T]$ in the strong sense if $(U,Y)$ satisfies the following conditions:

1. $(U(0),Y(0))=(U_0,Y_0)$ for some $(U_0,Y_0)$ such that $U_0^\top Y_0=u_{S,0}$;

2. $(U,Y)$ satisfies the equation (2.6) a.e. on $(0,T)$;

3. the curve $t\mapsto U(t)$ is absolutely continuous on $[0,T]$;

4. the curve $t\mapsto Y(t)$ is absolutely continuous on $[0,T]$;

5. $\{U_j(t)\}_{j=1}^S$ is linearly independent in $H$ for almost every $t\in[0,T]$; and

6. $\{Y_j(t)\}_{j=1}^S$ is orthonormal in $L^2(\Omega)$ for almost every $t\in[0,T]$.

Notice, in particular, that condition 5 above implies that the matrix $Z_{U(t)}$ is invertible for almost every $t\in[0,T]$. Further, from (2.6) we necessarily have

$$\mathbb{E}\Big[\Big(\frac{\partial}{\partial t}Y\Big)Y^\top\Big]=\mathbb{E}\big[\big((I-P_Y)\langle F(U^\top Y),Z_U^{-1}U\rangle\big)Y^\top\big]=0.\tag{2.7}$$
###### Definition 2.4.

A function $(U,Y)\colon[0,T]\to H^S\times L^2(\Omega)^S$ is called a Dual DO solution of the problem (2.4) on $[0,T]$ in the classical sense if $(U,Y)$ satisfies the following conditions:

1. $(U(0),Y(0))=(U_0,Y_0)$ for some $(U_0,Y_0)$ such that $U_0^\top Y_0=u_{S,0}$;

2. $(U,Y)$ satisfies the equation (2.6) on $(0,T)$;

3. the curve $t\mapsto U(t)$ is absolutely continuous on $[0,T]$ and continuously differentiable on $(0,T)$;

4. the curve $t\mapsto Y(t)$ is absolutely continuous on $[0,T]$ and continuously differentiable on $(0,T)$;

5. for any $t\in(0,T)$, $U_j(t)\in D(\Lambda)$, $j=1,\dots,S$;

6. $\{U_j(t)\}_{j=1}^S$ is linearly independent in $H$ for any $t\in[0,T]$;

7. $\{Y_j(t)\}_{j=1}^S$ is orthonormal in $L^2(\Omega)$ for any $t\in[0,T]$.

###### Definition 2.5.

If $(U,Y)$ is a Dual DO solution on all subintervals $[0,T]\subset[0,T^*)$ in the strong (resp. classical) sense, then we call $(U,Y)$ a Dual DO solution on $[0,T^*)$ in the strong (resp. classical) sense.

As we will see in the next section, establishing the unique existence of the Dual DO solution is equivalent to establishing the unique existence of solutions to the original equation (2.4). Thus, for the rest of this paper we will work with the Dual DO formulation.

### 2.2 Equivalence with the original formulation

In this section, we establish the equivalence of the original equation (2.4) and the Dual DO formulation as in Definitions 2.3–2.4. Our first step is to show that if a solution $u_S$ of the original equation (2.4) is given, then there exists a unique solution $(U,Y)$ of (2.6) that is also the unique Dual DO solution of (2.4) such that $u_S=U^\top Y$, see Lemma 2.10.

We will need a proposition which states that if $t\mapsto u_S(t)\in\mathcal{M}_S$ is differentiable, then there exists a differentiable parametrisation. This result is a generalisation of the existence of smooth singular value decompositions of matrix-valued curves considered, for example, in [6, 3]. We start with the following lemma, which shows the existence of the singular value decomposition for elements in $\mathcal{M}_S$.

###### Lemma 2.6.

Let $u_S\in\mathcal{M}_S$ be given. Then, with some $\{\tilde V_j\}_{j=1}^S$ and $\{W_j\}_{j=1}^S$ orthonormal in $H$ and $L^2(\Omega)$, respectively, and $\sigma_j>0$, $j=1,\dots,S$, we have

$$u_S=\sum_{j=1}^S\sigma_j\tilde V_jW_j.$$

Moreover, such $\{\sigma_j\}_{j=1}^S$ is unique: for any other representation $u_S=\sum_{j=1}^S\tilde\sigma_j\tilde V_j'W_j'$ with $\{\tilde V_j'\}_{j=1}^S$ and $\{W_j'\}_{j=1}^S$ orthonormal, upon relabelling if necessary, we have $\tilde\sigma_j=\sigma_j$, $j=1,\dots,S$. Furthermore, if $t\mapsto u_S(t)\in\mathcal{M}_S$ is continuous on $[0,T]$, then the corresponding singular values satisfy

$$0<\inf_{t\in[0,T]}\min_{1\leq j\leq S}\sigma_j(t)\leq\sup_{t\in[0,T]}\max_{1\leq j\leq S}\sigma_j(t)<\infty.\tag{2.8}$$
###### Proof.

The linear operator $K=K(u_S)\colon L^2(\Omega)\to H$ defined by $Kw:=\mathbb{E}[u_Sw]$ is a finite-rank operator with rank $S$, with the image being independent of the representation of $u_S$. Thus, with some $\{\tilde V_j\}_{j=1}^S$ and $\{W_j\}_{j=1}^S$ orthonormal in $H$ and $L^2(\Omega)$, respectively, $K$ admits the canonical decomposition

$$Kw=\sum_{j=1}^S\sigma_j\mathbb{E}[wW_j]\tilde V_j,$$

with singular values $\sigma_j>0$, $j=1,\dots,S$, see e.g. [12, Sections III.4.3 and V.2.3]. If we have another representation $u_S=\sum_{j=1}^S\tilde\sigma_j\tilde V_j'W_j'$, then upon relabelling if necessary, we must have $\tilde\sigma_j=\sigma_j$. To see this, first note that the adjoint operator $K^*\colon H\to L^2(\Omega)$ is a finite-rank operator with rank $S$. The operator $K^*K$ is also of rank $S$ and admits the spectral decomposition

$$K^*Kw=\sum_{j=1}^S\sigma_j^2\mathbb{E}[wW_j]W_j,$$

with eigenvalues $\sigma_j^2$ and the corresponding eigenfunctions $W_j$. Similarly, if we have a representation $u_S=\sum_{j=1}^S\tilde\sigma_j\tilde V_j'W_j'$, then $\{W_j'\}_{j=1}^S$ are also eigenfunctions of $K^*K$ corresponding to the eigenvalues $\tilde\sigma_j^2$. Thus, for the image of $K^*K$ to be $S$-dimensional, we must have $\tilde\sigma_j=\sigma_j$ upon relabelling, and moreover each eigenvalue must have the same (geometric) multiplicity.

To show (2.8), relabel $\{\sigma_j(t)\}_{j=1}^S$ in non-decreasing order and denote the resulting values by $\{\alpha_j(t)\}_{j=1}^S$. Then, for any $t\in[0,T]$ and $h$ such that $t+h\in[0,T]$ we have

$$|\alpha_j(t+h)-\alpha_j(t)|\leq\|K(u_S(t+h))-K(u_S(t))\|_{L^2(\Omega)\to H}\quad\text{for } j=1,\dots,S,$$

see for example [21, Proposition II.7.6 and Theorem IV.2.2]. But for any $w\in L^2(\Omega)$ we have

$$\|K(u_S(t+h))w-K(u_S(t))w\|_H\leq\big(\mathbb{E}[\|u_S(t+h)-u_S(t)\|_H^2]\big)^{1/2}\|w\|_{L^2(\Omega)},$$

and thus the continuity of $t\mapsto u_S(t)$ implies that each $\alpha_j$ is continuous on $[0,T]$. Now, since $u_S(t)$ is of rank $S$, we have $\alpha_1(t)>0$ for any $t\in[0,T]$. Hence, for any $j=1,\dots,S$ we have

$$\inf_{t\in[0,T]}\sigma_j(t)\geq\min_{t\in[0,T]}\alpha_1(t)>0.$$

Similarly, we have $\sup_{t\in[0,T]}\max_{1\leq j\leq S}\sigma_j(t)<\infty$, which completes the proof. ∎
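The perturbation estimate used in the proof has a familiar finite-dimensional analogue: singular values are $1$-Lipschitz with respect to the operator norm (Weyl's inequality). A quick numpy check, with an illustrative random matrix standing in for the discretised operator $K$:

```python
import numpy as np

rng = np.random.default_rng(3)

# Finite-dimensional analogue of |alpha_j(t+h) - alpha_j(t)|
# <= ||K(u_S(t+h)) - K(u_S(t))||: singular values are 1-Lipschitz
# in the operator norm (Weyl's perturbation inequality).
A = rng.standard_normal((40, 60))
E = 1e-3 * rng.standard_normal((40, 60))   # small perturbation, K(t+h) - K(t)

sA = np.linalg.svd(A, compute_uv=False)
sB = np.linalg.svd(A + E, compute_uv=False)

gap = np.max(np.abs(sB - sA))
bound = np.linalg.norm(E, 2)               # operator (spectral) norm of E
print(gap <= bound + 1e-12)  # -> True
```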

The singular value decomposition above can be made smooth.

###### Proposition 2.7.

Suppose that $t\mapsto u_S(t)\in\mathcal{M}_S$, $t\in[0,T]$, is absolutely continuous. Then, there exist $\sigma_j\colon[0,T]\to(0,\infty)$, $\tilde V_j\colon[0,T]\to H$, and $W_j\colon[0,T]\to L^2(\Omega)$, $j=1,\dots,S$, such that

$$u_S(t)=\sum_{j=1}^S\sigma_j(t)\tilde V_j(t)W_j(t)\quad\text{for all } t\in[0,T];$$

$\{\tilde V_j(t)\}_{j=1}^S$ and $\{W_j(t)\}_{j=1}^S$ are orthonormal in $H$ and in $L^2(\Omega)$, respectively; and the curves $t\mapsto\sigma_j(t)$, $t\mapsto\tilde V_j(t)$, and $t\mapsto W_j(t)$ are absolutely continuous on $[0,T]$. Moreover, if $t\mapsto u_S(t)$ is continuously differentiable on $[0,T]$, then $\sigma_j$, $\tilde V_j$, and $W_j$ are continuously differentiable on $[0,T]$. In particular, $u_S$ admits a representation $u_S=V^\top W$ in $\mathcal{M}_S$ with $V_j:=\sigma_j\tilde V_j$, with the specified smoothness.

To show Proposition 2.7, we will use an argument similar to what we will see in Section 4 below. Thus, we will defer the proof to Section 4.

The parametrisation of $u_S\in\mathcal{M}_S$ is determined by the parameters $(V,W)$ up to a unique orthogonal matrix.

###### Lemma 2.8.

Let $u_S\in\mathcal{M}_S$ be given. Suppose that $u_S$ admits two representations $u_S=V^\top W=\tilde V^\top\tilde W$ with some $(V,W)$, $(\tilde V,\tilde W)$ satisfying the linear independence and orthonormality conditions as in (2.2). Then, we have

$$(\tilde V,\tilde W)=(\Theta^\top V,\Theta^\top W),$$

for a unique $\Theta\in O(S)$.

###### Proof.

From $V^\top W=\tilde V^\top\tilde W$, we have

$$\tilde W=\big(\langle\tilde V,\tilde V^\top\rangle\big)^{-1}\langle\tilde V,V^\top\rangle W=:\Theta^\top W,$$

so that $V^\top W=\tilde V^\top\Theta^\top W$. From the $L^2(\Omega)$-orthonormality of $W$ and $\tilde W$, taking the expectation of both sides of $\tilde W\tilde W^\top=\Theta^\top WW^\top\Theta$ we conclude that $\Theta$ is an orthogonal matrix, and in turn $\tilde V=\Theta^\top V$. To see the uniqueness, suppose

$$\tilde W=\tilde\Theta^\top W\quad\text{for some }\tilde\Theta\in\mathbb{R}^{S\times S}.$$

But from $\tilde\Theta^\top W=\Theta^\top W$ and the linear independence of the components of $W$, we must have $\tilde\Theta=\Theta$. ∎
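The uniqueness argument of Lemma 2.8 can be replayed numerically: given two discretised representations of the same field, the orthogonal matrix relating them is recovered from the orthonormal stochastic bases via $\Theta^\top=\mathbb{E}[\tilde WW^\top]$. A sketch, where the sizes and Monte Carlo weighting are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
n, N, S = 20, 400, 3

# Two representations u_S = V^T W = (Theta^T V)^T (Theta^T W): the orthogonal
# matrix Theta relating them is unique and recoverable from the stochastic bases.
V = rng.standard_normal((S, n))                      # rows V_j, linearly independent
Q, _ = np.linalg.qr(rng.standard_normal((N, S)))
W = np.sqrt(N) * Q.T                                 # rows W_j with E[W W^T] = W @ W.T / N = I

Theta = np.linalg.qr(rng.standard_normal((S, S)))[0] # a random orthogonal matrix
V2, W2 = Theta.T @ V, Theta.T @ W                    # the second representation

# Both parameter pairs represent the same field:
assert np.allclose(V.T @ W, V2.T @ W2)

# Recover Theta: E[W2 W^T] = Theta^T E[W W^T] = Theta^T.
Theta_rec = (W2 @ W.T / N).T
print(np.allclose(Theta_rec, Theta))  # -> True
```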

The above lemma implies the following corollary, which states that if both a solution $u_S$ of the original problem (2.4) and a Dual DO solution of (2.4) exist, and if further the solution of the original problem is unique, then the Dual DO solution $(U,Y)$ is determined by any representation of $u_S$ up to a unique orthogonal matrix. We stress that the following corollary does not guarantee the uniqueness of the Dual DO solution.

###### Corollary 2.9.

Suppose that the equation (2.4) has a unique strong solution $u_S(t)$, $t\in[0,T]$. Let $(V,W)$ be any representation of $u_S$, namely $u_S=V^\top W$, satisfying the linear independence and orthonormality conditions defined in (2.2). Furthermore, suppose that a Dual DO solution $(U,Y)$ of (2.4) exists in the strong sense. Then, we have

$$(U(t),Y(t))=(\Theta(t)^\top V(t),\Theta(t)^\top W(t)),\tag{2.9}$$

for a unique $\Theta(t)\in O(S)$. In words, if a Dual DO solution exists, then it must be of the form (2.9) with an arbitrarily chosen representation $(V,W)$ of $u_S$ and the corresponding unique orthogonal matrix $\Theta(t)$.

###### Proof.

We first show that the function $\hat u_S:=U^\top Y$ satisfies the original equation (2.4). Since $(U,Y)$ is a Dual DO solution in the strong sense, from (2.6) a.e. on $(0,T)$ we have

$$\begin{aligned}\frac{d}{dt}\hat u_S&=\Big(\frac{d}{dt}U\Big)^\top Y+U^\top\frac{d}{dt}Y\\&=\Lambda(\hat u_S)+Y^\top\mathbb{E}[F(\hat u_S)Y]+(I-P_Y)\big(U^\top Z_U^{-1}\langle F(\hat u_S),U\rangle\big)\\&=\Lambda(\hat u_S)+P_Y(F(\hat u_S))+(I-P_Y)P_U(F(\hat u_S))\in L^2(\Omega;H).\end{aligned}$$

Now, notice that $\Lambda(\hat u_S)=\Lambda(U)^\top Y$ and thus $P_Y\Lambda(\hat u_S)=\Lambda(\hat u_S)$. Together with $P_UP_Y=P_YP_U$, we obtain

$$\frac{d}{dt}\hat u_S=\big(P_Y+(P_U-P_UP_Y)\big)\Lambda(\hat u_S)+\big(P_Y+P_U-P_UP_Y\big)F(\hat u_S),$$

which is (2.4).

Then, from the uniqueness of the solution of the original problem we have $\hat u_S=u_S$. Thus, Lemma 2.8 implies (2.9), as claimed. ∎

In the above corollary, we assumed the existence of both the solution of the original problem and of the Dual DO formulation, and deduced the existence of a unique orthogonal matrix. The following lemma shows that such an orthogonal matrix exists, which shows that the unique existence of the solution of the original problem (2.4) implies that of the Dual DO formulation as in Definitions 2.3–2.4. The proof is inspired by [13, Proof of Proposition II.3.1]. We will use the following lemma to show the equivalence of the original problem (2.4) and the Dual DO formulation (2.6), see Proposition 2.11 below.

###### Lemma 2.10.

Suppose that $u_S\colon[0,T]\to\mathcal{M}_S$ is absolutely continuous, $u_S(0)=u_{S,0}$, and satisfies the equation (2.4) a.e. on $(0,T)$. Let $(U_0,Y_0)$ be such that $u_{S,0}=U_0^\top Y_0$. Then, there exists a Dual DO solution $(U,Y)$ in the strong sense with the initial condition $(U(0),Y(0))=(U_0,Y_0)$. Further, $(U,Y)$ is the unique Dual DO solution such that $U(t)^\top Y(t)=u_S(t)$ for all $t\in[0,T]$.

###### Proof.

From Proposition 2.7, there exists a curve $(\tilde V,\tilde W)$ such that $u_S(t)=\tilde V(t)^\top\tilde W(t)$ for all $t\in[0,T]$; $\{\tilde V_j(t)\}_{j=1}^S$ is linearly independent in $H$; $\{\tilde W_j(t)\}_{j=1}^S$ is orthonormal in $L^2(\Omega)$; and $t\mapsto\tilde V(t)$ and $t\mapsto\tilde W(t)$ are absolutely continuous on $[0,T]$. In general, $\tilde V(0)\neq U_0$ and $\tilde W(0)\neq Y_0$, but from Lemma 2.8, one can find a unique orthogonal matrix $\Xi\in O(S)$ such that

$$\Xi\tilde V(0)=U_0\quad\text{and}\quad\Xi\tilde W(0)=Y_0.$$

Now, let $V:=\Xi\tilde V$ and $W:=\Xi\tilde W$, so that $u_S=V^\top W$ with $(V(0),W(0))=(U_0,Y_0)$. Notice that $V$ and $W$ are absolutely continuous. From Corollary 2.9, if the Dual DO solution exists then we necessarily have

$$(U(t),Y(t))=(\Theta(t)^\top V(t),\Theta(t)^\top W(t)),\quad\text{for some unique }\Theta(t)\in O(S).\tag{2.10}$$

We show that such $\Theta$, i.e. an orthogonal matrix $\Theta(t)$ for which $(U,Y)$ defined by (2.10) is a Dual DO solution, uniquely exists. Note that again from Corollary 2.9, it suffices to consider an arbitrarily fixed representation $(V,W)$.

We will obtain $\Theta$ as a solution of an ordinary differential equation which we now derive. If $(U,Y)$ is a Dual DO solution, then the equality (2.10) implies

$$(\dot U(t),\dot Y(t))=\Big(\frac{d}{dt}\big(\Theta(t)^\top V(t)\big),\,\dot\Theta(t)^\top W(t)+\Theta(t)^\top\dot W(t)\Big),$$

and from (2.7) we must have

$$\begin{aligned}0=\mathbb{E}[Y(t)\dot Y(t)^\top]&=\Theta(t)^\top\mathbb{E}[W(t)W(t)^\top]\dot\Theta(t)+\Theta(t)^\top\mathbb{E}[W(t)\dot W(t)^\top]\Theta(t)\\&=\Theta(t)^\top\big(\dot\Theta(t)+\mathbb{E}[W(t)\dot W(t)^\top]\Theta(t)\big),\end{aligned}$$

where in the last line we used $\mathbb{E}[W(t)W(t)^\top]=I$.