# Continuous time integration for changing type systems

We consider variational time integration using continuous Galerkin-Petrov methods applied to evolutionary systems of changing type. We prove optimal-order convergence for the projected error and close with numerical examples and conclusions.


## 1 Introduction

Most of the classical linear partial differential equations arising in mathematical physics can be written in a common operator form. It has been shown in [4] that this form is an evolutionary problem, given by

 (∂tM0+M1+A)U=F, (1.1)

where ∂t stands for the derivative with respect to time, M0 and M1 are bounded linear operators on some Hilbert space H, A is an unbounded skew-selfadjoint operator on H and F is a given source term.

We are interested in a unique solution of the above equation. The theory presented in [4] deals with vanishing initial conditions at t=0, but any non-zero initial condition can be incorporated as data into F. For this purpose let ρ>0 and define the weighted L2-function space

 Hρ(R;H):={f:R→H:f meas.,∫R∥f(t)∥2Hexp(−2ρt)dt<∞}.
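To make the role of the exponential weight concrete, the following small numerical illustration (an addition, not part of the analysis) takes H = R and checks that even an exponentially growing function lies in this space once ρ exceeds its growth rate; the function is taken to vanish on the negative time line:

```python
import numpy as np

def weighted_norm_sq(f, rho, t_max=40.0, n=200_000):
    """Trapezoidal approximation of the weighted norm
    int_0^infty |f(t)|^2 exp(-2 rho t) dt
    for scalar-valued f supported on [0, infty) (H = R)."""
    t = np.linspace(0.0, t_max, n + 1)
    y = f(t) ** 2 * np.exp(-2.0 * rho * t)
    h = t_max / n
    return h * (y.sum() - 0.5 * (y[0] + y[-1]))

# f(t) = e^t grows exponentially, but lies in H_rho(R; R) for any rho > 1:
a, rho = 1.0, 2.0
approx = weighted_norm_sq(lambda t: np.exp(a * t), rho)
exact = 1.0 / (2.0 * (rho - a))   # closed form of int_0^infty e^{2(a-rho)t} dt
print(approx, exact)              # both approximately 0.5
```

For f(t)=exp(at) supported on [0,∞) the weighted norm equals 1/(2(ρ−a)) whenever ρ>a, which the quadrature reproduces.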

The space Hρ(R;H) is a Hilbert space endowed with the natural inner product given by

 ⟨f,g⟩ρ:=∫R⟨f(t),g(t)⟩Hexp(−2ρt)dt

for all f,g∈Hρ(R;H), where ⟨⋅,⋅⟩H is the inner product of H. The associated weighted Sobolev spaces are denoted by Hkρ(R;H) for k∈N. Now the solution theory of [4] states: If there exist a ρ0>0 and a γ>0 such that

 ρM0+M1≥γ for all ρ≥ρ0,

then for all right-hand sides F∈Hρ(R;H) there exists a unique solution U∈Hρ(R;H).
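As an illustration of this positivity condition, consider pointwise 2×2 blocks of the kind that arise in mixed-type problems; the matrices used here are an assumption for this sketch (H = R² pointwise), with M0 carrying the time derivative and M1 the zeroth-order term:

```python
import numpy as np

# Illustrative pointwise 2x2 blocks (an assumption for this sketch):
# hyperbolic point: M0 = I,          M1 = 0
# parabolic point:  M0 = diag(1,0),  M1 = diag(0,1)
# elliptic point:   M0 = 0,          M1 = I
blocks = {
    "hyperbolic": (np.eye(2), np.zeros((2, 2))),
    "parabolic": (np.diag([1.0, 0.0]), np.diag([0.0, 1.0])),
    "elliptic": (np.zeros((2, 2)), np.eye(2)),
}

rho0, gamma = 1.0, 1.0  # candidate constants
for name, (M0, M1) in blocks.items():
    for rho in (rho0, 2.0, 10.0):
        # smallest eigenvalue of the symmetric matrix rho*M0 + M1
        lam_min = np.linalg.eigvalsh(rho * M0 + M1).min()
        assert lam_min >= gamma - 1e-12, (name, rho, lam_min)
print("rho*M0 + M1 >= gamma holds for all sampled rho >= rho0")
```

At a parabolic point the minimal eigenvalue of ρM0+M1 is min{ρ,1}, so γ=min{ρ0,1} works for all ρ≥ρ0.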

In [3] this class of problems was investigated numerically using a discontinuous Galerkin approach for the discretisation in time. Here we want to apply a continuous approach, namely the continuous Galerkin-Petrov method [7, 2, 5, 1]. As in [3] we will consider a special subclass of the general class of evolutionary problems, although the approach to the time-discretisation is suitable in the general case.

For the precise class of problems let n∈N be the spatial dimension and let Ω⊂Rn be partitioned into measurable, disjoint sets Ωell, Ωpar and Ωhyp. Denoting by χS the characteristic function of a domain S, we define the linear operators

 M0=diag(χΩpar∪Ωhyp,χΩhyp),M1=diag(χΩell,χΩell∪Ωpar),A(u,w)=(div w,grad u).

Here H10(Ω) incorporates the homogeneous Dirichlet boundary conditions. The Hilbert space can now be specified to

 H(Ω)=H10(Ω)⊗Hdiv(Ω).

The resulting evolutionary problem is now of mixed type. More precisely, on Ωell we get an equation of elliptic type, on Ωpar the equation becomes parabolic, while on Ωhyp the problem is hyperbolic.

The solution theory of [4] does not only give, for a final time T>0 and F∈Hρ(R;H), the existence of a unique solution U∈Hρ(R;H). In addition M0U possesses a weak time derivative and therefore, by the Sobolev embedding theorem, M0U is continuous in time. If the right-hand side is smoother, say F∈H1ρ(R;H), and F(0) and U0 are in the range of M0, then the continuity in time of the solution holds everywhere and we have, see [6],

 U∈H1ρ([0,T],H)⊂C([0,T],H).

In the following we want to utilise this additional regularity. Section 2 gives the precise formulation of the method considered. Section 3 then deals with the existence of discrete solutions, Section 4 presents error estimates and finally Section 5 gives some numerical examples and conclusions.

## 2 Numerical method

Let us start with a semi-discretisation in time. The semi-discrete variational form of (1.1) uses a decomposition 0=t0<t1<⋯<tM=T of [0,T] into disjoint intervals Im:=(tm−1,tm] of length τm:=tm−tm−1 for m∈{1,…,M} and the piecewise polynomial-in-time spaces

 Uτ :={u∈H1ρ(R,H):u∣∣Im∈Pr(Im,H),m∈{1,…,M}}, Vτ :={v∈Hρ(R,H):v∣∣Im∈Pr−1(Im,H),m∈{1,…,M}}

for the trial and the test functions, where Pr(Im,H) is the space of polynomials of degree up to r on Im with values in H. Let us localise the scalar product in Hρ(R,H) to the time intervals by

 ⟨f,g⟩ρ,m:=∫Im⟨f(t),g(t)⟩exp(−2ρt)dt.

Then the variational formulation using the continuous Galerkin-Petrov method reads:
Find Uτ∈Uτ such that for all Vτ∈Vτ and all m∈{1,…,M}

 ⟨(∂tM0+M1+A)Uτ,Vτ⟩ρ,m =⟨f,Vτ⟩ρ,m=⟨Πr−1f,Vτ⟩ρ,m, (2.1) Uτ(t+m−1) =Uτ(t−m−1),

where Πr−1 denotes the weighted L2-projection into the test space Vτ and Uτ(t−0)=U0 is the initial value.
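To see the method in the simplest possible setting, the following sketch (an illustration under the assumptions H = R, M0 = M1 = 1, A = 0 and ρ = 0) implements the lowest-order case r = 1: the trial functions are continuous piecewise linears, the test functions piecewise constants, and on each interval the scheme reduces to a midpoint-type rule:

```python
import numpy as np

def cgp1(f, u0, T, M):
    """cG(1) Petrov-Galerkin time stepping for the scalar model
    u' + u = f, u(0) = u0, on [0, T] with M uniform steps:
    trial space: continuous piecewise linears, test space: piecewise
    constants, rho = 0, midpoint quadrature for the right-hand side."""
    tau = T / M
    u = u0
    for m in range(M):
        tm = (m + 0.5) * tau  # interval midpoint
        # testing with v = 1 on I_m gives
        # (u_m - u_{m-1}) + tau*(u_m + u_{m-1})/2 = tau * f(t_{m-1/2})
        u = ((1.0 - 0.5 * tau) * u + tau * f(tm)) / (1.0 + 0.5 * tau)
    return u

# convergence check against the exact solution u(t) = exp(-t) (f = 0)
errs = [abs(cgp1(lambda t: 0.0, 1.0, 1.0, M) - np.exp(-1.0)) for M in (16, 32, 64)]
rates = [np.log2(errs[i] / errs[i + 1]) for i in range(2)]
print(errs, rates)  # rates close to 2
```

The observed rates are close to 2, matching the expected order r + 1 = 2 for smooth solutions.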

In order to compute an approximation to the solution of problem (1.1), we also need to discretise the spatial domain and the function space H. Let Ω be discretised into a regular simplicial mesh Ωh that resolves the sets Ωell, Ωpar and Ωhyp, and let h be the maximal diameter of the cells of Ωh. Furthermore, let k≥1 denote a polynomial degree. Then the discrete spaces are given by

 Uτh :={uh∈Uτ:uh|Im∈Pr(Im,V1⊗V2),m∈{1,…,M}}, Vτh :={vh∈Vτ:vh|Im∈Pr−1(Im,V1⊗V2),m∈{1,…,M}},

where the spatial spaces are

 V1 :={v∈H10(Ω):v|σ∈Pk(σ)∀σ∈Ωh}, V2 :={w∈Hdiv(Ω):w|σ∈RTk−1(σ)∀σ∈Ωh}.

Here Pk(σ) is the space of polynomials of degree up to k on the cell σ and RTk−1(σ) is the Raviart-Thomas space, defined by

 RTk−1(σ)=(Pk−1(σ))n+xPk−1(σ)⊂Pk(σ)n.
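For orientation, in two space dimensions (n = 2) the lowest-order case k = 1 yields the classical Raviart-Thomas space RT0 on a cell σ:

```latex
\mathrm{RT}_0(\sigma)
  = \bigl(\mathbb{P}_0(\sigma)\bigr)^2 + x\,\mathbb{P}_0(\sigma)
  = \left\{ \begin{pmatrix} a + c\,x_1 \\ b + c\,x_2 \end{pmatrix}
    : a, b, c \in \mathbb{R} \right\}
  \subset \mathbb{P}_1(\sigma)^2 .
```

Each element has three local degrees of freedom, one normal flux per edge, and the normal component is continuous across inter-element edges, which gives conformity in Hdiv(Ω).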

The fully discrete method then reads similarly to (2.1):
Find Uτh∈Uτh such that for all Vτh∈Vτh and all m∈{1,…,M}

 Bm(Uτh,Vτh):=⟨(∂tM0+M1+A)Uτh,Vτh⟩ρ,m =⟨f,Vτh⟩ρ,m, (2.2) Uτh(t+m−1) =Uτh(t−m−1),

where Uτh(t−0)=U0h and U0h∈V1⊗V2 is an approximation of the initial value U0.

## 3 Existence of discrete solution

Let us start by choosing Vτh=Πr−1Uτh, the weighted L2-projection of Uτh into the test space, and denote by

 ∥f∥2ρ,[0,tℓ]:=ℓ∑m=1⟨f,f⟩ρ,m

a localised version of the norm of Hρ(R;H).

###### Lemma 3.1.

It holds for all ℓ∈{1,…,M}

 ⟨M0Uτh(tℓ),Uτh(tℓ)⟩He−2ρtℓ+γ∥Πr−1Uτh∥2ρ,[0,tℓ]≤ℓ∑k=1Bk(Uτh,Πr−1Uτh)+⟨M0U0,U0⟩H.
###### Proof.

Let us consider any interval Im. Then it holds

 Bm(Uτh,Πr−1Uτh)=⟨∂tM0Uτh,Uτh⟩ρ,m+⟨M1Πr−1Uτh,Πr−1Uτh⟩ρ,m,

where the skew-symmetry of A and the definition of Πr−1 were used.

For the first term we apply integration by parts and obtain due to the exponential weight

 ⟨∂tM0Uτh,Uτh⟩ρ,m=ρ⟨M0Uτh,Uτh⟩ρ,m+⟨M0Uτh(t),Uτh(t)⟩He−2ρt∣∣tmtm−1.

By the orthogonality of the projection Πr−1 it follows

 ⟨M0Uτh,Uτh⟩ρ,m =⟨M0(Uτh−Πr−1Uτh),Uτh−Πr−1Uτh⟩ρ,m+⟨M0Πr−1Uτh,Πr−1Uτh⟩ρ,m ≥⟨M0Πr−1Uτh,Πr−1Uτh⟩ρ,m

and therefore

 ⟨∂tM0Uτh,Uτh⟩ρ,m≥ρ⟨M0Πr−1Uτh,Πr−1Uτh⟩ρ,m+⟨M0Uτh(t),Uτh(t)⟩He−2ρt∣∣tmtm−1.

With the general existence assumption ρM0+M1≥γ for ρ≥ρ0 we obtain

 Bm(Uτh,Πr−1Uτh)≥γ∥Πr−1Uτh∥2ρ,m+⟨M0Uτh(t),Uτh(t)⟩He−2ρt∣∣tmtm−1. (3.1)

Summing over the intervals the statement follows. ∎

With Lemma 3.1 we have control over the Πr−1-projection of the solution and over the final value of Uτh weighted by M0. In order to obtain control of the full solution we consider two approaches.

1) M0 has a trivial null space
In this case we employ the norm equivalence in discrete spaces

 ∥Uτh∥2ρ,m∼∥Πr−1Uτh∥2ρ,m+τm⟨M0Uτh(tm),Uτh(tm)⟩He−2ρtm

with constants independent of τm and ρ, and we obtain

 ∥Uτh∥2ρ,[0,T] ≤C(1γ+T)[M∑m=1Bm(Uτh,Πr−1Uτh)+⟨M0U0,U0⟩H]

Applying a Young inequality we have

 ∥Uτh∥2ρ,[0,T]≤C2(1γ+T)2∥f∥2ρ,[0,T]+2C(1γ+T)⟨M0U0,U0⟩H.

Therefore, in the case of M0 with only a trivial null space the linear system is solvable and we also have continuous dependence of the solution on the data.

2) M0 may have a non-trivial null space
Here we can apply another norm equivalence to show the solvability of the discrete systems. Using the assumption ρM0+M1≥γ we have the norm equivalence

 ∥Uτh∥2ρ,m∼∥Πr−1Uτh∥2ρ,m+τm⟨(ρM0+M1)Uτh(tm−1),Uτh(tm−1)⟩He−2ρtm−1

with constants independent of τm and ρ. Using a similar approach as before we obtain

 ∥Uτh∥2ρ,[0,T] ≤C[1γ(M∑m=1Bm(Uτh,Πr−1Uτh)+⟨M0U0,U0⟩H) +M−1∑m=1ρτm+1(m∑n=1Bn(Uτh,Πr−1Uτh)+⟨M0U0,U0⟩H)+ρτ1⟨M0U0,U0⟩H +M∑m=1τm⟨M1Uτh(tm−1),Uτh(tm−1)⟩He−2ρtm−1] ≤C[1γ(⟨Πr−1f,Uτh⟩ρ,[0,T]+⟨M0U0,U0⟩H) +M∑m=1τm⟨M1Uτh(tm−1),Uτh(tm−1)⟩He−2ρtm−1] ≤C[(1γ+ρT)∥Πr−1f∥ρ,[0,T]∥Uτh∥ρ,[0,T]+(1γ+ρ(T+τ1))⟨M0U0,U0⟩H].

With a Young inequality it follows

 ∥Uτh∥2ρ,[0,T]≤C2(1γ+ρT)2∥f∥2ρ,[0,T]+2C(1γ+ρ(T+τ1))⟨M0U0,U0⟩H+2CTmaxm∈{1,…,M}⟨M1U(tm−1),U(tm−1)⟩He−2ρtm−1.

Therefore, also in the general case the linear system is solvable and the solution on each time interval depends continuously on the data and the initial value of the time interval.

## 4 Error estimation

### 4.1 Semi discretisation in time

Note that we do have the Galerkin orthogonality

 Bm(U−Uτ,V)=0 (4.1)

for the solution U of (1.1), the solution Uτ of (2.1) and all V∈Vτ. We now want to estimate the error U−Uτ. Let Pr be the interpolation operator into Uτ fulfilling locally for all v∈H1ρ(R,H) and all m∈{1,…,M}

 (Prv−v)(tm−1) =0, (Prv−v)(tm) =0, ⟨Prv−v,w⟩ρ,m =0∀w∈Pr−2(Im,H).

We decompose the error into U−Uτ=η+ξ, where

 η=U−PrU,ξ=PrU−Uτ∈Uτ.
###### Lemma 4.1.

It holds for any V∈Vτ and any m∈{1,…,M}

 ⟨(∂tM0+M1+A)ξ,V⟩ρ,m≤(∥(2ρM0+M1)η∥ρ,m+∥Aη∥ρ,m)∥V∥ρ,m. (4.2)
###### Proof.

Using the Galerkin orthogonality (4.1) we obtain the error equation

 ⟨(∂tM0+M1+A)ξ,V⟩ρ,m=−⟨(∂tM0+M1+A)η,V⟩ρ,m.

Then we have, by integration by parts and the properties of Pr, for all v∈H1ρ(R,H) and all w∈Pr−1(Im,H)

 ⟨∂tM0(v−Prv),w⟩ρ,m =2ρ⟨M0(v−Prv),w⟩ρ,m−⟨v−Prv,∂tM0w⟩ρ,m+⟨v−Prv,w⟩He−2ρt∣∣tmtm−1 =2ρ⟨M0(v−Prv),w⟩ρ,m.

Thus we get the error equation

 ⟨(∂tM0+M1+A)ξ,V⟩ρ,m=−⟨(2ρM0+M1+A)η,V⟩ρ,m

from which (4.2) follows by a Cauchy-Schwarz inequality. ∎

In the following estimates we will bound the terms of interest by a norm of the interpolation error given by

 ∥η∥∗,[0,tm]:=∥(2ρM0+M1)η∥ρ,[0,tm]+∥Aη∥ρ,[0,tm].
###### Lemma 4.2.

It holds for any

 γ∥Πr−1ξ∥2ρ,[0,tm]+⟨M0ξ(tm),ξ(tm)⟩He−2ρtm ≤2γ∥η∥2∗,[0,tm]
###### Proof.

Let V=Πr−1ξ be the test function in (4.2). From (3.1) and (4.2) we have

 γ∥Πr−1ξ∥2ρ,m+12⟨M0ξ(t),ξ(t)⟩He−2ρt∣∣tmtm−1 ≤⟨(∂tM0+M1+A)ξ,Πr−1ξ⟩ρ,m ≤(∥(2ρM0+M1)η∥ρ,m+∥Aη∥ρ,m)∥Πr−1ξ∥ρ,m.

which gives upon summation the statement because ξ(t0)=0. ∎

Thus we have control over the weighted L2-projection of the error and its final value.

###### Corollary 4.3.

In the case of M0 having only a trivial null space we have immediate control of the full norm due to

 ∥ξ∥2ρ,m≤C(∥Πr−1ξ∥2ρ,m+τm⟨M0ξ(tm),ξ(tm)⟩He−2ρtm)

for some C>0, implying

 ∥ξ∥2ρ,[0,T]≤C(2γ2+2Tγ)∥η∥2∗,[0,T].

In the case of M0 having a non-trivial null space the norm equivalences considered in the existence proof of Section 3 only provide a bound increasing exponentially in time.

### 4.2 Full discretisation

Let τ:=max τm be the maximal time-step length. Similarly to [3, Section 4] we can now proceed and estimate the error of the full discretisation. Here the only difference is the time-discretisation method.

###### Theorem 4.4.

We assume for the solution of (1.1) the regularity

 U∈H1ρ(R;Hk(Ω)×Hk(Ω))∩Hr+3ρ(R;L2(Ω)×L2(Ω))

as well as

 AU∈Hρ(R;Hk(Ω)×Hk(Ω)).

Then we have for the error of the numerical solution Uτh of (2.2) the corresponding bound on the projected error Πr−1(U−Uτh), and if M0 has only a trivial null space

 ∥U−Uτh∥2ρ,[0,T]≤C(τ2(r+1)+Th2k).
###### Proof.

This is a direct consequence of the results in [3, Section 4] combined with the estimates of the previous section. ∎

## 5 Numerical examples

We consider two examples with unknown solutions. Simulations with known smooth solutions were also made and the theoretical orders were observed. The two following examples show a more realistic behaviour in the case of changing type systems. All computations were done in the finite-element framework SOFE, see github.com/SOFE-Developers/SOFE.

### 5.1 1+1d example

Let us consider as first example one spatial dimension and combine a hyperbolic and an elliptic region. To be more precise, let Ω=(−π,π) be partitioned into a hyperbolic part Ωhyp and an elliptic part Ωell, and let T>0 be the final time. The problem is now given by

 [∂t diag(χΩhyp,χΩhyp)+diag(χΩell,χΩell)+(0 ∂x; ∂x 0)]U=F (5.1)

with homogeneous Dirichlet conditions for the first component of U, a vanishing initial condition and a right-hand side F(t,x)=χ[0,∞)(t)(f(t,x),g(t,x)), where χ[0,∞) is the characteristic function of the non-negative time line and

 f(t,x) =15sin(3t)+min{t,π}cos(3x), g(t,x) =sin(t)(1−x2π2).

Thus, F is continuous and it holds F(t)=0 for t≤0. Therefore, the solution theory of [4] gives the existence of a unique solution that is continuous in time. Figure 1 shows plots of the components of the solution in the domain. Note that the first component has a kink along the interface between Ωhyp and Ωell – it is continuous but not differentiable there. As mesh we use equidistant meshes in space and time. In order to calculate the errors we use a reference solution instead of the unknown solution U. The reference solution is computed on a finer mesh with higher polynomial degrees. Table 1

shows the results for different values of τ and h and different polynomial degrees r and k. We coupled τ and h proportionally, as for smooth solutions the theory then gives a fixed convergence order. We observe for the continuous Galerkin-Petrov method in the first column only a convergence rate between 1 and 2. Increasing the polynomial degree reduces the error, but does not influence the rate much. A reason for this behaviour could be that U is not smooth enough for the error estimates to hold.
For comparison we also computed approximations with the discontinuous Galerkin method from [3]. The errors given in the remaining columns show a similar behaviour with convergence rates between 1 and 2. Nevertheless, the errors are smaller for the discontinuous approach.
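The rates quoted above can be read off from consecutive table rows; a small helper (illustrative, assuming τ and h are halved from row to row) computes the observed orders:

```python
import math

def observed_rates(errors):
    """Observed convergence orders log2(e_i / e_{i+1}) for a sequence of
    errors obtained while halving the discretisation parameter."""
    return [math.log2(e0 / e1) for e0, e1 in zip(errors, errors[1:])]

# e.g. errors decreasing with a rate between first and second order:
print(observed_rates([1.0e-1, 3.2e-2, 1.1e-2]))  # values between 1 and 2
```

A value near 2 indicates second-order convergence; the tables here give values between 1 and 2.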

### 5.2 1+2d example

As second example we consider the last example of [3] in two space dimensions, where Ω is partitioned into regions of different type as described there. The problem is given by the corresponding system (1.1) with the right-hand side

 f(t,x)=2sin(πt)⋅χ{x1<1/2}(x).

Figure 2 shows some snapshots of the first component of the solution, approximated by a numerical simulation. Again we use equidistant meshes in space and time. As reference solution replacing the unknown exact solution we use an approximation calculated with a finer mesh and higher polynomial degrees. Table 2 shows the results. Similarly to the previous example we do not achieve the optimal convergence order for both methods. And again the discontinuous Galerkin method has smaller errors.

## Conclusions

The continuous solution of an evolutionary system with continuous right-hand side can be approximated by several methods. Here we investigated the continuous Galerkin-Petrov method, which has optimal convergence order for smooth solutions. We have proved this for the projection of the error into the test space. For operators M0 having only a trivial null space it also follows for the full error. The benefit of the continuous method compared to the discontinuous Galerkin method is the continuity, which implies a non-dissipative behaviour. In our examples with unknown solutions, which are probably not smooth, the discontinuous Galerkin method is slightly better. Furthermore, these examples show that an increase of the polynomial degree in space over 2 and in time over 1 gives no huge benefit. This is different for smooth solutions – here both methods achieve the theoretical high convergence orders.

## References

• [1] N. Ahmed and G. Matthies. Higher order continuous Galerkin-Petrov time stepping schemes for transient convection-diffusion-reaction equations. ESAIM Math. Model. Numer. Anal., 49(5):1429–1450, 2015.
• [2] A. K. Aziz and P. Monk. Continuous finite elements in space and time for the heat equation. Math. Comp, 52(186):255–274, 1989.
• [3] S. Franz, S. Trostorff, and M. Waurick. Numerical methods for changing type systems. IMA J. Numer. Anal., 39(2):1009–1038, 2019.
• [4] R. Picard. A structural observation for linear material laws in classical mathematical physics. Math. Methods Appl. Sci., 32(14):1768–1803, 2009.
• [5] F. Schieweck. A-stable discontinuous Galerkin-Petrov time discretization of higher order. J. Numer. Math., 18(1):25–57, 2010.
• [6] S. Trostorff and M. Waurick. On higher index differential-algebraic equations in infinite dimensions. In A. Böttcher, D. Potts, P. Stollmann, and D. Wenzel, editors, The Diversity and Beauty of Applied Operator Theory, pages 477–486, Cham, 2018. Springer International Publishing.
• [7] R. Winther. A stable finite element method for initial-boundary value problems for first-order hyperbolic systems. Math. Comp., 36(153):65–86, 1981.