
Linearized Continuous Galerkin hp-FEM Applied to Nonlinear Initial Value Problems

09/05/2020, by Mario Amrein, et al. (ZHAW)

In this note we consider the continuous Galerkin time stepping method of arbitrary order as a discretization scheme for nonlinear initial value problems. In addition, we develop and generalize a well known existing result for the discrete solution by applying a general linearization procedure to the nonlinear discrete scheme, including the simplified Newton solution procedure. In particular, the presented existence results follow by choosing sufficiently small time steps locally, and they are independent of the local approximation order. Moreover, we will see that the proposed solution scheme is able to significantly reduce the number of iterations. Finally, based on existing a priori error estimates for the discrete solution, we present some numerical experiments that highlight the results of this note.



1. Introduction

In this note we consider the continuous Galerkin (cG) method of arbitrary order applied to the (possibly) nonlinear initial value problem

(1) u'(t) = F(t, u(t)) for t ∈ (0, T], u(0) = u_0.

Here, for a final time T > 0, u : [0, T] → R^n signifies the unknown solution and u_0 ∈ R^n is the initial data that determines u at time t = 0. In addition, F : [0, T] × R^n → R^n is a (possibly) nonlinear function. It is well known that for F continuous, the local existence of a solution is implied by Peano's Theorem (see, e.g., [10]). Moreover, in case F is (locally) Lipschitz, the solution is even unique by the Theorem of Picard and Lindelöf (see again, e.g., [10]). In general, problem (1) can only be solved approximately, i.e. numerically. Such a numerical solution scheme relies mainly on two procedures which we address in this note: Firstly, the problem at hand is discretized over some finite dimensional subspaces of the solution space. This leads to a series of nonlinear systems. Secondly, these nonlinear systems can themselves only be solved approximately, using a suitable linearization scheme. Typically, such a linearization scheme is given by Banach's fixed point iteration, also termed Picard iteration. Here we recall that, in case F is Lipschitz continuous, the proof of the local well posedness of problem (1) by the famous result of Picard and Lindelöf is constructive and relies mainly on Banach's fixed point theorem. It is noteworthy that this result can also be achieved constructively using more general iteration schemes. Indeed, with the aim of proving local well posedness of (1), in [3] for example, the standard Picard iteration is replaced by the simplified Newton iteration. Of course, in this case the assumption of F being Lipschitz continuous needs to be replaced by stronger assumptions, mainly on the derivative of F.
However, from a computational point of view, a benefit of using more advanced iteration schemes, such as general Newton-type methods, should be a lower number of iteration steps within the applied numerical solution procedure.
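
To illustrate this point, the following minimal sketch compares the two iteration types on a single implicit step; the step equation u = u0 + h*f(u) and the cubic right-hand side are illustrative assumptions, not the problem treated later in this note.

```python
# Model step equation (an assumption for illustration): solve the
# backward-Euler-type relation u = u0 + h*f(u) with f(u) = -u**3.
f = lambda u: -u**3
df = lambda u: -3 * u**2  # derivative of f

u0, h, tol = 1.0, 0.1, 1e-12

def picard(u0, h, tol, max_it=200):
    """Banach fixed point iteration u_{k+1} = u0 + h*f(u_k)."""
    u, k = u0, 0
    while abs(u - (u0 + h * f(u))) > tol and k < max_it:
        u = u0 + h * f(u)
        k += 1
    return u, k

def simplified_newton(u0, h, tol, max_it=200):
    """Newton iteration on r(u) = u - u0 - h*f(u) with the Jacobian
    frozen at u0, so it is assembled (and factored) only once."""
    J = 1.0 - h * df(u0)  # frozen Jacobian
    u, k = u0, 0
    while abs(u - u0 - h * f(u)) > tol and k < max_it:
        u -= (u - u0 - h * f(u)) / J
        k += 1
    return u, k

up, kp = picard(u0, h, tol)
un, kn = simplified_newton(u0, h, tol)
print(kp, kn)  # the simplified Newton variant needs fewer iterations here
```

Both iterations converge to the same root of the step equation; the frozen-Jacobian iteration contracts faster because its error factor involves the difference of Jacobians rather than the Jacobian itself.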

In this note, the underlying discretization scheme for the approximation of (1) is a continuous Galerkin time stepping method of arbitrary order. Such an approximation scheme relies essentially on a weak formulation of (1), which is then restricted to finite dimensional subspaces in order to obtain a discretization of (1). In particular, since the test space of the weak formulation can be chosen to consist of polynomials that are discontinuous at the nodal points, this discretization scheme can be interpreted as an implicit one-step scheme; see, e.g., [13, 7, 1, 12] for further details. Thus, starting from the initial data u_0, each time step leads to a nonlinear system that needs to be solved iteratively. If the underlying continuous problem (1) is well posed, it is reasonable to expect that a suitable iterative solution procedure resolving the nonlinearity is itself well posed as long as the time steps are sufficiently small. Indeed, as we will see, the proposed analysis for the well posedness of the discrete version of (1) implied by the continuous Galerkin methodology depends solely on the local time steps and is independent of the local approximation order, i.e. the local polynomial degree of the solution space.

Outline.

The outline of this work is as follows: In Section The hp-cG Time Stepping Methodology we present the continuous Galerkin (cG) time stepping scheme used for the discretization of the underlying initial value problem (1). Subsequently, we introduce the proposed iterative linearization scheme used for the solution of the nonlinear discrete system. The purpose of Section Convergence of the simplified linearized hp-cG iteration scheme is to establish the well posedness of the discretized and linearized problem. This is accomplished by our main result, Theorem 1.6. Since the proof of this result relies on a fixed point iteration argument, we obtain an iterative solution procedure that is tested in the numerical experiments of Section 2. We further discuss the h- and p-versions of the cG methodology and compare the number of iterations of the standard Banach fixed point iteration with the simplified Newton iteration scheme. Finally, we summarize and comment on our findings in Section 3.

Notation.

Throughout this article, (·,·) denotes the Euclidean inner product on R^n with induced norm |·|. For an interval I, L^2(I) is the usual space of square integrable functions on I with values in R^n and corresponding norm, and L^∞(I) is the usual space of bounded functions with the supremum norm. In addition, H^1(I) is the standard Sobolev space with corresponding norm. For a Banach space X we signify by X' the dual of X, and the dual pairing denotes the value of a functional at a given point. We further assume that the right-hand side F in (1) is Lipschitz continuous with respect to the second variable, i.e.

(2) |F(t, x) − F(t, y)| ≤ L_F |x − y| for all t ∈ [0, T] and x, y ∈ R^n.

The hp-cG Time Stepping Methodology

Let 0 = t_0 < t_1 < … < t_M = T be a partition of the interval (0, T) into open sub-intervals I_m = (t_{m−1}, t_m). By k_m = t_m − t_{m−1} we denote the local time steps, and r_m ≥ 1 represents the local polynomial degree, i.e. the local approximation order. Furthermore, P^r(I_m) signifies the space of all polynomials of degree at most r on I_m. We further introduce the vector r = (r_1, …, r_M) in order to allocate the local approximation orders. The approximation space used in the sequel consists of continuous functions whose restriction to each I_m is a polynomial of degree r_m, whereas the test space consists of (possibly discontinuous) piecewise polynomials of degree r_m − 1. Notice that since the approximation space consists of continuous elements, we need to choose a discontinuous test space (as can be seen by the fact that the order of the test space is r_m − 1).

1.1. Discretization.

The hp-cG time stepping scheme, first introduced in [8, 9, 5, 4], now reads as follows: Find a continuous, piecewise polynomial U with U(0) = u_0 such that there holds

(3) ∫_0^T (U'(t) − F(t, U(t))) · V(t) dt = 0 for all test functions V.

It is noteworthy that the discontinuous character of the test space allows us to choose test functions supported on a single sub-interval, and therefore problem (3) decouples on each sub-interval I_m into

(4) ∫_{I_m} (U'(t) − F(t, U(t))) · V(t) dt = 0 for all V ∈ P^{r_m−1}(I_m)^n,

with U(t_{m−1}) given from the previous time step, i.e. the discretization can be interpreted as an implicit one-step scheme.
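
For the lowest order case, the one-step character can be made explicit: for local degree 1 with midpoint quadrature, the cG step coincides with the implicit midpoint rule (a standard observation). The sketch below applies it to the assumed model problem u' = -u, u(0) = 1, resolving each step by fixed point iteration:

```python
import math

# For r = 1 with midpoint quadrature, the cG step reduces to
#   u_m - u_{m-1} = k * F(t_mid, (u_{m-1} + u_m)/2),
# i.e. the implicit midpoint rule. Model problem assumed: u' = -u.
F = lambda t, u: -u

def cg1_step(F, t, u_prev, k, tol=1e-14, max_it=100):
    """One cG(1)/implicit-midpoint step, resolved by fixed point iteration."""
    u = u_prev
    for _ in range(max_it):
        u_new = u_prev + k * F(t + k / 2, (u_prev + u) / 2)
        if abs(u_new - u) < tol:
            return u_new
        u = u_new
    return u

T, N = 1.0, 100
k, t, u = T / N, 0.0, 1.0
for _ in range(N):
    u = cg1_step(F, t, u, k)
    t += k
print(abs(u - math.exp(-1.0)))  # second order accurate: error = O(k^2)
```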

1.2. Linearization.

For and we introduce the operator by

In addition, let denote a map such that for , there exists a uniform constant with

(5)

and define further the operator

by

Notice that if is Fréchet differentiable in with derivative with respect to , then the Gâteaux derivative in direction is given by

Based on these definitions we introduce the following linearized hp-cG iteration scheme at time node : For and given solve

(6)

for and compute the update

In this note we freeze the second variable in the operator, i.e. we keep it fixed for all iteration steps. This implies the following simplified linearized hp-cG iteration scheme at the current time node:

For and given solve

(7)

for and compute the update

In this case, the above iteration procedure (7) can be interpreted as a simplified Newton iteration scheme.
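
As a computational sketch of the frozen-argument idea behind (7), consider a system: within each step the Jacobian is assembled at the previous time node only, so the linear system matrix is reused for all inner iterations. The backward-Euler-type step and the pendulum right-hand side below are assumed stand-ins for the actual cG step equation:

```python
import math

# Assumed model system (not from the paper): the nonlinear pendulum
# u1' = u2, u2' = -sin(u1).
def F(u):
    return [u[1], -math.sin(u[0])]

def DF(u):
    return [[0.0, 1.0], [-math.cos(u[0]), 0.0]]

def solve2(A, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def step_simplified_newton(u_prev, k, tol=1e-12, max_it=50):
    """Backward-Euler-type step u = u_prev + k*F(u), resolved by the
    simplified Newton iteration with the Jacobian frozen at u_prev."""
    J = DF(u_prev)
    A = [[1.0 - k * J[0][0], -k * J[0][1]],
         [-k * J[1][0], 1.0 - k * J[1][1]]]  # frozen matrix I - k*DF(u_prev)
    u, its = list(u_prev), 0
    for its in range(1, max_it + 1):
        r = [u[i] - u_prev[i] - k * F(u)[i] for i in range(2)]
        if max(abs(r[0]), abs(r[1])) < tol:
            break
        d = solve2(A, [-r[0], -r[1]])
        u = [u[i] + d[i] for i in range(2)]
    return u, its

u, t, k = [1.0, 0.0], 0.0, 0.05
total_its = 0
for _ in range(20):
    u, its = step_simplified_newton(u, k)
    total_its += its
print(u, total_its)
```

The point of the design is that the matrix A is built (and, for large systems, factored) once per time step rather than once per inner iteration.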

Convergence of the simplified linearized hp-cG iteration scheme

The aim of this Section is to show the existence of a solution for (4). Our strategy is to show that for sufficiently small time steps, the iteration scheme (7) is well defined and convergent. Before we start, we need to collect some auxiliary results.

Lemma 1.1 (Poincaré inequality).

Let with and . Then there holds the Poincaré inequality

(8)

with . The constant is independent of and .

Proof.

This result is a direct consequence of the standard Poincaré inequality in ; (see [2] or [6], for example). ∎
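
For orientation, the standard scaled inequality on a sub-interval reads as follows (stated here as background; the precise constant and function class are as in the cited references):

```latex
% Scaled Poincaré(-Friedrichs) inequality on I_m = (t_{m-1}, t_m):
% for v \in H^1(I_m) with v(t_{m-1}) = 0,
\| v \|_{L^2(I_m)} \le C_P \, k_m \, \| v' \|_{L^2(I_m)},
\qquad k_m = t_m - t_{m-1},
% where C_P is independent of v and of the step size k_m; the scaling
% in k_m follows from the unit-interval case via t = t_{m-1} + k_m s.
```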

Let us further introduce the following set

(9)

and notice that Poincaré’s inequality (8) holds true for all , i.e. we have

The following result addresses the invertibility of the operator.

Lemma 1.2.

Let . Then, if the operator

is invertible on .

Proof.

Since the linear map operates over a finite dimensional space, we show that its kernel is trivial. Suppose there exists such that

holds . Choosing , we conclude

Employing the Cauchy-Schwarz inequality we get

Hence, the above estimate implies

We can thus invoke the Poincaré inequality and obtain, together with our assumption, the desired contradiction. ∎

Remark 1.3.

Note that if the operator is invertible without any condition on . Indeed, let with

i.e. for there holds

which implies and therefore .

Next, for we introduce the following map

(10)

defined by

Notice that this map is well defined by the above Lemma 1.2, i.e.

The next result shows that this map is Lipschitz continuous.

Lemma 1.4.

If , then there holds

(11)
Proof.

Let , set and notice that . By definition of there holds

with

and

Hence we arrive at

(12)

By (12) and the Cauchy-Schwarz inequality we obtain

i.e., there holds

Using that and invoking again the Poincaré inequality we end up with

(13)

Solving (13) for by using the assumption , we obtain (11). ∎

Remark 1.5.

Notice that for , estimate (11) is simply

Using the above results we are now ready to prove the main result of this note:

Theorem 1.6.

Suppose there holds

(14)

and assume further that F is Lipschitz continuous in the second variable with Lipschitz constant L_F. Then, the cG method (3) admits a unique solution.

Proof.

On each interval there holds by assumption (14)

Therefore, for each time step the map is well defined by Lemma 1.2. In addition, for each time step the map is a contraction by (14), i.e. there exists a unique fixed point, which is the desired zero. ∎
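
The smallness condition can be observed numerically: for an assumed linear right-hand side f(u) = -2u (Lipschitz constant L = 2), the fixed point map of a single implicit step contracts precisely when k*L < 1:

```python
# Assumed right-hand side for illustration: f(u) = -2u, Lipschitz L = 2.
f = lambda u: -2.0 * u

def contracts(k, u_prev=1.0, n_it=80):
    """Sweep the step map u -> u_prev + k*f(u) n_it times and check
    convergence to its exact fixed point u_prev / (1 + 2*k)."""
    u = u_prev
    star = u_prev / (1.0 + 2.0 * k)
    for _ in range(n_it):
        u = u_prev + k * f(u)
    return abs(u - star) < 1e-6

print(contracts(0.4), contracts(0.6))  # True False: k*L = 0.8 vs k*L = 1.2
```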

Remark 1.7.

This solution can be obtained iteratively by employing the iterative scheme from (7).

2. Numerical Experiments

In this Section we show some numerical experiments illustrating the theoretical convergence results from Section Convergence of the simplified linearized hp-cG iteration scheme. In doing so, we make use of the a priori error results given in [13]. As a preparation, we define the number of degrees of freedom (DOF) for the cG method, which is simply the sum of all DOFs on each sub-interval that are needed to compute the numerical solution on (0, T). Here we fix the polynomial degree, i.e. we use the same approximation order on each sub-interval. In addition, we assume that the solution of (1) is sufficiently regular and that the partition of (0, T) is quasi-uniform. Then the a priori error estimate is given in [13] through

(15)

The constant depends on the solution and is independent of the time steps and the approximation order. Let us further point out the following two aspects arising from the a priori error estimate (15):
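
The tabulated convergence orders can be recomputed directly from the error columns; for instance, for the first error column of Table 1 (errors under successive bisection, so the time step is halved between rows):

```python
import math

# Errors taken from the first error column of Table 1.
errors = [2.94e-02, 7.54e-03, 1.90e-03, 4.77e-04, 1.19e-04, 2.98e-05]

# Under bisection the time step is halved, so the empirical order is
# log(e_{j-1} / e_j) / log(2).
orders = [math.log(errors[j - 1] / errors[j]) / math.log(2)
          for j in range(1, len(errors))]
print([round(o, 2) for o in orders])  # approaches 2, matching the table
```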

h-version

The h-version of the cG scheme means that convergence is achieved by increasing the number of time steps at a fixed approximation order on each interval. In case the solution of (1) is analytic, and the smoothness parameter is therefore large, the error estimate (15) then reads

Decreasing the maximal time step (or, equivalently, increasing the number of time steps), we see that the rate of convergence is determined by the fixed approximation order.

p-version

For the p-version of the cG method we keep the time partition fixed but let the approximation order be variable. Thus, convergence is obtained by increasing the approximation order. In addition, it can be shown that for analytic solutions the p-version admits high order convergence (even exponential with respect to the number of degrees of freedom). We refer to [13] and [11] for further details.

Example 2.1.

The first example is given by the initial value problem

(16)

The exact solution is known in closed form. We test the simplified Newton scheme, i.e. we consider (7). Notice that the assumptions of Theorem 1.6 are satisfied, and therefore we can expect convergence. First we present the h-version of the cG method in Figure 1. The numerical test was obtained by bisection of the time interval. In Figure 1 we depict the decay of the error, from which we can clearly see the convergence order according to the error estimate given in (15). Table 1 shows the error as well as the convergence order and the number of elements. In addition, Tables 2 and 3 show the number of iterations compared with the number of iterations when solving (7) with the standard Banach fixed point iteration procedure. As can be seen, the simplified Newton scheme needs a significantly lower number of iterations in order to reach the error given in Table 1. Moreover, in Figure 2 we depict the p-version of the cG method, from which we observe that the convergence rates are even exponential. Here we choose a fixed partition of the time interval, i.e. we increase the approximation order on a fixed partition.
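
The iteration-count comparison of Tables 2 and 3 can be mimicked in a few lines; the model problem u' = -u**3, u(0) = 1 and the backward-Euler-type step below are assumptions for illustration and differ from (16):

```python
# Assumed model problem for illustration: u' = -u**3, u(0) = 1.
f = lambda u: -u**3
df = lambda u: -3 * u**2  # derivative of the assumed right-hand side

def run(T, N, tol, frozen_newton):
    """March N backward-Euler-type steps over (0, T); return the total
    number of inner iterations. frozen_newton=True uses the simplified
    Newton update, False the Banach fixed point update."""
    k, u, total = T / N, 1.0, 0
    for _ in range(N):
        u_prev = u
        J = 1.0 - k * df(u_prev)  # Jacobian frozen at the time node
        for _ in range(1000):
            r = u - u_prev - k * f(u)
            if abs(r) < tol:
                break
            u = u - r / J if frozen_newton else u_prev + k * f(u)
            total += 1
    return total

results = [(N, run(1.0, N, 1e-12, True), run(1.0, N, 1e-12, False))
           for N in (4, 8, 16, 32)]
for N, newton, banach in results:
    print(N, newton, banach)  # simplified Newton: fewer total iterations
```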

#el.  Error  Ord.  Error  Ord.  Error  Ord.  Error  Ord.  Error  Ord.
1 2.94e-02 - 2.83e-3 - 4.51e-4 - 8.54e-6 - 6.78e-6 -
2 7.54e-03 1.97 4.39e-4 2.72 2.41e-5 4.23 1.30e-6 2.71 8.76e-8 6.27
4 1.90e-03 1.99 5.48e-5 2.97 1.51e-6 3.99 4.18e-8 4.96 1.34e-9 6.02
8 4.77e-04 2.00 6.89e-6 2.99 9.48e-8 4.00 1.31e-9 4.99 2.11e-11 5.99
16 1.19e-04 2.00 8.62e-7 3.00 5.93e-9 4.00 4.10e-11 5.00 3.31e-14 6.00
32 2.98e-05 2.00 1.07e-7 3.00 3.70e-10 4.00 1.28e-12 5.00
64 7.46e-06 2.00 1.35e-8 3.00 2.31e-11 4.00 4.01e-14 5.00
128 1.86e-06 2.00 1.68e-9 3.00 1.45e-12 4.00
256 4.66e-07 2.00 2.10e-10 3.00 8.80e-14 4.00
512 1.17e-07 2.00 2.63e-11 3.00
1024 2.91e-08 2.00 3.29e-12 3.00
2048 7.28e-09 2.00
4096 1.82e-09 2.00
8192 4.55e-10 2.00
Table 1. Example 1: The performance data of the h-version of the cG method with use of the simplified Newton method.
#el.  Error  It.  Error  It.  Error  It.  Error  It.  Error  It.
1 2.94e-02 27 2.83e-3 19 4.51e-4 17 8.54e-6 16 6.78e-6 15
2 7.54e-03 28 4.39e-4 23 2.41e-5 22 1.30e-6 22 8.76e-8 21
4 1.90e-03 39 5.48e-5 35 1.51e-6 33 4.18e-8 34 1.34e-9 34
8 4.77e-04 62 6.89e-6 56 9.48e-8 56 1.31e-9 56 2.11e-11 56
16 1.19e-04 96 8.62e-7 96 5.93e-9 96 4.10e-11 96 3.31e-14 96
32 2.98e-05 185 1.07e-7 177 3.70e-10 173 1.28e-12 173
64 7.46e-06 320 1.35e-8 320 2.31e-11 319 4.01e-14 319
128 1.86e-06 628 1.68e-9 624 1.45e-12 627 -
256 4.66e-07 1024 2.10e-10 1024 8.80e-14 1024
512 1.17e-07 2048 2.63e-11 2047
1024 2.91e-08 4089 3.29e-12 4090
2048 7.28e-09 7970
4096 1.82e-09 12288
8192 4.55e-10 24579
Table 2. Example 1: The performance data of the h-version of the cG method including the number of iterations (with use of the simplified Newton method).
#el.  Error  It.  Error  It.  Error  It.  Error  It.  Error  It.
1 2.94e-02 27 2.83e-3 19 4.51e-4 18 8.54e-6 16 6.78e-6 15
2 7.54e-03 35 4.39e-4 28 2.41e-5 27 1.30e-6 24 8.76e-8 24
4 1.90e-03 51 5.48e-5 44 1.51e-6 45 4.18e-8 40 1.34e-9 40
8 4.77e-04 83 6.89e-6 74 9.48e-8 77 1.31e-9 69 2.11e-11 69
16 1.19e-04 138 8.62e-7 127 5.93e-9 135 4.10e-11 120 3.31e-14 120
32 2.98e-05 239 1.07e-7 222 3.70e-10 242 1.28e-12 216
64 7.46e-06 425 1.35e-8 393 2.31e-11 440 4.01e-14 398
128 1.86e-06 753 1.68e-9 731 1.45e-12 848 -
256 4.66e-07 1427 2.10e-10 1315 8.80e-14 1523
512 1.17e-07 2517 2.63e-11 2498
1024 2.91e-08 4903 3.29e-12 4800
2048 7.28e-09 8922
4096 1.82e-09 16316
8192 4.55e-10 32384
Table 3. Example 1: The performance data of the h-version of the cG method including the number of iterations (with use of the Banach fixed point iteration procedure).
Figure 1. Example 2.1: The h-version of the cG method (with use of the simplified Newton method).
Figure 2. Example 2.1: The p-version of the cG method (with use of the simplified Newton method).
Example 2.2.

The second example is given by

(17)

The exact solution is known in closed form. Notice that (2) and (5) are not satisfied. However, we again depict in Figure 3 the h-version of the cG method (again obtained by bisection of the time interval), from which we can clearly see the convergence order according to the error estimate given in (15). Furthermore, in Figure 4 we see the p-version of the cG method, from which we observe that the convergence rates are again exponential (again we choose a fixed partition of the time interval and increase the approximation order).

Figure 3. Example 2.2: The h-version of the cG method (with use of the simplified Newton method).
Figure 4. Example 2.2: The p-version of the cG method (with use of the simplified Newton method).
Example 2.3.

We finally consider the following nonlinear initial value problem:

Given the initial data, we seek a solution such that there holds

(18)

We use the exact solution as reference solution. We remark that, although the assumptions (2) and (5) are again not necessarily satisfied for this problem, our approach still delivers good results, as can be seen from Figures 5 and 6.

Figure 5. Example 2.3: The h-version of the cG method (with use of the simplified Newton method).
Figure 6. Example 2.3: The p-version of the cG method (with use of the simplified Newton method).

3. Conclusions

The aim of this note was the numerical solution of initial value problems by means of the continuous Galerkin method for the discretization of the underlying nonlinear problem. We have shown the existence of the discrete solution under reasonable assumptions. In addition, we have proved that for sufficiently small time steps, the linearized continuous Galerkin scheme admits a unique solution that can be obtained iteratively. Moreover, we have tested the proposed iteration scheme on a series of numerical examples. Our numerical experiments clearly illustrate the effectiveness of our approach. Specifically, the simplified Newton iteration scheme was able to significantly reduce the computational cost in terms of the number of iterations.

References

  • [1] W. Bangerth and R. Rannacher (2013) Adaptive finite element methods for differential equations. Lectures in Mathematics. ETH Zürich, Birkhäuser Basel. External Links: ISBN 9783034876056, Link Cited by: §1.
  • [2] D. Braess and L.L. Schumaker (2001) Finite elements: theory, fast solvers, and applications in solid mechanics. Cambridge University Press. External Links: ISBN 9780521011952, LCCN 00069656 Cited by: Convergence of the simplified linearized hp-cG iteration scheme.
  • [3] P. Deuflhard (2011) Newton methods for nonlinear problems: affine invariance and adaptive algorithms. Springer Series in Computational Mathematics, Springer Berlin Heidelberg. External Links: ISBN 9783642238994 Cited by: §1.
  • [4] D. Estep and D. French (1994) Global error control for the continuous Galerkin finite element method for ordinary differential equations. RAIRO Modélisation Mathématique et Analyse Numérique 28 (7), pp. 815–852. External Links: ISSN 0764-583X, Link Cited by: §1.1.
  • [5] D. Estep (1995) A posteriori error bounds and global error control for approximation of ordinary differential equations. SIAM Journal on Numerical Analysis 32 (1), pp. 1–48. External Links: Document, ISSN 0036-1429, Link Cited by: §1.1.
  • [6] L.C. Evans (2010) Partial differential equations. Graduate studies in mathematics, American Mathematical Society. External Links: ISBN 9780821849743, LCCN 2009044716 Cited by: Convergence of the simplified linearized hp-cG iteration scheme.
  • [7] B. Holm and T. P. Wihler (2018-03-01) Continuous and discontinuous Galerkin time stepping methods for nonlinear initial value problems with application to finite time blow-up. Numerische Mathematik 138 (3), pp. 767–799. External Links: ISSN 0945-3245, Document, Link Cited by: §1.
  • [8] B. L. Hulme (1972) Discrete Galerkin and related one-step methods for ordinary differential equations. Mathematics of Computation 26, pp. 881–891. External Links: ISSN 0025-5718, Link Cited by: §1.1.
  • [9] B. L. Hulme (1972) One-step piecewise polynomial Galerkin methods for initial value problems. Mathematics of Computation 26, pp. 415–426. External Links: ISSN 0025-5718, Link Cited by: §1.1.
  • [10] N.G. Markley (2004) Principles of differential equations. Pure and Applied Mathematics: A Wiley Series of Texts, Monographs and Tracts, Wiley. External Links: ISBN 9780471649564, LCCN 2004040890, Link Cited by: §1.
  • [11] C. Schwab (1998) p- and hp-finite element methods: theory and applications in solid and fluid mechanics. Numerical Mathematics and Scientific Computation, Clarendon Press. External Links: ISBN 9780198503903, LCCN 98023129 Cited by: §2.
  • [12] V. Thomée (1984) Galerkin finite element methods for parabolic problems. Lecture notes in mathematics, Springer. External Links: ISBN 9783540632368, Link Cited by: §1.
  • [13] T. P. Wihler (2005) An a priori error analysis of the hp-version of the continuous Galerkin FEM for nonlinear initial value problems. Journal of Scientific Computing 25 (3), pp. 523–549. External Links: Document, ISSN 0885-7474, Link Cited by: §1, §2, §2.