1. Introduction
In this note we consider the continuous Galerkin (cG) method of arbitrary order applied to the (possibly nonlinear) initial value problem given by
(1)  u′(t) = f(t, u(t)),  t ∈ (0, T],  u(0) = u₀.
Here, for a final time T > 0, u: [0, T] → ℝᵈ signifies the unknown solution and u₀ ∈ ℝᵈ is the initial data that determines u at time t = 0. In addition, f is a (possibly nonlinear) function. It is well known that if f is continuous, the local existence of a solution is implied by Peano's theorem (see, e.g., [10]). Moreover, if f is (locally) Lipschitz continuous, the solution is even unique by the theorem of Picard and Lindelöf (see again, e.g., [10]). In general, problem (1) can only be solved approximately, i.e. numerically. Such a numerical solution scheme relies mainly on two different procedures, which we address in this note: Firstly, the problem at hand needs to be discretized over some finite dimensional subspaces of the solution space. This leads to a series of nonlinear systems that in turn need to be solved numerically. Thus, in a second step, these nonlinear systems can again only be solved approximately, using a suitable linearization scheme. Typically, such a linearization scheme is given by Banach's fixed point iteration procedure, also termed Picard iteration. Here we recall that, in case of f being Lipschitz continuous, the proof of the local well-posedness of problem (1) by the famous result of Picard and Lindelöf is constructive and relies mainly on Banach's fixed point theorem. It is noteworthy that this result can also be achieved constructively using more general iteration schemes. Indeed, with the aim of proving local well-posedness of (1), in [3], for example, the standard Picard iteration procedure is replaced by the simplified Newton iteration. Of course, in this case, the assumption of f being Lipschitz continuous needs to be replaced by stronger assumptions, mainly on the derivative of f.
From a computational point of view, however, the benefit of such an approach using more advanced iteration schemes, such as general Newton-type iteration schemes, should be a lower number of iteration steps within the applied numerical solution procedure.
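To make the expected gain concrete, the following sketch (our own toy example; the nonlinearity, the step size, and the tolerance are illustrative assumptions, not taken from this note) solves the scalar implicit equation x = u0 + k·f(x) of a single implicit time step once by Picard iteration and once by a simplified Newton iteration with the derivative frozen at the initial guess:

```python
# Toy comparison of Picard iteration and simplified Newton iteration for the
# scalar implicit equation  x = u0 + k*f(x)  of one implicit time step.
# The nonlinearity f and all parameters are illustrative assumptions.

def f(u):
    return -u**3

def df(u):
    return -3.0 * u**2

def picard(u0, k, tol=1e-12, itmax=10_000):
    # Banach fixed point iteration x_{n+1} = u0 + k*f(x_n)
    x, n = u0, 0
    while abs(x - u0 - k * f(x)) > tol and n < itmax:
        x = u0 + k * f(x)
        n += 1
    return x, n

def simplified_newton(u0, k, tol=1e-12, itmax=10_000):
    # residual g(x) = x - u0 - k*f(x); derivative frozen at x = u0
    J = 1.0 - k * df(u0)
    x, n = u0, 0
    while abs(x - u0 - k * f(x)) > tol and n < itmax:
        x -= (x - u0 - k * f(x)) / J
        n += 1
    return x, n

x_p, n_p = picard(1.0, 0.1)
x_n, n_n = simplified_newton(1.0, 0.1)
# both converge to the same root; the frozen-Newton variant needs fewer steps
```

Both iterations converge linearly here, but the frozen-Newton contraction factor is much smaller than the Picard one, which is exactly the iteration-count advantage discussed above.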
In this note, the underlying discretization scheme for the approximation of (1) is given by a continuous Galerkin time stepping method of arbitrary order. The idea of such an approximation scheme relies essentially on a weak formulation of (1). Subsequently, this weak formulation is restricted to some finite dimensional subspaces in order to arrive at a discretization of (1). In particular, since the test space of the weak formulation can be chosen to consist of polynomials that are discontinuous at the nodal points, this discretization scheme can be interpreted as an implicit one-step scheme; see, e.g., [13, 7, 1, 12] for further details. Thus, starting from the initial data u₀, each time step leads to a nonlinear system that needs to be solved iteratively. Hence, if the underlying continuous problem (1) is well posed, it is reasonable to expect that a suitable iterative solution procedure resolving the nonlinearity is itself well posed as long as the time steps are sufficiently small. Indeed, as we will see, the proposed analysis for the well-posedness of the discrete version of (1) implied by the continuous Galerkin methodology depends solely on the local time steps and is independent of the local approximation order, i.e. of the local polynomial degree of the solution space.
Outline.
The outline of this work is as follows: In Section The hp-cG Time Stepping Methodology we present the continuous Galerkin (cG) time stepping scheme used for the discretization of the underlying initial value problem (1). Subsequently, we introduce the proposed iterative linearization scheme used for the solution of the nonlinear discrete systems. The purpose of Section Convergence of the simplified linearized hp-cG iteration scheme is to establish the well-posedness of the discretized and linearized problem. This is accomplished by our main result, Theorem 1.6. Since the proof of this result relies on a fixed point iteration argument, we end up with an iterative solution procedure that is tested in the numerical experiments of Section 2. We further discuss the h- and p-versions of the cG methodology and compare the number of iterations between the standard Banach fixed point iteration procedure and the simplified Newton iteration scheme. Finally, we summarize and comment on our findings in Section 3.
Notation.
Throughout this article, (·,·) denotes the Euclidean inner product on ℝᵈ with induced norm |·|. For an interval J we denote by L²(J; ℝᵈ) the usual space of square integrable functions on J with values in ℝᵈ and norm ‖·‖_{L²(J)}. The set L^∞(J; ℝᵈ) is the usual space of bounded functions with norm ‖·‖_{L^∞(J)}. In addition, let H¹(J; ℝᵈ) be the standard Sobolev space with corresponding norm ‖·‖_{H¹(J)}. For a Banach space X we signify by X′ the dual of X. In addition, ⟨·,·⟩ will be used for the dual pairing in X′ × X, i.e. the value of a functional at a given point. We further assume that the right-hand side f in (1) is Lipschitz continuous with respect to the second variable, i.e.
(2)  |f(t, x) − f(t, y)| ≤ L |x − y|  for all t ∈ [0, T] and all x, y ∈ ℝᵈ.
The hp-cG Time Stepping Methodology
Let 0 = t₀ < t₁ < ⋯ < t_M = T be a partition of the interval (0, T) into open subintervals J_j = (t_{j−1}, t_j). By k_j = t_j − t_{j−1} we denote the local time steps, with j = 1, …, M, and r_j ≥ 1 represents the local polynomial degree, i.e. the local approximation order. Furthermore, the space of all polynomials of degree at most r will be given by
We further introduce the vector
in order to allocate the local approximation orders. The following approximation and test spaces will be used in the sequel. Notice that since the approximation space consists of continuous elements, we need to choose a discontinuous test space (as can be seen from the fact that the polynomial degree of the test space is reduced by one).
1.1. Discretization.
The cG time stepping scheme, first introduced in [8, 9, 5, 4], now reads as follows: Find such that there holds
(3)  
It is noteworthy that the discontinuous character of the test space allows us to choose test functions supported on a single subinterval, and therefore problem (3) decouples on each subinterval into
(4)  
with , i.e. the discretization can be interpreted as an implicit one-step scheme.
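For illustration, in the lowest-order case the decoupled problem (4) on a single subinterval asks for a linear function whose increment matches the integrated right-hand side; approximating that integral by midpoint quadrature yields the implicit midpoint rule, a standard observation for cG of degree one. The following sketch (with an illustrative right-hand side of our own choosing, not taken from this note) performs such steps, resolving the nonlinearity by a fixed point iteration:

```python
import math

# One lowest-order cG step on (t0, t0 + k): seek linear u with u(t0) = u0 and
# u(t0 + k) = u1 satisfying  u1 - u0 = integral of f(t, u(t)) over the step;
# the integral is approximated by midpoint quadrature, which gives the
# implicit midpoint rule.
def cg1_step(f, t0, u0, k, tol=1e-13, itmax=200):
    u1 = u0  # initial guess for the right endpoint value
    for _ in range(itmax):
        u1_new = u0 + k * f(t0 + 0.5 * k, 0.5 * (u0 + u1))
        if abs(u1_new - u1) < tol:
            break
        u1 = u1_new
    return u1

# drive the one-step scheme on u' = -u, u(0) = 1 (exact solution exp(-t))
u, t, k = 1.0, 0.0, 0.01
while t < 1.0 - 1e-12:
    u = cg1_step(lambda s, v: -v, t, u, k)
    t += k
err = abs(u - math.exp(-1.0))  # second-order accurate: err = O(k^2)
```

The fixed point map is a contraction once k is small enough, matching the smallness condition on the time steps used throughout the analysis below.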
1.2. Linearization.
For and we introduce the operator by
In addition, let denote a map such that for , there exists a uniform constant with
(5) 
and define further the operator
by
Notice that if is Fréchet differentiable in with derivative with respect to , then the Gâteaux derivative in direction is given by
Based on these definitions we introduce the following linearized hp-cG iteration scheme at time node : For and given solve
(6) 
for and compute the update
In this note we freeze the second variable in the operator , i.e. we set for all . This implies the following simplified linearized hp-cG iteration scheme at time node :
For and given solve
(7) 
for and compute the update
In case of , the above iteration procedure (7) can be interpreted as a simplified Newton iteration scheme.
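The practical appeal of freezing the derivative is that the linear operator on the left-hand side of (7) has to be assembled and factorized only once per time step. A minimal sketch of this idea for one implicit step of a 2x2 system (the pendulum-type right-hand side, the backward Euler residual, and all parameters are illustrative assumptions of ours, not the cG system of this note):

```python
import math

def F(u):                 # hypothetical right-hand side of a 2x2 ODE system
    x, y = u
    return (y, -math.sin(x))

def JF(u):                # its Jacobian matrix
    x, y = u
    return ((0.0, 1.0), (-math.cos(x), 0.0))

def solve2(A, b):         # direct solve of a 2x2 linear system via Cramer's rule
    (a11, a12), (a21, a22) = A
    det = a11 * a22 - a12 * a21
    return ((b[0] * a22 - a12 * b[1]) / det,
            (a11 * b[1] - b[0] * a21) / det)

def simplified_newton_step(u0, k, tol=1e-12, itmax=100):
    # residual of the implicit step: g(U) = U - u0 - k*F(U);
    # iteration matrix J = I - k*F'(u0), frozen at the initial value u0
    Jf = JF(u0)
    J = ((1.0 - k * Jf[0][0], -k * Jf[0][1]),
         (-k * Jf[1][0], 1.0 - k * Jf[1][1]))
    u = u0
    for _ in range(itmax):
        Fu = F(u)
        g = (u[0] - u0[0] - k * Fu[0], u[1] - u0[1] - k * Fu[1])
        if max(abs(g[0]), abs(g[1])) < tol:
            break
        d = solve2(J, g)
        u = (u[0] - d[0], u[1] - d[1])
    return u

u1 = simplified_newton_step((1.0, 0.0), 0.1)
```

Since the frozen matrix J is factorized once, each iteration costs only one residual evaluation and one back-substitution, while still converging rapidly for small k.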
Convergence of the simplified linearized hp-cG iteration scheme
The aim of this Section is to show the existence of a solution of (4). Our strategy is to show that, for sufficiently small time steps, the iteration scheme (7) is well defined and convergent. Before we start, we collect some auxiliary results.
Lemma 1.1 (Poincaré inequality).
Let J = (a, b) with k = b − a, and let u ∈ H¹(J; ℝᵈ) with u(a) = 0. Then there holds the Poincaré inequality
(8)  ‖u‖_{L²(J)} ≤ C_P k ‖u′‖_{L²(J)},
with a constant C_P > 0. The constant C_P is independent of k and u.
Proof.
Let us further introduce the following set
(9) 
and notice that Poincaré’s inequality (8) holds true for all , i.e. we have
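As a quick numerical illustration (entirely our own; the test function is an arbitrary smooth choice vanishing at the left endpoint), one can check that for functions vanishing at an endpoint of an interval of length k, the ratio of the L² norm of the function to the L² norm of its derivative scales linearly in k, which is the essential content of the Poincaré inequality (8):

```python
import math

def l2(g, a, b, n=4000):
    # composite midpoint quadrature of the L2 norm of g over (a, b)
    h = (b - a) / n
    s = sum(g(a + (i + 0.5) * h) ** 2 for i in range(n))
    return math.sqrt(s * h)

ratios = []
for k in (1.0, 0.5, 0.25, 0.125):
    u = lambda t: math.sin(math.pi * t / (2 * k))                       # u(0) = 0
    du = lambda t: (math.pi / (2 * k)) * math.cos(math.pi * t / (2 * k))
    ratios.append(l2(u, 0.0, k) / (k * l2(du, 0.0, k)))
# the normalized ratio ||u|| / (k ||u'||) is the same constant for every k,
# confirming that the Poincaré constant is independent of the interval length
```

For this particular test function the normalized ratio equals 2/pi for every k; the inequality only asserts a uniform upper bound, which this family attains up to a constant.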
The following result addresses the invertibility of the operator .
Lemma 1.2.
Let . Then, if the operator
is invertible on .
Proof.
Since the linear map operates on a finite dimensional space, it suffices to show that its kernel is trivial. Suppose there exists such that
holds . Choosing , we conclude
Employing the Cauchy-Schwarz inequality we get
Hence, the above estimate implies
Since we can invoke the Poincaré inequality and obtain, together with our assumption , the following contradiction
∎
Remark 1.3.
Note that if the operator is invertible without any condition on . Indeed, let with
i.e. for there holds
which implies and therefore .
Next, for we introduce the following map
(10) 
defined by
Notice that is well defined by the above Lemma 1.2, i.e.
The next result shows that is a Lipschitz continuous map.
Lemma 1.4.
If , then there holds
(11) 
Proof.
Remark 1.5.
Notice that for , estimate (11) is simply
Using the above results we are now ready to prove the main result of this note:
Theorem 1.6.
Suppose there holds
(14) 
and assume further that f is Lipschitz continuous in the second variable with Lipschitz constant L. Then the cG method (3) admits a unique solution.
Proof.
Remark 1.7.
This solution can be obtained iteratively by employing the iterative scheme from (7).
2. Numerical Experiments
In this Section we show some numerical experiments illustrating the theoretical convergence results from Section Convergence of the simplified linearized hp-cG iteration scheme. In doing so, we make use of the a priori error results given in [13]. As a preparation, we need to define the number of degrees of freedom DOF for the cG method, which is simply the sum of all DOFs on each subinterval that are needed to compute the numerical solution on . Here we fix the polynomial degree, i.e. we set and therefore . In addition, we assume that the solution of (1) belongs to , and that the partition of is quasi-uniform. Then the a priori error result with respect to the norm is given in [13] through
(15) 
The constant depends on and is independent of and . Let us further point out the following two aspects arising from the a priori error result (15):
h-version
The h-version of the cG scheme means that convergence is achieved by increasing the number of time steps at a fixed approximation order on each interval . In case the solution of (1) is analytic, and therefore is large, the error estimate (15) reads
For or equivalently , we see that the rate of convergence with respect to the norm is .
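The convergence rates reported in the tables below can be read off from the error decay under repeated bisection via the experimental order of convergence; a small sketch (using the rounded error values of the first error column of Table 1):

```python
import math

def eoc(errors):
    # experimental order of convergence under mesh bisection:
    # EOC_i = log2(e_i / e_{i+1}), since the step size halves between runs
    return [math.log2(e0 / e1) for e0, e1 in zip(errors, errors[1:])]

# errors of the lowest-order run from Table 1 (rounded values)
errs = [2.94e-2, 7.54e-3, 1.90e-3, 4.77e-4, 1.19e-4]
orders = eoc(errs)  # all entries are close to 2, the expected h-version rate
```

With unrounded error values the computed orders match the "Ord." columns of Table 1.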
p-version
For the p-version of the cG method we keep the time partition fixed but let the approximation order vary. Thus, convergence is obtained by increasing the approximation order. In addition, it can be shown that for analytic solutions the p-version admits high order convergence (even exponential with respect to ). We refer to [13] and [11] for further details.
Example 2.1.
The first example is given by the initial value problem
(16) 
The exact solution is given by . We test the simplified Newton scheme, i.e. we consider (7) using . Notice that and therefore we can expect convergence by Theorem 1.6. First we present the h-version of the cG method in Figure 1. The numerical test was obtained by bisection of the time interval . In Figure 1 we depict the decay of the error, from which we can clearly see the convergence order predicted by the error estimate (15). Table 1 shows the error as well as the convergence order and the number of elements. In addition, Tables 2 and 3 also show the number of iterations compared with the number of iterations when solving (7) with , which is simply the standard Banach fixed point iteration procedure. As can be seen, the simplified Newton scheme needs a significantly lower number of iterations in order to achieve the errors given in Table 1. Moreover, in Figure 2 we depict the p-version of the cG method, from which we observe that the convergence rates are even exponential. Here we choose a fixed partition of using and elements, i.e. we increase the approximation order on a fixed partition of .
#el.  Ord.  Ord.  Ord.  Ord.  Ord.  

1  2.94e-02    2.83e-3    4.51e-4    8.54e-6    6.78e-6   
2  7.54e-03  1.97  4.39e-4  2.72  2.41e-5  4.23  1.30e-6  2.71  8.76e-8  6.27 
4  1.90e-03  1.99  5.48e-5  2.97  1.51e-6  3.99  4.18e-8  4.96  1.34e-9  6.02 
8  4.77e-04  2.00  6.89e-6  2.99  9.48e-8  4.00  1.31e-9  4.99  2.11e-11  5.99 
16  1.19e-04  2.00  8.62e-7  3.00  5.93e-9  4.00  4.10e-11  5.00  3.31e-14  6.00 
32  2.98e-05  2.00  1.07e-7  3.00  3.70e-10  4.00  1.28e-12  5.00  –  – 
64  7.46e-06  2.00  1.35e-8  3.00  2.31e-11  4.00  4.01e-14  5.00  –  – 
128  1.86e-06  2.00  1.68e-9  3.00  1.45e-12  4.00  –  –  –  – 
256  4.66e-07  2.00  2.10e-10  3.00  8.80e-14  4.00  –  –  –  – 
512  1.17e-07  2.00  2.63e-11  3.00  –  –  –  –  –  – 
1024  2.91e-08  2.00  3.29e-12  3.00  –  –  –  –  –  – 
2048  7.28e-09  2.00  –  –  –  –  –  –  –  – 
4096  1.82e-09  2.00  –  –  –  –  –  –  –  – 
8192  4.55e-10  2.00  –  –  –  –  –  –  –  – 
#el.  It.  It.  It.  It.  It.  

1  2.94e-02  27  2.83e-3  19  4.51e-4  17  8.54e-6  16  6.78e-6  15 
2  7.54e-03  28  4.39e-4  23  2.41e-5  22  1.30e-6  22  8.76e-8  21 
4  1.90e-03  39  5.48e-5  35  1.51e-6  33  4.18e-8  34  1.34e-9  34 
8  4.77e-04  62  6.89e-6  56  9.48e-8  56  1.31e-9  56  2.11e-11  56 
16  1.19e-04  96  8.62e-7  96  5.93e-9  96  4.10e-11  96  3.31e-14  96 
32  2.98e-05  185  1.07e-7  177  3.70e-10  173  1.28e-12  173  –  – 
64  7.46e-06  320  1.35e-8  320  2.31e-11  319  4.01e-14  319  –  – 
128  1.86e-06  628  1.68e-9  624  1.45e-12  627    –  –  – 
256  4.66e-07  1024  2.10e-10  1024  8.80e-14  1024  –  –  –  – 
512  1.17e-07  2048  2.63e-11  2047  –  –  –  –  –  – 
1024  2.91e-08  4089  3.29e-12  4090  –  –  –  –  –  – 
2048  7.28e-09  7970  –  –  –  –  –  –  –  – 
4096  1.82e-09  12288  –  –  –  –  –  –  –  – 
8192  4.55e-10  24579  –  –  –  –  –  –  –  – 
#el.  It.  It.  It.  It.  It.  

1  2.94e-02  27  2.83e-3  19  4.51e-4  18  8.54e-6  16  6.78e-6  15 
2  7.54e-03  35  4.39e-4  28  2.41e-5  27  1.30e-6  24  8.76e-8  24 
4  1.90e-03  51  5.48e-5  44  1.51e-6  45  4.18e-8  40  1.34e-9  40 
8  4.77e-04  83  6.89e-6  74  9.48e-8  77  1.31e-9  69  2.11e-11  69 
16  1.19e-04  138  8.62e-7  127  5.93e-9  135  4.10e-11  120  3.31e-14  120 
32  2.98e-05  239  1.07e-7  222  3.70e-10  242  1.28e-12  216  –  – 
64  7.46e-06  425  1.35e-8  393  2.31e-11  440  4.01e-14  398  –  – 
128  1.86e-06  753  1.68e-9  731  1.45e-12  848    –  –  – 
256  4.66e-07  1427  2.10e-10  1315  8.80e-14  1523  –  –  –  – 
512  1.17e-07  2517  2.63e-11  2498  –  –  –  –  –  – 
1024  2.91e-08  4903  3.29e-12  4800  –  –  –  –  –  – 
2048  7.28e-09  8922  –  –  –  –  –  –  –  – 
4096  1.82e-09  16316  –  –  –  –  –  –  –  – 
8192  4.55e-10  32384  –  –  –  –  –  –  –  – 
Example 2.2.
The second example is given by
(17) 
The exact solution is given by . Notice that (2) and (5) are not satisfied. However, we again depict in Figure 3 the h-version of the cG method (again obtained by bisection of the time interval ), from which we can clearly see the convergence order predicted by the error estimate (15). Furthermore, in Figure 4 we show the p-version of the cG method, from which we observe that the convergence rates are again exponential (again we choose a fixed partition of using and elements, i.e. we increase the approximation order on a fixed partition of ).
Example 2.3.
We finally consider the following nonlinear initial value problems:
3. Conclusions
The aim of this note was the numerical solution of initial value problems by means of the continuous Galerkin method for the discretization of the underlying nonlinear problem. We have shown the existence of the discrete solution under reasonable assumptions. In addition, we have proved that, for sufficiently small time steps, the linearized continuous Galerkin scheme admits a unique solution that can be obtained iteratively. Moreover, we have tested the proposed iteration scheme on a series of numerical examples. Our numerical experiments clearly illustrate the capability of our approach. Specifically, the simplified Newton iteration scheme was able to significantly reduce the computational effort in terms of the number of iterations.
References
 [1] (2013) Adaptive finite element methods for differential equations. Lectures in Mathematics. ETH Zürich, Birkhäuser Basel. External Links: ISBN 9783034876056, Link Cited by: §1.
 [2] (2001) Finite elements: theory, fast solvers, and applications in solid mechanics. Cambridge University Press. External Links: ISBN 9780521011952, LCCN 00069656 Cited by: Convergence of the simplified linearized hp-cG iteration scheme.
 [3] (2011) Newton methods for nonlinear problems: affine invariance and adaptive algorithms. Springer Series in Computational Mathematics, Springer Berlin Heidelberg. External Links: ISBN 9783642238994 Cited by: §1.

 [4] (1994) Global error control for the continuous Galerkin finite element method for ordinary differential equations. RAIRO Modélisation Mathématique et Analyse Numérique 28 (7), pp. 815–852. External Links: ISSN 0764583X, Link Cited by: §1.1.
 [5] (1995) A posteriori error bounds and global error control for approximation of ordinary differential equations. SIAM Journal on Numerical Analysis 32 (1), pp. 1–48. External Links: Document, ISSN 00361429, Link Cited by: §1.1.
 [6] (2010) Partial differential equations. Graduate Studies in Mathematics, American Mathematical Society. External Links: ISBN 9780821849743, LCCN 2009044716 Cited by: Convergence of the simplified linearized hp-cG iteration scheme.
 [7] (2018) Continuous and discontinuous Galerkin time stepping methods for nonlinear initial value problems with application to finite time blow-up. Numerische Mathematik 138 (3), pp. 767–799. External Links: ISSN 09453245, Document, Link Cited by: §1.
 [8] (1972) Discrete Galerkin and related one-step methods for ordinary differential equations. Mathematics of Computation 26, pp. 881–891. External Links: ISSN 00255718, Link Cited by: §1.1.
 [9] (1972) One-step piecewise polynomial Galerkin methods for initial value problems. Mathematics of Computation 26, pp. 415–426. External Links: ISSN 00255718, Link Cited by: §1.1.
 [10] (2004) Principles of differential equations. Pure and Applied Mathematics: A Wiley Series of Texts, Monographs and Tracts, Wiley. External Links: ISBN 9780471649564, LCCN 2004040890, Link Cited by: §1.
 [11] (1998) p- and hp-finite element methods: theory and applications in solid and fluid mechanics. Clarendon Press, Oxford. External Links: ISBN 9780198503903, LCCN 98023129 Cited by: §2.
 [12] (1984) Galerkin finite element methods for parabolic problems. Lecture notes in mathematics, Springer. External Links: ISBN 9783540632368, Link Cited by: §1.
 [13] (2005) An a priori error analysis of the hp-version of the continuous Galerkin FEM for nonlinear initial value problems. Journal of Scientific Computing 25 (3), pp. 523–549. External Links: Document, ISSN 08857474, Link Cited by: §1, §2, §2.