In this note we consider the continuous Galerkin (cG) method of arbitrary order applied to the (possibly nonlinear) initial value problem given by
$u'(t) = f(t, u(t)), \quad t \in (0, T], \qquad u(0) = u_0. \qquad (1)$
Here, for a final time $T > 0$, $u \colon [0, T] \to \mathbb{R}^d$ signifies the unknown solution and $u_0 \in \mathbb{R}^d$ is the initial data that determines $u$ at time $t = 0$. In addition, $f \colon [0, T] \times \mathbb{R}^d \to \mathbb{R}^d$ is a (possibly nonlinear) function. It is well known that if $f$ is continuous, the local existence of a solution is implied by Peano's theorem. Moreover, if $f$ is (locally) Lipschitz continuous, the solution is even unique by the theorem of Picard and Lindelöf. In general, problem (1) can only be solved approximately, i.e. numerically. Such a numerical solution scheme relies mainly on two different procedures, both of which we address in this note: firstly, the problem at hand needs to be discretized over finite dimensional subspaces of the solution space, which leads to a sequence of nonlinear systems that again need to be solved numerically. Thus, in a second step, these nonlinear systems can in turn only be solved approximately, by means of a suitable linearization scheme. Typically, such a linearization scheme is given by Banach's fixed point iteration, also termed Picard iteration. Here we recall that, in case of $f$ being Lipschitz continuous, the proof of the local well-posedness of problem (1) by the famous result of Picard and Lindelöf is constructive and relies mainly on Banach's fixed point theorem. It is noteworthy that this result can also be achieved constructively using more general iteration schemes. Indeed, with the aim of proving local well-posedness of (1), the standard Picard iteration has, for example, been replaced by the simplified Newton iteration. Of course, in this case the assumption of $f$ being Lipschitz continuous needs to be replaced by stronger assumptions, mainly on the derivative of $f$.
However, from a computational point of view, a benefit of such an approach based on more advanced iteration schemes, such as general Newton-type methods, is the lower number of iteration steps required within the numerical solution procedure.
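The constructive Picard–Lindelöf argument mentioned above can be made concrete numerically: the fixed point iteration $u_{k+1}(t) = u_0 + \int_0^t f(s, u_k(s))\,\mathrm{d}s$ is discretized on a grid with trapezoidal quadrature. The following minimal Python sketch is illustrative only; the function names, grid sizes, and the test problem $u' = u$ are assumptions of this note's editor, not taken from the text:

```python
import math

def picard(f, u0, T, n_grid=200, n_iter=20):
    """Picard iteration u_{k+1}(t) = u0 + integral of f(s, u_k(s)) over (0, t),
    discretized with cumulative trapezoidal quadrature on a uniform grid."""
    h = T / n_grid
    t = [i * h for i in range(n_grid + 1)]
    u = [u0] * (n_grid + 1)               # initial guess: the constant u0
    for _ in range(n_iter):
        fu = [f(ti, ui) for ti, ui in zip(t, u)]
        new = [u0]
        for i in range(n_grid):           # running trapezoidal sums
            new.append(new[-1] + 0.5 * h * (fu[i] + fu[i + 1]))
        u = new
    return t, u

# Illustrative test problem: u' = u, u(0) = 1 on [0, 1]; exact solution exp(t).
t, u = picard(lambda s, v: v, 1.0, 1.0)
err = abs(u[-1] - math.exp(1.0))
```

Since $f$ here is globally Lipschitz with constant $L = 1$, the iterates contract with factor $T^k L^k / k!$, so a few iterations already dominate the quadrature error.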
In this note, the underlying discretization scheme for the approximation of (1) is a continuous Galerkin time stepping method of arbitrary order. This approximation scheme relies essentially on a weak formulation of (1), which is subsequently restricted to suitable finite dimensional subspaces in order to arrive at a discretization of (1). In particular, since the test space of the weak formulation can be chosen to consist of polynomials that are discontinuous at the nodal points, this discretization scheme can be interpreted as an implicit one-step scheme; see, e.g., [13, 7, 1, 12] for further details. Thus, starting from the initial data $u_0$, each time step implies a nonlinear system that needs to be solved iteratively. If the underlying continuous problem (1) is well posed, it is reasonable to expect that a suitable iterative solution procedure resolving the nonlinearity is itself well posed as long as the time steps are sufficiently small. Indeed, as we will see, the proposed well-posedness analysis of the discrete version of (1) implied by the continuous Galerkin methodology depends solely on the local time steps and is independent of the local approximation order, i.e. of the local polynomial degree of the solution space.
The outline of this work is as follows: in Section The hp-cG Time Stepping Methodology we present the continuous Galerkin (cG) time stepping scheme used for the discretization of the underlying initial value problem (1), and subsequently introduce the proposed iterative linearization scheme for the solution of the nonlinear discrete systems. The purpose of Section Convergence of the simplified linearized hp-cG iteration scheme is to establish the well-posedness of the discretized and linearized problem. This will be accomplished by our main result, Theorem 1.6. Since the proof of this result relies on a fixed point iteration argument, we end up with an iterative solution procedure that is tested in the numerical experiments of Section 2. We further discuss the $h$- and $p$-version of the cG methodology and compare the number of iterations between the standard Banach fixed point iteration and the simplified Newton iteration scheme. Finally, we summarize and comment on our findings in Section 3.
Throughout this article, $(\cdot, \cdot)$ denotes the Euclidean product on $\mathbb{R}^d$ with induced norm $|\cdot|$. For an interval $J$ we denote by $L^2(J; \mathbb{R}^d)$ the usual space of square integrable functions on $J$ with values in $\mathbb{R}^d$ and norm $\|\cdot\|_{L^2(J)}$, and by $L^\infty(J; \mathbb{R}^d)$ the usual space of bounded functions with norm $\|\cdot\|_{L^\infty(J)}$. In addition, let $H^1(J; \mathbb{R}^d)$ be the standard Sobolev space with corresponding norm. For a Banach space $X$ we signify by $X'$ the dual of $X$; furthermore, $\langle \cdot, \cdot \rangle$ will be used for the dual pairing in $X' \times X$, i.e. $\langle \ell, x \rangle$ is the value of $\ell \in X'$ at the point $x \in X$. We further assume that the right-hand side $f$ in (1) is Lipschitz continuous with respect to the second variable, i.e. there exists a constant $L > 0$ such that
$|f(t, u) - f(t, v)| \le L\, |u - v| \quad \text{for all } t \in [0, T] \text{ and all } u, v \in \mathbb{R}^d. \qquad (2)$
The hp-cG Time Stepping Methodology
Let $0 = t_0 < t_1 < \dots < t_N = T$ be a partition of the interval $[0, T]$ into open sub-intervals $I_n = (t_{n-1}, t_n)$, $n = 1, \dots, N$. By $h_n = t_n - t_{n-1}$ we denote the local time steps, and $r_n \ge 1$ represents the local polynomial degree, i.e. the local approximation order. Furthermore, the space of all polynomials of degree at most $r$ on an interval $J$, with values in $\mathbb{R}^d$, will be denoted by $\mathbb{P}^r(J)$.
We further introduce the vector $\mathbf{r} = (r_1, \dots, r_N)$ in order to allocate the local approximation orders. In the sequel we use an approximation space of globally continuous functions whose restriction to each $I_n$ is a polynomial of degree at most $r_n$, together with a test space of piecewise polynomials of local degree at most $r_n - 1$ without any continuity requirement at the nodal points. Notice that since the approximation space consists of continuous elements, we need to choose a discontinuous test space (as can be seen from the fact that the local order of the test space is $r_n - 1$).
It is noteworthy that the discontinuous character of the test space allows us to choose test functions supported on a single subinterval, and therefore problem (3) decouples on each subinterval $I_n$ into the local problem (4), with initial value given by the discrete solution at the preceding time node $t_{n-1}$; i.e., the discretization can be interpreted as an implicit one-step scheme.
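To make the one-step character concrete: in the lowest-order case $r_n = 1$ with midpoint quadrature, the cG scheme coincides with the implicit midpoint rule, and each local problem can be resolved, for instance, by a fixed point iteration. The following Python sketch is illustrative only; the function names and the test problem are this sketch's assumptions, not taken from the text:

```python
import math

def cg1_step(f, t0, u0, h, n_iter=50):
    """One cG(1) step: with midpoint quadrature the local problem reads
    u_new = u0 + h * f(t0 + h/2, (u0 + u_new)/2),
    here resolved by a Banach fixed point (Picard) iteration."""
    u = u0
    for _ in range(n_iter):
        u = u0 + h * f(t0 + 0.5 * h, 0.5 * (u0 + u))
    return u

def cg1(f, u0, T, n_steps):
    """Sequential time stepping: each step only needs the previous nodal value."""
    h, u, t = T / n_steps, u0, 0.0
    for _ in range(n_steps):
        u = cg1_step(f, t, u, h)
        t += h
    return u

# Illustrative test problem: u' = -u, u(0) = 1 on [0, 1].
approx = cg1(lambda t, u: -u, 1.0, 1.0, 100)
err = abs(approx - math.exp(-1.0))
```

The inner fixed point iteration contracts with factor $h L / 2$, so it is well defined precisely for sufficiently small time steps, mirroring the discussion above.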
For $t \in I_n$ and $u \in \mathbb{R}^d$ we introduce the operator representing the local cG problem in weak residual form. In addition, let $\mathsf{A}(t, u)$ denote a map approximating the derivative of $f$ with respect to the second variable, bounded by a uniform constant $C > 0$, and define further the associated linearized operator. Notice that if $f$ is Fréchet differentiable in the second variable with derivative $\partial_u f$, then for the choice $\mathsf{A} = \partial_u f$ the Gâteaux derivative of the residual operator in a given direction coincides with this linearized operator.
Based on these definitions we introduce the following linearized hp-cG iteration scheme at time node $t_{n-1}$: for $k = 0, 1, 2, \dots$ and a given iterate $U^k$, solve the linearized system for the increment $\delta^k$ and compute the update $U^{k+1} = U^k + \delta^k$.
In this note we freeze the second variable in the operator $\mathsf{A}$, i.e. we evaluate $\mathsf{A}$ at the initial value of the current time step for all iterations $k$. This implies the following simplified linearized hp-cG iteration scheme at time node $t_{n-1}$: for $k = 0, 1, 2, \dots$ and a given iterate $U^k$, solve the frozen linearized system for the increment $\delta^k$ and compute the update $U^{k+1} = U^k + \delta^k$.
In case $\mathsf{A} = \partial_u f$, the above iteration procedure (7) can be interpreted as a simplified Newton iteration scheme.
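For a scalar model problem, the effect of freezing the derivative can be sketched as follows: a simplified Newton iteration assembles the derivative once and reuses it in every step, trading quadratic for linear convergence against much cheaper iterations. The backward Euler residual below is an illustrative stand-in for the local cG system (the actual cG($r$) step couples $r + 1$ coefficients); all names and numbers are assumptions of this sketch:

```python
def simplified_newton(g, dg, x0, tol=1e-12, max_iter=100):
    """Simplified Newton: the derivative is evaluated once at the initial
    guess and reused in every iteration."""
    J = dg(x0)                    # frozen derivative
    x = x0
    for k in range(max_iter):
        dx = -g(x) / J
        x += dx
        if abs(dx) < tol:
            return x, k + 1
    return x, max_iter

# One implicit step for u' = -u^3 with backward Euler (illustrative model):
# residual g(u) = u - u_old + h * u^3 = 0.
h, u_old = 0.1, 1.0
g  = lambda u: u - u_old + h * u**3
dg = lambda u: 1 + 3 * h * u**2
u_new, iters = simplified_newton(g, dg, u_old)
```

Since the frozen slope stays close to the exact one over a small time step, the iteration still converges rapidly, which is exactly the regime exploited by the analysis below.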
Convergence of the simplified linearized hp-cG iteration scheme
The aim of this section is to show the existence of a solution of the local problem (4). Our strategy is to show that for sufficiently small time steps, the iteration scheme (7) is well defined and convergent. Before we start, we collect some auxiliary results.
Lemma 1.1 (Poincaré inequality).
Let $I = (a, b)$ be an interval of length $h = b - a$ and let $v \in H^1(I; \mathbb{R}^d)$ with $v(a) = 0$. Then there holds the Poincaré inequality
$\|v\|_{L^2(I)} \le c_P\, h\, \|v'\|_{L^2(I)}, \qquad (8)$
with a constant $c_P > 0$ independent of $v$ and $h$.
Let us further introduce the set of local trial functions vanishing at the left nodal point of $I_n$, and notice that Poincaré's inequality (8) holds true for all elements $V$ of this set, i.e. we have $\|V\|_{L^2(I_n)} \le c_P\, h_n\, \|V'\|_{L^2(I_n)}$.
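The $h$-scaling in (8) can be checked by hand for a simple polynomial vanishing at the left endpoint; the concrete choice $v(t) = (t - a)^2$ below is an illustrative assumption of this sketch:

```python
import math

def ratio(h):
    """||v|| / ||v'|| in L2(a, a+h) for v(t) = (t - a)^2.
    Exact norms: ||v||^2 = h^5 / 5 and ||v'||^2 = 4 h^3 / 3,
    hence the ratio equals sqrt(3/20) * h."""
    norm_v  = math.sqrt(h**5 / 5)
    norm_dv = math.sqrt(4 * h**3 / 3)
    return norm_v / norm_dv

# Halving h halves the ratio, confirming the linear dependence on h in (8).
r1, r2 = ratio(1.0), ratio(0.5)
```

This linear dependence on $h_n$ is precisely what makes the smallness conditions on the time steps below effective.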
The following result addresses the invertibility of the linearized operator.

Lemma 1.2. If the time step $h_n$ is sufficiently small with respect to the Poincaré constant $c_P$ and the bound $C$ on $\mathsf{A}$, then the linearized operator is invertible on the local approximation space.
Since the linear map operates on a finite dimensional space, it suffices to show that its kernel is trivial. Suppose there exists a kernel element $W$, i.e. the homogeneous linearized equation holds for all admissible test functions. Choosing a suitable test function and employing the Cauchy–Schwarz inequality, we can bound $\|W'\|_{L^2(I_n)}$ in terms of $C\, \|W\|_{L^2(I_n)}$. Since $W$ vanishes at the left nodal point, we can invoke the Poincaré inequality (8) and obtain, together with our smallness assumption on $h_n$, a contradiction unless $W = 0$.
Note that under an additional structural (e.g. sign) condition on $\mathsf{A}$, the operator is invertible without any condition on the time step $h_n$. Indeed, let $W$ be an element of the kernel; testing the homogeneous equation appropriately yields $\|W'\|_{L^2(I_n)} = 0$, which implies that $W$ is constant and therefore, since $W$ vanishes at the left nodal point, $W = 0$.
Next, for sufficiently small time steps we introduce the fixed point map associated with the iteration scheme (7). Notice that this map is well defined by the above Lemma 1.2. The next result shows that it is, in addition, a Lipschitz continuous map.
Under the assumptions of Lemma 1.2, the fixed point map satisfies the Lipschitz estimate (11). Notice that in the special case $\mathsf{A} \equiv 0$, estimate (11) reduces to the standard contraction estimate of the Picard iteration.
Using the above results we are now ready to prove the main result of this note:
Theorem 1.6. Suppose the time steps are sufficiently small, and assume further that $f$ is Lipschitz continuous in the second variable with Lipschitz constant $L$. Then, the cG method (3) admits a unique solution. This solution can be obtained iteratively by employing the iteration scheme (7).
2. Numerical Experiments
In this section we show some numerical experiments illustrating the theoretical convergence results from Section Convergence of the simplified linearized hp-cG iteration scheme. In doing so, we also recall some a priori error results from the literature. As a preparation, we define the number of degrees of freedom (DOF) for the cG method, which is simply the sum of the degrees of freedom on each subinterval $I_n$ needed to compute the numerical solution on $[0, T]$. Thus, $\mathrm{DOF} = \sum_{n=1}^{N} \mathrm{DOF}_n$.
Here we fix the polynomial degree, i.e. we set $r_n = r$ for all $n = 1, \dots, N$. In addition, we assume that the solution $u$ of (1) is sufficiently smooth and that the partition of $[0, T]$ is quasi-uniform. Then the a priori error estimate with respect to the $L^2$-norm reads
$\|u - U\|_{L^2(0,T)} \le C\, h^{r+1}, \qquad (15)$
where $h$ denotes the maximal time step and the constant $C > 0$ depends on the solution $u$ but is independent of the partition. Let us further point out the following two aspects arising from the a priori error result (15):
The $h$-version of the cG scheme means that convergence is achieved by increasing the number of time steps at a fixed approximation order on each interval $I_n$. In case the solution of (1) is analytic, and therefore arbitrarily smooth, the error estimate (15) holds with the full rate: for $h \to 0$, or equivalently an increasing number of time steps, the rate of convergence with respect to the $L^2$-norm is $r + 1$.
For the $p$-version of the cG method we keep the time partition fixed but let the approximation order vary; thus, convergence is obtained by increasing the approximation order. In addition, it can be shown that for analytic solutions the $p$-version admits high order convergence (even exponential with respect to the number of degrees of freedom). We refer to the literature on $hp$-methods for further details.
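The convergence orders reported in the experiments below can be extracted from errors on successively bisected partitions: if $e(h) \approx C h^{p}$, then $p \approx \log_2\big(e(h)/e(h/2)\big)$. A short Python sketch of this post-processing, with purely synthetic second-order data (the numbers are illustrative assumptions, not results from the text):

```python
import math

def observed_orders(errors):
    """Estimated convergence orders p_k = log2(e_k / e_{k+1}) for errors
    obtained on successively bisected time partitions."""
    return [math.log2(e0 / e1) for e0, e1 in zip(errors, errors[1:])]

# Synthetic errors behaving like C * h^2 under bisection (illustrative only).
errors = [0.5 * (0.1 / 2**k) ** 2 for k in range(5)]
orders = observed_orders(errors)
```

For data following the estimate (15) exactly, every entry of `orders` equals $r + 1$.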
The first example is given by the initial value problem
The exact solution is known in closed form. We test the simplified Newton scheme, i.e. we consider (7) with the frozen derivative. Since the assumptions of Theorem 1.6 are satisfied, we can expect convergence of the iteration. First we present the $h$-version of the cG method in Figure 1; the numerical results were obtained by successive bisection of the time interval. In Figure 1 we depict the decay of the $L^2$-error, from which we can clearly observe the convergence order predicted by the error estimate (15). Table 1 shows the $L^2$-error as well as the convergence order and the number of elements. In addition, Tables 2 and 3 compare these iteration counts with the number of iterations needed when solving (7) with $\mathsf{A} \equiv 0$, which is simply the standard Banach fixed point iteration. As can be seen, the simplified Newton scheme needs a significantly lower number of iterations to reach the $L^2$-errors given in Table 1. Moreover, in Figure 2 we depict the $p$-version of the cG method, from which we observe that the convergence is even exponential. Here we choose a fixed partition of the time interval and increase the approximation order on this fixed partition.
The second example is given by
The exact solution is again known in closed form. Notice that the assumptions (2) and (5) are not satisfied in this example. However, we again depict in Figure 3 the $h$-version of the cG method (again obtained by bisection of the time interval), from which we can clearly observe the convergence order predicted by the error estimate (15). Furthermore, in Figure 4 we see the $p$-version of the cG method, from which we observe that the convergence rates are again exponential (here we again choose a fixed partition of the time interval and increase the approximation order).
We finally consider the following nonlinear initial value problems:
The aim of this note was the numerical solution of nonlinear initial value problems by means of the continuous Galerkin time stepping method. We have shown the existence of the discrete solution under reasonable assumptions. In addition, we have proved that for sufficiently small time steps, the linearized continuous Galerkin scheme admits a unique solution that can be obtained iteratively. Moreover, we have tested the proposed iteration scheme on a series of numerical examples. Our numerical experiments clearly illustrate the capabilities of our approach; specifically, the simplified Newton iteration scheme significantly reduces the computational effort in terms of the number of iterations.
-  (2013) Adaptive finite element methods for differential equations. Lectures in Mathematics, ETH Zürich. Birkhäuser, Basel.
-  (2001) Finite elements: theory, fast solvers, and applications in solid mechanics. Cambridge University Press.
-  (2011) Newton methods for nonlinear problems: affine invariance and adaptive algorithms. Springer Series in Computational Mathematics. Springer, Berlin Heidelberg.
-  Global error control for the continuous Galerkin finite element method for ordinary differential equations. RAIRO Modélisation Mathématique et Analyse Numérique 28 (7), pp. 815–852.
-  (1995) A posteriori error bounds and global error control for approximation of ordinary differential equations. SIAM Journal on Numerical Analysis 32 (1), pp. 1–48.
-  (2010) Partial differential equations. Graduate Studies in Mathematics. American Mathematical Society.
-  (2018) Continuous and discontinuous Galerkin time stepping methods for nonlinear initial value problems with application to finite time blow-up. Numerische Mathematik 138 (3), pp. 767–799.
-  (1972) Discrete Galerkin and related one-step methods for ordinary differential equations. Mathematics of Computation 26, pp. 881–891.
-  (1972) One-step piecewise polynomial Galerkin methods for initial value problems. Mathematics of Computation 26, pp. 415–426.
-  (2004) Principles of differential equations. Pure and Applied Mathematics: A Wiley Series of Texts, Monographs and Tracts. Wiley.
-  (1998) p- and hp-finite element methods: theory and applications in solid and fluid mechanics. Clarendon Press.
-  (1984) Galerkin finite element methods for parabolic problems. Lecture Notes in Mathematics. Springer.
-  (2005) An a priori error analysis of the hp-version of the continuous Galerkin FEM for nonlinear initial value problems. Journal of Scientific Computing 25 (3), pp. 523–549.