Walking into the complex plane to 'order' better time integrators

10/08/2021 ∙ by Jithin D. George, et al. ∙ Microsoft

Most numerical methods for time integration use real time steps. Complex time steps provide an additional degree of freedom, as we can select the magnitude of the step in both the real and imaginary directions. By time stepping along specific paths in the complex plane, integrators can gain higher orders of accuracy and achieve expanded stability regions. Complex time stepping also allows us to break the Runge-Kutta order barrier, enabling 5th order accuracy using only five function evaluations. We show how to derive these paths for explicit and implicit methods, discuss computational costs and storage benefits, and demonstrate clear advantages for complex-valued systems like the Schrodinger equation.




1 Introduction

The real line is a tiny sliver in the whole of the complex plane. Wandering into the complex plane has often been employed to solve real problems. Two direct examples are the solution of complicated real-valued integrals via contour integrals [brown2009complex] and the intuition gained from poles in the complex plane in control-theory applications [nise2020control]. The complex plane provides wonderful insights into real problems, allowing mathematicians to impact practical applications. One of the motivations for this paper was a simple numerical application highlighted in a 2018 SIAM News article [high18-diff] where Nick Higham describes how taking a numerical complex derivative neatly gets rid of round-off error. However, most methods for numerically solving a differential equation construct a path from one time point to the next using only the real line. We show that we can use the complex plane to our advantage in numerical time integration by taking time steps in the complex plane.

This paper is not the first to talk about complex time steps. Early work on time stepping in the complex plane focused on systematically avoiding singularities during numerical integration of singular differential equations [corliss1980integrating] and on techniques to achieve superconvergence for Runge-Kutta methods [orendt2009geometry]. A use of complex time steps most similar to our work was in [chambers2003symplectic], where Chambers explores operator splitting in symplectic integrators using complex substeps. Taking real substeps often results in negative substeps and substeps larger than the original step, causing the symplectic integrator to go back and forth along the direction of integration. By using complex substeps, Chambers showed how to avoid negative and unstable substeps, significantly reducing the truncation error. Chambers’s work inspired others to consider complex time steps as part of operator splitting methods for various parabolic and nonlinear evolution problems [castella2009splitting, auzinger2017practical, casas2021compositions, blanes2010splitting, hansen2009high]. Concurrent with our work, a very recent paper [casas2021high] in this area uses complex coefficients for high order composition methods. Another recent paper develops time integration schemes using nodes in the complex plane based on Cauchy’s integral formula and uses the complex nodes to facilitate a parallel BDF formulation [buvoli2019constructing]. Rather than focusing on a specific application or numerical scheme as has been done previously, we ask fundamental questions about the utility of numerical time-stepping in the complex plane, including: ‘What paths lead to the least error?’ and ‘Are there general strategies and trade-offs for complex time-stepping?’.
Here we show that stepping into the complex plane provides an extra dimension which can be exploited to improve accuracy, design expanded stability regions, and even break through the order barrier [Butcher:2007, butcher2009fifth]. We compare against existing Runge-Kutta methods and demonstrate improvement for particular proof-of-concept cases. Finally, we analyze increased operation counts due to complex operations, potential storage benefits, extensions to stiff solvers, and strong stability preserving properties. The code to reproduce the numerical experiments in this paper is available on GitHub: https://github.com/Dirivian/complex_time_integrators.

2 Taking complex time steps with the Euler method

The idea of a complex time step may at first seem unusual and unintuitive. To make it accessible, we first illustrate simple paths with complex time steps and demonstrate that applying the forward Euler method along the complex paths improves accuracy compared to forward Euler with real steps. The paths we choose in the complex plane must intersect the real line at the time points where we desire a solution. Our primary question is ‘what is the optimal n-step path in the complex plane over which to integrate between two real time points?’

To illustrate the simplest possible case, suppose we want to solve the linear test problem $y' = \lambda y$ with initial condition $y(0) = y_0$. To find the solution at $t = h$, we can either take two real time steps of length $h/2$ or take a complex time step of $\frac{(1+i)h}{2}$ followed by a complex time step of $\frac{(1-i)h}{2}$, returning back to the real line. These two paths are shown in Fig. 1 (top). We numerically integrate the differential equations along these paths as shown in Fig. 1 (bottom). By eye, the complex time path clearly does much better than the real time path in this linear case. The improvement with a complex path can hold for nonlinear ODEs, second order ODEs, non-autonomous ODEs (Fig. 2) and nonlinear PDEs, such as the viscous Burgers equation (Fig. 3). In section 3, we describe the generalizations needed to solve nonlinear equations, but first we provide theoretical insight into the simple linear case.

Figure 1: Taking complex time steps that return to the real line results in increased accuracy.
Figure 2: Comparison of the real-valued classic Euler method and the 2-step complex-valued Euler method. The number of real timepoints for the classic scheme is roughly double the number for the 2-step complex scheme, so that the net number of timesteps and function evaluations is the same for both schemes.
Figure 3: The complex time steps reduce the error for the nonlinear viscous Burgers equation.
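The comparison in Fig. 1 can be reproduced with a short sketch. Here we assume the linear test problem $y' = \lambda y$ with $\lambda = 1$ and $y(0) = 1$; the complex pair $h(1 \pm i)/2$ is the 2-step path satisfying the order conditions $h_1 + h_2 = h$ and $h_1 h_2 = h^2/2$ derived below.

```python
import numpy as np

def euler_along_path(lam, y0, T, n_paths, substeps):
    """Repeatedly apply forward Euler along a sequence of (possibly complex) substeps."""
    y = complex(y0)
    for _ in range(n_paths):
        for dt in substeps:
            y = y + dt * lam * y
    return y

lam, y0, T, n = 1.0, 1.0, 1.0, 50           # assumed test problem y' = lam*y
h = T / n
exact = y0 * np.exp(lam * T)

# Two real substeps of size h/2 vs. the complex pair h(1 ± i)/2:
# both use the same number of function evaluations.
err_real = abs(euler_along_path(lam, y0, T, n, [h/2, h/2]) - exact)
err_cplx = abs(euler_along_path(lam, y0, T, n, [h*(1+1j)/2, h*(1-1j)/2]) - exact)
print(err_real, err_cplx)   # the complex path is markedly more accurate
```

Both runs take 100 forward Euler steps; only the direction of the substeps differs.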

Why does time stepping on the complex path improve the accuracy of the numerical solution so significantly? To answer that, we analyze the linear problem $y' = \lambda y$ with initial condition $y(0) = y_0$. Consider time stepping from the initial condition at time $t = 0$ to time $t = h$ using 2 real time steps. The discrete values of $y$ are given by the following at $t = h/2$ and $t = h$:

$$y_{h/2} = \Big(1 + \frac{\lambda h}{2}\Big) y_0, \qquad y_h = \Big(1 + \frac{\lambda h}{2}\Big)^2 y_0 = \Big(1 + \lambda h + \frac{\lambda^2 h^2}{4}\Big) y_0.$$

Now consider timestepping from $y_0$ at time $t = 0$ to time $t = h$ using the two complex time steps $\frac{(1+i)h}{2}$ and $\frac{(1-i)h}{2}$.

$$y_h = \Big(1 + \frac{(1+i)\lambda h}{2}\Big)\Big(1 + \frac{(1-i)\lambda h}{2}\Big) y_0 = \Big(1 + \lambda h + \frac{\lambda^2 h^2}{2}\Big) y_0$$

The exact solution to the problem is given by the Taylor expansion

$$y(h) = e^{\lambda h} y_0 = \Big(1 + \lambda h + \frac{\lambda^2 h^2}{2} + \mathcal{O}(h^3)\Big) y_0.$$
The secret behind the higher accuracy of the complex 2-step method is that the polynomial obtained through the numerical approximation matches the Taylor series of the exact solution through the second order term, making the forward Euler method gain second order accuracy. This is one of two unique 2-step paths in the complex plane which has this property. The other unique path uses the other permutation of the complex time steps, $\frac{(1-i)h}{2}$ followed by $\frac{(1+i)h}{2}$. Both permutations of these steps lead to the same polynomial with coefficients that match the Taylor expansion to 2nd order.
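This cancellation can be checked symbolically. The snippet below (a sketch using SymPy) expands the two Euler factors for the step pair $h(1 \pm i)/2$ and recovers the Taylor polynomial of $e^{\lambda h}$ through second order.

```python
import sympy as sp

lam, h = sp.symbols('lam h')
h1 = (1 + sp.I) * h / 2
h2 = (1 - sp.I) * h / 2

# Two forward Euler factors applied along the complex 2-step path
poly = sp.expand((1 + lam * h1) * (1 + lam * h2))

# Difference from the degree-2 Taylor polynomial of exp(lam*h)
residual = sp.simplify(poly - (1 + lam * h + lam**2 * h**2 / 2))
print(residual)   # 0: the imaginary parts cancel exactly
```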

By stepping into the complex plane, we gain an extra degree of freedom in the coefficients that allows us to design time-steps with improved accuracy. This extra dimension can be used to construct higher order paths for linear and nonlinear differential equations (Section 3), design improved stability (Section 6) and even break through the order barrier for Runge-Kutta methods (Section 5). The paths can be combined with other methods, such as the implicit midpoint method (Runge-Kutta-2) as demonstrated in Section 8. In the next section, we demonstrate how to extend the linear example to construct paths with higher order accuracy.

3 Constructing a higher order time integrator

Is it possible then to construct a 3-step path on which forward Euler would be 3rd order accurate? Would it be $n$th order accurate on an $n$-step complex path? To build intuition for the simplest higher order construction, we continue analyzing the linear problem in Subsection 3.1, and then prove that achieving higher order requires complex time-stepping. In Subsection 3.2, we extend the construction of complex time-steppers to the general nonlinear problem.

3.1 Higher order integrators for linear differential equations

In this subsection, we show how to construct a third-order time integrator for the linear ODE $y' = \lambda y$ by taking 3 forward Euler time steps of variable sizes in the complex plane that return to the real line at every desired time point.

If we take the three complex steps $h_1$, $h_2$ and $h_3$ to timestep from $y_0$ at $t = 0$ to $t = h$, the approximate numerical solution at time $h$ is given by

$$y_h = (1 + \lambda h_1)(1 + \lambda h_2)(1 + \lambda h_3)\, y_0,$$
where the $h_j$’s are complex variables. Collecting terms results in

$$y_h = \big(1 + \lambda (h_1 + h_2 + h_3) + \lambda^2 (h_1 h_2 + h_1 h_3 + h_2 h_3) + \lambda^3 h_1 h_2 h_3\big)\, y_0.$$
The exact solution is given by

$$y(h) = e^{\lambda h} y_0 = \Big(1 + \lambda h + \frac{\lambda^2 h^2}{2} + \frac{\lambda^3 h^3}{6} + \mathcal{O}(h^4)\Big) y_0.$$
For the numerical approximation to be third-order accurate, the coefficients of $\lambda^k$ for $k \le 3$ in equations (5) and (6) need to match. This results in the following order conditions:

$$h_1 + h_2 + h_3 = h, \qquad h_1 h_2 + h_1 h_3 + h_2 h_3 = \frac{h^2}{2}, \qquad h_1 h_2 h_3 = \frac{h^3}{6}.$$
One of the solutions to the above nonlinear system is

$$h_1 \approx 0.6265\,h, \qquad h_{2,3} \approx (0.1867 \pm 0.4808\,i)\,h,$$

the roots of $t^3 - t^2 + \tfrac{1}{2}t - \tfrac{1}{6} = 0$ scaled by $h$.
However, as the nonlinear system 3.1 is symmetric in $h_1$, $h_2$ and $h_3$, all 6 permutations of $(h_1, h_2, h_3)$ result in unique 3-step 3rd order complex time stepping paths. In Fig 4, all six 3-step 3rd order paths are shown along with the two 2-step 2nd order paths and the single step forward Euler path.
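Because the three order conditions are exactly Vieta’s formulas, the 3-step path can be computed numerically as the roots of a cubic; a sketch assuming a unit step $h = 1$:

```python
import numpy as np

h = 1.0
# h1, h2, h3 are the roots of t^3 - h t^2 + (h^2/2) t - h^3/6 = 0,
# by Vieta's formulas applied to the three order conditions.
steps = np.roots([1.0, -h, h**2 / 2, -h**3 / 6])
print(steps)

# Check the elementary symmetric sums against the order conditions
e1 = steps.sum()
e2 = steps[0]*steps[1] + steps[0]*steps[2] + steps[1]*steps[2]
e3 = steps.prod()
```

Permuting the order of the roots yields the six equivalent paths.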

For a general $n$-step $n$th order path, the order conditions would be

$$e_k(h_1, \ldots, h_n) = \frac{h^k}{k!}, \qquad k = 1, \ldots, n,$$

where $e_k$ denotes the $k$th elementary symmetric polynomial of the steps.
For higher orders, the solutions take on a complicated form and solving the nonlinear system becomes a tough symbolic task. However, we only need to find one solution for each order, and such solutions can be found numerically and stored in a library for general use.
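A sketch of that numerical approach: for an $n$-step path the conditions $e_k = h^k/k!$ are again Vieta’s formulas, so the steps are the roots of a single degree-$n$ polynomial and `numpy.roots` can serve as the library generator.

```python
import numpy as np
from math import factorial

def complex_euler_steps(n, h=1.0):
    """Steps of an n-step, nth-order complex Euler path for linear problems.
    The conditions e_k(h_1..h_n) = h^k/k! make the steps the roots of
    sum_k (-1)^k (h^k / k!) x^(n-k)."""
    coeffs = [(-1)**k * h**k / factorial(k) for k in range(n + 1)]
    return np.roots(coeffs)

steps = complex_euler_steps(4)
e1 = steps.sum()
e2 = (e1**2 - (steps**2).sum()) / 2          # Newton's identity for e_2
print(steps, e1, e2)
```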

Figure 4: All possible first, second, and third-order complex Euler methods for linear differential equations. Each order requires an additional step and adds an additional constraint on the coefficients.

These results raise the theoretical question: is there any way to take steps of varying size on the real line and achieve a higher-order time integrator, or is it only possible to do so by going into the complex plane?

Theorem 1

It is impossible to construct a time integrator with order greater than or equal to 2 using real-valued Euler time steps of variable sizes.


We prove the above theorem by showing that it is impossible to construct a second-order time integrator using forward Euler steps of variable size on the real line.

Suppose we use $n$ time steps of variable size given by $h_1, h_2, \ldots, h_n$. The approximate solution to $y' = \lambda y$ given by our time integrator would be

$$y_h = \prod_{j=1}^{n} (1 + \lambda h_j)\, y_0,$$

where we assume the first order condition is satisfied,

$$\sum_{j=1}^{n} h_j = h.$$

Gathering together different order terms,

$$y_h = \Big(1 + \lambda h + \lambda^2 \sum_{j<k} h_j h_k + \cdots \Big) y_0,$$

where we have substituted in our first order condition. For the time stepper to be second order, we need the coefficient on $\lambda^2$ to satisfy

$$\sum_{j<k} h_j h_k = \frac{h^2}{2}.$$

Combining this with the identity $\big(\sum_j h_j\big)^2 = \sum_j h_j^2 + 2\sum_{j<k} h_j h_k$ and the first order condition produces

$$h^2 = \sum_{j=1}^{n} h_j^2 + h^2, \qquad \text{i.e.,} \qquad \sum_{j=1}^{n} h_j^2 = 0.$$

If all the $h_j$’s are real, a sum of squares can vanish only if every $h_j = 0$, which contradicts the first order condition $\sum_j h_j = h \neq 0$. The two-step case makes the obstruction concrete: $h_1$ and $h_2$ must be the roots of the quadratic

$$x^2 - h x + \frac{h^2}{2} = 0,$$

and for the roots to be real, we need a non-negative discriminant. In our case, the discriminant is

$$h^2 - 4 \cdot \frac{h^2}{2} = -h^2.$$

Since the discriminant is always negative, the steps cannot be purely real and must be complex to satisfy our second order condition. Therefore, not only can we achieve higher order integrators through complex time stepping, but we must step into the complex plane to achieve higher order.
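The two-step case of the theorem can be checked symbolically (a sketch using SymPy): solving the first and second order conditions forces complex steps.

```python
import sympy as sp

h = sp.symbols('h', positive=True)
h1, h2 = sp.symbols('h1 h2')

# First order condition: h1 + h2 = h.  Second order: h1*h2 = h^2/2.
sols = sp.solve([h1 + h2 - h, h1 * h2 - h**2 / 2], [h1, h2])
print(sols)

# Collect the imaginary parts of every solution component
imag_parts = [sp.im(s) for pair in sols for s in pair]
```

Every solution has a nonzero imaginary part, in agreement with the negative discriminant.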

3.2 Higher order integrators for general nonlinear non-autonomous differential equations

In the previous subsection, we showed how to construct a 3-step 3rd order complex path on which to time integrate using forward Euler steps. We can experimentally demonstrate that the order of accuracy is obeyed by every linear ordinary differential equation and partial differential equation that we have tested it on. However, the derivation assumed a linear problem and does not guarantee the same order of accuracy for nonlinear problems. Indeed, when we tested the complex time-stepping schemes on nonlinear and non-autonomous equations, the 3-step paths achieved only second order accuracy. To achieve higher order accuracy for nonlinear, non-autonomous equations, we generalize the procedure described so far to obtain 3rd order accuracy.

A general nonlinear ODE is given by

$$y'(t) = f(t, y).$$
Let us assume the function $f$ is sufficiently analytic near the region of integration, i.e., it does not have any singularities close to the path of integration. At time $t_0 + h$, we have

$$y(t_0 + h) = y_0 + h\, y'(t_0) + \frac{h^2}{2} y''(t_0) + \frac{h^3}{6} y'''(t_0) + \mathcal{O}(h^4).$$

Expanding equation (11) in terms of $f$ and its partial derivatives (evaluated at $(t_0, y_0)$), we get

$$y(t_0 + h) = y_0 + h f + \frac{h^2}{2}\big(f_t + f f_y\big) + \frac{h^3}{6}\big(f_{tt} + 2 f f_{ty} + f^2 f_{yy} + f_t f_y + f f_y^2\big) + \mathcal{O}(h^4).$$
Now, consider a complex time stepper that takes 3 complex time steps starting from $(t_0, y_0)$ to go to $t_0 + h$. These steps are given by:

$$y_1 = y_0 + h_1 f(t_0, y_0), \qquad y_2 = y_1 + h_2 f(t_0 + h_1, y_1), \qquad y_3 = y_2 + h_3 f(t_0 + h_1 + h_2, y_2),$$

with $h_1 + h_2 + h_3 = h$.
One can expand $y_3$ around $(t_0, y_0)$ using the multivariate Taylor series. Doing so,

$$y_3 = y_0 + (h_1 + h_2 + h_3) f + \big(h_1 h_2 + (h_1 + h_2) h_3\big)\big(f_t + f f_y\big) + \frac{1}{2}\big(h_1^2 h_2 + (h_1 + h_2)^2 h_3\big)\big(f_{tt} + 2 f f_{ty} + f^2 f_{yy}\big) + h_1 h_2 h_3 \big(f_t f_y + f f_y^2\big) + \mathcal{O}(h^4).$$
For this three step complex integrator to have 3rd order accuracy, the coefficients of the partial derivatives in the two expansions should match. Equating coefficients, we get this system of nonlinear equations:

$$h_1 + h_2 + h_3 = h, \qquad h_1 h_2 + (h_1 + h_2) h_3 = \frac{h^2}{2}, \qquad h_1^2 h_2 + (h_1 + h_2)^2 h_3 = \frac{h^3}{3}, \qquad h_1 h_2 h_3 = \frac{h^3}{6}.$$
Unfortunately, this system is overconstrained and does not have a solution, implying that we may need more than 3 steps for 3rd order accuracy. However, for systems with only real solutions we can take further advantage of the degree of freedom in the imaginary part of the step and find many 3-step paths with 3rd order accuracy. If we allow the truncation error at the order of $h^3$ to be purely imaginary, we can effectively remove the error at the end of the 3 step sequence by taking only the real part of the numerical solution at the end of the complex path. Let $u$ be the true solution and $\tilde{u}$ be the approximate numerical solution. Then, with our setup, we would have

$$\tilde{u}(t + h) = u(t + h) + i \epsilon h^3 + \mathcal{O}(h^4),$$
where $\epsilon$ is real. Taking the real part on both sides, we get

$$\operatorname{Re}\big(\tilde{u}(t + h)\big) = \operatorname{Re}\big(u(t + h)\big) + \mathcal{O}(h^4).$$
If the true solution to our differential equation and its initial condition are known to be real-valued at real time points, then we have $\operatorname{Re}(u) = u$, and taking the real part of the solution at the end of the complex path results in a 3rd order accurate numerical solution. This formulation, allowing the error to be purely imaginary at $\mathcal{O}(h^3)$, has fewer constraints on the coefficients, which must now satisfy the following relaxed conditions:

$$h_1 + h_2 + h_3 = h, \qquad h_1 h_2 + (h_1 + h_2) h_3 = \frac{h^2}{2}, \qquad \operatorname{Re}\big(h_1^2 h_2 + (h_1 + h_2)^2 h_3\big) = \frac{h^3}{3}, \qquad \operatorname{Re}\big(h_1 h_2 h_3\big) = \frac{h^3}{6}.$$
There are many paths satisfying these equations. Of the 3rd order paths in Fig 4 that were found to satisfy the linear order conditions, the top and bottom (cyan and brown) paths also satisfy the 3rd order nonlinear order conditions. We ran convergence tests for various nonlinear problems listed in Section 3.3, and demonstrated 3rd order accuracy with those paths (Fig 5).

In this subsection, we have derived paths in the complex plane that satisfy the order conditions for scalar non-autonomous differential equations. A natural follow-up question is whether these paths give the same order for vector-valued non-autonomous differential equations, expanding our result to systems of differential equations and partial differential equations. Butcher showed in [butcher2009fifth] that the order conditions for scalar non-autonomous differential equations are the same as those for vector-valued differential equations up to order 4. As the paths derived here are of order 3 or less, they hold for systems of differential equations and partial differential equations, as seen in the next subsection. To construct a fifth order or higher complex path, we would need to extract the necessary order conditions using Butcher’s rooted tree approach [butcher2009fifth].

3.3 Numerical Results

We have tested our complex time-stepping schemes on a variety of differential equations to show that they satisfy the order conditions. The equations are listed in Table 1 along with a unique label corresponding to their legend on the convergence plot in Fig.5.

Figure 5: Convergence of various ODEs and PDEs (Sec. 3.2) for 1-step (forward Euler), 2-step and 3-step complex time steps. The path/method used is displayed above the corresponding convergence plot. The orders of accuracy are as expected.
1. Linear ODEs
  • The Dahlquist test problem (‘linear’)

  • Simple harmonic oscillator (‘shm’)

2. Nonlinear ODEs
  • Nonlinear autonomous ode 1 (‘square’)

  • Nonlinear autonomous ode 2 (‘exp’)

  • Nonlinear sine (‘nlsin’)

  • The Van Der Pol oscillator (‘vdp’)

3. Linear PDE : The advection equation (‘wave’). This is a linear PDE given by

$$u_t + c\, u_x = 0.$$

This PDE is solved with periodic boundary conditions and an initial condition whose exact solution is the initial profile advected with speed $c$. The parameter $c$ is set to 1. The Fourier spectral method is used for the spatial differencing with 70 modes.
4. Nonlinear PDE : Viscous Burgers equation (‘burgers’). This is a nonlinear PDE given by

$$u_t + u u_x = \nu u_{xx}.$$

This PDE is solved with periodic boundary conditions and an initial condition for which an exact solution is known. The parameter $\nu$ is set to 0.1. The Fourier spectral method is used for the spatial differencing with 70 modes.
Table 1: List of differential equations

The error for all ODEs and PDEs apart from the Van Der Pol oscillator is calculated by comparing the numerical solution to the exact solution. For the Van Der Pol oscillator, the numerical solution using complex paths is compared to a reference numerical solution obtained by using the ‘RK44’ 4th order method from Nodepy [ketcheson2020nodepy] with a fine time step.

3.4 Higher order integrators tailored to specific problems

For specific problems, it is possible to derive complex time steppers that are uniquely accurate with fewer steps. For example, we can derive a 2-step complex Euler method with 3rd order accuracy for the nonlinear problem of Eq. (25),


The true solution at time can be approximated to third order accuracy using Eq (3.2) to obtain


Note that one does not need the true solution to obtain Eq (26) as this expansion can be obtained from the right hand side of the differential equation.

Now, we apply the following two-step complex method to integrate from $y_0$ at time $t_0$ to $y_2$ at time $t_0 + h$ using forward Euler steps:

$$y_1 = y_0 + h_1 f(y_0), \qquad y_2 = y_1 + h_2 f(y_1),$$

where $h_1$ and $h_2$ are complex time steps. Writing $y_2$ in terms of the initial condition, we get


This 2-step complex Euler method would have 3rd order accuracy if the coefficients in Eqs. 26 and 28 match. However, there are no values of $h_1$ and $h_2$ that satisfy these conditions. So, we assume the case in which the true solution to (25) is real at real time points (valid if the initial condition is real). For this case, we can let the error be purely imaginary and use the trick of taking the real part of the numerically calculated values every time we complete a complex path and return to the real line. This relaxed constraint produces the following nonlinear system of equations that can be solved,


One of the many solutions to this system is


Applying this new 2-step time stepper to the nonlinear problem, we find that it indeed has 3rd order accuracy (Fig 6). Similar 2-step paths are derived for other specific problems, and they also show the expected 3rd order accuracy in Fig. 6.

Figure 6: Two-step complex Euler methods that have third order accuracy for two specific problems.

In the process of constructing this integrator, we needed to plug the right hand side of the differential equation into Eq. 3.2 to derive the expansion. This may be an intensive symbolic task for more complicated problems. However, the right complex steps could be ‘learned’ for specific and difficult problems through approaches similar to [guo2021personalized], where a machine learning approach is used to obtain Runge-Kutta methods tailored to particular ODE families.

In this section, we have shown the process of constructing higher order paths for explicit Runge-Kutta type integrators. Later, in Section 8, we extend it to implicit methods. In the next section, we discuss the benefits and costs when compared to equivalent classical Runge-Kutta methods.

4 Operation counts and low storage benefits

Now, we discuss a potential drawback of the complex timestepping schemes, the increased computational cost for performing complex operations, and a potential benefit, the low storage cost compared to the Runge-Kutta methods. In Sec. 4.1, we demonstrate that low storage cost is the main benefit for complex-valued systems.

Consider the forward Euler method applied to the differential equation $y' = f(y)$, yielding the step $y_{n+1} = y_n + h f(y_n)$. If the timestepping is done along the real line, the computational cost is one real multiplication, one real addition, and the cost of one real function evaluation. If the timestepping is done along a path in the complex plane, the computational cost is one complex multiplication, one complex addition, and the cost of one complex function evaluation. A complex multiplication costs 4M + 2A or 3M + 5A (Gauss’s trick), where M and A are the costs of real multiplication and addition. The storage also needs to be doubled to store a complex-valued vector compared to the real-valued vector.

Consider taking a complex timestep with a real-valued ordinary differential equation $y' = f(y)$. We can express it as the following system:

$$\begin{pmatrix} y_r \\ y_i \end{pmatrix}_{n+1} = \begin{pmatrix} y_r \\ y_i \end{pmatrix}_n + \begin{pmatrix} h_r & -h_i \\ h_i & h_r \end{pmatrix} \begin{pmatrix} f_r \\ f_i \end{pmatrix},$$

where $h = h_r + i h_i$ represents the complex time step and the subscripts $r$ and $i$ represent the real and imaginary parts. This results in a 4M+4A cost of operations as opposed to M+A operations for purely real time steps. Based on Eq (31), the complex time stepping process shows promise for parallelization, which could reduce computational time by up to half.
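The real-arithmetic form of a complex Euler step can be sanity-checked against direct complex arithmetic. This is a sketch for the scalar linear case $y' = \lambda y$ with assumed values of $\lambda$ and the step.

```python
import numpy as np

lam = -2.0                     # assumed real test problem y' = lam*y
hr, hi = 0.01, 0.005           # real and imaginary parts of the complex step
y = np.array([1.0, 0.0])       # [real, imag] parts of the current solution

# One complex Euler step written as a real 2x2 update:
# 4 multiplications + 4 additions per component pair
M = np.array([[hr, -hi],
              [hi,  hr]])
y_real_form = y + M @ (lam * y)

# The same step done directly in complex arithmetic
y_complex = (y[0] + 1j * y[1]) + (hr + 1j * hi) * lam * (y[0] + 1j * y[1])
print(y_real_form, y_complex)
```

The two rows of the real update are independent, which is what makes the parallelization mentioned above plausible.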

Similarly, if we integrate a linear system of $N$ equations with complex time steps, it becomes a system of $2N$ real equations. In that scenario, the dominant cost is the matrix-vector multiplication ($Ay$), which takes $N^2$ multiplications with real time steps and $4N^2$ with complex time steps. For a $k$-step complex Euler method, there are $k$ matrix-vector multiplications and the operation count is $\mathcal{O}(4kN^2)$ compared to $\mathcal{O}(kN^2)$ for a $k$-stage Runge-Kutta method.

For nonlinear systems, the dominant cost comes from function evaluations. Functions can be designed to minimize the computational cost, but the real equivalent of any complex-valued function can always be calculated using fewer operations. So, the complex time stepper will always be more expensive than an explicit Runge-Kutta method if the number of function evaluations is the same. In the next section, we explore how complex time integrators can require fewer function evaluations than the classical Runge-Kutta methods of the same order by breaking the Runge-Kutta order barrier [butcher2009fifth]. In that case, the complex integrators may end up being the less expensive option.

Another potential advantage is that the storage requirement for complex stepping is a single vector of size $2N$, while general Runge-Kutta schemes store the intermediate stages. For example, a 5th order complex timestepping scheme needs to store a vector of size $2N$ at all times, while a classically implemented 5th order Runge-Kutta method requires memory for all of its stage vectors. However, there are low-storage implementations of Runge-Kutta schemes that use fewer registers [carpenter1994fourth, KENNEDY2000177, ketcheson2008highly]. Storage is not much of an issue for typical ODEs and simple systems, but it becomes quite relevant in various practical settings like computational fluid dynamics [KENNEDY2000177], where partial differential equations like the Navier-Stokes equations are solved in 3 dimensions at high resolution.

4.1 The linear Schrodinger equation : A clear computational advantage for the 3-step complex Euler method

In this subsection, we demonstrate an example where our complex methods have a clear benefit over existing methods. We compare the 3-step complex Euler method to a standard RK-3 method (Ralston’s method) for a specific problem. We choose to examine the linear Schrodinger equation because it is a partial differential equation with complex values, so the cost of complex operations is the same in both methods. We show that the methods are of the same order, but because the 3-step complex Euler method has lower storage requirements, it is advantageous.

The linear Schrodinger equation is given by

$$u_t = i\, u_{xx}.$$

We solve the equation up to a fixed final time on a periodic domain. The Fourier spectral method is used for the spatial differencing, and the initial condition is chosen so that the exact solution is known.

Figure 7: The linear Schrodinger equation is solved using both Runge-Kutta 3 (Ralston’s method) and the complex 3-step method. Both methods result in nearly identical error (measured in the infinity norm).
Figure 8: Runge-Kutta 3 (Ralston’s method) and the complex 3-step method require the same number of function calls and result in nearly identical error when computing the linear Schrodinger equation.

Figs 7 and 8 show that the two methods have nearly identical error and require an equal number of function evaluations when integrating the linear Schrodinger equation. The 3-step complex method has one advantage over the Runge-Kutta-3 method: since calculating the next step only requires the numerical solution at the current step, the required storage is the size of the initial condition vector. The Runge-Kutta-3 method requires thrice the storage to hold the 3 stages in its basic implementation.
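A minimal sketch of this kind of experiment, assuming the form $u_t = i u_{xx}$ on $[0, 2\pi]$ with $u(x, 0) = \sin x$ (exact solution $e^{-it}\sin x$); the 3-step path comes from the linear order conditions, and only the current solution vector is ever stored.

```python
import numpy as np

N, T, nsteps = 16, 1.0, 400
x = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)            # integer wavenumbers
u = np.sin(x).astype(complex)

h = T / nsteps
# 3-step 3rd-order complex Euler path: roots of t^3 - t^2 + t/2 - 1/6, scaled by h
substeps = h * np.roots([1.0, -1.0, 0.5, -1.0 / 6.0])

def rhs(u):
    # u_t = i u_xx via the Fourier spectral second derivative
    return 1j * np.fft.ifft(-(k**2) * np.fft.fft(u))

for _ in range(nsteps):
    for dt in substeps:
        u = u + dt * rhs(u)      # only the current solution vector is stored

err = np.max(np.abs(u - np.exp(-1j * T) * np.sin(x)))
print(err)
```

Because the solution is complex-valued anyway, the complex substeps add no arithmetic overhead relative to a real-step method here.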

5 Breaking the order barrier for explicit Runge-Kutta methods with complex coefficients

One can construct an $s$-stage explicit Runge-Kutta method of order $s$ for orders up to 4. But for order 5 and beyond, the number of stages needs to be more than the order desired [butcher2009fifth]. This is known as the order barrier for Runge-Kutta methods [Butcher:2007].

Specifically, for a 5th order Runge-Kutta method, 6 stages and hence 6 function evaluations are required. This is because for 5th order accuracy there are 17 order conditions, and 5 stages only provide 15 free parameters to satisfy them.

Can the order barrier be broken with complex time stepping? If we take $n$ complex Euler steps and use the trick of taking the real part while keeping the leading-order error purely imaginary, we have $2n - 1$ free parameters ($n$ complex variables whose sum must return to the real line). These are insufficient even for 4th order accuracy, because there are 8 order conditions and only 7 free parameters for $n = 4$.

Complex stepping can be applied to numerical methods other than forward Euler as well. What if we allowed some coefficients in a 5-stage Runge-Kutta step to be complex variables and constrained the error to be purely imaginary? That easily increases the number of free parameters from 15 to 17 or higher, enough to solve the 17 order conditions.

Another approach to break the order barrier is to take two complex Runge-Kutta steps: one Runge-Kutta-2 step and then one Runge-Kutta-3 step. If we require that the two complex Runge-Kutta steps come back to the real line, the total number of free parameters is 10 (3 free parameters for Runge-Kutta-2, 6 free parameters for Runge-Kutta-3, and the two time step sizes with the constraint that they add up to a purely real step). If all 10 of those free parameters are complex valued and the leading-order error is purely imaginary, we get 20 free parameters, which are sufficient to solve the 17 order conditions for 5th order accuracy. Using this method, we can achieve 5th order accuracy with only 5 function evaluations, breaking the order barrier. As these are complex function evaluations, the cost is higher than the real equivalent, but there are problems and architectures for which this is likely not an issue, as discussed in Sec. 4.

As a proof of concept, we find a 5th order accurate complex Runge-Kutta-2/Runge-Kutta-3 method for the special case of scalar autonomous ODEs. We choose this particular case because it is more tractable than the 17 order conditions for the general case, which require a more complicated derivation involving a rooted tree approach [butcher2009fifth]. Even with the scalar autonomous case, there are 12 order conditions to satisfy (which cannot be done with only 10 free parameters of the purely real version of the Runge-Kutta-2/Runge-Kutta-3 method). Here, we show that the complex Runge-Kutta-2/Runge-Kutta-3 method returns a 5th order accurate numerical solution for scalar autonomous ODEs with 5 function evaluations, something that cannot be done with its real equivalent.

Figure 9: The complex two step Runge-Kutta-2/Runge-Kutta-3 method with complex Runge-Kutta coefficients has 5th order accuracy for autonomous scalar ODEs.

The following equations describe our two step complex method. First a Runge-Kutta-2 step of size $h_1$ goes from $y_0$ to $y_1$, and then a Runge-Kutta-3 step of size $h_2$ goes from $y_1$ to $y_2$. The steps and stages are

$$k_1 = f(y_0), \qquad k_2 = f(y_0 + a\, h_1 k_1), \qquad y_1 = y_0 + h_1 (b_1 k_1 + b_2 k_2),$$

$$\ell_1 = f(y_1), \qquad \ell_2 = f(y_1 + a_{21} h_2 \ell_1), \qquad \ell_3 = f(y_1 + a_{31} h_2 \ell_1 + a_{32} h_2 \ell_2), \qquad y_2 = y_1 + h_2 (c_1 \ell_1 + c_2 \ell_2 + c_3 \ell_3).$$
Ensuring the method returns to the real line at an effective time step of $h$ introduces the constraint that $h_1 + h_2 = h$ is purely real.

With these 9 complex variables, we solve the 12 order conditions and the 1 constraint guaranteeing that the time-stepper returns to the real line. The resulting scheme provides 5th order accuracy for scalar autonomous ODEs with only 5 function evaluations (Fig. 9).

6 Expanded regions of stability with complex time-steps

In the previous sections, we designed paths in the complex plane to achieve higher order. In this section, we leverage complex paths in an alternative way and expand the time stepper’s region of absolute stability. First, we discuss the linear stability regions of the complex integrators developed in Sec. 3.1. Next, we show how these regions can be expanded by designing the complex steps to optimize a desired stability polynomial. Expanded stability regions have been studied in a similar manner previously in the context of Runge-Kutta methods, where the order of accuracy of the Runge-Kutta method is sacrificed to obtain a larger stability region [riha1972optimal, lawson1966order, abdulle2000roots, bogatyrev2004effective, ketcheson2013optimal, parsani2013optimized, kubatko2014optimal]. In most previous works, a new Runge-Kutta integrator (different integration method) was derived that had a stability function/polynomial corresponding to the desired stability region. Here, we keep the same integration method and traverse a path in the complex plane to obtain the desired stability region (different integration domain). We illustrate this approach using the forward Euler method. We also show that complex stepping may enable even larger regions designed for specific problems.

Consider the linear equation $y' = \lambda y$ studied in Sec. 3.1. Most integrators, including those discussed here, convert the linear equation into the difference equation $y_{n+1} = R(z) y_n$ with $z = \lambda h$, where $R(z)$ is the stability function of the integrator [leveque2007finite]. The region where $|R(z)| \le 1$ is the region of absolute stability. For a complex $n$-step $n$th order method, the stability function is given by the first $n + 1$ terms in the Taylor series of $e^z$. For the 3 step 3rd order method, the stability function is

$$R(z) = 1 + z + \frac{z^2}{2} + \frac{z^3}{6}.$$
The regions of absolute stability for the 1 step 1st order, 2 step 2nd order and 3 step 3rd order methods (Fig 10) are the same as those for the $n$-stage Runge-Kutta methods because they have the same stability functions [leveque2007finite]. Note that in all these methods $h$ refers to the step size separating the beginning and end points of the complex path, and is hence a real time step. Since the 1-step, 2-step and 3-step methods require a different number of steps, and hence a different number of function evaluations, to take a net step of size $h$, a standard metric for comparison among methods is the effective step size $h_{\mathrm{eff}} = h/s$, where $s$ is the number of steps in the complex integrator case or the number of stages in the case of Runge-Kutta methods [ketcheson2013optimal].

Figure 10: The 1-step first order, 2-step 2nd order and 3-step 3rd order complex Euler methods share the same stability regions as the 1,2 and 3 stage Runge-Kutta methods (left). Corresponding complex time-stepping paths are given on right.
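The real-axis extent of these regions can be measured numerically; a sketch that scans outward from the origin, assuming a single stable interval:

```python
import numpy as np

def real_axis_limit(coeffs, smax=10.0, ds=1e-4):
    """Largest s such that |R(-t)| <= 1 for all t in [0, s], where
    R(z) = sum_k coeffs[k] * z^k (scan assumes one stable interval)."""
    s, t = 0.0, 0.0
    while t < smax:
        t += ds
        if abs(sum(c * (-t)**j for j, c in enumerate(coeffs))) > 1.0 + 1e-12:
            break
        s = t
    return s

print(real_axis_limit([1, 1]))               # forward Euler: interval length 2
print(real_axis_limit([1, 1, 1/2]))          # 2-step 2nd order: also 2
print(real_axis_limit([1, 1, 1/2, 1/6]))     # 3-step 3rd order: about 2.51
```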

We would like to expand the regions in Fig 10 by optimizing the stability function. If we relax our requirement for a 3-step integrator to be 3rd order accurate, and require only 2nd order accuracy, its stability function would be given by

$$R(z) = 1 + z + \frac{z^2}{2} + c_3 z^3.$$
The coefficient in the above expression is the same as the coefficient in front of the in Eq (5): . We can choose the complex steps and such that they satisfy the first and second order conditions ()and make equal to our desired value of . The optimal

value is the one that would allow us to take the largest stable time step. For problems with purely real and imaginary eigenvalues, the optimal

values and the associated optimal stability functions/polynomials have been long-studied [riha1972optimal, bogatyrev2004effective, abdulle2000roots]. In the past decade, Ketcheson and co-authors [parsani2013optimized, kubatko2014optimal, ketcheson2013optimal] have explored optimal stability functions in the context of where is a matrix with quite arbitrary spectra. In [ketcheson2013optimal], Ketcheson and Ahmadia describe an optimization algorithm to obtain the optimal stability polynomial that allows the maximal time step depending on the spectra of . This algorithm (RKOpt [ketcheson2020rk]) is publicly available through Nodepy [ketcheson2020nodepy].

By choosing the $\gamma$ value that results in the largest permissible time step along the negative real eigenvalue line, we achieve a 3-step 2nd order method (green) with expanded stability along the negative real axis compared to the 3-step 3rd order method (red, Fig 11). Requiring only 1st order accuracy for the 3-step method can nearly triple the size of the stability region along the negative real axis (gold) and, surprisingly, requires only real time steps (Fig 11).
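One way to compare such designs numerically is to measure how far each candidate stability polynomial remains stable along the negative real axis. The sketch below uses the cubic coefficient $1/16$ as an illustrative relaxed choice (picked by us for this example, not necessarily the optimized value behind the figure); it gives $R(z) = 1 + z(1 + z/4)^2$, whose real stability interval is far longer than the 3rd order polynomial's:

```python
import numpy as np

def real_axis_extent(R, x_max=10.0, n=100001):
    """Largest x >= 0 such that |R(-x')| <= 1 holds at every grid point x' <= x."""
    xs = np.linspace(0.0, x_max, n)
    stable = np.abs(R(-xs)) <= 1.0 + 1e-12
    if stable.all():
        return x_max
    return xs[np.argmin(stable) - 1]  # last stable point before the first failure

# 3rd order: the z^3 coefficient is fixed at 1/6.
R3 = lambda z: 1 + z + z**2 / 2 + z**3 / 6
# 2nd order with the illustrative choice gamma = 1/16, i.e. R = 1 + z*(1 + z/4)^2.
R2 = lambda z: 1 + z + z**2 / 2 + z**3 / 16

print(real_axis_extent(R3))  # ~2.51 for the classical 3-stage polynomial
print(real_axis_extent(R2))  # ~6.26: a much longer negative real axis interval
```

With $\gamma = 1/16$, $R(-x) - 1 = -x(1 - x/4)^2 \le 0$ for all $x \ge 0$, so only the lower bound $R \ge -1$ limits the interval.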

Figure 11: By sacrificing accuracy, one can expand the stability region (left) using complex time steps, as demonstrated by these optimized 3-step complex Euler paths (right) with 3rd, 2nd, and 1st order accuracy.

To our knowledge, our work is the first to use complex time steps to realize a prescribed stability polynomial, enabling us to expand the stability region. Next, we demonstrate a potential path to even larger stability regions via complex stepping.

The use of complex time steps in an integrator allows us to expand the set of attainable stability polynomials beyond those with real coefficients. Consider the challenge of finding a 3-step 2nd order integrator with an optimal stability region that encompasses the largest possible timestep along a given eigenvalue direction (red dots, Fig 12). The red dots further away from the origin correspond to larger $\Delta t$ values along the eigenvalue. We want to find the optimal stability polynomial/function that permits the largest $\Delta t$ for a 3 stage/step 2nd order method in this scenario.

The optimal stability polynomial returned by RKOpt[ketcheson2020rk] (magenta Fig. 12) is


Through trial and error, we find an even better stability polynomial (brown Fig. 12), given by


The complex coefficient allows for a larger stable time step for a problem with this eigenvalue. A trade-off is that the optimal region is no longer symmetric about the real axis, so such regions would be primarily useful for complex-valued problems, where eigenvalues can appear outside conjugate pairs. Alternatively, better stability region designs may be possible using more sophisticated optimization methodology.
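The asymmetry can be checked numerically. The sketch below uses a hypothetical complex cubic coefficient (our own illustrative value, not the optimized polynomial from the figure) and compares how far the stability region extends along a ray above the real axis versus its mirror image below it:

```python
import numpy as np

def ray_extent(R, theta, r_max=10.0, n=100001):
    """Largest r >= 0 with |R(r*exp(i*theta))| <= 1 at every grid point up to r."""
    rs = np.linspace(0.0, r_max, n)
    stable = np.abs(R(rs * np.exp(1j * theta))) <= 1.0 + 1e-12
    if stable.all():
        return r_max
    return rs[np.argmin(stable) - 1]

# A 3-step 2nd order stability polynomial with a hypothetical complex z^3
# coefficient; with a real coefficient the two rays below would match exactly.
gamma = 0.1 + 0.05j
R = lambda z: 1 + z + z**2 / 2 + gamma * z**3

up = ray_extent(R, np.deg2rad(110.0))    # ray in the upper half plane
down = ray_extent(R, np.deg2rad(-110.0))  # mirror ray in the lower half plane
print(up, down)  # the extents differ: the stability region is asymmetric
```

For real-coefficient polynomials $|R(\bar z)| = |R(z)|$, so the two extents would be identical; the complex coefficient breaks this symmetry and lets the region stretch further in one chosen direction.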

Figure 12: Using complex time steps produces stability polynomials with complex coefficients. The resulting stability regions (left) can expand further in a desired direction than those using purely real coefficients. Corresponding paths are shown on the right.

This section has been an exploration of paths in the complex plane on which an integrator has improved linear stability properties. In the next section, we consider nonlinear stability properties, particularly strong stability preserving (SSP) properties. Later, in Section 8, we discuss the derivation of implicit complex methods with properties like L-stability.

7 Remarks on Strong Stability Preserving (SSP) properties

Time integrators with SSP properties are desirable because they provide guarantees of stability for nonlinear problems and non-smooth solutions. In this section, we highlight the need for new analysis to derive practical SSP conditions for complex timesteppers, and we show numerically that complex timestepping can result in larger SSP time steps.

Consider the following differential equation system,

$$u_t = f(u).$$

If the forward Euler method is used to integrate the above system, the system is said to satisfy an SSP property if

$$\|u + \Delta t\, f(u)\| \le \|u\| \quad \text{for} \quad 0 \le \Delta t \le \Delta t_{FE},$$

where $\Delta t_{FE}$ is the maximal SSP time step.
Much effort has focused on obtaining optimal SSP Runge-Kutta methods in various contexts [gottlieb2001strong, ketcheson2008highly]. The general argument used to show SSP is as follows: if forward Euler is SSP for $\Delta t \le \Delta t_{FE}$, these optimal SSP Runge-Kutta methods can be expressed as a convex combination of forward Euler steps and are guaranteed to be SSP for $\Delta t \le C\, \Delta t_{FE}$ for some $C > 0$. The constant $C$ is known as the SSP coefficient.

Following this line of argument, consider our $n$-step general complex Euler time stepper expressed as

$$v_0 = u^m, \qquad v_k = v_{k-1} + h_k f(v_{k-1}), \quad k = 1, \dots, n, \qquad u^{m+1} = v_n.$$

Here, $u^m$ and $u^{m+1}$ represent the solution at time points on the real line. The $v_k$ represent the solution at the complex time points in between, and the $h_k$ are the complex time steps. Unlike the Runge-Kutta methods in their Shu-Osher form, finding an SSP coefficient using convex combinations of forward Euler is less straightforward because of the complex nature of $v_k$ and $h_k$.
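For concreteness, a minimal sketch of the $n$-step complex Euler update, applied here (our own illustrative choice of test problem) to $u' = -u$ with the two-step second order path $h_{1,2} = \frac{1 \pm i}{2}\Delta t$:

```python
import numpy as np

def complex_euler_step(f, u, steps):
    """One macro-step u^m -> u^{m+1}: forward Euler composed over complex sub-steps."""
    v = u
    for h in steps:
        v = v + h * f(v)
    return v

# 2-step 2nd order path: h1 + h2 = dt and h1*h2 = dt^2/2.
dt = 0.01
steps = [(1 + 1j) / 2 * dt, (1 - 1j) / 2 * dt]

u = 1.0 + 0j  # u(0) = 1 for u' = -u
for _ in range(100):  # integrate to t = 1
    u = complex_euler_step(lambda x: -x, u, steps)

err = abs(u.real - np.exp(-1.0))
print(err)  # small: consistent with 2nd order accuracy
assert err < 1e-4
```

The intermediate values $v_k$ leave the real line, but the macro-step returns to it (up to round-off), which is why the SSP question above is naturally posed at the real time points.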

A strict condition for an SSP complex time integrator can be derived if we require every sub-step to individually satisfy the SSP property,

$$\|v_k\| \le \|v_{k-1}\|, \quad k = 1, \dots, n.$$

This results in a very restrictive time step.

Notably, the lower bound on the largest possible SSP timestep can be as small as zero if the real part of one of the complex steps vanishes. This is too strict of a condition to obtain a practical SSP timestep. We can obtain a larger timestep with a looser SSP condition if we require that only the solutions at the real time points at the end of the path in the complex plane satisfy the SSP property $\|u^{m+1}\| \le \|u^m\|$. For the two step second order complex time integrator, which takes the complex steps $h_1$ and $h_2$ at each real time point, where $h_1$ and $h_2$ are $\frac{1+i}{2}\Delta t$ and $\frac{1-i}{2}\Delta t$, this translates to the following condition:

$$\left\| u^m + h_1 f(u^m) + h_2 f\big(u^m + h_1 f(u^m)\big) \right\| \le \|u^m\|.$$

We can also take the $h_2$ step first and then the $h_1$ step before returning to the real line. Taking different paths with the same number of steps and the same order results in different SSP conditions. However, we have not yet been able to obtain a clean expression for an upper bound on the SSP timestep using this approach.

Using numerical experiments, we find that complex timestepping methods could allow for a larger SSP-stable timestep. We demonstrate this for the simple case,


We consider three 2nd order timestepping methods.

  • The midpoint method:

  • The optimal SSP Runge-Kutta 2 method: This is the second order RK method with the optimal SSP coefficient of 1 [gottlieb2001strong].

  • The complex 2-step method: We assume our initial condition is real, so we make sure to take the real part of the solution at the end of the two steps.


We numerically compute the largest timestep for each method, over a range of parameter values, such that the SSP condition is satisfied. Different methods permit the largest time-step in different parameter regimes. However, for most values, the complex 2-step method enables comparable maximal time-steps, and for large parameter values, the complex 2-step method appears to enable larger maximum time-steps (Fig. 13). While we have not yet been able to prove SSP properties for complex methods, these numerical experiments show they may have desirable properties.
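The mechanics of such an experiment can be sketched as follows; the test problem $u' = -u^2$, the initial condition, and the scan range are our own illustrative stand-ins, not the exact setup behind Fig. 13:

```python
import numpy as np

# Right-hand side of the illustrative test problem u' = -u^2.
f = lambda u: -u**2

def midpoint(u, dt):
    # Explicit midpoint (2nd order) step.
    return u + dt * f(u + 0.5 * dt * f(u))

def complex_2step(u, dt):
    # Complex 2-step 2nd order method; take the real part back on the real line.
    v = u + (1 + 1j) / 2 * dt * f(u)
    return (v + (1 - 1j) / 2 * dt * f(v)).real

def largest_monotone_dt(step, u0=1.0, dts=np.linspace(0.01, 5.0, 500)):
    """Largest scanned dt for which |u| is non-increasing over 50 steps."""
    best = 0.0
    for dt in dts:
        u, ok = u0, True
        for _ in range(50):
            un = step(u, dt)
            if abs(un) > abs(u) + 1e-12:
                ok = False
                break
            u = un
        if ok:
            best = dt
    return best

print(largest_monotone_dt(midpoint), largest_monotone_dt(complex_2step))
```

The scan simply checks the discrete monotonicity condition at the real time points; replacing `f`, `u0`, and the norm would adapt it to other test cases.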

Figure 13: For various parameter values, the complex 2-step method allows us to take a larger SSP timestep.

8 Dealing with stiff systems: Implicit methods

When dealing with stiff systems, expanding the stability region of the integrator is rarely sufficient, and implicit methods with properties like L-stability are required. In this section, we derive a higher order complex path for an implicit method: the implicit midpoint method. First we study linear ODEs (Sec. 8.1) and then we extend to general, nonlinear, non-autonomous differential equations (Sec. 8.2).

8.1 Linear differential equations

Consider the linear equation