1 Introduction
The goal of this work is to develop high order and efficient numerical methods for nonlinear kinetic models, such as the Vlasov–Poisson equations or drift-kinetic models. In most situations, the nonlinearity in the transport term originates from the coupling with a Poisson type problem that is used to compute the electric field.
Historically, particle-in-cell methods have been extensively used to treat kinetic problems. In this approach, the unknown is sampled by discrete particles which are advanced in time using an ODE solver, whereas the electric field is computed on a spatial grid. For some problems, these methods can tackle high dimensional kinetic problems with a relatively low computational cost. However, they also suffer from numerical noise which pollutes the accuracy in low density regions of phase space. Moreover, as the number of particles is increased, the error only decreases as the inverse of the square root of the number of particles. Thus, convergence is slow. For a review of particle methods we refer the reader to [62].
On the other hand, Eulerian methods (e.g. finite volumes or finite differences), which directly discretize the phase space, are able to reach high order accuracy in time, space, and velocity. However, in addition to the fact that they are costly, these methods usually suffer from stability constraints that force a relation between the time step and the phase space grid sizes, the so-called Courant–Friedrichs–Lewy (CFL) condition. Hence, a large number of time steps is required to reach the long times that are often required in plasma physics applications.
To overcome this CFL condition, semi-Lagrangian methods have been developed during the last decades. These methods realize a compromise between the Lagrangian (i.e. particle-in-cell) and Eulerian approaches by exploiting the characteristics equations to overcome the CFL condition, while still performing computations on a grid in both space and velocity [9, 59, 32]. This approach is usually combined with splitting methods to avoid a costly multidimensional interpolation step. This allows for a separate treatment of the terms in the equation, and the corresponding characteristic curves can then, at least in some situations, be computed analytically. For purely hyperbolic problems, the setting we consider here, it is also possible to construct high order splitting schemes
[8]. Splitting results in a very accurate and efficient scheme for the Vlasov–Poisson equation. This is the case because the problem is only split into two parts, the characteristics of which can then be solved exactly in time. However, this is not necessarily true for more complicated equations such as gyrokinetic or drift-kinetic models. Indeed, for the drift-kinetic model, a three-term splitting has to be performed, so that a relatively large number of stages is required to reach high order accuracy in time. In addition, some stages cannot be solved exactly in time and thus require additional numerical work to approximate them.
In [14] an alternative approach based on exponential integrators was proposed. These schemes exploit the fact that in many applications where (gyro)kinetic models are used, the most stringent CFL condition is associated with the linear part of the model. This observation serves as the basis for the numerical methods we will consider in this work. Starting from the variation of constants formula, the linear part of the model is solved exactly as part of an exponential integrator, whereas the nonlinear part, which is very often orders of magnitude less stiff than the linear part, is treated explicitly in time. In practice, the linear part can then be solved in phase space by using Fourier techniques or semi-Lagrangian schemes, and the nonlinear part is approximated by standard finite difference/finite volume/discontinuous Galerkin techniques.
The numerical results presented in [14] were generally very favorable. The authors were able to take larger time steps compared to what has been reported for splitting methods in the literature and the computational cost was significantly reduced. In addition, since exponential integrators treat the nonlinear part explicitly, they can be adapted much more easily to different models. Despite these many favorable properties, the largest stable time step size was difficult to predict and varied significantly from method to method. Moreover, as we will see, many exponential integrators can behave rather erratically depending on the specific configuration of the simulation.
Thus, the main goal of this paper is to understand the stability of exponential integrators when applied to purely hyperbolic problems. While there is a large literature and well established theory for exponential integrators applied to parabolic problems (see [41] and references therein), we will see that for purely hyperbolic problems many surprises are encountered. Based on this analysis we will then propose to use a class of exponential methods, Lawson methods, that do not suffer from the described deficiency. Our analysis explains the efficient and robust behavior of Lawson integrators for this kind of problems. We will also present numerical results for both the Vlasov–Poisson equations and a drift-kinetic model that confirm the expected behavior and show that this approach attains significant performance improvements compared to the exponential integrators used in [14], and by extension compared to splitting methods.
The paper is organized as follows. First, we offer a brief introduction to exponential methods (section 2). Then, in section 3 a linear stability analysis is performed for both the time and phase space discretization. For the explicit part, we consider both centered differences (such as Arakawa’s method) and weighted essentially non-oscillatory (WENO) schemes. In sections 4 and 5 we investigate the performance of these methods for the Vlasov–Poisson equations and a four-dimensional drift-kinetic model, respectively.
2 Exponential integrators and Lawson methods
Exponential methods are a class of time integration schemes that are applied to differential equations of the form
(1) $\dot{u}(t) = A u(t) + F(u(t)),$
where $A$ is a matrix and $F$ is a, in general nonlinear, function of $u$. Usually, both $A$ and $F$ are the result of a spatial semi-discretization of a partial differential equation. Exponential methods are applied to problems where $Au$ is stiff or otherwise poses numerical challenges, while $F(u)$ can be treated explicitly. For the hyperbolic case, a prototypical example is the Vlasov equation (11). Exponential methods are advantageous if the largest velocity is large compared to the electric field. Then the linear part has a much more stringent CFL condition than the nonlinear part of the equation. We will consider this example in some detail later in the paper.

In this paper, we will consider two types of exponential methods. The idea of exponential integrators is to use the variation of constants formula to rewrite equation (1) in the following form
$u(t_n + \Delta t) = e^{\Delta t A} u(t_n) + \int_0^{\Delta t} e^{(\Delta t - s)A} F(u(t_n + s)) \, ds,$

where we denote the time step size by $\Delta t$ and the discrete times by $t_n = n\Delta t$, with $u^n \approx u(t_n)$. This expression is still exact; i.e. no approximation has been made. Note, however, that this can not be used as a numerical method, as evaluating the integral would require the knowledge of $u(t_n + s)$ for $s \in (0, \Delta t]$, which is not available. Approximating the integral in terms of available data results in an exponential integrator. In the simplest case we use the rectangular rule at the left endpoint to obtain

$u^{n+1} = e^{\Delta t A} u^n + \Delta t\, \varphi_1(\Delta t A) F(u^n),$

where $\varphi_1(z) = (e^z - 1)/z$ is an entire function. This is the first order exponential Euler method. In a similar way exponential Runge–Kutta methods can be constructed. We refer to the literature, in particular the review article [41], for more details.
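To make the step concrete, here is a minimal scalar sketch of the exponential Euler method just described (the function names and the scalar setting are illustrative, not taken from the paper):

```python
import numpy as np

def phi1(z: complex) -> complex:
    # phi_1(z) = (e^z - 1)/z, entire, with the removable singularity phi_1(0) = 1.
    return 1.0 + 0j if abs(z) < 1e-12 else (np.exp(z) - 1.0) / z

def exponential_euler(A: complex, F, u0: complex, dt: float, n_steps: int) -> complex:
    # u^{n+1} = e^{dt A} u^n + dt * phi_1(dt A) * F(u^n); for F = 0 the linear
    # part is propagated exactly, independently of dt.
    eA, p1 = np.exp(dt * A), phi1(dt * A)
    u = u0
    for _ in range(n_steps):
        u = eA * u + dt * p1 * F(u)
    return u
```

Note that for $F = 0$ the scheme reproduces the exact solution of the linear problem regardless of the time step, which is precisely the property exploited for the stiff transport term.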
Another ansatz to remove the stiff linear term from equation (1) is to introduce the change of variables

$v(t) = e^{-tA} u(t).$

Plugging this into equation (1) yields

(2) $\dot{v}(t) = e^{-tA} F\big(e^{tA} v(t)\big).$

Now we apply an explicit Runge–Kutta method to the transformed equation. In the simplest case, applying the explicit Euler scheme yields

$v^{n+1} = v^n + \Delta t\, e^{-t_n A} F\big(e^{t_n A} v^n\big).$

Reversing the change of variables yields

$u^{n+1} = e^{\Delta t A}\big(u^n + \Delta t\, F(u^n)\big).$

This is the Lawson–Euler method, also a method of order one. Lawson methods are also commonly referred to as integrating factor methods. We immediately see that any explicit Runge–Kutta method applied to equation (2) uniquely determines a Lawson scheme. We call the chosen Runge–Kutta method the underlying Runge–Kutta method. For more details we refer the reader to [47, 7, 61, 51].
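The Lawson–Euler step can likewise be sketched for a scalar problem (illustrative names; a sketch, not the paper's implementation):

```python
import numpy as np

def lawson_euler(A: complex, F, u0: complex, dt: float, n_steps: int) -> complex:
    # u^{n+1} = e^{dt A} (u^n + dt * F(u^n)): one explicit Euler step on the
    # transformed variable, followed by the exact linear propagator.
    eA = np.exp(dt * A)
    u = u0
    for _ in range(n_steps):
        u = eA * (u + dt * F(u))
    return u
```

As with exponential Euler, the linear part is propagated exactly for $F = 0$; the difference is that no $\varphi$ function is needed, only the exponential itself.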
The example of the Lawson–Euler method already shows the similarity between Lawson schemes and exponential integrators. In fact, Lawson methods can be considered a subclass of exponential integrators. That is, they are a type of exponential integrator that only involves the exponential, but no other matrix functions. For the purpose of this paper we keep the nomenclature distinct. A Lawson scheme is a numerical method obtained as described above, while an exponential integrator is a numerical scheme that, in addition to the matrix exponential, uses other matrix functions (such as the $\varphi$ functions).
The efficiency of exponential methods crucially depends on a good method to evaluate the application of the required matrix functions to a vector. A range of methods has been developed to accomplish this. For example, Krylov methods or interpolation at Leja points can be used for a wide range of problems; see, for example, [38, 39, 1, 5, 6]. However, often the most efficient approach is to exploit particular knowledge about the differential equation under consideration. For example, in the hyperbolic case $A$ might be a linear advection operator. In this case the application of $e^{\Delta t A}$ can be computed by using Fourier techniques or semi-Lagrangian schemes. Much research effort has been dedicated towards improving spectral and semi-Lagrangian schemes for kinetic problems [19, 30, 23, 32, 33, 43, 52, 54, 59, 9, 56, 12, 13, 26, 27, 28] and obtaining good performance on state of the art HPC systems [55, 20, 4, 45, 50, 22, 18, 25].

Before proceeding, let us note that for parabolic problems, i.e. where $A$ is an elliptic operator, a mature theory for exponential integrators is available. We again refer the reader to the review article [41]. In this setting there are relatively few surprises with respect to stability, and even rigorous convergence results are available. In addition, exponential integrators have been considered for problems that include both hyperbolic and parabolic terms (see, for example, [49, 60, 31, 29]). An interesting point to make is that in this community Lawson methods have all but lost their appeal. In fact, there are many reasons why exponential integrators are to be preferred in that setting. For example, if a Krylov method is used to compute the matrix functions, the $\varphi$ functions usually converge faster than the exponential. In addition, exponential integrators that retain their full order for nonhomogeneous boundary conditions have been constructed [40]. It has been shown that this property can not be achieved for Lawson methods [42]. However, the situation for purely hyperbolic problems is markedly different. Most of the theoretical results that have been obtained in an abstract framework do not apply and there is relatively little literature available. We will see that the stability of exponential integrators in the fully hyperbolic setting is full of surprises.
Moreover, since for kinetic problems we usually have efficient methods to compute the matrix exponential and complicated boundary conditions are rather rare, Lawson methods are an attractive choice due to their improved stability, as we will see.
3 Linear analysis
Determining the stability of a numerical scheme by conducting an analysis of a linear and scalar test equation is very well established in the literature. Usually, the Dahlquist test equation $\dot{u}(t) = \lambda u(t)$, $\lambda \in \mathbb{C}$, is considered. The justification for this is that a linear ODE can be written as $\dot{u} = Au$. Once the matrix $A$ is diagonalized we essentially obtain the test equation. For linear PDEs we first perform a space discretization. Then the same argument can be applied to the resulting differential equation and stability constraints, such as the famous CFL condition, can be deduced. In the nonlinear case this, of course, only gives an indication for stability. Nevertheless, in many practical problems the theory derived in this fashion agrees very well with what is observed in numerical experiments.
The situation for exponential integrators and Lawson methods is more complicated, as we separate two parts of the differential equation. Thus, we will consider the following test equation

(3) $\dot{u}(t) = i a\, u(t) + b\, u(t), \qquad a \in \mathbb{R}, \quad b \in \mathbb{C},$

where $iau$ plays the role of the stiff linear part (treated by the matrix exponential) and $bu$ plays the role of the explicitly treated part. We note that we are here exclusively interested in equations with two hyperbolic parts. The reason why we allow $b$ to lie in the complex plane is that some space discretization schemes introduce numerical diffusion. Thus, the discretization moves the eigenvalues from the imaginary axis to the left half complex plane.
Although this test equation is used frequently in the literature, its use is also frequently criticized. The reason for this criticism is that in the linear case the equation can only be transformed to the form given in equation (3) if the matrices corresponding to the two parts are simultaneously diagonalizable. This is a severe restriction which is usually not satisfied in practice. Thus, the test equation, even in the linear case, gives only a necessary condition for stability. While this argument is certainly correct, we emphasize that if a numerical integrator does not work for the test equation (3), there is not much hope that it would work for more complicated problems. Therefore, the test equation is still useful, and in fact we will see that many of the deficiencies of exponential integrators observed in practice can be illustrated well using the test equation (3).
3.1 Lawson methods
Applying a Lawson method to the test equation (3) proceeds as follows. First, we introduce the change of variables

$v(t) = e^{-iat} u(t),$

which yields the equation

$\dot{v}(t) = b\, v(t).$

Thus, we precisely obtain the Dahlquist test equation. We now apply an explicit Runge–Kutta method to that equation. It is well known that this results in

$v^{n+1} = R(z_F)\, v^n, \qquad z_F = b \Delta t,$

where $R$ is the so-called stability function. Reversing the change of variables we obtain

$u^{n+1} = e^{ia\Delta t} R(z_F)\, u^n.$

The condition for stability is $|R(z_F)| \leq 1$, since $|e^{ia\Delta t}| = 1$. Thus, the linear stability characteristics of a Lawson scheme are identical to those of its underlying Runge–Kutta method.
This makes the problem rather easy, as the stability constraint for explicit Runge–Kutta methods has been extensively studied in the literature. In our present application we are primarily interested in obtaining numerical methods that maximize the part of the imaginary axis that is included in the domain of stability. It is well known that an $s$-stage explicit method can include at most the interval $[-i(s-1), i(s-1)]$ of the imaginary axis. This is, for example, stated as an exercise in [37, Chapter IV.2, exercise 3]. Thus, unfortunately, there is no analog of Runge–Kutta–Chebyshev methods for hyperbolic problems.
For the sake of completeness we plot in Figure 1 the curve given by $|R(z)| = 1$ for different Runge–Kutta methods. The only nonstandard method here is RK(3,2) best, which is a three stage second order method that has been purposefully constructed to enhance stability on the imaginary axis (see Appendix 6.1 for its Butcher tableau). We also emphasize that the imaginary stability interval of the classic four stage fourth order Runge–Kutta method, of size $2\sqrt{2} \approx 2.83$, is quite close to the theoretical bound of $3$.
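The imaginary-axis limit can be verified numerically for RK4. The following sketch (illustrative, not the paper's code) bisects for the largest $y$ with $|R(iy)| \le 1$:

```python
import numpy as np

def rk4_stability(z: complex) -> complex:
    # Stability function of the classic four-stage, fourth order RK4 method.
    return 1 + z + z**2 / 2 + z**3 / 6 + z**4 / 24

def imaginary_axis_limit(R, lo: float = 0.0, hi: float = 10.0) -> float:
    # Bisect for the largest y with |R(iy)| <= 1; assumes a single
    # stable-to-unstable crossing in [lo, hi], which holds for RK4.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if abs(R(1j * mid)) <= 1.0:
            lo = mid
        else:
            hi = mid
    return lo
```

For RK4 one has $|R(iy)|^2 = 1 + y^6(y^2 - 8)/576$, so the bisection converges to $2\sqrt{2}$.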
3.2 Exponential integrators
We now apply commonly used exponential integrators to the test equation (3). In this work we will consider the following methods: ExpRK22 (a classic two stage second order method), the method of Cox–Matthews [11], the method of Hochbruck–Ostermann [40], and the method of Krogstad [44]. We refer to [41] for more details and to Appendix 6.1 for the Butcher tableaus of these methods.
For the sake of brevity we will only detail the calculation for the ExpRK22 scheme. Applying this method to the test equation (3), with $z_A = ia\Delta t$ and $z_F = b\Delta t$, we obtain

$u^{n+1} = \left[ e^{z_A} + z_F \big(\varphi_1(z_A) - \varphi_2(z_A)\big) + z_F\, \varphi_2(z_A) \big( e^{z_A} + z_F\, \varphi_1(z_A) \big) \right] u^n,$

where $\varphi_1(z) = (e^z - 1)/z$ and $\varphi_2(z) = (e^z - 1 - z)/z^2$ are entire functions. This yields the stability function

$R(z_A, z_F) = e^{z_A} + z_F \big(\varphi_1(z_A) - \varphi_2(z_A)\big) + z_F\, \varphi_2(z_A) \big( e^{z_A} + z_F\, \varphi_1(z_A) \big),$

where, as before, $z_A = ia\Delta t$ and $z_F = b\Delta t$.
Our first observation is that, in contrast to Lawson methods, the behavior of this stability function can not be understood from the domain of stability of the underlying method, i.e. the explicit method we obtain if we take $z_A = ia\Delta t = 0$. In fact, as we vary $z_A$ the domain of stability changes drastically. The domain of stability for the four exponential integrators (ExpRK22, Cox–Matthews, Hochbruck–Ostermann, and Krogstad) is plotted in Figure 2 for two different values of $z_A$. It is most striking that for large $|z_A|$ the domain of stability does not contain a symmetric interval of the imaginary axis. It should be evident that this has the potential to cause severe stability issues.
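To make this concrete, the following sketch implements a stability function of this type; it assumes a common two-stage second order exponential Runge–Kutta variant, so the coefficients of the paper's ExpRK22 may differ:

```python
import numpy as np

def phi1(z: complex) -> complex:
    # phi_1(z) = (e^z - 1)/z, with phi_1(0) = 1.
    return 1.0 + 0j if abs(z) < 1e-12 else (np.exp(z) - 1) / z

def phi2(z: complex) -> complex:
    # phi_2(z) = (e^z - 1 - z)/z^2, with phi_2(0) = 1/2.
    return 0.5 + 0j if abs(z) < 1e-12 else (np.exp(z) - 1 - z) / z**2

def R_exprk22(zA: complex, zF: complex) -> complex:
    # Hypothetical two-stage second order exponential RK stability function:
    # stage U2 = e^{zA} + zF*phi1(zA), then
    # R = e^{zA} + zF*(phi1 - phi2)(zA) + zF*phi2(zA)*U2.
    U2 = np.exp(zA) + zF * phi1(zA)
    return np.exp(zA) + zF * (phi1(zA) - phi2(zA)) + zF * phi2(zA) * U2
```

For $z_A = 0$ this reduces to the stability function $1 + z + z^2/2$ of a two-stage second order explicit Runge–Kutta method, whose modulus exceeds one everywhere on the imaginary axis except at the origin.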
3.3 Phase space discretization
We start from the two-dimensional linear transport equation

(4) $\partial_t f(t,x,v) + v\, \partial_x f(t,x,v) + E\, \partial_v f(t,x,v) = 0, \qquad (x,v) \in [0,L] \times [-v_{\max}, v_{\max}],$

where $[-v_{\max}, v_{\max}]$ refers to the truncated velocity domain. The sought-after distribution function is $f = f(t,x,v)$ and we impose periodic boundary conditions in the $x$ direction. We assume that the advection coefficients $v$ and $E$ are constants and thus the corresponding operators commute. This is an idealization of the Vlasov equation we will consider in the next section. In preparation for that example it is most useful to think that $v_{\max}$ is large and thus $v\,\partial_x$ would induce a stringent CFL condition if discretized by an explicit scheme.

We now have to discretize this equation both in the $x$ and the $v$ direction. In the spatial direction $x$, we will consider a spectral approximation. Performing a Fourier transformation of equation (4) yields

(5) $\partial_t \hat{f}(t,k,v) + ikv\, \hat{f}(t,k,v) + E\, \partial_v \hat{f}(t,k,v) = 0,$

where $\hat{f}$ denotes the Fourier transform of $f$ with respect to $x$. The corresponding frequency is denoted by $k$.
We now perform the discretization in the $v$ direction. The grid points are denoted by $v_j$, with $j = 1, \dots, N_v$, where $N_v$ is the number of points. We will consider two options here. Namely, either using a centered difference scheme or an upwind scheme.
Centered scheme in $v$.
The classic centered scheme is obtained by approximating the velocity derivative in equation (5) by

$\partial_v \hat{f}(t,k,v_j) \approx \frac{\hat{f}_{j+1} - \hat{f}_{j-1}}{2\Delta v},$

where $\hat{f}_j$ is an approximation of $\hat{f}(t,k,v_j)$. Inserting this centered approximation in (5) yields

(6) $\partial_t \hat{f}_j(t,k) + ikv_j\, \hat{f}_j(t,k) + E\, \frac{\hat{f}_{j+1}(t,k) - \hat{f}_{j-1}(t,k)}{2\Delta v} = 0.$

The system is already diagonal with respect to the index $k$. We now also diagonalize it with respect to the index $j$. To do that we express the function in terms of its Fourier modes with respect to $v$. That is,

$\hat{f}_j(t,k) = \sum_{\eta} \tilde{f}(t,k,\eta)\, e^{i\eta v_j},$

where $\tilde{f}$ denotes the (double) Fourier transform of $f$ with frequency $k$ in space and frequency $\eta$ in velocity. Inserting this into equation (6) yields

(7) $\partial_t \tilde{f}(t,k,\eta) = -ikv\, \tilde{f}(t,k,\eta) - iE\, \frac{\sin(\eta \Delta v)}{\Delta v}\, \tilde{f}(t,k,\eta).$

We immediately see that this equation is precisely in the form of equation (3) as studied in the previous section. We also observe that $b = -iE \sin(\eta \Delta v)/\Delta v$, so $|b| \leq E/\Delta v$. That is, the eigenvalues for the centered difference approximation lie exclusively on the imaginary axis. One immediate consequence is that for Lawson methods the CFL condition is given by $E \Delta t/\Delta v \leq \sigma$, where $\sigma$ is chosen such that $[-i\sigma, i\sigma]$ lies in the domain of stability of the underlying Runge–Kutta method.
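The Fourier symbol of the centered difference can be checked directly; the sketch below (illustrative names, taking $E = 1$) verifies that it is purely imaginary with modulus at most $1/\Delta v$:

```python
import numpy as np

def centered_symbol(theta: float, dv: float) -> complex:
    # Symbol of (f_{j+1} - f_{j-1}) / (2 dv) applied to the Fourier mode
    # f_j = exp(i j theta), theta = eta * dv: equals i*sin(theta)/dv,
    # i.e. purely imaginary and bounded by 1/dv in modulus.
    return (np.exp(1j * theta) - np.exp(-1j * theta)) / (2 * dv)
```

This is the quantity $b$ (up to the factor $-E$) entering the test equation, which is why the Lawson CFL number only involves $E$ and $\Delta v$.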
Linearized WENO approximation in $v$.
A common technique to discretize hyperbolic partial differential equations is to use the so-called weighted essentially non-oscillatory (WENO) schemes. These are nonlinear schemes that limit oscillations in regions where sharp gradients occur, but still yield high order accuracy in smooth regions of the phase space. In the linear case WENO schemes reduce to upwind discretizations. Here, we will consider the LW5 scheme (the linearized version of the WENO5 scheme, as considered in [3, 48, 53, 63]) that is given by (from now on we assume w.l.o.g. that $E > 0$)

$\partial_v \hat{f}(t,k,v_j) \approx \frac{-2\hat{f}_{j-3} + 15\hat{f}_{j-2} - 60\hat{f}_{j-1} + 20\hat{f}_j + 30\hat{f}_{j+1} - 3\hat{f}_{j+2}}{60\, \Delta v}.$
We now perform the same analysis as for the centered scheme (see [3, 16]). This yields

(8) $\lambda(\eta) = -\frac{E}{60\, \Delta v} \left( -2 e^{-3i\eta\Delta v} + 15 e^{-2i\eta\Delta v} - 60 e^{-i\eta\Delta v} + 20 + 30 e^{i\eta\Delta v} - 3 e^{2i\eta\Delta v} \right).$

We then obtain the equation

(9) $\partial_t \tilde{f}(t,k,\eta) = -ikv\, \tilde{f}(t,k,\eta) + \lambda(\eta)\, \tilde{f}(t,k,\eta).$

Once again this is precisely the form of equation (3), where $b = \lambda(\eta)$. The main difference to the centered difference scheme is that $b$ does not necessarily lie on the imaginary axis. In fact, the eigenvalues acquire a negative real part which stabilizes the scheme and avoids spurious oscillations, but also adds unphysical dissipation to the numerical method.
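The symbol computation can be sketched as follows. The stencil coefficients below are those of a standard fifth-order upwind-biased scheme for a positive advection speed, which is an assumption on our part (the exact stencil used in the paper may differ); the eigenvalue in the semi-discrete equation is $-E$ times this symbol, so its real part is nonpositive:

```python
import numpy as np

def lw5_symbol(theta: float, dv: float) -> complex:
    # Symbol of the fifth-order upwind-biased derivative approximation
    # dv * D f_j ~ (-2 f_{j-3} + 15 f_{j-2} - 60 f_{j-1}
    #               + 20 f_j + 30 f_{j+1} - 3 f_{j+2}) / 60
    # on the Fourier mode f_j = exp(i j theta). Its real part is nonnegative
    # (it equals 8 (1 - cos theta)^3 / (60 dv)), so the advection eigenvalue
    # -E * lw5_symbol has nonpositive real part: numerical dissipation.
    c = np.array([-2, 15, -60, 20, 30, -3]) / 60.0
    offsets = np.array([-3, -2, -1, 0, 1, 2])
    return np.sum(c * np.exp(1j * offsets * theta)) / dv
```

For small $\theta$ the symbol approximates $i\eta$ to fifth order, which is the consistency of the scheme.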
3.4 Computing the CFL condition
Equipped with the knowledge of the domain of stability for the time discretization and the eigenvalues of the space discretization, we are now in a position to determine the CFL condition for the linear transport equation (4). This task will be rather easy to accomplish for the Lawson schemes, where the stability does not depend on the advection speed of the transport in the $x$ direction. However, for exponential integrators even this linear analysis is rather complicated, as we will see.
3.4.1 Centered scheme in $v$.
In the case of the centered approximation of the velocity derivative, the Fourier multiplier $b$ is a purely imaginary complex number (see equation (7)). We thus look for the largest $\sigma > 0$ such that the interval $[-i\sigma, i\sigma] \subset D$, where $D$ is the domain of stability for the chosen time integrator.
Lawson integrators.
We simply look for the largest value $\sigma$ such that $[-i\sigma, i\sigma] \subset D$. The corresponding values for a number of schemes are listed in Table 1. These values have to be understood in the following way: they induce the CFL condition $E \Delta t/\Delta v \leq \sigma$ for the discretized equation (7), where $\Delta t$ denotes the time step size and $\Delta v$ is the velocity mesh size.
Methods  Lawson()  Lawson()  Lawson() 

Exponential integrators.
For the exponential integrators the domain of stability is very sensitive to the value of $z_A = ia\Delta t$. To get an idea of what we can expect, we consider the quantity $\bar{\sigma} = \inf_{z_A} \sigma(z_A)$. As before, $\sigma(z_A)$ is the largest value such that $[-i\sigma, i\sigma] \subset D(z_A)$, where $D(z_A)$ is the domain of stability for the chosen exponential time integrator for a given $z_A$. Even for relatively simple numerical methods it is not possible to compute this quantity analytically. Thus, we resort to numerical approximations. Unfortunately, it turns out that for most exponential integrators this value is zero. This can be appreciated by considering Figure 2 once more. Clearly, there are values of $z_A$ such that no relevant part of the imaginary axis (or only half the imaginary axis) is part of the domain of stability. Thus, most exponential integrators are unstable in the von Neumann sense. However, this is not what we observe in practice. In fact, already the results presented in [14] indicate that we can successfully run numerical simulations using, for example, the Cox–Matthews scheme. There are two major points to consider here:

The value $\bar{\sigma}$ obtained in this way is a worst case estimate. In fact, we know that for $z_A = 0$ we regain the stability of the underlying Runge–Kutta method. Thus, for small $|z_A|$ the method is expected to work well.
As is usually done, we have mandated that $|R| \leq 1$. However, strictly speaking this is not necessary for a practical simulation. If we assume that $|R| = 1 + \epsilon$ and we take $n$ steps, the amplification of the error is given by $(1+\epsilon)^n$. In the limit $n \to \infty$ this quantity diverges. However, since we usually do not take infinitely small time steps, we can still hope to obtain a relatively accurate approximation, especially if $\epsilon$ is small.
To investigate this further, we propose to relax the stability condition by introducing a threshold $\epsilon$ in the definition of the stability domain

(10) $D_\epsilon(z_A) = \left\{ z_F \in \mathbb{C} : |R(z_A, z_F)| \leq 1 + \epsilon \right\}.$
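The relaxed condition can be explored numerically. The sketch below scans the imaginary axis on a grid; the ExpRK22-type stability function it uses is a hypothetical reconstruction (its coefficients are an assumption, not taken from the paper):

```python
import numpy as np

def phi1(z):
    return 1.0 + 0j if abs(z) < 1e-12 else (np.exp(z) - 1) / z

def phi2(z):
    return 0.5 + 0j if abs(z) < 1e-12 else (np.exp(z) - 1 - z) / z**2

def R_exprk22(zA: complex, zF: complex) -> complex:
    # Hypothetical two-stage second order exponential RK stability function.
    U2 = np.exp(zA) + zF * phi1(zA)
    return np.exp(zA) + zF * (phi1(zA) - phi2(zA)) + zF * phi2(zA) * U2

def sigma_relaxed(R, zA: complex, eps: float, y_max: float = 10.0, n: int = 2001) -> float:
    # Largest sigma such that |R(zA, i*y)| <= 1 + eps for all |y| <= sigma,
    # estimated on an equidistant grid in y.
    ys = np.linspace(0.0, y_max, n)
    for y in ys[1:]:
        if abs(R(zA, 1j * y)) > 1 + eps or abs(R(zA, -1j * y)) > 1 + eps:
            return y - y_max / (n - 1)
    return y_max
```

With the strict condition ($\epsilon = 0$) the underlying two-stage second order method yields $\sigma = 0$, while a positive threshold yields a nonzero $\sigma$, mirroring the discussion above.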
In Figure 3, we plot the domain of stability for the Cox–Matthews method for two different values of the threshold $\epsilon$. One can observe that in the latter case (the larger threshold) a nonzero $\sigma$ is obtained. We also call attention to the fact that the part of the imaginary axis included in this relaxed stability domain is not symmetric.
In Figure 4, we plot the dependence of $\sigma$ as a function of $z_A$ for the ExpRK22 method. Let us recall that for $z_A = 0$ this method gives $\sigma = 0$. One can observe that the domain of stability of this method is still symmetric with respect to the real axis. In addition, the method becomes more stable as $|z_A|$ increases. Thus, the behavior of the method is completely different from the configuration with $z_A = 0$.
In Figure 5, we plot $\sigma$ as a function of $z_A$ for the Hochbruck–Ostermann method (once again with the relaxed stability condition). This scheme also gives $\sigma = 0$ for $z_A = 0$. In this case the domain of stability is not symmetric and the stability depends quite erratically on the value of $z_A$.
In Table 2 we have summarized the values of $\sigma$ for the four exponential integrators considered in this paper.
Methods  ExpRK22  Krogstad  Cox–Matthews  Hochbruck–Ostermann 

3.4.2 Linearized WENO5 (LW5) scheme.
In the case of a WENO5 approximation of the velocity derivative, we can not easily find a Fourier multiplier because of the scheme's nonlinearity. Recent studies on the stability of WENO5 [63, 53, 48] considered the linearized version of WENO schemes obtained by freezing the nonlinear weights, so that WENO5 reduces to a high order (linear) upwind scheme called LW5. For this LW5 scheme we can compute the eigenvalues, see equation (8), and different time stepping schemes can be studied to determine the stability limit. We consider only Lawson methods here, since we found that the exponential schemes we considered (i.e. ExpRK22, Krogstad, Cox–Matthews, Hochbruck–Ostermann) lead to unstable results when they are combined with LW5 (even in the weak sense considered in the previous section).
The goal is then to determine the largest nonnegative real number $\sigma$ such that the eigenvalues of the upwind scheme LW5 scaled by $\sigma$ are contained in the domain of stability of the time integrator. Since the eigenvalues of LW5 are not as simple as in the case of the centered scheme, we determine $\sigma$ numerically. The main idea of the algorithm to obtain an estimate of $\sigma$ is:

The argument of the eigenvalues (normalized by $E/\Delta v$) given by (8) is discretized using a fine angular grid, since the real part of $\lambda$ is negative due to the diffusive character of the scheme.

A discretized version of the boundary of the stability domain of the underlying Runge–Kutta method is computed.

For each discretized eigenvalue, we look for the closest boundary point of the Runge–Kutta stability domain. This enables us to compute the associated stretching factor.

Taking the minimum over all the discretized eigenvalues yields the estimate of $\sigma$.
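The steps above can be sketched as follows; this is a simplified ray-based variant (each normalized eigenvalue is stretched along its own ray by bisection instead of searching for the closest boundary point), with illustrative names:

```python
import numpy as np

def rk4_R(z: complex) -> complex:
    # Stability function of the classic RK4 method (used as an example).
    return 1 + z + z**2 / 2 + z**3 / 6 + z**4 / 24

def ray_limit(R, lam: complex, hi: float = 20.0) -> float:
    # Bisect for the largest stretching factor t with |R(t * lam)| <= 1.
    lo = 0.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if abs(R(mid * lam)) <= 1.0:
            lo = mid
        else:
            hi = mid
    return lo

def cfl_number(R, eigenvalues) -> float:
    # sigma: minimum stretching factor over all (normalized) eigenvalues.
    return min(ray_limit(R, lam) for lam in eigenvalues if abs(lam) > 1e-14)
```

As a sanity check, feeding in the purely imaginary centered-difference eigenvalues $i\sin\theta$ with RK4 recovers the known imaginary-axis limit $2\sqrt{2}$.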
In Figure 6 (left), we plot the dependence of $\sigma$ with respect to the angle for Lawson() coupled with LW5. We also plot (Figure 6 (right)) the stability domain of Lawson(), the eigenvalues of LW5 (normalized by $E/\Delta v$) and the eigenvalues of LW5 scaled by $\sigma$. The CFL number for some Lawson schemes is shown in Table 3. It is interesting to note that the time step size is reduced compared to the centered scheme.
Methods  Lawson()  Lawson()  Lawson() 

4 Numerical simulation: Vlasov–Poisson equations
In this section we apply Lawson methods and exponential integrators to the Vlasov–Poisson system. We will see that the linear theory developed in the last section gives a good indication of the stability even for this nonlinear problem. We consider a distribution function $f = f(t,x,v)$ depending on time $t \geq 0$, space $x \in [0,L]$, with periodic boundary conditions, and velocity $v \in \mathbb{R}$, which satisfies the Vlasov equation

(11) $\partial_t f + v\, \partial_x f + E(t,x)\, \partial_v f = 0,$

coupled to a Poisson problem for the electric field $E$

(12) $\partial_x E(t,x) = \int_{\mathbb{R}} f(t,x,v) \, dv - 1.$
We employ a Fourier approximation in space. In velocity we either use a centered discretization

$\partial_v f(t,x,v_j) \approx \frac{f_{j+1} - f_{j-1}}{2\Delta v},$

or the WENO5 discretization

(13) $\partial_v f(t,x,v_j) \approx \frac{f_{j+1/2} - f_{j-1/2}}{\Delta v},$

where $f_{j+1/2}$ and $f_{j-1/2}$ denote the numerical fluxes (see Appendix 6.2 for more details). Both of these phase space discretizations can be easily cast in the following form

$\partial_t \hat{f} = A \hat{f} + F(\hat{f}),$

for an appropriately defined $F$, where $A$ corresponds to the transport term (diagonal in Fourier space). We can now apply an exponential integrator or a Lawson scheme. To illustrate this let us consider the exponential Euler method. This gives

$\hat{f}^{n+1} = e^{\Delta t A} \hat{f}^n + \Delta t\, \varphi_1(\Delta t A)\, F(\hat{f}^n).$
Since in Fourier space the exponential and $\varphi_1$ functions have only scalar arguments, their computation is easy and efficient (i.e. no matrix functions have to be computed). Due to the nonlinearity, it is favorable to compute $F$ in real space. This is done efficiently by using the fast Fourier transform. Generalizing this scheme to multiple dimensions in both space and velocity is straightforward.
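A compact sketch of one such time step on a doubly periodic grid is given below. Periodicity in $v$ and all function and variable names are simplifying assumptions made here to keep the example short; this is not the paper's implementation:

```python
import numpy as np

def vp_exponential_euler_step(f, x, v, dt):
    """One exponential Euler step for Vlasov-Poisson (sketch).
    f[j, m] ~ f(x_j, v_m); periodic in x and, as a simplifying assumption,
    periodic in v. The linear part v*df/dx is solved exactly in x-Fourier
    space; the E*df/dv term is treated explicitly."""
    nx, nv = f.shape
    Lx = x[-1] - x[0] + (x[1] - x[0])
    dv = v[1] - v[0]
    k = 2 * np.pi * np.fft.fftfreq(nx, d=Lx / nx)

    # Poisson problem dE/dx = rho - 1, solved in Fourier space (zero-mean E).
    rho = f.sum(axis=1) * dv
    rho_hat = np.fft.fft(rho - rho.mean())
    E_hat = np.zeros_like(rho_hat)
    E_hat[1:] = rho_hat[1:] / (1j * k[1:])
    E = np.fft.ifft(E_hat).real

    # Nonlinear term F(f) = -E * df/dv with a centered difference in v.
    dfdv = (np.roll(f, -1, axis=1) - np.roll(f, 1, axis=1)) / (2 * dv)
    F = -E[:, None] * dfdv

    # Exponential Euler in x-Fourier space: A is diagonal with entries -i*k*v.
    zA = -1j * dt * np.outer(k, v)                      # dt * A
    p1 = np.where(np.abs(zA) > 1e-12,
                  (np.exp(zA) - 1) / np.where(zA == 0, 1, zA), 1.0)
    f_hat = np.fft.fft(f, axis=0)
    F_hat = np.fft.fft(F, axis=0)
    return np.fft.ifft(np.exp(zA) * f_hat + dt * p1 * F_hat, axis=0).real
```

Because the $v$-derivative telescopes on a periodic grid, this sketch conserves the total mass up to round-off, which provides a simple consistency check.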
To apply our theory from the linear analysis to the nonlinear Vlasov–Poisson case, we need a way to compute the CFL condition. Note that the coefficient of the velocity advection depends on $E$ and thus implicitly on time. We choose the time step for the centered scheme as follows

(14) $\Delta t^n = \sigma\, \frac{\Delta v}{\|E^n\|_\infty},$

whereas for the WENO5 scheme we use the CFL condition computed from its linearized version (LW5)

(15) $\Delta t^n = \sigma_{\mathrm{LW5}}\, \frac{\Delta v}{\|E^n\|_\infty}.$

The value $\|E^n\|_\infty$ is just the maximal value of the electric field at time $t_n$. The values for $\sigma$ and $\sigma_{\mathrm{LW5}}$ are given in Tables 1, 2, and 3 according to the chosen time integrator.
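In code, an adaptive step of this form amounts to a one-liner (sketch; `sigma` stands for the tabulated constant of the chosen integrator, and the cap `dt_max` is an illustrative addition):

```python
import numpy as np

def adaptive_dt(E: np.ndarray, dv: float, sigma: float, dt_max: float = np.inf) -> float:
    # CFL-type bound dt <= sigma * dv / max_x |E|; sigma comes from the
    # linear analysis of the chosen time integrator. If E vanishes, the
    # stability constraint is void and only the accuracy cap dt_max applies.
    Emax = np.max(np.abs(E))
    return dt_max if Emax == 0 else min(dt_max, sigma * dv / Emax)
```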
4.1 Landau damping test.
We present numerical results for the standard Landau damping test case. The initial condition is given by

$f(0,x,v) = \frac{1}{\sqrt{2\pi}}\, e^{-v^2/2} \left( 1 + \alpha \cos(kx) \right),$

with perturbation amplitude $\alpha$ and wave number $k$. The numerical parameters are chosen as follows: the number of points in space is $N_x$, whereas the velocity domain is truncated to $[-v_{\max}, v_{\max}]$ and is discretized with $N_v$ grid points.
Let us remark that for the Landau damping test, the conditions (14) and (15) allow us to take very large time steps, since $\|E^n\|_\infty$ remains small. Then, we get a large upper bound $\sigma \Delta v/\|E^n\|_\infty$ on the time step, where $\sigma$ can be either the value for the centered scheme or $\sigma_{\mathrm{LW5}}$ depending on the chosen time integrator. This means that in practice we can choose the time step independently from the mesh. This is clearly a desirable feature of the time integrator.
In Figure 7, the time history of the electric energy (in semi-log scale) is shown for two different time steps, using Lawson() for the time integration and the WENO5 scheme for the velocity direction. One can observe that the expected damping rate is recovered for both configurations. Although the accuracy deteriorates for the larger time step, it is clear that the numerical scheme is stable.
4.2 Bump on tail test.
Next, we consider the bump on tail test for which the initial condition is
The numerical parameters are chosen as follows: the number of points in space is $N_x$, whereas the velocity domain $[-v_{\max}, v_{\max}]$ is discretized with $N_v$ grid points. Concerning the time step, as in the Landau damping example, the conditions (14) and (15) turn out to be very mild for Lawson schemes. Indeed, with the considered velocity grid, the resulting bound on the time step is large even in the worst case (Lawson() combined with WENO5). To capture correctly the phenomena involved in the bump on tail test, we take the following time step size
(16) 
with a constant depending on the chosen scheme. Thus, also in this configuration we are mostly limited by the accuracy and not by the stability constraint.
In Figure 8, the full distribution function is plotted at the final time for Lawson() coupled with the WENO5 scheme, Lawson() coupled with the centered scheme, and Hochbruck–Ostermann coupled with the centered scheme. In these figures, we look at the impact of the velocity approximation. One can observe spurious oscillations in the centered scheme case (middle figure), whereas the slope limiters of WENO5 are able to control this phenomenon so that extrema are well preserved. This is consistent with what has been observed in the literature.
In Figure 9, we plot the time evolution of the relative error of the total energy, i.e. $|\mathcal{E}(t) - \mathcal{E}(0)|/\mathcal{E}(0)$, where $\mathcal{E}$ is the total energy defined by

$\mathcal{E}(t) = \frac{1}{2} \int v^2 f(t,x,v) \, dx \, dv + \frac{1}{2} \int E(t,x)^2 \, dx.$
This quantity is known to be preserved in time and thus allows us to assess the accuracy of the different methods considered in this paper. We observe that the Lawson/centered schemes (referred to as 'CD2' in the legend) preserve this quantity well. It is well known (see for example [15]) that centered schemes are better at preserving the total energy compared to upwind schemes. The reason is that upwind schemes introduce numerical diffusion. The exponential integrators considered all show very similar behavior with respect to energy conservation. They seem to exhibit less drift than the Lawson methods, but for the time scales considered here their error is larger.
Although being able to choose the time step size independently of the mesh is a desirable feature, it makes checking the sharpness of the CFL estimate derived in the previous section more difficult. To accomplish this, we consider the same parameters as before, except for the phase space mesh, for which the number of grid points is changed. Then the maximum time step follows from the linearized CFL condition. We consider two different time steps: one which satisfies the linearized CFL condition and one which violates it. The results are shown in Figure 10. There the Lawson() method has been chosen for the time discretization, whereas WENO5 and the centered scheme are both considered for the velocity discretization. More specifically,

for WENO5 we use the value $\sigma_{\mathrm{LW5}}$ (obtained from the linearized version LW5) and we compare the results obtained with a time step that satisfies the CFL condition and one that does not satisfy the CFL condition;

for the centered scheme, we use the value $\sigma$ from the linear analysis and we compare the results obtained with a time step that satisfies the CFL condition and one that does not satisfy the CFL condition.
In Figure 10, the time evolution of the electric energy is displayed for these two velocity discretizations. One can observe that for the time step size that satisfies the CFL condition the simulation is stable and gives the expected results, whereas for the choice that violates the CFL condition the simulation blows up. Thus, the results confirm that the CFL condition obtained by the linear theory yields a good prediction for the nonlinear Vlasov–Poisson equation. On the right part of Figure 10, the time history of the maximally allowed time step $\sigma_{\mathrm{LW5}} \Delta v/\|E^n\|_\infty$ is shown (red) together with the time step size considered for the WENO velocity discretization. The larger choice (blue line) exceeds the allowed time step size at a certain point in time, which explains the numerical instability observed there.
5 Numerical simulation: drift-kinetic equations
In this section we consider a model motivated by the simulation of strongly magnetized plasmas, such as those found in tokamaks. In this case the dynamics is governed by gyrokinetic equations. Gyrokinetics averages out the fast oscillatory motion of the charged particles around the magnetic field lines. In a simplified slab geometry, gyrokinetic models reduce to the drift-kinetic equation. In this case the unknown depends on three cylindrical spatial coordinates and one velocity direction. This model is composed of a guiding-center dynamics in the plane orthogonal to the magnetic field lines and of a Vlasov type dynamics in the direction parallel to the magnetic field lines. In addition to its relevance in physics, it is also a good test case for stressing exponential methods, since after some time the nonlinearity can become strong enough that the time step size is dictated by stability constraints (especially for high order methods).
Our goal in this section is to find a numerical approximation of satisfying the following slab drift-kinetic equation (see [33])
(17) 
for , . The self-consistent potential is determined by solving the quasi-neutrality equation
(18) 
where and the functions and depend only on and are given analytically.
In many situations the term yields the most restrictive CFL condition. In this setting exponential methods can be very successful, as they remove the most stringent CFL condition while still treating the remaining terms explicitly (which is computationally relatively cheap). The functions can be computed in Fourier space (as has been discussed in some detail for the Vlasov–Poisson system in the previous section) or using a semi-Lagrangian approach. Exponential integrators for the drift-kinetic model have been proposed in [14]. They compare favorably to splitting schemes and have the advantage that they can be more easily adapted to different models. In [14], only a second order exponential integrator and the fourth order Cox–Matthews scheme were considered. Due to the investigations in the present paper we now understand that this is not an ideal choice. Thus, the purpose of this section is to demonstrate that Lawson methods can be more efficient and to further corroborate the results obtained in the previous sections. The difference in stability between Lawson schemes and exponential integrators will be very evident in the numerical simulations that are presented.
5.1 Numerical discretization
First, we remark that is a periodic variable which motivates us to consider the Fourier transform in this direction. The corresponding frequencies are denoted by . Equation (17) then becomes
Setting , this equation can be written as
This is now precisely in the form to which we can apply an exponential method. In addition, computing the required matrix functions is very efficient as all the frequencies decouple (see the corresponding discussion in section 4).
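To make the construction concrete, the following minimal sketch performs one first-order Lawson (exponential Euler) step for a model equation of the form d_t f + v d_z f = N(f) on a periodic z grid: the linear advection is propagated exactly in Fourier space, where all frequencies decouple, while the nonlinearity is treated explicitly. The names and the reduced two dimensional setting are assumptions made for illustration; the actual drift-kinetic right hand side is more involved.

```python
import numpy as np

def lawson_euler_step(f, v, dt, nonlinear_rhs, L=2.0 * np.pi):
    """One Lawson-Euler step for d_t f + v d_z f = N(f), z periodic.

    f             : array (nz, nv) with values f(z_j, v_m)
    v             : array (nv,) of parallel velocities
    nonlinear_rhs : callable returning N(f) on the same grid
    L             : length of the periodic z domain
    """
    nz = f.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(nz, d=L / nz)  # Fourier frequencies in z
    phase = np.exp(-1j * dt * np.outer(k, v))       # exact flow of v d_z
    # Lawson-Euler: f^{n+1} = exp(dt*A) (f^n + dt*N(f^n))
    fhat = np.fft.fft(f + dt * nonlinear_rhs(f), axis=0)
    return np.real(np.fft.ifft(phase * fhat, axis=0))
```

Higher order Lawson methods follow the same pattern, with one explicit stage and one multiplication by a precomputed phase factor per Runge-Kutta stage.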
5.2 Numerical results
In this section, we detail the physical parameters of the considered test case. The setup is identical to [14] (see also [10, 17]). The initial value is given by
where the equilibrium distribution is given by
(19) 
The radial profiles , , and have the analytic expressions
with the constants defined as follows
Finally, we consider the parameters of [10] (MEDIUM case)
and use a range of .
We consider two configurations. The first is a direct formulation, where the boundary conditions are given by
Note that these are not homogeneous Dirichlet boundary conditions. It is well known (and supported by [14]) that the Arakawa scheme works better with homogeneous boundary conditions. In addition to the direct formulation, we therefore also introduce a so-called perturbation formulation (see also [17, 46]). First, we note that the equilibrium function defined in (19) is a steady state of our problem. We therefore split into
With this formulation, our problem (17) becomes
where , and . Expanding the various terms we obtain
which can be written as
Note that the equation is very similar to equation (17). However, we have obtained two additional source terms, which depend on the equilibrium distribution as well as on the electric field. Furthermore, the right hand side of the quasi-neutrality equation (18) becomes
Due to the similarity of the direct formulation and the perturbation formulation, the same code can be used for both by simply exchanging the right hand side of the quasi-neutrality equation, changing the boundary conditions, and adding the appropriate source terms. Thus, to implement the exponential integrator we consider the following equation
with
and proceed as before (with replaced by ). The space discretization of the source terms can be done either analytically or using a numerical approximation. In our implementation we have used standard centered differences. The Arakawa scheme that is used to discretize now employs homogeneous Dirichlet boundary conditions for in the direction.
We have seen in section 4 that for the Vlasov–Poisson equation we can derive a constraint on the time step size which ensures stability. For Lawson methods this also gives a good estimate in practice. However, for exponential integrators the situation is far more complicated; see the discussion in section 3. Thus, a natural question that arises is how large a time step we can take in practice. To answer it, we employ an adaptive step size controller that uses Richardson extrapolation to obtain an error estimate. Denoting by \(y_{\Delta t}\) the result of one step of size \(\Delta t\) and by \(y_{\Delta t/2}\) the result of two consecutive steps of size \(\Delta t/2\), we can construct the Richardson extrapolated numerical solution of a method of order \(p\) as \(y_R = y_{\Delta t/2} + (y_{\Delta t/2} - y_{\Delta t})/(2^p - 1)\), which turns out to be an approximation of order \(p+1\) of the exact solution. Then, it is possible to determine an estimate of the local error of the time integrator through the following expression \(\eta = \lVert y_{\Delta t/2} - y_{\Delta t}\rVert/(2^p - 1)\),
where the norm is taken over all phase space variables. If the estimate of the error is larger than a specified tolerance tol, we reject the step and start again from time . Otherwise, the step is accepted and we proceed with the time integration. In either case we then determine the new step size such that the local error stays below the tolerance. That is, we choose
(20) \(\Delta t_{\mathrm{new}} = s\,\Delta t \left(\mathrm{tol}/\eta\right)^{1/(p+1)}\)
where tol is the prescribed tolerance, \(p\) is the order of the method, and \(s\) is a safety factor. This process is very well established in the literature, and we refer the interested reader to [35, 34, 57, 58, 24]. Other strategies, such as embedded Lawson or exponential methods, can also be considered (see [36, 2]). Such methods may be more efficient, but we restrict ourselves here to the strategy based on Richardson extrapolation, since it can be applied to any time integrator.
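The controller just described can be sketched as follows. The function is generic in the underlying one-step integrator; all names and numerical values are illustrative and do not come from our implementation.

```python
import numpy as np

def adaptive_step(y, t, dt, step, order, tol, safety=0.9):
    """One step of a Richardson extrapolation based step size controller.

    step(y, t, dt) advances the solution by one step of an integrator of
    the given order.  Returns (y_new, t_new, dt_next, accepted).
    """
    y_full = step(y, t, dt)                                  # one step of size dt
    y_half = step(step(y, t, 0.5 * dt), t + 0.5 * dt, 0.5 * dt)  # two steps of dt/2
    denom = 2.0**order - 1.0
    err = np.linalg.norm(y_half - y_full) / denom            # local error estimate
    y_extra = y_half + (y_half - y_full) / denom             # order p+1 solution
    # new step size; the small floor on err avoids division by zero
    dt_next = safety * dt * (tol / max(err, 1e-16))**(1.0 / (order + 1))
    if err <= tol:
        return y_extra, t + dt, dt_next, True
    return y, t, dt_next, False  # reject and retry from time t
```

A rejected step is simply repeated from the same time with the reduced step size, so a single call site can drive any one-step method.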
An interesting property of this adaptive step size controller is that it forces the time step size to satisfy the stability constraint of the numerical method. This is perhaps surprising at first sight, since the scheme only controls the local error. However, numerical instabilities are characterized by error amplification as the integration proceeds in time. Thus, a single step can violate the stability constraint, but later on the error amplification increases the local error in such a way that the adaptive step size controller is forced to reduce the time step size. In this way, the controller ensures that we obtain a stable numerical simulation for which the local error is below the specified tolerance.
This procedure allows us to perform a fair comparison between Lawson methods and exponential integrators. Since we are mainly interested in the stability of the methods, and since prescribing a stringent tolerance is in any case infeasible in the nonlinear regime, we choose a relatively large tolerance for our simulations ( for the perturbation formulation). To avoid overly large time steps at the beginning of the simulation, where accuracy and not stability dictates the time step, we limit the maximal step size to (coarse) and (fine) for second order methods, for third order methods and Lawson(), and for fourth order methods.
To evaluate the performance of the different time integrators, we consider the time evolution of the electric energy defined by
as well as the time evolution of the total mass and the total energy
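On a uniform grid these diagnostics reduce to simple quadrature sums. The sketch below uses the rectangle rule on a two dimensional (x, v) slice; the names and the reduced setting are assumptions made only for illustration, not the diagnostics of the full drift-kinetic code.

```python
import numpy as np

def diagnostics(f, E, v, dx, dv):
    """Discrete invariants on a uniform (x, v) grid via the rectangle rule.

    f : array (nx, nv), distribution function
    E : array (nx,), electric field
    v : array (nv,), velocity grid
    Returns (mass, total_energy, electric_energy).
    """
    mass = np.sum(f) * dx * dv                           # integral of f
    kinetic = 0.5 * np.sum(f * v[None, :]**2) * dx * dv  # integral of v^2/2 f
    electric = 0.5 * np.sum(E**2) * dx                   # integral of E^2/2
    return mass, kinetic + electric, electric
```

Tracking the relative deviation of these quantities from their initial values over time yields plots such as those discussed below.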
The numerical results for the perturbation formulation are given in Figure 11. There the time history of the electric energy and the time step size as a function of time are shown for two different discretizations in phase space, and grid points. We first see that all the time integrators agree very well; that is, we see an initial exponential growth in the electric energy (the rate is in good agreement with the linear theory; see, for example, [10]). This phase is followed by saturation at very similar levels for all numerical methods used. We also observe that all exponential integrators, except the method of Krogstad, are forced to reduce their step size after time for the fine, i.e. grid points, case. This is particularly drastic for ExpRK33 and the Cox–Matthews method, which suffer from stability issues even for the coarse discretization. In general, for the finer space discretization (see the right plot in Figure 11), the problem becomes significantly more severe. It is also worth mentioning that ExpRK33 leads to unstable results in spite of the step size controller, which clearly highlights the unstable nature of that integrator. None of the Lawson schemes have similar issues, and the size of the stability domain on the imaginary axis gives a good indication of the relative time steps these methods can take. We also note that at later times Lawson schemes are able to take significantly larger time steps than exponential integrators. The only exponential integrator that performs well in this regime is the method of Krogstad. Thus, the numerical results agree well with what we would expect based on the theoretical analysis.
The corresponding numerical results using the direct formulation are shown in Figure 12. The situation for the direct formulation is very similar to the perturbation formulation, even if one can observe that the time steps are slightly larger than in the perturbation formulation. One explanation is that the relative error computed in the perturbation case involves the norm of , which can be quite different from the norm of , so that equation (20) leads to different values of the time step even if the accuracy of the solution is the same.
In Figure 13, the time history of the relative error of the total mass and of the total energy is displayed for the phase space discretization and using the Lawson() and Cox–Matthews time integrators (the perturbation formulation is used here). Since mass is a linear invariant, it is preserved up to machine precision by the exponential integrator; see [21]. We also observe good conservation of energy even in the nonlinear phase, which confirms the excellent behavior of the methods.
Finally, we show slices of the distribution function and the density at different times. The simulation in Figure 14 is conducted with the Lawson() scheme and the simulation in Figure 15 with the Cox–Matthews scheme (both with the perturbation formulation). In both cases the configuration of Figure 11 and the fine space resolution have been employed. As a comparison, a reference solution computed with the Lawson() scheme and a step size controller that keeps the error below per unit time step is shown in Figure 16. We remark that all simulations show good agreement: the modes in the direction are recovered, and after the initial growth of the unstable mode we can observe a shearing of the structures and the appearance of small scale structures typical of the nonlinear phase.
Acknowledgement
We would like to thank David C. Seal (U.S. Naval Academy) and Sigal Gottlieb (University of Massachusetts, Dartmouth) for helpful discussions.
This work has been carried out within the framework of the EUROfusion Consortium and has received funding from the Euratom research and training programme 2014–2018 and 2019–2020 under grant agreement No 633053. The views and opinions expressed herein do not necessarily reflect those of the European Commission. The work has been supported by the French Federation for Magnetic Fusion Studies (FR-FCM) and by the Austrian Science Fund (FWF): project number P 32143-N32.
References
 [1] A.H. Al-Mohy, N.J. Higham, Computing the action of the matrix exponential, with an application to exponential integrators, SIAM J. Sci. Comput. 33, (2011), pp. 488–511.
 [2] S. Balac, F. Mahé, Embedded Runge-Kutta scheme for step-size control in the Interaction Picture method, Comput. Phys. Commun. 184, (2013), pp. 1211–1219.
 [3] M. Baldauf, Stability analysis for linear discretisations of the advection equation with Runge-Kutta time integration, J. Comput. Phys. 227, (2007), pp. 6638–6659.
 [4] J. Bigot, V. Grandgirard, G. Latu, C. Passeron, F. Rozar, O. Thomine, Scaling GYSELA code beyond 32K-cores on Blue Gene/Q, ESAIM: Proceedings 43, (2013), pp. 117–135.
 [5] M. Caliari, P. Kandolf, A. Ostermann, S. Rainer, Comparison of software for computing the action of the matrix exponential, BIT Numer. Math. 54, (2014), pp. 113–128.
 [6] M. Caliari, P. Kandolf, A. Ostermann, S. Rainer, The Leja method revisited: backward error analysis for the matrix exponential, SIAM J. Sci. Comput. 38, (2016), pp. 1639–1661.
 [7] C. Canuto, M.Y. Hussaini, A. Quarteroni, T.A. Zang, Spectral methods in fluid dynamics, Springer-Verlag, 1988.
 [8] F. Casas, N. Crouseilles, E. Faou, M. Mehrenberger, High-order Hamiltonian splitting for Vlasov–Poisson equations, Numer. Math. 135, (2017), pp. 769–801.
 [9] C. Cheng, G. Knorr, The integration of the Vlasov equation in configuration space. J. Comput. Phys. 22, (1976), pp. 330–351.
 [10] D. Coulette, N. Besse, Numerical comparisons of gyrokinetic multi-water-bag models, J. Comput. Phys. 248, (2013), pp. 1–32.
 [11] S.M. Cox, P.C. Matthews, Exponential time differencing for stiff systems, J. Comput. Phys. 176, (2002), pp. 430–455.
 [12] N. Crouseilles, L. Einkemmer, E. Faou, Hamiltonian splitting for the Vlasov–Maxwell equations, J. Comput. Phys. 283, (2015), pp. 224–240.
 [13] N. Crouseilles, L. Einkemmer, E. Faou, An asymptotic preserving scheme for the relativistic Vlasov–Maxwell equations in the classical limit. Comput. Phys. Commun. 209, (2016), pp. 13–26.
 [14] N. Crouseilles, L. Einkemmer, M. Prugger, An exponential integrator for the drift-kinetic model, Comput. Phys. Commun. 224, (2019), pp. 144–153.
 [15] N. Crouseilles, F. Filbet, Numerical approximation of collisional plasmas by high order methods, J. Comput. Phys., 201, (2004), pp. 546–572.
 [16] N. Crouseilles, P. Glanc, M. Mehrenberger, C. Steiner, Finite volume schemes for Vlasov, ESAIM Proceedings, (2013), pp. 275–297.
 [17] N. Crouseilles, P. Glanc, S. Hirstoaga, E. Madaule, M. Mehrenberger, J. Pétri, A new fully two-dimensional conservative semi-Lagrangian method: applications on polar grids, from diocotron instability to ITG turbulence, Eur. Phys. J. D 68, (2014), pp. 252–261.
 [18] N. Crouseilles, G. Latu, E. Sonnendrücker, A parallel Vlasov solver based on local cubic spline interpolation on patches. J. Comput. Phys. 228, (2009), pp. 1429–1446.
 [19] N. Crouseilles, M. Mehrenberger, F. Vecil, Discontinuous Galerkin semi-Lagrangian method for Vlasov–Poisson, ESAIM: Proceedings 32, (2011), pp. 211–230.
 [20] L. Einkemmer, High performance computing aspects of a dimension independent semi-Lagrangian discontinuous Galerkin code, Comput. Phys. Commun. 202, (2016), pp. 326–336.
 [21] L. Einkemmer, A. Ostermann, A splitting approach for the Kadomtsev–Petviashvili equation, J. Comput. Phys. 299, (2015), pp. 716–730.
 [22] L. Einkemmer, A mixed precision semi-Lagrangian algorithm and its performance on accelerators, International Conference on High Performance Computing & Simulation (HPCS), 2016, pp. 74–80.
 [23] L. Einkemmer, A study on conserving invariants of the Vlasov equation in semiLagrangian computer simulations, J. Plasma Phys. 83, (2017).
 [24] L. Einkemmer, An adaptive step size controller for iterative implicit methods, Appl. Numer. Math. 132, (2018), pp. 182–204.
 [25] L. Einkemmer, Semi-Lagrangian Vlasov simulation on GPUs, Preprint, arXiv:1907.08316.
 [26] L. Einkemmer, A performance comparison of semi-Lagrangian discontinuous Galerkin and spline based Vlasov solvers in four dimensions, J. Comput. Phys. 376, (2019), pp. 937–951.
 [27] L. Einkemmer, A. Ostermann, Convergence analysis of Strang splitting for Vlasov-type equations, SIAM J. Numer. Anal. 52, (2014), pp. 140–155.
 [28] L. Einkemmer, A. Ostermann, Convergence analysis of a discontinuous Galerkin/Strang splitting approximation for the Vlasov–Poisson equations, SIAM J. Numer. Anal. 52, (2014), pp. 757–778.
 [29] L. Einkemmer, A. Ostermann, Exponential integrators on graphic processing units, High Performance Computing and Simulation (HPCS), International Conference On, 2013.
 [30] L. Einkemmer, A. Ostermann, A strategy to suppress recurrence in grid-based Vlasov solvers, Eur. Phys. J. D 68, (2014), pp. 197–203.
 [31] L. Einkemmer, M. Tokman, J. Loffeld, On the performance of exponential integrators for problems in magnetohydrodynamics, J. Comput. Phys. 330, (2016), pp. 550–565.
 [32] F. Filbet, E. Sonnendrücker, Comparison of Eulerian Vlasov solvers. Comput. Phys. Commun. 150, (2003), pp. 247–266.
 [33] V. Grandgirard, M. Brunetti, P. Bertrand, N. Besse, X. Garbet, P. Ghendrih, G. Manfredi, Y. Sarazin, O. Sauter, E. Sonnendrücker, J. Vaclavik, L. Villard, A drift-kinetic semi-Lagrangian 4D code for ion turbulence simulation, J. Comput. Phys. 217, (2006), pp. 395–423.
 [34] K. Gustafsson, Control-theoretic techniques for step-size selection in implicit Runge-Kutta methods, ACM Transactions on Mathematical Software 20, (1994), pp. 496–517.