1. Introduction
We consider the problem of the numerical resolution by splitting methods of linear partial differential equations of the form
(1) 
where , and is an inhomogeneous quadratic differential operator acting on .
First, let us make precise what we mean by an inhomogeneous quadratic differential operator. This terminology is used (for example by Hörmander in [22]) to refer to the Weyl quantization, , of a polynomial function, , on of degree or less. The Weyl quantization is a general way to associate an operator acting on of a manifold (here ) with a smooth function, called the symbol, defined on the phase space of this manifold (here ). It is usually defined (see e.g. Section 18.5 in [23] or Chapter 1 in [25]) through the following oscillatory integral
where denotes the Schwartz space. However, considering only polynomials of degree or less, it can be defined in a much more elementary way. Indeed, consider a polynomial function on of degree or less whose decomposition in coordinates is given by
where , is a symmetric matrix of size with complex coefficients,
is a vector and
is a constant, the Weyl quantization of is the operator acting on defined by
The Weyl quantization makes it possible, in many situations, to deduce properties of the operator from properties of its symbol. For example, as stated in the following proposition, if is bounded from below on then (1) is globally well posed.
Proposition 1.1.
If is a polynomial of degree or less on whose real part is bounded from below on then generates a strongly continuous semigroup on denoted by
This proposition is very classical and relies on the Hille-Yosida theorem. For example, a proof is given by Hörmander in [22] (pages 425-426) when is quadratic, but his proof can clearly be extended to the case where is no longer homogeneous (see e.g. Theorem 4.7 in [22]).
We recognize that the class of equations given by (1), where is a polynomial of degree or less, may seem too elementary to require the use of specific methods to solve them. However, it is usual to have to solve them as substeps of splitting methods for more sophisticated equations. Thus it is crucial to have robust methods to compute them very efficiently. Furthermore, for many of these models, in some relevant regimes, these equations govern the leading part of the dynamics. So, it is crucial to be able to compute them with as much accuracy as possible.
For example, the linear part of some nonlinear Schrödinger equations describing rotating Bose-Einstein condensates (see e.g. [6],[9]) is of the form (1) where
where is a quadratic external potential and
is a real skew-symmetric matrix of size
associated with a constant external magnetic field. The formalism of (1) also makes it possible to consider transport equations associated with affine vector fields. Indeed, if is a square real matrix of size and then
Even if such transport equations are essentially trivial, their resolution is required to compute, using splitting methods, the solutions of many kinetic equations (e.g. the Vlasov-Maxwell equations with a constant external magnetic field, see [10],[15]). Finally, equations like (1) can also describe some diffusion phenomena. For example, the generalized Ornstein-Uhlenbeck operators are of the form
where are some real nonnegative symmetric matrices of size and is a real matrix . Note that these operators include Fokker-Planck and Kramers-Fokker-Planck operators (see e.g. [18],[21]).
Some of the equations of the form (1) are much easier to solve numerically using pseudospectral methods. For example, to solve the heat equation or to compute a shear, it is enough to perform some Fast Fourier Transforms. Similarly, in the spirit of splitting methods, it is not very costly to solve successively some of these equations. More precisely, let us define, in this context, the operators we consider as easily computable using standard pseudospectral methods.
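As an illustration, on a periodic grid the heat semigroup reduces to one forward FFT, one pointwise multiplication and one inverse FFT. The following sketch is ours, not taken from the text, and uses a periodic approximation on an interval of length `L` (the function name `heat_step` and the discretization choices are illustrative):

```python
import numpy as np

def heat_step(u, t, L):
    """Apply the heat semigroup e^{t d^2/dx^2} exactly on a periodic grid:
    one FFT, one pointwise product by the Fourier multiplier, one inverse FFT."""
    k = 2 * np.pi * np.fft.fftfreq(u.size, d=L / u.size)  # angular wavenumbers
    return np.fft.ifft(np.exp(-t * k**2) * np.fft.fft(u)).real

# a Fourier mode is an eigenfunction: it is simply damped by e^{-t k^2}
L, n, t = 2 * np.pi, 128, 0.1
x = np.linspace(0, L, n, endpoint=False)
u1 = heat_step(np.sin(3 * x), t, L)  # should equal e^{-0.9} sin(3x)
```

Computing a shear can be handled similarly, applying, line by line, a one-dimensional Fourier multiplier whose phase depends on the transverse variable.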
Definition 1.
An operator acting on can be computed by an exact splitting if it can be factorized as a product of operators of the form
where are some real quadratic forms, is nonnegative, and . As usual, (resp. ) denotes the Fourier multiplier associated with (resp. ), i.e. .
Note that if an operator can be computed by an exact splitting then it is bounded. The following theorem justifies why we focus on splitting methods for semigroups generated by inhomogeneous quadratic differential operators.
Theorem 1.1.
If is a polynomial of degree or less on whose real part is bounded from below on then can be computed by an exact splitting.
Remark 1.
Actually, we prove a slightly stronger result: the quadratic forms associated with and in Definition 1 can be chosen diagonal. Nevertheless, this is not necessarily relevant from a numerical point of view because the substeps required to diagonalize the quadratic forms can be costly.
This theorem is proven in Subsection 5.1 of the Appendix. Unfortunately, its proof does not provide an efficient way to split semigroups (minimizing, for example, the number of substeps or the number of Fast Fourier Transforms required to approximate the exponentials). Nevertheless, as illustrated in Section 4 on many examples, paying attention to the particular structure of each semigroup, we are able to design optimized exact splittings.
This work can be considered as the theoretical part of a more general study. Indeed, a second work [11], written in collaboration with Nicolas Crouseilles and Yingzhe Li, deals with the implementation of these methods and compares numerically their efficiency, their accuracy and their qualitative properties with those of existing methods. We also couple these exact splitting methods with classical methods in order to solve some nonlinear equations and some non-quadratic linear equations.
Outline of the work
In Section 2
, we develop the notion of exact splitting in a more general framework and make precise its links with classical splittings. This naturally leads to a general theorem for designing efficient exact splittings for many linear ordinary differential equations. In Section 3, we explain how the theory of Fourier integral operators developed by Hörmander in [22] can be used to transform exact splittings of linear ordinary differential equations into exact splittings of semigroups. Finally, in Section 4, we apply the results of the previous sections to obtain some efficient exact splittings for the magnetic linear Schrödinger equations with quadratic potentials, some transport equations and some Fokker-Planck equations.
Notations and conventions
Let us make precise some classical notions used in this paper.

By convention, the empty spaces in the matrix notations refer to coefficients equal to zero (see for example the definition of above).

If is a matrix, denotes the transpose of .

If , denotes the algebra of square matrices of size and denotes the vector space of symmetric matrices.

For , we will use the following classical groups of matrices
and their associated Lie algebras
where denotes the trace and the Lie bracket is formally defined through the relation .

A real Lie algebra of matrices is a Lie subalgebra of for some .

We equip the space of polynomials of degree or less on with its Lie algebra structure induced by the canonical Poisson bracket, defined for two polynomials on by
Note that if , the space of the quadratic forms on is a Lie algebra naturally isomorphic to .

If is a Lie algebra, denotes its adjoint representation, i.e.

We consider the natural action of analytic functions on defined by the holomorphic functional calculus (see VII3 of [19] for details). By abuse of notation, if is analytic on a domain and has its spectrum included in , then is just another way to denote . Note that, if is an analytic function on the complex disk of center and radius and has its spectrum included in , then can be defined by the convergent series

denotes the Schwartz space on .

In a noncommutative setting, the notation to denote a product can be quite ambiguous. In this paper, we adopt the following natural convention. If is a totally ordered finite set and , where is a monoid (i.e. a set equipped with an associative binary operation and an identity element), then
where is the increasing bijection from to .
2. Exact splitting methods
The original problem addressed by splitting methods (see e.g. [20]) consists in considering a linear equation (note that, as usual, this formalism includes nonlinear equations, since it is enough to consider the transport equations they generate) of the form
(2) 
whose solution is denoted and such that
are nicer than .
In this context, nicer usually means cheaper to compute. Then the approximations of at times are obtained by composing times an approximation of , where is a composition of operators of the form with and . The most classical methods are the Lie splitting, where
and the Strang splitting
These methods are respectively of order and . This means that they provide respectively approximations of order and of . Similar methods can be derived to obtain splitting methods of arbitrarily high order. However, note that the higher the order is, the higher the number of steps, and so the higher the cost of the method. Furthermore, the only way to get methods of order higher than in which the are nonnegative is to allow the to be complex. In this case it is possible to design schemes of high order where the real parts of the are nonnegative (see [12],[14]). Note that this is crucial when irreversible equations are considered.
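These orders are easy to observe numerically. The following check is our own illustration (the matrices are arbitrary, chosen only so that they do not commute); halving the step divides the local error by about 2^2 for Lie and 2^3 for Strang:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])  # chosen so that [A, B] != 0

def lie(h):
    return expm(h * A) @ expm(h * B)

def strang(h):
    return expm(h / 2 * A) @ expm(h * B) @ expm(h / 2 * A)

def local_err(method, h):
    # error of one step of size h against the exact flow e^{h(A+B)}
    return np.linalg.norm(method(h) - expm(h * (A + B)))

r_lie = local_err(lie, 1e-2) / local_err(lie, 5e-3)           # ~ 4
r_strang = local_err(strang, 1e-2) / local_err(strang, 5e-3)  # ~ 8
```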
The aim of exact splitting is the same as that of classical splittings. Nevertheless, we have to assume that
are nicer than because belong to some vector spaces .
This assumption is natural because, usually, the are nice due to a particular structure of each (e.g. nilpotent, diagonal…). Consequently, the idea of exact splitting consists in looking for as a product of operators of the form with , and in requiring to be exactly equal to . In other words, we are looking for a factorization of as a product of operators of the form with .
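In the simplest cases, such a factorization can be written in closed form. For instance, encoding the affine flow x' = ax + b by a 2x2 matrix (an illustrative choice of ours, in the spirit of the affine transport equations of Section 1), the coefficient of the nilpotent factor solves a scalar equation:

```python
import numpy as np
from scipy.linalg import expm

a, b, t = 0.7, -1.3, 0.5
M = np.array([[a, b], [0.0, 0.0]])    # encodes the affine field x' = a x + b
D = np.array([[a, 0.0], [0.0, 0.0]])  # diagonal part
N = np.array([[0.0, b], [0.0, 0.0]])  # nilpotent part

# requiring expm(t M) = expm(t D) @ expm(s N) and identifying the upper-right
# entries yields the exact coefficient s = (1 - e^{-t a}) / a
s = (1.0 - np.exp(-t * a)) / a
residual = np.linalg.norm(expm(t * D) @ expm(s * N) - expm(t * M))  # ~ 0
```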
Remark 2.
To avoid any possible confusion, note that does not denote the solution of a nonautonomous equation but the solution of at time .
Actually, the existence of an exact splitting can be seen as an inverse problem with respect to the classical backward error analysis of splitting methods. In the context of splitting methods, this analysis is realized through the Baker-Campbell-Hausdorff formula (see [20]). It states that
(3) 
where can be expanded in powers of and the operator associated with the power is obtained as a linear combination of Lie brackets of the . For example, we have
where . Note that in general the formula (3) and the expansion of have to be understood in the sense of formal series in . Nevertheless, if belong to a real Lie algebra of matrices, then these expansions converge when is small enough (see e.g. [20]).
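The first terms of this expansion can be checked numerically: the remainder of log(e^{hA} e^{hB}) after the terms h(A+B) + (h^2/2)[A,B] scales like h^3. A small check of ours, with arbitrary noncommuting matrices:

```python
import numpy as np
from scipy.linalg import expm, logm

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])
C = A @ B - B @ A  # the Lie bracket [A, B]

def remainder(h):
    # log(e^{hA} e^{hB}) minus the first two terms of the BCH expansion
    Z = logm(expm(h * A) @ expm(h * B)).real
    return np.linalg.norm(Z - h * (A + B) - h**2 / 2 * C)

ratio = remainder(1e-2) / remainder(5e-3)  # ~ 2^3 = 8
```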
Now if we consider a product of operators of the form with , , where the are some vector subspaces of the same real Lie algebra of matrices, then the Baker-Campbell-Hausdorff formula states that it is of the form
(4) 
where admits an expansion similar to the expansion of . Consequently, to get an exact splitting method we just have to design in order to cancel all the Lie brackets in the expansion of , i.e. we have to solve
(5) 
Since it can be shown that is smooth with respect to and , it is natural to try to solve (5) with the Implicit Function Theorem.
In this way, we prove the following theorem, for which we have many applications. From now on, for notational convenience, we write instead of .
Theorem 2.1.
Let be a positive integer and be some complementary subspaces of a real Lie algebra of matrices. If is such that
(6) 
then there exist and an analytic function such that and
(7) 
Remark 3.
Before proving this theorem, let us do some remarks.

[leftmargin=*]

In most of the applications, can be chosen as one of the . Consequently, assuming that , the notations of this theorem are consistent with the notations previously introduced in this section through the identifications , , , , , , where .

Since the proof of this theorem relies on the Implicit Function Theorem, the coefficients of the exact splitting (i.e. and ) can be efficiently computed by an iterative method. Unfortunately, presenting this method requires some notations introduced in the proof of Theorem 2.1. Consequently, it is presented just after the proof.

The assumption (6) of Theorem 2.1 is a bit too strong. The optimal assumption seems to be that (defined implicitly by (4)) belongs to the vector space defined in (6) for all and . Paying attention to the expansion of given by the Baker-Campbell-Hausdorff formula, it essentially means that we do not need to ask this space to contain Lie brackets of elements belonging to the same space . Note that such a generalization would be necessary to justify a priori the form of the exact splitting of Example 1.3 of [3] for the Kramers-Fokker-Planck operator.
Proof of Theorem 2.1.
Naturally, following the assumption of the theorem, we introduce the real Lie algebra of matrices, denoted and defined by
(8) 
Applying the Baker-Campbell-Hausdorff formula, we get a neighborhood of the origin in , denoted , and an analytic function such that for all real and all such that , we have
Consequently, we aim at solving the equation
(9) 
To solve this equation we have to introduce some notations. First, from the decomposition (8) of , we naturally deduce that there exist a complementary space to in , denoted , a subspace of , denoted , and a vector space isomorphism such that
(10) 
where are the canonical projections associated with the decomposition
(11) 
Then, let be defined by
(12) 
let be a neighborhood of in and let be such that
So, to solve (9), we are going to apply the Implicit Function Theorem in to the function
where
with
and
First, observe that, by construction of , if then is a solution of (9). Indeed, by construction, equation (9) can be rewritten as
Then observe that, by construction of , we have
Consequently, since is clearly an analytic function, to conclude the proof by applying the Implicit Function Theorem, we just have to prove that the partial differential of with respect to at is invertible. Indeed, using a natural matrix representation, this differential is
where is an explicit linear map depending on and . This matrix being triangular, it is clearly invertible and its inverse is
∎
Now, let us present an iterative method to determine the coefficients and given by Theorem 2.1.
Proposition 2.1.
Assume that the assumption of Theorem 2.1 is satisfied, let be defined by (10), be defined by (12) and be the projections canonically associated with the decomposition (11).
There exists , such that for all , the sequence is well defined by induction as follows:
Initially, we have and and for
(13) 
where
Furthermore, there exist independent of and such that for all and all , we have
Note that, here, as usual, the function denotes the principal branch of the matrix logarithm. We have chosen to present an iterative method that is as elementary as possible. However, computing the Lie brackets associated with the operators and , it would be possible to get a natural iterative method whose convergence rate would be instead of , with a linear function of .
Proof of Proposition 2.1.
It follows from the proof of Theorem 2.1 that the sequence is defined by a relation of the kind
where, for all , is a smooth function on the same ball of center , and is smooth. Consequently, to prove that the sequence is well defined if is small enough and that has a fixed point, we just have to prove that is a contraction mapping for a well-chosen norm. Indeed, it follows from the proof of Theorem 2.1 that the differential of at , denoted , is
Thus, since it is nilpotent, applying, for example, Lemma 5.6.10 of [24], we get a matrix norm such that
Now, since is smooth on and , we deduce from the mean value inequality that there exist and a ball , for the norm , centered at , such that for all , is Lipschitz on , and . Consequently, has a unique fixed point in and the sequence converges to this fixed point at the rate .
Finally, we conclude this proof by observing that, by construction and by continuity, there exists such that for all , belongs to and is a fixed point of . ∎
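On a toy example, this kind of fixed-point iteration can be made completely explicit. The sketch below is ours: it recovers the coefficient of the nilpotent factor in the exact splitting of the 2x2 encoding of the affine flow x' = ax + b, reading off at each step the nilpotent component of the logarithm of the residual (here the iteration even converges in one step, since the correction map is exactly nilpotent):

```python
import numpy as np
from scipy.linalg import expm, logm

a, b, t = 0.7, -1.3, 0.5
M = np.array([[a, b], [0.0, 0.0]])    # affine field x' = a x + b
D = np.array([[a, 0.0], [0.0, 0.0]])  # diagonal direction
N = np.array([[0.0, b], [0.0, 0.0]])  # nilpotent direction

# look for s with expm(t D) @ expm(s N) = expm(t M); update s by reading
# off the N-component of the logarithm of the residual
s = 0.0
for _ in range(5):
    R = logm(expm(-s * N) @ expm(-t * D) @ expm(t * M)).real
    s += R[0, 1] / b
residual = np.linalg.norm(expm(t * D) @ expm(s * N) - expm(t * M))  # ~ 0
```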
3. Exact classical-quantum correspondence
As we have seen in Section 2, exact splittings can be designed (using, for example, Theorem 2.1) to solve linear ordinary differential equations. We aim at designing exact splittings to solve some partial differential equations. So, first, it is natural to focus on linear transport equations. Indeed, the characteristics formula
(14) 
where and is a square matrix of size , provides a natural way to transform an exact splitting at the level of the linear ordinary differential equations into an exact splitting at the level of the linear transport equations.
For example, we have the following exact splitting for the two-dimensional rotations
(15) 
where . At the level of the associated transport equation, this formula can be written
This factorization is very useful to compute rotations since, using semi-Lagrangian methods, it only requires one-dimensional interpolations instead of the two-dimensional interpolations one could expect. The formula is well known in image processing and has been used for decades to rotate images (see e.g.
[26]). Recently, two papers have been written on the applications of this decomposition to the numerical resolution of kinetic equations (see [4],[10]). This factorization has also been extended to compute higher-dimensional rotations with only one-dimensional interpolations (see e.g. [16],[31]). Note that in Subsection 4.1, we extend this kind of decomposition to any dimension and to more general transforms (including rotations and dilatations).
The inhomogeneous quadratic differential operators and the semigroups they generate enjoy some strong and specific properties providing a more general way to transform an exact splitting at the level of linear ordinary differential equations into an exact splitting at the level of linear transport equations.
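Coming back to the rotation formula (15), the underlying three-shear factorization can be verified directly at the matrix level. The sketch below uses the usual decomposition with two x-shears of slope -tan(theta/2) and one y-shear of slope sin(theta) (our normalization, which may differ from the one in (15) by signs or ordering):

```python
import numpy as np

theta = 0.83  # arbitrary angle
c, s = np.cos(theta), np.sin(theta)
rotation = np.array([[c, -s], [s, c]])

shear_x = np.array([[1.0, -np.tan(theta / 2)], [0.0, 1.0]])  # shear along x
shear_y = np.array([[1.0, 0.0], [s, 1.0]])                   # shear along y
residual = np.linalg.norm(shear_x @ shear_y @ shear_x - rotation)  # ~ 0
```

At the level of the transport equation, each shear factor only moves points along one coordinate direction, which is why one-dimensional interpolations suffice.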
First, there is the Mehler formula (see Thm 4.2 in [22]), stating that if is a quadratic form on whose real part is nonnegative on and if is small enough, then
where and is a quadratic form on , both of them being given by explicit formulas in terms of and . This formula provides a representation of the semigroup as an oscillatory integral and can be really useful to describe its properties. For example, in [13] it is used to study the dispersion of Schrödinger equations with attractive or repulsive confining potentials, whereas in [1] it is used to study the regularizing effects of the semigroup.
The second crucial property concerns the Lie bracket. Usually, the Lie bracket of two pseudodifferential operators is also a pseudodifferential operator. Its symbol is given by the Moyal bracket of the symbols. In many applications, the Moyal bracket admits a natural expansion whose leading part (in some specific sense depending on the problem) is given by the Poisson bracket of the symbols (see e.g. [23]). However, in the particular case of inhomogeneous quadratic operators, all the higher order terms vanish and we have
(16) 
where are some polynomials of degree or less on .
The formula (16) is especially useful for performing changes of unknown in order to put some operators in normal form (see e.g. [8] for an application to reducibility, which can also be seen as a factorization of exponentials). Note that, applying formally the Baker-Campbell-Hausdorff formula at the level of the operators, (16) suggests a way to transform an exact splitting at the level of linear ordinary differential equations into an exact splitting at the level of the semigroups generated by inhomogeneous quadratic forms. We refer the reader to the beginning of the Section of [3]
for details about this heuristic. It is used in
[17],[7] and [6] to design some exact splitting methods to solve linear Schrödinger equations with harmonic potential and rotating terms. The result suggested by this formal computation is made rigorous in Proposition 3.2 below, and relies on Fourier integral operators.
The third property concerns the representation of the semigroups generated by inhomogeneous quadratic differential operators as Fourier integral operators. It is the most relevant for us to realize exact splittings. However, to present this notion, we need to introduce some basic notations and associated properties.
Definition 2.
is a nonnegative complex symplectic linear bijection on , and we denote , if and
Note that this set is naturally equipped with a monoid structure (i.e. it is stable under products and the identity belongs to ).
Definition 3.
If is a quadratic form on then its Hamiltonian flow at time , denoted , is the flow at time of the linear ordinary differential equation
(17) 
where .
Note that, in particular, the linear ordinary differential equation (17) can be solved using an exponential and thus we have
(18) 
where is the matrix of . The following proposition summarizes some elementary properties of these Hamiltonian flows that will be used throughout this paper.
Proposition 3.1.
Let be some quadratic forms on and then the following properties hold

, ,

,

, .
In this context, the second property is usually called Noether’s theorem because it is equivalent to having for all . Proofs of these properties can be found, for example, in a nonlinear context, in [20].
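For a real quadratic form, the symplecticity of the Hamiltonian flow can be observed numerically through (18). The sketch below is ours and assumes the standard conventions: S is the Hessian matrix of the quadratic form q(x, xi) = (x^2 + xi^2)/2 and J is the canonical symplectic matrix, so the flow is expm(t J S):

```python
import numpy as np
from scipy.linalg import expm

J = np.array([[0.0, 1.0], [-1.0, 0.0]])  # canonical symplectic matrix
S = np.eye(2)  # Hessian of q(x, xi) = (x^2 + xi^2) / 2
t = 0.37
phi = expm(t * J @ S)  # Hamiltonian flow: here a rotation of angle t

# symplecticity: phi^T J phi = J (in particular det(phi) = 1)
err_symp = np.linalg.norm(phi.T @ J @ phi - J)  # ~ 0
```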
The following classical lemma (whose proof is recalled in Subsection 5.2 of the Appendix) naturally links the two previous definitions.
Lemma 3.1.
If is a quadratic form on such that is nonnegative on then
The following theorem, proved by Hörmander in [22] (Thm 5.12 and Prop 5.9), is the main tool we use to realize exact splittings.
Theorem 3.1 (Hörmander [22]).
There exists a monoid morphism
where and is the unit ball of , the space of the bounded operators acting on . Furthermore, if is a quadratic form on such that is nonnegative on then
In [22], is defined through an explicit but heavy formula that is not relevant for us here. An operator of the form with is called a Fourier integral operator. This provides a natural way to transform an exact splitting at the level of linear ordinary differential equations into an exact splitting at the level of the semigroups generated by quadratic differential operators (up to a continuity argument to remove the sign uncertainty).
To the best of our knowledge, the idea of using Fourier integral operators to get an exact splitting was introduced by Paul Alphonse and the author in [3]. We aimed at characterizing the regularizing effects of the semigroups generated by quadratic differential operators. Our splitting provides a quite explicit representation of the polar decomposition of this semigroup. Note that it is the decomposition we would get by applying Theorem 2.1 and Theorem 3.1 with , and .
In order to give a corollary of Theorem 3.1 adapted to get exact splitting for semigroups generated by inhomogeneous quadratic differential operators, we have to introduce a last elementary notation. If is a polynomial of degree or less that we write in coordinates as
where , , and , then is the complex quadratic form defined by
Proposition 3.2.
Let be some polynomials of degree or less depending continuously on for some , whose real part is uniformly bounded from below on and satisfying
(19) 
then we have
(20) 
The proof is given at the end of this section. Let us mention that this proposition is also a corollary of Theorem 2.3 of [30]. Note that if the polynomials are homogeneous (i.e. if they are quadratic forms) then the can be removed. Furthermore, as a corollary of the proof, if we do not assume that the polynomials are uniformly bounded from below and depend continuously on , then (20) is still valid up to a sign uncertainty.
It may also be interesting to have exact splittings to compute evolution operators generated by nonautonomous inhomogeneous quadratic differential operators (as is done for the nonautonomous linear magnetic Schrödinger equations in [6]). Here, for conciseness, we choose not to consider the nonautonomous case. Nevertheless, let us mention that it would be possible to generalize Proposition 3.2 to deal with nonautonomous equations using the generalization of Theorem 3.1 proven in [27]. With such a generalization, the exponentials of matrices naturally become the solutions of nonautonomous linear ordinary differential equations (which can be studied similarly using Magnus expansions, see [20]).
In order to illustrate Proposition 3.2, let us give an elementary application (Section 4 being devoted to more sophisticated ones). Indeed, formula (15), used to compute rotations, can clearly be extended analytically for , where . Since, up to a transposition, this formula can be written as