# Lanczos-like algorithm for the time-ordered exponential: The ∗-inverse problem

The time-ordered exponential of a time-dependent matrix A(t) is defined as the function of A(t) that solves the first-order system of coupled linear differential equations with non-constant coefficients encoded in A(t). The authors recently proposed the first Lanczos-like algorithm capable of evaluating this function. This algorithm relies on inverses of time-dependent functions with respect to a non-commutative convolution-like product, denoted ∗. Yet, the existence of such inverses, crucial to avoid algorithmic breakdowns, still needed to be proved. Here we constructively prove that ∗-inverses exist for all non-identically null, smooth, separable functions of two variables. As a corollary, we partially solve the Green's function inverse problem which, given a distribution G, asks for the differential operator whose fundamental solution is G. Our results are abundantly illustrated by examples.


## 1. Introduction: Time-ordered Exponential and ∗-Lanczos Algorithm

### 1.1. Context

Consider the matrix A(t′) depending on the real-time variable t′ ∈ I ⊆ ℝ. The time-ordered exponential U(t′,t) of A is defined as the unique solution of the system of coupled linear differential equations with non-constant coefficients

 (1.1)   A(t′) U(t′,t) = (d/dt′) U(t′,t),   U(t,t) = Id,   for all t ∈ I,

with t′, t ∈ I and Id the identity matrix. Under the assumption that A commutes with itself at all times, i.e., A(τ₁)A(τ₂) = A(τ₂)A(τ₁) for all τ₁, τ₂ ∈ I, the time-ordered exponential is an ordinary matrix exponential, U(t′,t) = exp(∫_t^{t′} A(τ) dτ). In general, however, U has no known explicit form in terms of A. In spite of its widespread applications throughout physics, mathematics, and engineering, the time-ordered exponential function is still very challenging to calculate. Recently P.-L. G. and S. P. proposed the first Lanczos-like algorithm [5] capable of evaluating wᴴ U(t′,t) v for any two vectors w and v with wᴴ v ≠ 0, where wᴴ is the Hermitian transpose of w. The algorithm inherently relies on a non-commutative convolution-like product, denoted by ∗, between time-dependent functions and necessitates the calculation of inverses with respect to this product. The purpose of the present contribution is to constructively establish the existence of these inverses. More generally, these results answer the Green's function inverse problem: namely, given a function G of two variables, what is the differential operator whose fundamental solution is G? Here, our results are valid even when G is a smooth and separable function of two variables rather than depending solely on the difference t′ − t, a simpler case for which the ∗-product reduces to a convolution and the solution is obtained from standard Fourier analysis.

Before these results can be presented, we recall the definition and properties of the ∗-product utilized.

### 1.2. ∗-Product

Let t′ and t be time variables in an interval I ⊆ ℝ. Let f₁(t′,t) and f₂(t′,t) be time-dependent generalized functions. We define the convolution-like product between f₂ and f₁ as

 (1.2)   (f₂ ∗ f₁)(t′,t) := ∫_{−∞}^{+∞} f₂(t′,τ) f₁(τ,t) dτ.

From this definition, we find the identity element with respect to the ∗-product to be the Dirac delta distribution, 1∗ := δ(t′−t). Observe that the ∗-product is not, in general, a convolution but may be so when both f₁ and f₂ depend only on the difference t′ − t of their arguments.

As a case of special interest for the ∗-Lanczos algorithm, consider the situation where f₁(t′,t) = ~f₁(t′,t)Θ(t′−t) and f₂(t′,t) = ~f₂(t′,t)Θ(t′−t), where Θ stands for the Heaviside theta function (with the convention Θ(0) = 1). Here and in the rest of the paper, the tilde indicates that ~f is an ordinary function. Then the ∗-product between f₂ and f₁ simplifies to

 (f₂ ∗ f₁)(t′,t) = ∫_{−∞}^{+∞} ~f₂(t′,τ) ~f₁(τ,t) Θ(t′−τ) Θ(τ−t) dτ = Θ(t′−t) ∫_t^{t′} ~f₂(t′,τ) ~f₁(τ,t) dτ,

which makes calculations involving such functions easier to carry out.
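On a uniform grid, the ∗-product of such Θ-cut functions becomes an ordinary product of lower-triangular matrices. The following is a minimal numerical sketch of this correspondence (the grid, step size, and tolerance are our own illustrative choices, not taken from [5]), checked on the identity Θ ∗ Θ = (t′ − t)Θ(t′ − t):

```python
import numpy as np

# Uniform grid on [0, 1]; a function ~f(t', t)Θ(t' - t) is sampled as a
# lower-triangular matrix F[i, j] = ~f(t_i, t_j) for i >= j.
N = 100
h = 1.0 / N
t = np.arange(N) * h

def star(F2, F1, h=h):
    """Discrete ∗-product: the τ-integral becomes a Riemann sum,
    i.e. an ordinary matrix product scaled by the step h."""
    return h * (F2 @ F1)

# Θ(t' - t) is the lower-triangular matrix of ones (Θ(0) = 1 on the diagonal).
Theta = np.tril(np.ones((N, N)))

# Θ ∗ Θ should approximate (t' - t)Θ(t' - t) up to quadrature error.
lhs = star(Theta, Theta)
rhs = np.tril(t[:, None] - t[None, :])
err = np.max(np.abs(lhs - rhs))
print(err)  # O(h): the left-endpoint rule overshoots by exactly h here
```

Refining the grid makes the discrete ∗-product converge to the integral at first order in h.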

The ∗-product extends directly to time-dependent matrices by using the ordinary matrix product between the integrands in (1.2) (see [5] for more details). It is also well defined for functions that depend on fewer than two time variables. Indeed, consider a generalized function f₃(t′,t) = f₃(t′) of a single time variable; then

 (f₃ ∗ f₁)(t′,t) = f₃(t′) ∫_{−∞}^{+∞} f₁(τ,t) dτ,   (f₁ ∗ f₃)(t′,t) = ∫_{−∞}^{+∞} f₁(t′,τ) f₃(τ) dτ,

where f₁ is defined as before. Hence the time variable of f₃ is treated as the left time variable of a doubly time-dependent generalized function. This observation extends straightforwardly to constant functions.

### 1.3. ∗-Lanczos algorithm

As shown in [4], if ~A(t′) is a time-dependent matrix with bounded entries for every t′ ∈ I, then the related time-ordered exponential can be expressed as

 (1.3)   U(t′,t) = Θ(t′−t) ∫_t^{t′} R∗(~A)(τ,t) dτ.

Here R∗(~A) is the ∗-resolvent of ~A(t′,t) := ~A(t′)Θ(t′−t), defined as

 R∗(~A) := (Id 1∗ − ~A)^{∗−1} = Id 1∗ + Σ_{k>0} ~A^{∗k}.
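For a scalar, constant ~A(t′) = a the time-ordered exponential reduces to e^{a(t′−t)}, which gives a simple numerical sanity check of (1.3): discretizing the ∗-resolvent with an ordinary matrix inverse. The grid, step, and tolerance below are our own illustrative assumptions, not taken from [4, 5]:

```python
import numpy as np

# Scalar, constant ~A(t') = a, so U(t', 0) = e^{a t'}. Functions
# ~f(t', t)Θ(t' - t) are sampled as lower-triangular matrices, the
# ∗-product becomes h * (matrix product), and 1∗ = δ becomes I/h.
N, a = 1000, 1.0
h = 1.0 / N

A = a * np.tril(np.ones((N, N)))       # ~A(t')Θ(t' - t) on the grid

# ∗-resolvent R∗ = (1∗ - ~A)^{∗-1}: solve h * (I/h - A) @ R = I/h,
# i.e. R = (I - h A)^{-1} / h.
R = np.linalg.inv(np.eye(N) - h * A) / h

# Eq. (1.3): U(T, 0) = ∫_0^T R∗(τ, 0) dτ ≈ h * (sum of the first column).
U = h * R[:, 0].sum()
print(U, np.exp(a))  # the two values agree up to O(h)
```

The O(h) error shrinks under grid refinement; this is only a consistency check, not an efficient evaluation scheme.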

Now we can recall the results of [5] pertaining to the time-ordered exponential. Let A(t′,t) := ~A(t′)Θ(t′−t) with ~A(t′) a time-dependent matrix. The ∗-Lanczos algorithm of Table 1 produces a sequence of tridiagonal matrices Tn, n = 1, 2, …, of the form

 (1.4)   Tn := ⎡ α₀     1∗                      ⎤
               ⎢ β₁     α₁     ⋱                ⎥
               ⎢        ⋱      ⋱      1∗        ⎥
               ⎣               β_{n−1} α_{n−1}  ⎦ ,

and such that the matching moment property is achieved:

###### Theorem 1.1 ([5]).

Let w, v, and Tn be as described above; then

 (1.5)   wᴴ (A^{∗j}) v = e₁ᴴ (Tn^{∗j}) e₁,   for j = 0, …, 2n−1.

In particular, for n = N, with N the dimension of A, we have the exact expression

 wᴴ U(t′,t) v = Θ(t′−t) ∫_t^{t′} R∗(Tn)_{1,1}(τ,t) dτ,

while for n < N, the right-hand side yields an approximation to the time-ordered exponential. The method of path-sum [4] then gives explicitly

 (1.6)   R∗(Tn)_{1,1}(t′,t) = (1∗ − α₀ − (1∗ − α₁ − (1∗ − ⋯)^{∗−1} ∗ β₂)^{∗−1} ∗ β₁)^{∗−1}.

The αn and βn appearing in the matrices Tn are produced by the ∗-Lanczos procedure through recurrence relations. A crucial step in the algorithm is the ∗-inversion of the βn, i.e., the calculation of a distribution βn^{∗−1} such that βn ∗ βn^{∗−1} = βn^{∗−1} ∗ βn = 1∗. The paper [5] assumed the existence of such ∗-inverses. However, if a βn^{∗−1} fails to exist, then the algorithm suffers a breakdown.

Under the assumption that all entries of the input matrix ~A are smooth functions, we conjectured in [5] that all the coefficients αn and βn in the ∗-Lanczos algorithm are of the form αn(t′,t) = ~αn(t′,t)Θ(t′−t) and βn(t′,t) = ~βn(t′,t)Θ(t′−t), with ~αn and ~βn separable functions (see definition in §2) smooth in both time variables. This conjecture is justified not only by our experiments but also by observing that the set of the separable functions smooth in both t′ and t is closed under ∗-product, summation, and differentiation. In spite of these encouraging observations, proving the conjecture is surprisingly difficult, as nothing a priori precludes the αn and βn coefficients produced by the ∗-Lanczos algorithm from being arbitrary distributions. Nonetheless, under the conjecture and its assumptions, we prove here in a constructive way that the algorithmic breakdowns due to βn^{∗−1} failing to exist cannot happen unless ~βn is identically null. More generally, we show that the ∗-inverse of a function f(t′,t) = ~f(t′,t)Θ(t′−t) can be obtained when ~f is smooth, not identically null, and separable. Note that here and later, the existence of a ∗-inverse means that it exists almost everywhere in I.

The rest of this article is organized as follows: in §2, we begin by recalling necessary definitions and properties of separable functions and distributions. In §2.1, we give the ∗-inverses of functions of a single variable. We then proceed in §2.2 with the ∗-inverses of all functions that are polynomials in at least one variable. Encouraged by the method underlying these results, we generalize it to construct the ∗-inverse of any piecewise smooth separable function in §2.3. Finally, in §3, we present the relation between our results and the Green's function inverse problem.

## 2. Existence and mathematical expression of ∗-inverses

The calculation of ∗-inverses of functions carries the gist of the difficulty inherent in obtaining explicit expressions for time-ordered exponentials. In general, given an arbitrary ordinary function ~f and barring any further assumption, the ∗-inverse of ~f(t′,t)Θ(t′−t) cannot be given explicitly. (Practical numerical questions pertaining to the behavior of ∗-inverses under time discretization will be discussed in detail elsewhere. As observed in [5], a time-discretized ∗-inverse is always computable using an ordinary matrix inverse.) In this section, we show that the ∗-inverse is indeed accessible from the solution of an ordinary linear differential equation provided that ~f is a separable function that is smooth in both time variables and not identically null. A function ~f(t′,t) is separable if and only if there exist ordinary functions ~a_i and ~b_i with

 ~f(t′,t) = Σ_{i=1}^{k} ~a_i(t′) ~b_i(t).
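Separability can be probed numerically: sampling ~f on a grid yields a matrix whose rank is at most the number k of terms in the sum. A small sketch (the example function and grids are our own choices):

```python
import numpy as np

# ~f(t', t) = sin(t' + t) = sin(t')cos(t) + cos(t')sin(t):
# a separable function with k = 2 terms.
tp = np.linspace(0.0, 1.0, 50)   # t' samples
t = np.linspace(0.0, 1.0, 60)    # t  samples
F = np.sin(tp[:, None] + t[None, :])

rank = np.linalg.matrix_rank(F)
print(rank)  # 2: the numerical rank equals k
```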

We begin by recalling important properties of the Dirac delta distribution δ(t′−t) and its derivatives δ^(j)(t′−t). The Dirac delta derivatives are characterized by the relation expounded by Schwartz [10], ∫ δ^(j)(τ−t) φ(τ) dτ = (−1)^j φ^(j)(t) for any smooth test function φ. From this we get that left ∗-multiplication by δ^(j) acts as a derivative operator

 (δ^(j) ∗ f)(t′,t) = ∫_{−∞}^{+∞} δ^(j)(t′−τ) f(τ,t) dτ = −∫_{+∞}^{−∞} δ^(j)(q) f(t′−q,t) dq = f^(j,0)(t′,t),

while we have (f ∗ δ^(j))(t′,t) = (−1)^j f^(0,j)(t′,t). The notation f^(i,j)(t′,t) stands for the ith t′-derivative and jth t-derivative of f evaluated at (t′,t). From now on, we omit the arguments of the Heaviside functions and Dirac deltas when necessary to alleviate the equations.

For functions of the form f(t′,t) = ~f(t′,t)Θ(t′−t), the derivatives resulting from the ∗-action of δ^(j) are taken in the sense of distributions:

 (2.1a)   δ^(j) ∗ f(t′,t) = ~f^(j,0)(t′,t) Θ + ~f^(j−1,0)(t,t) δ + ⋯ + ~f(t,t) δ^(j−1),
 (2.1b)   f(t′,t) ∗ δ^(j) = (−1)^j ( ~f^(0,j)(t′,t) Θ + ~f^(0,j−1)(t′,t′) δ + ⋯ + ~f(t′,t′) δ^(j−1) );

see [10, Chapter 2, § 2]. Finally, we note the following identities between distributions for j ∈ ℕ,

 (2.2a)   ~f(t′) δ^(j)(t′−t) = (−1)^j ( ~f(t) δ(t′−t) )^(0,j),
 (2.2b)   ~f(t) δ^(j)(t′−t) = ( ~f(t′) δ(t′−t) )^(j,0),

where ~f is an ordinary function.

### 2.1. Functions of a single time variable

The ∗-inverses of functions of a single time variable times a Heaviside function are easy to find explicitly:

###### Proposition 2.1.

Let a(t′,t) = ~a(t′)Θ(t′−t) and b(t′,t) = ~b(t)Θ(t′−t) be so that ~a and ~b are differentiable and not identically null over I. Then

 a^{∗−1}(t′,t) = ∂/∂t′ ( δ(t′−t) / ~a(t′) ),   b^{∗−1}(t′,t) = −∂/∂t ( δ(t′−t) / ~b(t) ).
###### Proof.

Since ~a is an ordinary function and a(t′,t) = ~a(t′)Θ(t′−t), Eqs. (2.1) and [10, Chapter 2, § 2] give

 (a ∗ δ′)(t′,t) = ~a(t′) δ(t′−t),

as ~a^(0,1) = 0. We deduce that the ∗-inverse of a is the solution of the equation a ∗ δ′ ∗ x = δ(t′−t), i.e., ~a(t′)δ(t′−t) ∗ x = δ(t′−t), whence x = δ(t′−t)/~a(t′), from which we get the expression

 a^{∗−1}(t′,t) = δ′(t′−t) ∗ ( δ(t′−t) / ~a(t′) ).

An analogous proof yields the inverse b^{∗−1}. ∎

Proposition 2.1 is particularly useful to determine the ∗-inverse of products of functions of a single time variable such as those appearing in [5]. We give two detailed examples of this below:

###### Example 2.1.

Let us determine the ∗-inverse of (t′−t)Θ(t′−t). To this end, we remark that (t′−t)Θ(t′−t) = (Θ ∗ Θ)(t′,t) and thus

 ((t′−t)Θ)^{∗−1} = Θ^{∗−1} ∗ Θ^{∗−1}.

Since Θ(t′−t) = 1 × Θ(t′−t), the ∗-inverse of Θ is immediately provided by Proposition 2.1 as Θ^{∗−1} = δ′(t′−t). Then ((t′−t)Θ)^{∗−1} = δ′ ∗ δ′, whose ∗-action on a test function f is

 (Θ^{∗−1} ∗ Θ^{∗−1} ∗ f)(t′,t) = f^(2,0)(t′,t).
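The identity Θ^{∗−1} = δ′ can be cross-checked numerically: as noted above, a time-discretized ∗-inverse is an ordinary matrix inverse, and inverting the discretized Θ recovers the finite-difference stencil of δ′ (the grid is our own illustrative choice):

```python
import numpy as np

N, h = 50, 0.02
Theta = np.tril(np.ones((N, N)))  # Θ(t' - t) on a uniform grid

# Discrete ∗-inverse: f ∗ g = δ becomes h F G = I/h, hence G = F^{-1}/h².
G = np.linalg.inv(Theta) / h**2

# Theta^{-1} is bidiagonal (+1 on the diagonal, -1 on the first
# subdiagonal), so G is the forward-difference stencil (I - shift)/h²,
# i.e. precisely a discretization of δ'(t' - t).
expected = (np.eye(N) - np.eye(N, k=-1)) / h**2
print(np.allclose(G, expected))  # True
```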
###### Example 2.2.

Let us find the left and right actions on test functions of the ∗-inverse of β = b₂ ∗ b₁. We note that b₁(t′,t) = 2(cos(t′)+1)Θ(t′−t) and b₂(t′,t) = Θ(t′−t). Hence by Proposition 2.1, the left action of the inverse on a test function f is

 β^{∗−1} ∗ f = b₁^{∗−1} ∗ b₂^{∗−1} ∗ f = ∂/∂t′ [ (1/(2(cos(t′)+1))) ∂f(t′,t)/∂t′ ]
             = ( sin(t′)/(2(cos(t′)+1)²) ) ∂f(t′,t)/∂t′ + ( 1/(2(cos(t′)+1)) ) ∂²f(t′,t)/∂t′²,

and its right action is

 f ∗ β^{∗−1} = f ∗ b₁^{∗−1} ∗ b₂^{∗−1} = ∂/∂t [ (1/(2(cos(t)+1))) ∂f(t′,t)/∂t ]
             = ( sin(t)/(2(cos(t)+1)²) ) ∂f(t′,t)/∂t + ( 1/(2(cos(t)+1)) ) ∂²f(t′,t)/∂t².
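The expanded left-action formula above is just the product rule; it can be confirmed symbolically with sympy, taking f as a generic smooth test function (symbol names are ours):

```python
import sympy as sp

tp, t = sp.symbols("tp t")          # tp stands for t'
f = sp.Function("f")(tp, t)

# Left action of β^{∗-1}: ∂/∂t' [ (1/(2(cos t' + 1))) ∂f/∂t' ].
lhs = sp.diff(sp.diff(f, tp) / (2 * (sp.cos(tp) + 1)), tp)

# Expanded form claimed in the example.
rhs = (sp.sin(tp) / (2 * (sp.cos(tp) + 1) ** 2) * sp.diff(f, tp)
       + sp.diff(f, tp, 2) / (2 * (sp.cos(tp) + 1)))

print(sp.simplify(lhs - rhs))  # 0
```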

### 2.2. ∗-inverses of polynomials

The method employed in the proof of Proposition 2.1, relying on differential equations, generalizes straightforwardly to polynomials in at least one time variable, here taken to be t′. An analogous result can be given for functions that are polynomials in t.

###### Proposition 2.2.

Let p(t′,t) = ~p(t′,t)Θ(t′−t) be so that ~p is a polynomial of degree k in t′ and is smooth in t. If ~p(t′,t′) is not identically null over I, then

 p^{∗−1}(t′,t) = x(t′,t) ∗ δ^(k+1)(t′−t),

where x(t′,t) = ~x(t′,t)Θ(t′−t) and ~x is the solution of the linear homogeneous ordinary differential equation in t

 Σ_{j=0}^{k} (−1)^j ~p^(k−j,0)(t,t) ~x^(0,j)(t′,t) = 0,

with the boundary conditions

 ~x^(0,k−1)(t′,t′) = (−1)^k / ~p(t′,t′),   ~x^(0,k−2)(t′,t′) = 0,   …,   ~x(t′,t′) = 0.
###### Proof.

Observe that p is a piecewise smooth function and, as a function of t′, it has a discontinuity located at t′ = t. Since, furthermore, ~p is of degree k in t′, Eq. (2.1a) gives

 (δ^(k+1) ∗ p)(t′,t) = Σ_{j=0}^{k} ~p^(k−j,0)(t,t) δ^(j)(t′−t).

Hence p^{∗−1} = x ∗ δ^(k+1), where x(t′,t) is the generalized function satisfying

 (2.3)   x(t′,t) ∗ ( Σ_{j=0}^{k} ~p^(k−j,0)(t,t) δ^(j)(t′−t) ) = δ(t′−t).

Now let us assume that the solution takes the form x(t′,t) = ~x(t′,t)Θ(t′−t) with ~x a smooth function of t. Then we get, for 0 ≤ j ≤ k,

 x(t′,t) ∗ ~p^(k−j,0)(t,t) δ^(j) = ~p^(k−j,0)(t,t) (−1)^j ( ~x^(0,j)(t′,t) Θ + Σ_{ℓ=0}^{j−1} ~x^(0,j−1−ℓ)(t′,t′) δ^(ℓ) ).

Thus, collecting the coefficients of Θ, δ, and the δ^(ℓ), Eq. (2.3) can be rewritten as the system:

 Σ_{j=0}^{k} (−1)^j ~p^(k−j,0)(t,t) ~x^(0,j)(t′,t) = 0,
 Σ_{j=1}^{k} (−1)^j ~p^(k−j,0)(t,t) ~x^(0,j−1)(t′,t′) = 1,
 Σ_{j=ℓ+1}^{k} (−1)^j ~p^(k−j,0)(t,t) ~x^(0,j−1−ℓ)(t′,t′) = 0,   ℓ = 1, …, k−1.

As ~p(t,t) is not identically null, the last equations imply ~x^(0,ℓ)(t′,t′) = 0 for ℓ = 0, …, k−2. Moreover, since by Eq. (2.2) we have ~p(t,t)δ(t′−t) = ~p(t′,t′)δ(t′−t), the second equation becomes ~x^(0,k−1)(t′,t′) = (−1)^k/~p(t′,t′). Since the set of zeros of ~p(t′,t′) is made of isolated points, the ordinary differential equation above has a solution almost everywhere (more precisely, ~x is defined wherever ~p(t′,t′) ≠ 0). Thus assuming x to be of the form ~x(t′,t)Θ(t′−t) with ~x smooth in t is a consistent choice, which concludes the proof. ∎

###### Remark 2.1.

If ~p(t′,t′) is identically null over I, then

 δ′(t′−t) ∗ p(t′,t) = ~p^(1,0)(t′,t) Θ(t′−t),

since p is continuous at t′ = t. Hence we can apply Proposition 2.2 to ~p^(1,0)(t′,t)Θ(t′−t). In the further case in which all the ~p^(j,0)(t′,t′) are identically null for j = 0, …, k−1 and ~p^(k,0) is a constant α ≠ 0, the ∗-inverse is obtained noting that

 δ^(k+1)(t′−t) ∗ p(t′,t) = α δ(t′−t).

These considerations show that the condition ~p(t′,t′) ≢ 0 is not necessary for p^{∗−1} to exist. Rather, the condition is that ~p itself must not be identically zero.

###### Example 2.3.

Let us determine the ∗-inverse of the polynomial p(t′,t) = (t′−2t)Θ(t′−t). Following Proposition 2.2, we have p^{∗−1} = x ∗ δ′′, where x(t′,t) = ~x(t′,t)Θ(t′−t) and ~x solves

 ~x(t′,t) + t ~x^(0,1)(t′,t) = 0,   ~x(t′,t′) = 1/t′.

This gives ~x(t′,t) = 1/t and thus

 p^{∗−1}(t′,t) = (1/t)Θ ∗ δ′′ = (2/t³)Θ − (1/t′²)δ + (1/t′)δ′.

We can now verify that this works as expected

 p^{∗−1} ∗ p = (1/t)Θ ∗ δ′′ ∗ (t′−2t)Θ = (1/t)Θ ∗ ((t′−2t)Θ)^(2,0)
            = (1/t)Θ ∗ (δ − t δ′) = (1/t)Θ − (−1) t ( −(1/t²)Θ + (1/t′)δ )
            = (t/t′) δ = δ,

where the last equality follows by virtue of Eq. (2.2). Now

 p ∗ p^{∗−1} = (t′−2t)Θ ∗ (1/t)Θ ∗ δ′′
            = −(t′−t)Θ ∗ δ′′ = −(−1)² ((t′−t)Θ)^(0,2)
            = −(0 − δ + 0·δ′) = δ.
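The auxiliary function of this example can be double-checked symbolically: ~x(t′,t) = 1/t satisfies both the differential equation and the boundary condition stated above (a sympy sketch; symbol names are ours):

```python
import sympy as sp

tp, t = sp.symbols("tp t", positive=True)  # tp stands for t'

# Candidate solution from the example: ~x(t', t) = 1/t.
x = 1 / t

# ODE of Proposition 2.2 for p = (t' - 2t)Θ:  ~x + t ∂~x/∂t = 0.
resid = sp.simplify(x + t * sp.diff(x, t))
print(resid)          # 0

# Boundary condition ~x(t', t') = 1/t'.
bc = x.subs(t, tp)
print(bc)             # 1/tp
```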

A technique similar to the one used in the proof of Proposition 2.2 can be applied to a more general class of functions. For instance, whenever ∗-multiplication by some δ^(k) leads to an expression of the form

 δ^(k) ∗ f(t′,t) = ~h(t) f(t′,t) + g(t′,t),

the expression can be rewritten as

 (δ^(k) − ~h(t) δ) ∗ f(t′,t) = g(t′,t).

Then we can go on with a further combination of differentiations until there is no Heaviside function left on the right-hand side of the above equality. In particular, such a technique can be used when dealing with commonly encountered exponential or trigonometric functions.

### 2.3. ∗-inverses of piecewise smooth separable functions

The strategy used in the proof of Proposition 2.1 can be extended to give ∗-inverses in the much more general case of functions f(t′,t) = ~f(t′,t)Θ(t′−t) whose ~f is separable and piecewise smooth in both time variables over the interval I.

###### Theorem 2.1.

Consider a function f(t′,t) = ~f(t′,t)Θ(t′−t) with ~f a separable function smooth in t′ and t, and so that ~f is not identically null. Assume that there exists a distribution L(t′,t) = Σ_{j=1}^{k+1} ~g_j(t′) δ^(j)(t′−t), k ≥ 0, with ~g_j smooth functions of a single time variable, such that

 (2.4)   L(t′,t) ∗ ~f(t′,t) = 0.

Then, if k ≥ 1, the ∗-inverse of f is

 f^{∗−1} = ~r_{−1}(t′,t) Θ + Σ_{m=0}^{k} ~r_m(t′) δ^(m),

with the smooth functions

 ~r_{−1}(t′,t) := Σ_{j=0}^{k+1} (−1)^j ~y_j^(0,j)(t′,t),   ~r_{m≥0}(t′) := Σ_{j=m+1}^{k+1} (−1)^j ~y_j^(0,j−1−m)(t′,t′),

where ~y_j(t′,t) := ~g_j(t) ~x(t′,t) and ~x is the solution of the linear homogeneous ordinary differential equation in t

 Σ_{m=0}^{k} ~h_m(t) ~x^(0,m)(t′,t) = 0,

with boundary conditions

 ~x^(0,k−1)(t′,t′) = (~h_k(t′))^{−1},   ~x^(0,k−2)(t′,t′) = 0,   …,   ~x(t′,t′) = 0.

In these expressions, the ~h_m are smooth functions given by

 ~h_m(t) := Σ_{j=m+1}^{k+1} Σ_{ℓ=m}^{j−1} (ℓ choose m) (−1)^ℓ ~f^(j−ℓ−1,0)(t,t) ~g_j^(ℓ−m)(t).

If instead k = 0, the ∗-inverse of f is trivially given by

 f^{∗−1}(t′,t) = L(t′,t) / ( ~g₁(t′) ~f(t′,t′) ).

Inverting the roles of t′ and t, a completely similar theorem is proven by changing all left ∗-multiplications by δ^(j) into right ∗-multiplications and vice-versa. In this situation, ~x satisfies a linear homogeneous ordinary differential equation in t′, and the boundary conditions involve the variable t.

###### Proof.

By ∗-multiplying f by L, we get

 (2.5) L(t′,t)∗f(