Lanczos-like algorithm for the time-ordered exponential: The ∗-inverse problem

10/10/2019 ∙ by Pierre-Louis Giscard, et al. ∙ ULCO and Charles University in Prague

The time-ordered exponential of a time-dependent matrix A(t) is defined as the function of A(t) that solves the first-order system of coupled linear differential equations with non-constant coefficients encoded in A(t). The authors recently proposed the first Lanczos-like algorithm capable of evaluating this function. This algorithm relies on inverses of time-dependent functions with respect to a non-commutative convolution-like product, denoted ∗. Yet, the existence of such inverses, crucial to avoid algorithmic breakdowns, still needed to be proved. Here we constructively prove that ∗-inverses exist for all non-identically null, smooth, separable functions of two variables. As a corollary, we partially solve the Green's function inverse problem which, given a distribution G, asks for the differential operator whose fundamental solution is G. Our results are abundantly illustrated by examples.

1. Introduction: Time-ordered Exponential and ∗-Lanczos Algorithm

1.1. Context

Consider a matrix $A(t')$ depending on the real time variable $t'$. The time-ordered exponential $U(t',t)$ of $A$ is defined as the unique solution of the system of coupled linear differential equations with non-constant coefficients

(1.1)   $\dfrac{d}{dt'}\,U(t',t) = A(t')\,U(t',t),$

with $t' \geq t$ and $U(t,t) = \mathrm{Id}$ the identity matrix. Under the assumption that $A$ commutes with itself at all times, i.e., $A(t_1)A(t_2) = A(t_2)A(t_1)$ for all $t_1, t_2$, the time-ordered exponential is an ordinary matrix exponential, $U(t',t) = \exp\big(\int_t^{t'} A(\tau)\,d\tau\big)$. In general, however, $U$ has no known explicit form in terms of $A$. In spite of its widespread applications throughout physics, mathematics, and engineering, the time-ordered exponential function is still very challenging to calculate. Recently, P.-L. G. and S. P. proposed the first Lanczos-like algorithm [5] capable of evaluating $w^\dagger\,U(t',t)\,v$ for any two vectors $w$ and $v$ with $w^\dagger v = 1$, where $w^\dagger$ is the Hermitian transpose of $w$. The algorithm inherently relies on a non-commutative convolution-like product, denoted by ∗, between time-dependent functions and necessitates the calculation of inverses with respect to this product. The purpose of the present contribution is to constructively establish the existence of these inverses. More generally, these results answer the Green's function inverse problem: namely, given a function $G$ of two variables, what is the differential operator whose fundamental solution is $G$? Here, our results are valid even when the function $G$ is a smooth and separable function of two variables rather than depending solely on the difference $t'-t$; a simpler case for which the ∗-product reduces to a convolution and the solution is obtained from standard Fourier analysis.

Before these results can be presented, we recall the definition and properties of the ∗-product utilized.

1.2. ∗-Product

Let $t'$ and $t$ be time variables in an interval $I \subseteq \mathbb{R}$. Let $f(t',t)$ and $g(t',t)$ be time-dependent generalized functions. We define the convolution-like ∗-product between $f$ and $g$ as

(1.2)   $(f * g)(t',t) := \int_I f(t',\tau)\,g(\tau,t)\,d\tau.$

From this definition, we find the identity element with respect to the ∗-product to be the Dirac delta distribution, $\delta(t'-t)$. Observe that the ∗-product is not, in general, a convolution but may be so when both $f$ and $g$ depend only on the difference $t'-t$.
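As an added illustration (not part of the original paper), the ∗-product of Eq. (1.2) can be approximated on a uniform time grid, where it reduces to an ordinary matrix product scaled by the time step; the interval, the sample functions, and the helper names below are assumptions of this sketch.

```python
# Minimal numerical sketch: discretizing the *-product of Eq. (1.2) on a uniform grid.
# (f * g)(t_i, t_j) ~ sum_k f(t_i, t_k) g(t_k, t_j) dt, i.e. a matrix product times dt.
import numpy as np

n = 200
t = np.linspace(0.0, 1.0, n)                # the interval I = [0, 1] is an assumption
dt = t[1] - t[0]
tp, tt = np.meshgrid(t, t, indexing="ij")   # tp plays the role of t', tt of t
theta = (tp >= tt).astype(float)            # Heaviside Theta(t' - t), with Theta(0) = 1

F = np.exp(tp - tt) * theta                 # sample f(t', t) = exp(t' - t) Theta(t' - t)
G = (tp * tt) * theta                       # sample g(t', t) = t' t Theta(t' - t)

def star(A, B):
    """Discretized *-product of two sampled kernels."""
    return A @ B * dt

FG = star(F, G)                             # approximation of (f * g)(t', t) on the grid
Id = np.eye(n) / dt                         # the identity element delta(t' - t) on the grid
print(np.max(np.abs(star(Id, F) - F)))      # delta * f = f holds exactly on the grid
```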

As a case of special interest for the ∗-Lanczos algorithm, consider the situation where $f(t',t) = \tilde f(t',t)\,\Theta(t'-t)$ and $g(t',t) = \tilde g(t',t)\,\Theta(t'-t)$, where $\Theta$ stands for the Heaviside theta function (with the convention $\Theta(0)=1$). Here and in the rest of the paper, the tilde indicates that $\tilde f$ is an ordinary function. Then the ∗-product between $f$ and $g$ simplifies to

$(f * g)(t',t) = \Theta(t'-t)\int_t^{t'} \tilde f(t',\tau)\,\tilde g(\tau,t)\,d\tau,$

which makes calculations involving such functions easier to carry out.
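For instance (a small worked example added here, not in the original), taking $\tilde f = \tilde g = 1$ in the simplified formula gives

$\big(\Theta * \Theta\big)(t',t) = \Theta(t'-t)\int_t^{t'} d\tau = (t'-t)\,\Theta(t'-t),$

and, by induction, $\Theta^{*k}(t',t) = \frac{(t'-t)^{k-1}}{(k-1)!}\,\Theta(t'-t)$ for all $k \geq 1$.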

The ∗-product extends directly to time-dependent matrices by using the ordinary matrix product between the integrands in (1.2) (see [5] for more details). It is also well defined for functions that depend on fewer than two time variables: the time variable of such a generalized function is treated as the left time variable of a doubly time-dependent generalized function. This observation extends straightforwardly to constant functions.
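A possible way to carry the same discretization over to matrix-valued functions (an added sketch, not from the paper; the array layout is an assumption) is to contract the shared time variable while combining the matrix indices with the ordinary matrix product, as prescribed by Eq. (1.2).

```python
# Sketch: discretized *-product for matrix-valued functions. F and G have shape
# (n, n, d, d): axes 0 and 1 sample t' and t, axes 2 and 3 are the matrix indices.
import numpy as np

def star_matrix(F, G, dt):
    """(F * G)[i, j] = sum_a F[i, a] @ G[a, j] * dt, with @ the ordinary matrix product."""
    return np.einsum("iapq,ajqr->ijpr", F, G) * dt
```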

1.3. ∗-Lanczos algorithm

As shown in [4], if $A(t)$ is a time-dependent matrix with bounded entries for every $t \in I$, then the related time-ordered exponential can be expressed as

(1.3)

Here $R_*$ is the ∗-resolvent, defined as $R_*(f) := \sum_{k \geq 0} f^{*k}$, where $f^{*0} := \delta(t'-t)$ and $f^{*k} := f * f^{*(k-1)}$.

Input: A complex time-dependent $N \times N$ matrix $A(t)$, and complex vectors $w, v \in \mathbb{C}^{N}$ such that $w^\dagger v = 1$. Output: Coefficients $\alpha_j$ and $\beta_j$ defining the matrix $T_n$ of Eq. (1.4), which satisfies Eq. (1.5).

Table 1. The ∗-Lanczos algorithm of [5].

Now we can recall the results of [5] pertaining to the time-ordered exponential. Let $A(t)$ be a time-dependent matrix as above. The ∗-Lanczos algorithm of Table 1 produces a sequence of tridiagonal matrices $T_n$, $n \geq 1$, of the form

(1.4)

and such that the matching moment property is achieved:

Theorem 1.1 ([5]).

Let and be as described above, then

(1.5)

In particular, for $n = N$, we have the exact expression

while for $n < N$, the right-hand side yields an approximation to the time-ordered exponential. The method of path-sum [4] then gives explicitly

(1.6)

The $\alpha_j$ and $\beta_j$ appearing in the matrices $T_n$ are produced by the ∗-Lanczos procedure through recurrence relations. A crucial step in the algorithm is the ∗-inversion of the $\beta_j$, i.e., the calculation of a distribution $\beta_j^{*-1}$ such that $\beta_j^{*-1} * \beta_j = \beta_j * \beta_j^{*-1} = \delta(t'-t)$. The paper [5] assumed the existence of such ∗-inverses. However, if a $\beta_j^{*-1}$ fails to exist, then the algorithm suffers a breakdown.

Under the assumption that all entries of the input matrix $A(t)$ are smooth functions, we conjectured in [5] that all the coefficients $\alpha_j$ and $\beta_j$ in the ∗-Lanczos algorithm are of the form $\alpha_j = \tilde\alpha_j(t',t)\,\Theta(t'-t)$ and $\beta_j = \tilde\beta_j(t',t)\,\Theta(t'-t)$, with $\tilde\alpha_j$ and $\tilde\beta_j$ separable functions (see the definition in §2) that are smooth in both time variables. This conjecture is justified not only by our experiments but also by observing that the set of separable functions smooth in both $t'$ and $t$ is closed under ∗-product, summation, and differentiation. In spite of these encouraging observations, proving the conjecture is surprisingly difficult, as nothing a priori precludes the $\alpha_j$ and $\beta_j$ coefficients produced by the ∗-Lanczos algorithm from being arbitrary distributions. Nonetheless, under the conjecture and its assumptions, we prove here in a constructive way that the algorithmic breakdowns due to a $\beta_j^{*-1}$ failing to exist cannot happen unless $\tilde\beta_j$ is identically null. More generally, we show that the ∗-inverse of a function $\tilde f(t',t)\,\Theta(t'-t)$ can be obtained whenever $\tilde f$ is smooth, not identically null, and separable. Note that, here and later, the existence of a ∗-inverse means that it exists almost everywhere in $I$.
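To see why the closure under the ∗-product holds (a short verification added here for illustration; it is not part of the original text), consider two separable smooth summands $a(t')\,b(t)\,\Theta(t'-t)$ and $c(t')\,d(t)\,\Theta(t'-t)$. By the simplified product formula of §1.2,

$\big(a(t')\,b(t)\,\Theta\big) * \big(c(t')\,d(t)\,\Theta\big) = a(t')\,d(t)\,\Theta(t'-t)\int_t^{t'} b(\tau)\,c(\tau)\,d\tau = a(t')\big(P(t') - P(t)\big)\,d(t)\,\Theta(t'-t),$

where $P$ is any antiderivative of $b\,c$. The result is again a sum of separable terms smooth in both variables, and the general case follows by bilinearity of the ∗-product.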

The rest of this article is organized as follows: in §2, we begin by recalling necessary definitions and properties of separable functions and distributions. In §2.1, we give the ∗-inverses of functions of a single variable. We then proceed in §2.2 with the ∗-inverses of all functions that are polynomials in at least one variable. Encouraged by the method underlying these results, we generalize it to construct the ∗-inverse of any piecewise smooth separable function in §2.3. Finally, in §3, we present the relation between our results and the Green's function inverse problem.

2. Existence and mathematical expression of ∗-inverses

The calculation of ∗-inverses of functions carries the gist of the difficulty inherent in obtaining explicit expressions for time-ordered exponentials. In general, given an arbitrary ordinary function $\tilde f(t',t)$ and barring any further assumption, the ∗-inverse of $\tilde f\,\Theta$ cannot be given explicitly.¹ In this section, we show that the ∗-inverse is indeed accessible from the solution of an ordinary linear differential equation provided that $\tilde f$ is a separable function that is smooth in both time variables and not identically null. A function $\tilde f(t',t)$ is separable if and only if there exist ordinary functions $f_j$ and $g_j$ with

$\tilde f(t',t) = \sum_{j=1}^{M} f_j(t')\,g_j(t), \qquad M < \infty.$

¹ Practical numerical questions pertaining to the behavior of ∗-inverses under time discretization will be discussed in detail elsewhere. As observed in [5], a time-discretized ∗-inverse is always computable using an ordinary matrix inverse.
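The footnote's observation can be illustrated numerically (an added sketch, not from the paper; the grid, the sample function, and the scaling conventions are assumptions): discretizing the kernel on a uniform grid, the ∗-inverse is obtained from an ordinary matrix inverse.

```python
# Sketch: a time-discretized *-inverse computed with an ordinary matrix inverse,
# as mentioned in the footnote above. All concrete choices here are illustrative.
import numpy as np

n = 400
t = np.linspace(0.0, 1.0, n)
dt = t[1] - t[0]
tp, tt = np.meshgrid(t, t, indexing="ij")
theta = (tp >= tt).astype(float)

F = (2.0 + tp * tt) * theta                 # a smooth, separable, nowhere-null example
Id = np.eye(n) / dt                         # discretization of the identity delta(t' - t)

F_inv = np.linalg.inv(F * dt) / dt          # discretized *-inverse of F

# Defining property F_inv * F = delta, checked on the grid (exact up to round-off):
residual = np.max(np.abs(F_inv @ F * dt - Id))
print(residual)
```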

We begin by recalling important properties of the Dirac delta distribution $\delta(t'-t)$ and of its derivatives $\delta^{(n)}(t'-t)$. The Dirac delta derivatives are characterized by the relation expounded by Schwartz [10], $\int \delta^{(n)}(\tau - t)\,\varphi(\tau)\,d\tau = (-1)^{n}\,\varphi^{(n)}(t)$ for any smooth test function $\varphi$. From this we get that ∗-multiplication by $\delta^{(n)}$ from the left acts as a derivative operator,

$\big(\delta^{(n)} * \tilde f\big)(t',t) = \tilde f^{(n,0)}(t',t),$

while we have $\big(\tilde f * \delta^{(n)}\big)(t',t) = (-1)^{n}\,\tilde f^{(0,n)}(t',t)$. The notation $\tilde f^{(i,j)}(a,b)$ stands for the $i$th $t'$-derivative and $j$th $t$-derivative of $\tilde f$ evaluated at $t' = a$ and $t = b$. From now on, we omit the arguments of the Heaviside functions and Dirac deltas when necessary to alleviate the equations.

For functions of the form $\tilde f(t',t)\,\Theta(t'-t)$, the derivatives resulting from the ∗-action of $\delta^{(n)}$ are taken in the sense of distributions:

(2.1a)   $\delta^{(n)} * \big(\tilde f\,\Theta\big) = \partial_{t'}^{\,n}\big(\tilde f(t',t)\,\Theta(t'-t)\big),$
(2.1b)   $\big(\tilde f\,\Theta\big) * \delta^{(n)} = (-1)^{n}\,\partial_{t}^{\,n}\big(\tilde f(t',t)\,\Theta(t'-t)\big),$

see [10, Chapter 2, § 2]. Finally, we note the following identities between distributions for

(2.2a)
(2.2b)

where is an ordinary function.

2.1. Functions of a single time variable

The ∗-inverses of functions of a single time variable times a Heaviside function are easy to find explicitly:

Proposition 2.1.

Let $f_1(t',t) = \tilde f_1(t')\,\Theta(t'-t)$ and $f_2(t',t) = \tilde f_2(t)\,\Theta(t'-t)$ be such that $\tilde f_1$ and $\tilde f_2$ are differentiable and not identically null over $I$. Then

$f_1^{*-1}(t',t) = \frac{1}{\tilde f_1(t')}\,\delta'(t'-t) - \frac{\tilde f_1'(t')}{\tilde f_1(t')^{2}}\,\delta(t'-t), \qquad f_2^{*-1}(t',t) = \frac{1}{\tilde f_2(t)}\,\delta'(t'-t) + \frac{\tilde f_2'(t)}{\tilde f_2(t)^{2}}\,\delta(t'-t).$

Proof.

Since $\tilde f_1$ is an ordinary function and $f_1 = \tilde f_1(t')\,\Theta(t'-t)$, Eqs. (2.1) and [10, Chapter 2, § 2] give

$\delta'(t'-t) * f_1 = \tilde f_1'(t')\,\Theta(t'-t) + \tilde f_1(t)\,\delta(t'-t),$

as $\tilde f_1(t')\,\delta(t'-t) = \tilde f_1(t)\,\delta(t'-t)$. We deduce that the ∗-inverse of $f_1$ is the solution of the equation $f_1^{*-1} * f_1 = \delta(t'-t)$, i.e., $\big(a(t')\,\delta' + b(t')\,\delta\big) * f_1 = \delta$ with $a$ and $b$ functions of $t'$ alone, from which we get the expression stated above.

An analogous proof yields the inverse $f_2^{*-1}$. ∎
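As a quick sanity check of the first formula (an added illustration, consistent with the expression above), take $\tilde f_1(t') = e^{t'}$, so that the claimed inverse is $g = e^{-t'}\,\delta'(t'-t) - e^{-t'}\,\delta(t'-t)$. Then

$g * \big(e^{t'}\,\Theta\big) = e^{-t'}\,\partial_{t'}\big(e^{t'}\,\Theta(t'-t)\big) - e^{-t'}\,e^{t'}\,\Theta(t'-t) = \Theta(t'-t) + \delta(t'-t) - \Theta(t'-t) = \delta(t'-t),$

as required.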

Proposition 2.1 is particularly useful to determine the ∗-inverse of products of functions of a single time variable such as those of [5]. We give two detailed examples of this below:

Example 2.1.

Let us determine the ∗-inverse of . To this end, we remark that and thus

Since , the ∗-inverse of is immediately provided by Proposition 2.1 as . Then , whose ∗-action on a test function is

Example 2.2.

Let us find the left and right actions of the ∗-inverse of on test functions. We note that with and . Hence by Proposition 2.1, the left action of the inverse on a test function is

and its right action is
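Beyond these two examples, the following added illustration (not from the original) shows how Proposition 2.1 handles products of functions of a single time variable. If $f(t',t) = \tilde f_1(t')\,\tilde f_2(t)\,\Theta(t'-t)$ with $\tilde f_1$ and $\tilde f_2$ nowhere null, then $f = \big(\tilde f_1(t')\,\delta\big) * \Theta * \big(\tilde f_2(t)\,\delta\big)$, and therefore

$f^{*-1} = \Big(\tfrac{1}{\tilde f_2(t)}\,\delta\Big) * \delta' * \Big(\tfrac{1}{\tilde f_1(t')}\,\delta\Big),$

since $\Theta^{*-1} = \delta'$ and $\big(\tilde h\,\delta\big)^{*-1} = \tilde h^{-1}\,\delta$ for any nowhere-null ordinary function $\tilde h$.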

2.2. ∗-inverses of polynomials

The method employed in the proof of Proposition 2.1 relying on differential equations generalizes straightforwardly to polynomials in at least one time variable, here taken to be . An analogous result can be given for functions that are polynomials in .

Proposition 2.2.

Let be so that is a polynomial of degree in and is smooth in . If is not identically null over , then

where

is the solution of the linear homogeneous ordinary differential equation in

with the boundary conditions

Proof.

Observe that is a piecewise smooth function, and, as a function of , it has a discontinuity located at . Since furthermore, is of degree in , Eq. (2.1) gives

Hence where is the generalized function satisfying

(2.3)

Now let us assume that the solution takes the form with a smooth function of . Then we get, for ,

Thus Eq. (2.3) can be rewritten as the system:

As is not identically null, the last equations imply for . Moreover, since by Eq. (2.2) we have , the second equation becomes . Since the set of zeros of is made of isolated points, the ordinary differential equation above has a solution almost everywhere (more precisely, is defined for ). Thus assuming to be of the form with smooth in is a consistent choice, which concludes the proof. ∎
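As a compact added illustration of the polynomial case (this example is not Example 2.3 of the original), consider $f(t',t) = (t'-t)\,\Theta(t'-t)$. Since $\partial_{t'}^{2}\big[(t'-t)\,\Theta(t'-t)\big] = \partial_{t'}\Theta(t'-t) = \delta(t'-t)$, and symmetrically in $t$, one finds

$\big((t'-t)\,\Theta(t'-t)\big)^{*-1} = \delta''(t'-t),$

in agreement with the fact that $(t'-t)\,\Theta = \Theta * \Theta$ and $\Theta^{*-1} = \delta'$.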


Remark 2.1.

If is identically null over , then

since is continuous at . Hence we can apply Proposition 2.2 to . In the further case in which all are identically null for and is a constant , the ∗-inverse is obtained by noting that

These considerations show that the condition is not necessary for the ∗-inverse to exist. Rather, the condition is that itself must not be identically zero.

Example 2.3.

Let us determine the ∗-inverse of the polynomial . Following Proposition 2.2, we have , where and solves

This gives and thus

We can now verify that this works as expected

where the last equality follows by virtue of Eq. (2.2). Now

A technique similar to the one used in the proof of Proposition 2.2 can be applied to a more general class of functions. For instance, whenever differentiating leads to an expression like

the expression can be rewritten as

Then we can go on with a further combination of differentiations until there is no Heaviside function left on the right-hand side of the above equality. In particular, such a technique can be used when dealing with commonly encountered exponential or trigonometric functions.
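For instance (an added illustration, not from the original), for the exponential function $f = e^{t'-t}\,\Theta(t'-t)$ a single differentiation already eliminates the Heaviside term:

$\delta' * f = \partial_{t'}\big(e^{t'-t}\,\Theta(t'-t)\big) = e^{t'-t}\,\Theta(t'-t) + \delta(t'-t) = f + \delta,$

so that $(\delta' - \delta) * f = \delta$ and hence $f^{*-1} = \delta'(t'-t) - \delta(t'-t)$.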

2.3. ∗-inverses of piecewise smooth separable functions

The strategy used in the proof of Proposition 2.1 can be extended to give ∗-inverses in the much more general case of functions which are separable and piecewise smooth in both time variables over the interval $I$.

Theorem 2.1.

Consider a function $f(t',t) = \tilde f(t',t)\,\Theta(t'-t)$ with $\tilde f$ a smooth function in both time variables, and so that $\tilde f$ is not identically null. Assume that there exists a distribution with smooth functions depending only on $t'$ and $t$, such that

(2.4)

Then, if , the ∗-inverse of is

with the smooth functions

where and is the solution of the linear homogeneous ordinary differential equation in

with boundary conditions

In these expressions, are smooth functions given by

If instead , the ∗-inverse of is trivially given by

Inverting the roles of $t'$ and $t$, a completely similar theorem is proven by changing all left ∗-multiplications with right ∗-multiplications and vice versa. In this situation, satisfies a linear homogeneous ordinary differential equation in , and the boundary conditions involve the variable .

Proof.

By ∗-multiplying by , we get

(2.5)