Computing Floquet Hamiltonians with Symmetries

07/12/2020 ∙ by Terry Loring, et al. ∙ Universidad Nacional Autónoma de Honduras (UNAH) ∙ University of New Mexico

Unitary matrices arise in many ways in physics, in particular as time evolution operators. For a periodically driven system one frequently wishes to compute a Floquet Hamiltonian: a Hermitian operator H such that e^{-iTH} = U(T), where U(T) is the time evolution operator at the time corresponding to the period of the system. That is, we want H to equal -i times a matrix logarithm of U(T). If the system has a symmetry, such as time reversal symmetry, one can expect H to have a symmetry beyond being Hermitian. We discuss here practical numerical algorithms for computing matrix logarithms with certain symmetries, which can be used to compute Floquet Hamiltonians with the appropriate symmetries. Along the way, we prove some results on how a symmetry in the Floquet operator U(T) can lead to a symmetry in a basis of Floquet eigenstates.


I Introduction

Unitary matrices arise in critical ways in many parts of physics, for example those representing time evolution. These are not always constructed as an exponential, but can be the result of integrating a differential equation or of a product of exponentials.

Given a physically meaningful unitary, one generally wants either to understand the eigenvectors and eigenvalues, or to have a single Hermitian matrix whose exponential recovers the unitary. That is, we want a matrix logarithm. Spectral theory tells us that a unitary matrix has a unitary diagonalization, since unitary implies normal. However, there is at present a lack of effective algorithms that find orthogonal eigenvectors of normal matrices. In the unitary case, we have at least two ways to proceed. First, a unitary matrix that diagonalizes the Hermitian logarithm will diagonalize the original unitary, so if we can do a good job computing the logarithm of a unitary HighamEtAlSquareRootGroupPreserving we have a method to unitarily diagonalize the original unitary. The other method LoringLogOfUnitary is to compute a Schur decomposition of the unitary and simply zero out any stray terms above the diagonal in the triangular factor. Both methods have limitations.
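The Schur route can be sketched in a few lines (a minimal illustration, not the implementation of LoringLogOfUnitary; for a unitary input the triangular factor returned by `scipy.linalg.schur` is diagonal up to round-off):

```python
import numpy as np
from scipy.linalg import schur

def log_unitary_schur(U):
    """Logarithm of a unitary via the Schur decomposition U = Q T Q*.

    For unitary U the triangular factor T is diagonal up to round-off,
    so we discard the stray off-diagonal entries and take the scalar
    logarithm (principal branch) of the diagonal."""
    T, Q = schur(U, output="complex")
    d = np.diag(T)                    # eigenvalues, on the unit circle
    d = d / np.abs(d)                 # re-normalize against round-off
    logd = 1j * np.angle(d)           # principal log of each eigenvalue
    return Q @ np.diag(logd) @ Q.conj().T
```

The returned matrix is anti-Hermitian by construction, since the diagonal logarithms are purely imaginary.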

Tracking symmetries in systems is critical in present-day physics, for example the symmetries corresponding to time reversal. The exact definition of time reversal in a system with a time-varying Hamiltonian is more complicated than for a system with a stationary Hamiltonian, but in any case, all manner of symmetries can wind up in the unitary that represents time evolution. One important symmetry for a unitary is that it be complex symmetric, equal to its own transpose, which means it is the exponential of i times a real symmetric matrix. Here the Schur decomposition does not work, at least with standard implementations.

Here we look at five symmetry classes of unitary matrices: generic unitary, real orthogonal, complex symmetric, graded and self-dual. The previous method of using the Schur decomposition LoringLogOfUnitary works for three of these: generic, self-dual and real orthogonal. The self-dual case uses a self-dual variation on the Schur decomposition.

The real orthogonal case was not implemented in LoringLogOfUnitary. We leave as an exercise for the reader the use of the real form of the Schur decomposition, which reduces the problem to the two-by-two case. One must be careful, as a real logarithm is not always possible. An exponential lies in the connected component of the identity, so the exponential of a real, skew-symmetric matrix will be real orthogonal with determinant one. Conversely, a real orthogonal matrix will have a real logarithm if and only if its determinant is one. An equivalent condition is that the eigenvalue -1 appear in the spectrum with even multiplicity.
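Both criteria are easy to test numerically (a sketch; the multiplicity count uses a crude eigenvalue tolerance, and the two criteria are checked against each other):

```python
import numpy as np

def has_real_log(O):
    """Test whether a real orthogonal O has a real logarithm.

    Equivalent criteria: det(O) = +1, or the eigenvalue -1 appears
    with even multiplicity."""
    det_ok = np.linalg.det(O) > 0      # orthogonal => det is +1 or -1
    mult = int(np.sum(np.abs(np.linalg.eigvals(O) + 1.0) < 1e-6))
    assert det_ok == (mult % 2 == 0)   # the two criteria agree
    return det_ok
```

Note that a rotation by pi, diag(-1, -1), does have a real (skew-symmetric) logarithm even though -1 is in its spectrum, since the multiplicity is even.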

The Schur decomposition makes it very easy to diagonalize a unitary matrix by a unitary: one just zeros out the off-diagonal terms as needed to turn the upper-triangular factor into a diagonal one. If one wants the logarithm, one simply takes the logarithm of the diagonal, a simple task. The limitation is that, to our knowledge, the Schur decomposition has only been developed for the general complex, real orthogonal and self-dual unitary situations. Thus we need to look for a different approach.

We look at a standard algorithm (Higham, , Ch. 11) for computing a matrix logarithm, which takes square roots several times to reduce to a situation where something like a power series can compute the logarithm. This algorithm requires the input matrix to have no spectrum on the negative real axis, so at first glance this restricts us to unitary matrices without -1 in the spectrum. In practice, this is not much of a limitation.
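A toy version of this inverse scaling and squaring idea can be sketched as follows (illustration only: a truncated Mercator series stands in for the Padé approximant a production implementation would use):

```python
import numpy as np
from scipy.linalg import sqrtm

def log_via_sqrts(A, tol=0.3, max_sqrts=40, terms=12):
    """Inverse scaling and squaring, toy version: take principal
    square roots until A is near the identity, apply the truncated
    series log(I+X) = X - X^2/2 + X^3/3 - ..., then undo the square
    roots by doubling."""
    n = A.shape[0]
    k = 0
    while np.linalg.norm(A - np.eye(n)) > tol and k < max_sqrts:
        A = sqrtm(A)                  # principal square root
        k += 1
    X = A - np.eye(n)
    L = np.zeros_like(A)
    P = np.eye(n, dtype=A.dtype)
    for m in range(1, terms + 1):
        P = P @ X
        L += ((-1) ** (m + 1) / m) * P
    return (2 ** k) * L
```

For a unitary input with eigenvalues near -1, the repeated square roots move the spectrum toward +1, which is exactly where the instability discussed below enters.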

In the algorithm, eigenvalues at -1 are not stable, and so round-off error is sufficient to make them quickly act as if they are away from -1, unless some manner of K-theory index is non-zero and makes an eigenvalue at -1 stable. This is fine, as the algorithm tends to fail only when the logarithm does not exist anyway.

The square-root algorithm we work with HighamEtAlSquareRootGroupPreserving was designed to work for non-unitary matrices. Indeed, the output can be far from unitary when the input is very close to being unitary. (Exact unitarity almost never holds for a matrix stored in floating point numbers.) We make a simple modification to the algorithm so that its output is very close to unitary whenever the input is.

We find that this modified algorithm behaves quite nicely, even when applied to most unitary matrices with -1 in the spectrum. It will fail for a diagonal unitary matrix with -1 in the spectrum, but we hardly need a new algorithm to find a square root of a diagonal matrix.

Then we consider unitaries that are symmetric or have a grading symmetry. The algorithm for the square root can be further modified to enforce the needed symmetry at every stage of the main iteration. We are then able to create the desired matrix logarithm algorithm whose output has the corresponding symmetry. For example, for a complex symmetric unitary matrix we expect, and can compute, a logarithm that is i times a real symmetric matrix.
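Enforcing a symmetry at each stage amounts to a cheap projection onto the symmetry class; a sketch of the projection steps for the complex symmetric and real cases (the averaged matrix is the nearest symmetric matrix in the Frobenius norm):

```python
import numpy as np

def enforce_complex_symmetric(X):
    """Project onto the complex symmetric matrices (transpose, no
    conjugation): the nearest symmetric matrix in Frobenius norm."""
    return 0.5 * (X + X.T)

def enforce_real(X):
    """Project onto the real matrices by discarding the imaginary
    part, again the nearest such matrix in Frobenius norm."""
    return X.real
```

In the modified iteration, a projection like this would be applied to each iterate so that round-off cannot slowly destroy the symmetry.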

For a recent physics paper that looks at unitaries with various symmetries, and their spectra, see Floquet-Block_with_symmetries. For a recent physics paper that looks at a real symmetric logarithm of a time evolution operator, see FloquetThermalization_with_Symmetries. A paper which computes an imaginary Hermitian logarithm of an orthogonal matrix is SurfaceHoppingAlgorithm.

Our main concern is accuracy, not speed. An accurate log of a unitary will be anti-Hermitian, will have exponential close to the input matrix, and will be either real or purely imaginary when appropriate. An accurate square root of a unitary will itself be close to unitary, will have square close to the input, and will carry the additional symmetry when appropriate.
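These accuracy criteria translate directly into residuals one can monitor; a sketch (Frobenius norms, hypothetical helper names):

```python
import numpy as np
from scipy.linalg import expm

def log_residuals(U, L):
    """Diagnostics for a candidate logarithm L of a unitary U:
    how anti-Hermitian L is, and how well exp(L) reproduces U."""
    anti_herm = np.linalg.norm(L + L.conj().T)
    repro = np.linalg.norm(expm(L) - U)
    return anti_herm, repro

def sqrt_residuals(U, X):
    """Diagnostics for a candidate square root X of a unitary U:
    unitarity of X and the squaring residual."""
    n = X.shape[0]
    unit = np.linalg.norm(X.conj().T @ X - np.eye(n))
    sq = np.linalg.norm(X @ X - U)
    return unit, sq
```

Symmetry residuals, such as the norm of L minus its transpose in the complex symmetric case, can be monitored the same way.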

After applying the appropriate square root algorithm several times, we can apply a Padé approximation, enforce the needed symmetries, and end up with a matrix that is Hermitian and has the additional symmetries. We can then apply a structured diagonalization algorithm to that matrix, which in turn gives a structured diagonalization of the original unitary.

Several interesting results on structured matrix logarithm computation were reported in Cardoso2010ExponentialsOS and Dieci1996ConsiderationsOC. These papers address the need to get a unitary output when computing the square root of a unitary, although, to the best of our knowledge, the accuracy of practical applications of the proposed methods was not documented, and the case of small gaps at -1 was implicitly avoided.

The rest of the paper is organized as follows. Section II looks at how symmetries in a time-varying Hamiltonian lead to symmetries in the unitary propagator. Section III takes a mathematical look at when structured logarithms and structured diagonalizations exist for unitary matrices with symmetry. In Section IV some theoretical arguments and numerical methods for the computation of symmetry preserving matrix square roots are presented. The numerical methods for the computation of symmetry preserving matrix logarithms are presented in Section V. In Section VI some prototypical algorithms for symmetry preserving matrix diagonalization, based on the computational methods of Section V, are studied.

II Time evolution, with symmetries

II.1 Floquet systems

We consider very general Floquet systems, not simply crystalline systems, but also those based on quasicrystals and the like. Thus we will not have a momentum space. Given a periodically varying Hamiltonian with a given time period, the standard procedure is to consider the Floquet operator, which is the time evolution operator over one period, and then derive a Floquet Hamiltonian as some manner of matrix logarithm. We define this formally as

(2.1)

We are assuming here a finite-dimensional Hilbert space, so there will be at least a small gap in the spectrum of the Floquet operator. If there are no symmetries to consider, then one can select a suitable branch of the logarithm to use as log in Equation 2.1. For many symmetries, this will not work unless one is lucky enough to have a gap at -1. What one wants, in any case, is an operator such that

(2.2)

and

(2.3)

Our focus is on disordered, defective, amorphous or quasicrystalline systems where numerical computations will be important, so we desire effective numerical algorithms. The first step is computing the Floquet operator, by a time-ordered exponentiation, for example. We have little to add on the methods to use here. The second step is computing the matrix logarithm of the Floquet operator. This is our focus.
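A minimal time-ordered exponentiation might look as follows (a sketch with a hypothetical driven two-level Hamiltonian; a production code would use a higher-order integrator):

```python
import numpy as np
from scipy.linalg import expm

def floquet_operator(H, T, N=1000):
    """Approximate U(T) by the time-ordered product
    U(T) ~ exp(-i H(t_N) dt) ... exp(-i H(t_1) dt), with dt = T/N."""
    dt = T / N
    dim = H(0.0).shape[0]
    U = np.eye(dim, dtype=complex)
    for k in range(N):
        t = (k + 0.5) * dt             # midpoint rule for each slice
        U = expm(-1j * H(t) * dt) @ U  # later times multiply on the left
    return U

# hypothetical driven two-level system, period T = 2*pi
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H = lambda t: sz + 0.5 * np.cos(t) * sx
```

Since this H(t) is real symmetric and even in t, the resulting Floquet operator is complex symmetric, illustrating the symmetry discussion below.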

Symmetries play a key role in the study of Floquet topological insulators, and arise in other areas of physics. A symmetry preserving Floquet system is generally defined in terms of symmetries of the periodic path of Hamiltonians. These then manifest themselves as a symmetry of the Floquet operator. In theory, then, the Floquet Hamiltonian can be chosen to have an appropriate symmetry. The challenge we take up here is finding algorithms to compute a Floquet Hamiltonian from the Floquet operator so that it has the appropriate symmetries and still satisfies Equations 2.2 and 2.3.

To see how symmetries percolate from the time-dependent Hamiltonian to the Floquet operator, it is convenient to use the Suzuki-Trotter expansion

(2.4)

A nice overview of the theory of periodically driven systems, including how to compute the Floquet operator, can be found in NathanTopPDrivenSystems . For large we have

and so we have also

(2.5)

which we will find useful when examining certain symmetries. In particular notice this implies

(2.6)

II.2 Time Reversal Symmetry,

We consider now the case where the system has time reversal symmetry squaring to +1. If we perform an appropriate orthogonal change of basis we can assume the time reversal antilinear operator is just conjugation of vectors. For simplicity, we assume time-reversal symmetry about the origin of time, so the assumption on the Hamiltonian is

In terms of matrices, this becomes

or

(The bar here denotes conjugation, so this is entrywise complex conjugation in mathematical notation.) We find then

and so, by Equation 2.6, we have shown that the Floquet operator equals its transpose. In more familiar mathematical terms, it is a complex symmetric unitary matrix, so

(2.7)

where denotes transpose.

II.3 Time Reversal Symmetry,

Should the system have time reversal symmetry squaring to -1, we can perform a similar analysis. Assume we have performed the appropriate orthogonal change of basis so that

(2.8)

where

Assuming time reversal about the origin, time reversal symmetry means

but this now translates to

Let us use the dual operation

The dual operation is closely related to , as explained in LoringQMatrices . Time reversal symmetry now becomes

We find then

which means

(2.9)

Thus the Floquet operator is a unitary, self-dual matrix.

II.4 Chiral Symmetry

A simple form of chiral symmetry we might find in some periodic systems is

where is a unitary matrix that squares to one. A common choice would be

(2.10)

which of course means the dimension is even.

Assuming this symmetry, we find

and

and finally

(2.11)

III Symmetry preserving Effective Hamiltonians

III.1 Principal logarithms

Since the Floquet operator is unitary, in the finite dimensional case it is a normal matrix and the spectral theorem applies. In terms of matrices the spectral theorem for normal matrices tells us

where is another unitary matrix and is diagonal with diagonal entries on the unit circle. In the equivalent picture of Floquet eigenstates, the statement is that there is an orthogonal basis of with

An effective Hamiltonian is then

(3.1)

where is diagonal with diagonal entries .

If we can find an additional symmetry in the orthogonal basis we should be able to find an effective Hamiltonian with the desired symmetry. This implication does not lead us to an algorithm, but it is illuminating.

It is the possibility of -1 in the spectrum that makes even the theory more difficult. If -1 is not in the spectrum of the Floquet operator we can use the functional calculus which, as has been noted many times before, has a tendency to preserve symmetries. Again, this does not lead us to an algorithm. It does provide a way to consider also non-unitary matrices with symmetries, and how their principal logarithms pick up related symmetries.

For a given matrix, a logarithm is any matrix whose exponential equals the given matrix. We assume the matrix has no eigenvalues on the closed negative real axis, so that the existence of a unique principal logarithm is assured, as shown in the following theorem.

Theorem 3.1.

(Higham, , Theorem 1.31.) Let have no eigenvalues on . There is a unique logarithm of all of whose eigenvalues lie in the strip . We refer to as the principal logarithm of and write .

It is easy to check (using power series) that, for any matrix with spectrum avoiding the negative real axis, if is the principal logarithm of then , so uniqueness tells us

(3.2)

A useful formula for theoretical work is to express the principal logarithm via the analytic functional calculus. This works for bounded operators just as well as for matrices. Still assuming have no spectrum on , we have

where is a positively oriented contour that encloses the spectrum of and is within the complement of .

The following holds true for bounded operators on Hilbert space, provided one uses a sensible definition of transpose, the dual operation and the unitary symmetry . For matrices, these were defined in §II. In the matrix case, Theorem 3.2 is substantially the same as (HighamEtAlSquareRootGroupPreserving, , Theorem 3.1).

Theorem 3.2.

Assume have no eigenvalues on and that is the principal logarithm of . Then the following are all true.

  1. If is unitary then .

  2. If is complex symmetric then is complex symmetric.

  3. If is real then is real.

  4. If is self-dual then is self-dual.

  5. If then .

Proof.

(1) When is unitary, it is normal and so the holomorphic functional calculus equals the continuous functional calculus. That is, we can compute as in Equation 3.1 where it is evident that (notice in defining an effective Hamiltonian we would use ).

For the other parts of the proof we need to use the holomorphic functional calculus, so select , a positively oriented loop that encloses the spectrum of and avoids . Then

(3.3)

(2) The transpose operation pulls inside a path integral, so

(3.4)

Thus the assumption leads to .

(3) Now assume is real, so that . Then by Equation 3.4 we have , while by Equation 3.2 we have . Thus and so is also real.

(4) This proof is similar to the proof of (2), relying on the formula

(3.5)

(5) Now consider the additional assumption that . Then

Thus we have proven .

In the following subsections, we examine the case of a unitary with -1 in the spectrum, assuming an additional symmetry. We limit the discussion to the two cases of time reversal symmetry and to chiral symmetry.

III.2 Time Reversal Symmetry,

If the Floquet operator is complex symmetric, as well as unitary, we can use another variation on the spectral theorem. If we let

and

then one can easily verify that and are commuting real symmetric matrices with . The spectral theorem for commuting real symmetric matrices tells us

for some real orthogonal matrix and diagonal matrices with real diagonal elements and with real diagonal elements . If we set then we will have achieved

with diagonal with diagonal entries on the unit circle and now a real orthogonal matrix. This means we can select the Floquet eigenstates to be real. Also Equation 3.1 now tells us that the effective Hamiltonian can be taken to be a real symmetric matrix.
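This structure is easy to confirm numerically (a small hypothetical example: building a complex symmetric unitary as the exponential of i times a real symmetric matrix, then checking the commuting real and imaginary parts and the real orthogonal diagonalizer):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
S = rng.standard_normal((4, 4))
S = 0.5 * (S + S.T)                  # real symmetric generator
U = expm(1j * S)                     # complex symmetric unitary

A, B = U.real, U.imag                # real and imaginary parts
w, V = np.linalg.eigh(S)             # V is real orthogonal
D = np.diag(np.exp(1j * w))          # unit-circle diagonal factor
```

Here the columns of V are real Floquet eigenstates, and S itself serves as the real symmetric effective Hamiltonian (up to the sign and period conventions of Equation 3.1).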

III.3 Time Reversal Symmetry,

If the Floquet operator is self-dual, as well as unitary, we can write with and commuting self-dual, Hermitian matrices. There is a spectral theorem for commuting Hermitian self-dual matrices (LoringQMatrices, , Theorem 2.4) that tells us

for a unitary with and where

and and are diagonal with real diagonals and . Here the we have in mind is as defined in Equation 2.8. Again we set and we have

for unitary with and

with diagonal with diagonal and the real. The condition means that column of is the result of applied to column of . Thus we can find an orthogonal basis of Floquet eigenstates that appear in time reversal pairs, , with

and

This also means we can find an effective Hamiltonian that is a self-dual, Hermitian matrix.

III.4 Chiral Symmetry

Assume is a unitary matrix with . We only concern ourselves with the case where the and eigenspaces for are of the same dimension. Up to a unitary change of basis, we are able to assume then

where is even.

The following must certainly have appeared somewhere in the literature. As the proof is short, we include it.

Theorem 3.3.

Suppose is a unitary matrix with

Then there are matrices and such that

has -by- blocks that are diagonal, is unitary with

and

Moreover, we can do all the above with the added condition that the are real matrices.

Proof.

We will prove the first statement by induction on the dimension. In the base case, we can use and .

Now assume that is a unitary with this symmetry, that , and that the theorem is true for unitaries of size -by-. The spectral theorem for normal matrices applies to . Thus, if

for unit vector then also Thus which implies

Of course must be on the unit circle.

First suppose there is some unit eigenvector for for which is not real. Then is an eigenvector for , which is not equal to , which implies that is orthogonal to . Consider the two vectors

These are both unit vectors, and

so is in while is in . Thus we can find an orthogonal basis of the form

with , , each in and each in . Notice that spans an invariant subspace for , and so also the span of

is an invariant subspace for . It is also an invariant subspace for , and the restriction of to that subspace has still balanced and eigenspaces. By the induction hypothesis, we can find a new basis

for this subspace so that each is an invariant subspace for . Thus

gives the desired basis needed to define .

The remaining case is where has eigenvalues only at . This means and so commutes with . Commuting Hermitian matrices can be diagonalized by one unitary . Commuting with means . Since will be diagonal, it will certainly have the desired diagonal blocks.

To get the second statement, it suffices to understand the 2-by-2 case. Given

with since

we see that and are already real, and . If is the phase such that is real, then the equation

shows how to alter the unitary so that it still commutes with the grading while making the blocks real as well as diagonal. ∎

If there are no eigenvalues of equal to then the proof just given indicates how to select eigenvectors in pairs of the form . The eigenspaces of for need special care, and it is simply not always possible to form the pairs desired. We need to assume more about .

Let us now consider the eigenspace of . If then and so is an invariant subspace of . Thus we can decompose into

where acts like on vectors in and like on vectors in . If

then we can get structured eigenvectors within as follows. Take any orthonormal basis of and any orthonormal basis of . Set

and notice

Then is a basis of the eigenspace of for .

The eigenspace for is dealt with similarly. The eigenvalues that are not real lead to structured pairs as discussed in the proof of Theorem 3.3.

In practice, computing the spaces is not a numerically stable operation. Fortunately, there is an invariant (coming from K-theory, ultimately) that is stable to compute and that tells us whether the needed equality of dimensions holds. We stick with the special case where the grading has eigenvalues of equal multiplicity, so that the obstruction for the one eigenspace vanishes exactly when the obstruction for the other vanishes, and we need only a single invariant.

This invariant uses the signature. For an invertible Hermitian matrix, the signature, denoted , is the number of positive eigenvalues minus the number of negative eigenvalues, each counted with multiplicity.
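Computing the signature numerically is straightforward (a sketch; invertibility is checked with a crude tolerance):

```python
import numpy as np

def signature(H, tol=1e-10):
    """Signature of an invertible Hermitian matrix: the number of
    positive eigenvalues minus the number of negative eigenvalues,
    counted with multiplicity."""
    w = np.linalg.eigvalsh(H)        # real eigenvalues, Hermitian input
    if np.min(np.abs(w)) < tol:
        raise ValueError("matrix is numerically singular")
    return int(np.sum(w > 0) - np.sum(w < 0))
```

Because the signature is an integer that is stable under small Hermitian perturbations keeping the matrix invertible, it is exactly the kind of invariant that survives round-off.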

Lemma 3.4.

If is a unitary with then is Hermitian and invertible,

(3.6)

is an integer, and

if and only if

(3.7)

for the subspaces defined as above.

Proof.

We know and so

Notice is a product of unitary matrices, so is unitary, and in particular invertible. Since the dimension is even, the signature must be even. We can decompose as for with diagonal blocks and a unitary with . Then

The signature is invariant under unitary conjugation, so

We can thus reduce to the case of with diagonal blocks. As both the signature and Equation 3.7 respect direct sums, we can reduce further to the two-by-two case.

If then

for some in or

or

The theorem is easy to check in all three cases. ∎

Lemma 3.5.

Suppose is a unitary with . If there is a Hermitian matrix with and then

Proof.

We note first

and that

varies continuously in . As this is always an integer it is constant, so

Remark 3.6.

If a unitary with has a square root that is also a unitary with , then has a structured logarithm and so also does . Thus is also an obstruction to finding structured square roots.

Now we see this index gives us the only obstruction to finding a structured logarithm.

Theorem 3.7.

If is a unitary with and

then there is a Hermitian matrix with and .

Proof.

In the special case where -1 is not in the spectrum, we just use the standard branch of the logarithm and define . Theorem 3.2 tells us that and . If -1 is in the spectrum, one cannot use any other branch of the logarithm, as these fail to have the correct symmetry. We need to carefully split apart the eigenspace at -1 and assign some parts one of the two boundary eigenvalues, and the remaining parts the other.

Once more we decompose as for real and block diagonal, and a unitary with . We are assuming , or equivalently . Next we find with and . This can be done in (graded) two-by-two blocks, which we can assume only come in the form

for various in . Since the index invariant is zero, by Lemma 3.4, we can combine the one-by-one blocks into and . Since

for

and

we can define

Taking blocked direct sums of these and we find a block-diagonal real matrix of the form

with . The desired is . ∎

Lemma 3.8.

If the Floquet Hamiltonian has chiral symmetry , for with eigenspaces for and for of the same dimension, then

Proof.

For in we have

where and , as well as

Since

we find

and so

Thus is a continuous path of unitaries with this symmetry. Since

is constant, we see

Theorem 3.9.

If the Floquet Hamiltonian has chiral symmetry for with eigenspaces for and for of the same dimension, then there is a Hermitian matrix with

and

Moreover, there is an orthonormal basis and unit scalars with , for , and

for .

Proof.

Lemma 3.8 tells us that Theorem 3.7 applies, so there is a Hermitian matrix so that and .

The method to find a structured orthonormal basis of eigenvectors for , and hence for , was described in the proof of Theorem 3.3 and the discussion after the proof.

IV Symmetry preserving matrix square roots

IV.1 Matrix preserving functional calculus

A matrix with no eigenvalues on the closed negative real axis has a principal square root, defined by applying the principal branch of the scalar square root function via the functional calculus. In particular, for a unitary without -1 in the spectrum, this is a unitary with spectrum within the open right half of the unit circle whose square is the original. For a unitary with -1 in the spectrum, we can look for one of many unitary square roots with spectrum in the closed right half of the unit circle. We start by looking at an algorithm that works for unitaries without -1 in the spectrum and that tends to preserve symmetries during the calculation.

First, a theorem that explains why it makes sense to search for a matrix square root algorithm that preserves various symmetries.

Theorem 4.1.

Assume have no eigenvalues on and that is the principal square root of . Then the following are all true.

  1. If is unitary then is unitary.

  2. If is complex symmetric then is complex symmetric.

  3. If is real then is real.

  4. If is self-dual then is self-dual.

  5. If then .

Proof.

Since it is easy, using power series, to see how symmetries in a matrix lead to symmetries in the exponential of that matrix, these are all easily proven via the formula

and Theorem 3.2. ∎

There are many algorithms to compute a matrix square root Higham . One algorithm, noted to behave well for many symmetries HighamEtAlSquareRootGroupPreserving , relies on the iteration

(4.1)
(4.2)

Given the initial conditions and , this leads to good convergence for inputs with spectrum avoiding the closed negative real axis, as is explained in (HighamEtAlSquareRootGroupPreserving, , Section 6). The following result shows why, in theory at least, this algorithm returns an approximate square root that is unitary when the input is unitary, and that has an additional symmetry when the input has that additional symmetry.
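For orientation, here is a minimal sketch of the classical Denman-Beavers square-root iteration, which is closely related to the iterations studied in HighamEtAlSquareRootGroupPreserving (classical form only; this plain version does not by itself enforce unitarity of the iterates):

```python
import numpy as np

def db_sqrt(A, iters=30):
    """Classical Denman-Beavers iteration for the principal square
    root: Y_{k+1} = (Y_k + Z_k^{-1})/2, Z_{k+1} = (Z_k + Y_k^{-1})/2,
    with Y_0 = A and Z_0 = I.  Then Y_k -> A^{1/2} and Z_k -> A^{-1/2}
    when A has no eigenvalues on the closed negative real axis."""
    Y = A.astype(complex)
    Z = np.eye(A.shape[0], dtype=complex)
    for _ in range(iters):
        # simultaneous update: the tuple RHS uses the old Y and Z
        Y, Z = 0.5 * (Y + np.linalg.inv(Z)), 0.5 * (Z + np.linalg.inv(Y))
    return Y
```

The symmetry-preserving modifications discussed in this paper amount to altering such an averaging step so that unitarity and the additional symmetries survive each iterate.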

Theorem 4.2.

Suppose and are matrices such that is invertible, and let

  1. If and are unitary then and are unitary.

  2. If and are complex symmetric then and are complex symmetric.

  3. If and are real then and are real.

  4. If and are self-dual then and are self-dual.

  5. If and then and .

Proof.

(1) This was proven in HighamEtAlSquareRootGroupPreserving . Note that the scalar identity

(4.3)

shows that the function sends the unit circle to the unit circle, so for any unitary matrix the matrix

is also unitary.

(2) Assuming and we have

where the last step uses the standard fact about the holomorphic functional calculus that so long as and make sense. The proof for is similar.

(3) This is obvious.

(4) This is similar to the proof of (2).

(5) Assuming and we have

and the proof for is similar. ∎

We now consider Algorithm 1, which is essentially that given in (HighamEtAlSquareRootGroupPreserving, , Theorem 6.1). Theorem 4.2 shows that this iterative algorithm respects the property of being unitary while at the same time preserving various symmetries of importance in physics.

In practice, it is not even possible to start Algorithm 1 exactly, as it requires an exact equality as a precondition, which is not likely to hold in finite-precision arithmetic. We need to deal with approximate unitaries and confront approximate symmetries.
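One practical remedy is to replace a nearly unitary matrix by the nearest unitary, the unitary polar factor obtained from the SVD (a standard construction; a sketch):

```python
import numpy as np

def nearest_unitary(A):
    """Unitary factor of the polar decomposition A = WP: the closest
    unitary to A in Frobenius norm, computed from the SVD by setting
    all singular values to one."""
    W, _, Vh = np.linalg.svd(A)
    return W @ Vh
```

Applying this projection to the input (and, if desired, to the iterates) lets the structured algorithms start from a matrix that is unitary to machine precision.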

Input: unitary matrix with .

Output: a unitary with with the same symmetry as .

  
  
  repeat