Quantum many-body physics is entering a new era, with the rise of high precision algorithms capable of obtaining controlled solutions in the strongly interacting regime. A large family of approaches concentrates on computing finite temperature correlation functions. Indeed, the imaginary time formalism in thermal equilibrium is well suited to describe both the thermodynamics and many of the equilibrium properties of a system. Abrikosov:QFT It is widely used, for example, by quantum Monte Carlo algorithms, which are formulated in imaginary time.
For many applications, generic methods of representing one and two-particle imaginary time Green’s functions may be insufficient to obtain the required precision given computational cost and memory constraints. Examples include (i) the storage of one-body Green’s functions with a large number of orbitals, as in quantum chemistry applications (see Ref. GullStrand_2020, and the references therein), or on a lattice with complex momentum dependence; (ii) the high precision solution of the Dyson equation for such Green’s functions; (iii) computations in which highly accurate representations of Green’s functions are required, as for the bare propagator in some high-order perturbative expansions qqmc2020 ; and (iv) the storage of two-body Green’s functions, which depend on three time arguments shinaoka18 ; shinaoka20 .
The simplest approach is to represent a Green's function $G(\tau)$ on a uniform grid of points in imaginary time, or by a truncated Fourier series in Matsubara frequency. While this method offers some practical advantages, including the ability to transform between the imaginary time and Matsubara frequency domains by means of the fast Fourier transform, it is a poor choice from the point of view of efficiency, particularly when the inverse temperature $\beta$ is large. First, many grid points are required in imaginary time to resolve sharp features caused by high energy scales. Second, since the Green's functions are discontinuous at the endpoints of the imaginary time interval, their Fourier coefficients decay as $O(1/n)$, so that the representation converges with low-order accuracy.
Representing $G(\tau)$ by an orthogonal polynomial (Chebyshev or Legendre) expansion yields a significant improvement. Boehnke2011 Indeed, since $G(\tau)$ is smooth on the imaginary time interval, such a representation converges with spectral accuracy Boehnke2011 ; kananenka16 ; gull18 ; GullStrand_2020 ; see also Ref. trefethen19, for a thorough overview of the theory of orthogonal polynomial approximation. However, resolving the Green's function still requires an expansion whose degree grows with the inverse temperature. chikano18
A third idea is the “power grid” method, which uses a grid exponentially clustered towards the endpoints of the imaginary time interval. In this approach, an adaptive sequence of panels is constructed, and a polynomial interpolant is used on each panel, leading to a representation requiring relatively few degrees of freedom. ku00 ; ku02 ; kananenka16 However, the power grid method has been implemented using uniform grid interpolation on each panel, which can lead to numerical instability for high-order interpolants. A more stable method, using spectral grids on each panel, is incorporated as an intermediate step in our framework, but ultimately further compression of the representation can be achieved.
A newer approach is to construct highly compact representations by taking advantage of the specific structure of imaginary time Green's functions, which satisfy the spectral Lehmann representation

$$G(\tau) = -\int_{-\infty}^{\infty} K(\tau,\omega)\, \rho(\omega)\, d\omega \qquad (1)$$

for $\tau \in [0,\beta]$. Here $\rho(\omega)$ is the spectral density, $\omega$ is a real frequency variable, and the kernel $K(\tau,\omega)$ is given in the fermionic case by

$$K(\tau,\omega) = \frac{e^{-\omega\tau}}{1 + e^{-\omega\beta}}. \qquad (2)$$
In our discussion, we assume that the support of $\rho$ is contained in $[-\omega_{\max}, \omega_{\max}]$, for $\omega_{\max}$ a high energy cutoff; this always holds to high accuracy for sufficiently large $\omega_{\max}$. For convenience, we also define a dimensionless high energy cutoff $\Lambda = \beta\omega_{\max}$.
The key observation is that the fermionic kernel can be approximated to high accuracy by a low rank decomposition. shinaoka17 The most well-known manifestation of this fact is the severe ill-conditioning of analytic continuation from the imaginary to the real time axis. However, one can take advantage of this low rank structure to obtain a compact representation of $G$. In Refs. shinaoka17, ; chikano18, , orthogonal bases for imaginary time Green's functions containing only a small number of basis functions were constructed from the singular value decomposition (SVD) of a discretization of $K$. This method, called the intermediate representation (IR), has been used successfully in a variety of applications, including those involving two-particle quantities shinaoka18 ; shinaoka19 ; otsuki20 ; shinaoka20 ; wallerberger20 ; wang20 ; shinaoka21 ; see also Ref. shinaoka21_2, for a useful review and further references. A related approach is the minimax isometry method, which uses similar ideas to construct optimal quadrature rules for Matsubara summation in GW applications. kaltak20
In this paper, we present a method which is related to the IR, but uses a different low rank decomposition of $K$, called the interpolative decomposition (ID). cheng05 ; liberty07 It leads to a discrete Lehmann representation (DLR) of any imaginary time Green's function as a linear combination of exponentials, with a set of frequencies which depends only on the cutoff $\Lambda$ and a specified error tolerance $\epsilon$. Like the IR basis, the DLR basis is universal in the sense that given any $\Lambda$ and $\epsilon$, it is sufficient to represent any imaginary time Green's function obeying the energy cutoff to within accuracy $\epsilon$. The number of basis functions is observed to scale as $O(\log(\Lambda)\log(1/\epsilon))$, and is nearly the same as the number of IR basis functions with the same choice of $\Lambda$ and error tolerance $\epsilon$.
Our construction begins with a discretization of $K(\tau,\omega)$ on a composite Chebyshev grid, designed to resolve the range of energy scales present in Green's functions up to a given cutoff $\Lambda$. Then, instead of applying the SVD to the resulting matrix as in the IR method, we use the ID to select a set of representative frequencies $\omega_k$ such that the functions $K(\tau,\omega_k)$ form a basis of exponentials. The ID also yields a set of interpolation nodes, such that the DLR of a given Green's function can be recovered from samples at those nodes. The DLR can be explicitly transformed to the Matsubara frequency domain, where it takes the form of a linear combination of poles of the form $1/(i\nu_n - \omega_k)$. As in the imaginary time domain, the DLR can also be recovered by interpolation at nodes on the Matsubara frequency axis.
Compared with the IR approach, the DLR basis exchanges orthogonality for a simple, explicit form of the basis functions. However, we show that orthogonality is not required for numerically stable recovery of the representation. On the other hand, using an explicit basis of exponentials has many advantages. In particular, it avoids the cost and complexity of working with the IR basis functions, which are themselves represented on a fine adaptive grid, and evaluated using corresponding interpolation procedures. Many standard computational tasks – such as transforming between the imaginary time and Matsubara frequency domains, and performing convolutions – are reduced to simple explicit formulas.
The algorithms which we use in the context of the DLR also carry over to the IR method, and offer two main improvements over previously established algorithms.
First, the numerical tools we describe can be used to construct an efficient sampling grid for the IR basis in a more systematic manner than the sparse sampling method, which is typically used. Sparse sampling provides a method of obtaining compact grids in imaginary time and Matsubara frequency, from which one can recover the IR coefficients. li20
The sparse sampling grid is analogous to the interpolation grid used for the DLR. However, whereas the sparse sampling method selects a grid based on a heuristic, we use a purely linear algebraic method with robust accuracy guarantees, which is also applicable to the IR.
Second, existing methods to build the IR basis functions are computationally intensive, requiring extended precision arithmetic and hours of computation time for large values of $\Lambda$. Furthermore, the basis functions themselves are represented using a somewhat complicated adaptive data structure. Of course, basis functions for a given choice of $\Lambda$ need only be computed once and stored, and to facilitate the process, an open source software package has been released which contains tabulated basis functions for several fixed values of $\Lambda$, as well as routines to work with them. chikano19 However, in some cases, the situation is cumbersome, for example if one wishes to converge a calculation with respect to $\Lambda$. By contrast, we present a simple discretization of $K$, which allows us to construct either the DLR or IR basis functions from a single call to the pivoted QR and SVD algorithms, respectively, with matrices of modest size. This yields the basis functions and associated imaginary time interpolation nodes in less than a second on a laptop, even for large $\Lambda$ and accuracy $\epsilon$ near the double precision machine epsilon. The resulting DLR basis functions are characterized by a list of frequencies $\omega_k$, and the IR basis functions are represented using a simple data structure.
In addition to describing efficient algorithms to implement the DLR, we present mathematical theorems which provide error bounds and control inequalities. We illustrate the DLR approach on several simple examples, as well as on a high precision, low temperature solution of the Sachdev-Ye-Kitaev (SYK) model. SachdevYe93 ; gu20
This paper is structured as follows. In Section II, we present a short overview of the DLR with an example, leaving aside technical details. In Section III, we introduce the mathematical tools required in the rest of the paper, namely composite Chebyshev interpolation and the interpolative decomposition. In Section IV, we develop the DLR, describe our algorithm, and show some benchmarks. In Section V, we derive the IR, describe its relationship with the DLR, and present efficient algorithms to construct the IR basis functions and associated grid. We show how to solve the Dyson equation efficiently using the DLR in Section VI, and demonstrate the method by solving the SYK equation in Section VII. Section VIII contains a concluding discussion.
We develop our method using fermionic Green’s functions; we show in Appendix A that in fact no modification is required for bosonic Green’s functions. To simplify the notation, we also restrict our discussion to scalar-valued Green’s functions, as the extension to the matrix-valued case is straightforward.
We assume the spectral density $\rho(\omega)$, which may in general be a distribution, is integrable and supported in $[-\omega_{\max}, \omega_{\max}]$. It is convenient to further nondimensionalize (1) by performing the change of variables $\tau \to \tau/\beta$ and $\omega \to \beta\omega$. In these variables, we have $K(\tau,\omega) = e^{-\omega\tau}/(1 + e^{-\omega})$ for $\tau \in [0,1]$, and the support of $\rho$ is contained in $[-\Lambda, \Lambda]$, with $\Lambda = \beta\omega_{\max}$.
$\Lambda$ is a user-determined parameter. An estimate of $\omega_{\max}$, and therefore of $\Lambda$, can often be obtained on physical grounds, but in general $\Lambda$ is used as an accuracy parameter and is increased until convergence is reached. Then, assuming $\Lambda$ is taken sufficiently large, (1) is equivalent to the truncated Lehmann representation

$$G(\tau) = -\int_{-\Lambda}^{\Lambda} K(\tau,\omega)\, \rho(\omega)\, d\omega \qquad (3)$$

for $K$ given by (2) in the nondimensionalized variables.
As for the IR, we exploit the low numerical rank of an appropriate discretization of $K$ to obtain a compact representation of $G$. We simply use the ID, rather than the SVD, after discretizing $K$ on a carefully constructed grid. We will show that $G$ can be approximated to any fixed accuracy by a discrete sum with $r$ terms,

$$G(\tau) \approx \sum_{k=1}^{r} K(\tau,\omega_k)\, \widehat{g}_k. \qquad (4)$$
Here $\{\omega_k\}_{k=1}^{r}$ is a collection of selected frequencies, and the spectral density has been replaced by a discrete set of coefficients $\widehat{g}_k$. A minus sign has been absorbed into the coefficients to simplify expressions. The basis functions of this representation are simply exponentials,

$$K(\tau,\omega_k) = \frac{e^{-\omega_k \tau}}{1 + e^{-\omega_k}}. \qquad (5)$$
We emphasize that given a user-specified error tolerance $\epsilon$ and a choice of $\Lambda$, the selected frequencies $\omega_k$ are universal; that is, independent of $G$. Furthermore, $r$, which we refer to as the DLR rank, is close to the $\epsilon$-rank of $K$, which is the number of IR basis functions for the same choice of $\Lambda$ and $\epsilon$, so the DLR also requires at most $O(\log(\Lambda)\log(1/\epsilon))$ degrees of freedom. The high energy cutoff plays an important role in this representation, as it controls the regularity of $G$, allowing a representation by a finite combination of exponentials. We will prove the existence of a representation (4) with error tightly controlled by $\epsilon$, and describe a method to construct such a representation by interpolation of $G$ at selected nodes in imaginary time or Matsubara frequency.
A first example is presented in Figure 1. We consider a particle-hole symmetric Green's function defined by the semicircular spectral density $\rho(\omega) = \frac{2}{\pi}\sqrt{1-\omega^2}\,\theta(1-|\omega|)$, with $\theta$ the Heaviside function, as shown in Figures 1a and 1b. Figure 1c shows the error of the DLR (4, 5) as a function of $\Lambda$ for fixed $\epsilon$. Here, we vary $\Lambda$ near the known sufficient value $\Lambda = \beta\omega_{\max}$ ($\omega_{\max} = 1$ and $\rho$ is supported in $[-1,1]$), and plot the error versus the DLR rank $r$, instead of $\Lambda$, to emphasize the number of basis functions. The error decays super-exponentially at first, and reaches $\epsilon$ once $\Lambda$ is sufficiently large. The value of $r$ at which convergence is reached depends on $\epsilon$, so that in practice, to obtain the smallest possible basis for a given accuracy, one should first choose $\epsilon$ and then increase $\Lambda$ until convergence.
The DLR can be formally interpreted as a spectral representation with an effective spectral density which is a sum of $\delta$ functions,

$$\rho_{\mathrm{DLR}}(\omega) = -\sum_{k=1}^{r} \widehat{g}_k\, \delta(\omega - \omega_k).$$
Such a representation is made possible by the ill-conditioning of the integral operator defining the Lehmann representation; up to a fixed precision $\epsilon$, the spectral density corresponding to a given imaginary time Green's function is highly non-unique. Thus, we simply pick one such spectral density with a particularly simple form, rather than attempting to reconstruct the original spectral density. Figure 1d shows a graphical representation of $\rho_{\mathrm{DLR}}$, and hence of the coefficients $\widehat{g}_k$ and the selected frequencies $\omega_k$.
III. Mathematical preliminaries
This section will review our two main numerical tools: composite Chebyshev interpolation, which will be used to obtain an accurate initial discretization of the kernel $K$, and the interpolative decomposition, which will be used for low rank compression.
III.1 Composite Chebyshev interpolation
Polynomial interpolation at Chebyshev nodes is a well-conditioned method for the approximation of a smooth function $f$ on an interval. trefethen19 If $f$ can be analytically continued to a neighborhood of the interval in the complex plane, the error of the interpolant in the supremum norm decreases geometrically with its degree, and if $f$ can be analytically continued to the entire complex plane, the convergence is super-geometric; see Ref. trefethen19, (Thm. 8.2). There are fast and stable algorithms to evaluate Chebyshev interpolants, such as the method of barycentric Lagrange interpolation. berrut04 ; higham04
For functions with sharp features or variation at multiple length scales, using a single polynomial interpolant on the full interval is inefficient. A better alternative is to construct a piecewise polynomial interpolant by the method of composite Chebyshev interpolation at fixed order. To be precise, let $[a_{j-1}, a_j]$, $j = 1, \ldots, M$, with $a_0 = a$ and $a_M = b$, be a collection of subintervals partitioning $[a,b]$. Let $x_{jk}$, $k = 1, \ldots, p$, be the $p$ Chebyshev nodes on $[a_{j-1}, a_j]$. Then $\{x_{jk}\}$ is called a composite Chebyshev grid. Let $\ell_{jk}$ be the Lagrange polynomial corresponding to the $k$th grid point on the $j$th panel; this is the polynomial of degree $p-1$ which satisfies

$$\ell_{jk}(x_{jl}) = \delta_{kl}.$$

Let $\chi_j$ be the characteristic function on the interval $[a_{j-1}, a_j]$. Then the composite Chebyshev interpolant of a function $f$ on $[a,b]$ corresponding to the above partition is given by

$$\widetilde{f}(x) = \sum_{j=1}^{M} \chi_j(x) \sum_{k=1}^{p} f(x_{jk})\, \ell_{jk}(x). \qquad (7)$$
Evidently, we have $\widetilde{f}(x_{jk}) = f(x_{jk})$ for each $j$ and $k$. The partition of $[a,b]$ should be chosen to resolve local features of $f$, and the degree parameter $p$ should be chosen sufficiently large so that the rapidly converging Chebyshev interpolants of $f$ on each subinterval are accurate.
To simplify expressions, we define the truncated Lagrange polynomial on the interval $[a_{j-1}, a_j]$ by $\widetilde{\ell}_{jk} = \chi_j \ell_{jk}$. It will also sometimes be convenient to recast the double index $(j,k)$ for the composite grid points as a single index $i = (j-1)p + k$, with $x_i = x_{jk}$, $\widetilde{\ell}_i = \widetilde{\ell}_{jk}$, and $i = 1, \ldots, Mp$. In this notation, (7) becomes

$$\widetilde{f}(x) = \sum_{i=1}^{Mp} f(x_i)\, \widetilde{\ell}_i(x).$$
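As an illustration, the composite interpolant described above can be sketched in a few lines of Python. The panel layout, node count, and test function below are illustrative choices, not values taken from the text; the evaluation uses barycentric Lagrange interpolation with the standard weights for Chebyshev points of the first kind.

```python
import numpy as np

def cheb_nodes(a, b, p):
    """p Chebyshev nodes (first kind) mapped to the panel [a, b]."""
    k = np.arange(p)
    x = np.cos((2 * k + 1) * np.pi / (2 * p))  # nodes on [-1, 1]
    return 0.5 * (a + b) + 0.5 * (b - a) * x

def composite_cheb_interp(f, panels, p):
    """Return a callable composite Chebyshev interpolant of f.

    panels: sorted breakpoints [a_0, a_1, ..., a_M] defining M panels.
    """
    grids = [cheb_nodes(panels[j], panels[j + 1], p) for j in range(len(panels) - 1)]
    vals = [f(g) for g in grids]

    def interp(x):
        # Locate the panel containing x
        j = int(np.clip(np.searchsorted(panels, x) - 1, 0, len(grids) - 1))
        xs, fs = grids[j], vals[j]
        # Barycentric weights for Chebyshev points of the first kind
        w = (-1.0) ** np.arange(p) * np.sin((2 * np.arange(p) + 1) * np.pi / (2 * p))
        d = x - xs
        if np.any(d == 0):          # x coincides with a node
            return fs[np.argmin(np.abs(d))]
        c = w / d
        return np.dot(c, fs) / np.sum(c)

    return interp

# Resolve e^{-100 tau} on [0, 1] with panels dyadically refined toward 0
panels = [0.0] + [2.0 ** (-j) for j in range(10, -1, -1)]
g = composite_cheb_interp(lambda t: np.exp(-100 * t), panels, p=16)
```

Because the panels shrink geometrically toward the origin, the sharp initial decay of the exponential is resolved with only a moderate fixed order per panel.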
III.2 Interpolative decomposition
We say an $m \times n$ matrix $A$ is numerically low rank, or more specifically, has low $\epsilon$-rank $r$, if $A$ has only $r$ singular values larger than $\epsilon$. The best rank-$r$ approximation of $A$ in the spectral norm is given by its SVD truncated to the first $r$ singular values, and its error in that norm is the next singular value $\sigma_{r+1}$; see Ref. ballani16, (Sec. 2, Thm. 2). Thus, the truncated SVD (TSVD) yields an approximation with error at most $\epsilon$ in the spectral norm for a matrix with $\epsilon$-rank $r$.
The interpolative decomposition is an alternative to the TSVD for compressing numerically low rank matrices. It has the advantage that the column space is represented by selected columns of $A$, rather than by an orthogonalization of the columns of $A$, as in the TSVD. The price is a mild and controlled loss of optimality compared with the TSVD. The ID and related algorithms are described in Refs. cheng05, ; liberty07, ; gu96, ; in particular, we make use of the form of the ID and the theoretical results summarized in Ref. liberty07, .
Given $A \in \mathbb{R}^{m \times n}$, the rank-$r$ ID is given by

$$A \approx A_{(r)} P,$$

with $A_{(r)} \in \mathbb{R}^{m \times r}$ a matrix containing $r$ selected columns of $A$, and $P \in \mathbb{R}^{r \times n}$, the so-called projection matrix, containing the coefficients required to approximately recover all of the columns of $A$ from the selected columns. The error of the decomposition is bounded by

$$\lVert A - A_{(r)} P \rVert \leq \sqrt{4r(n-r)+1}\; \sigma_{r+1}, \qquad (9)$$

so the ID gives a rank-$r$ approximation of $A$ which is at most a factor $\sqrt{4r(n-r)+1}$ less accurate than the TSVD. The numerical stability of the ID as a representation of $A$ can also be guaranteed; in particular, we have

$$\lVert P \rVert \leq \sqrt{4r(n-r)+1}. \qquad (10)$$
The references given above contain detailed statements of the relevant results which we have quoted here, along with the accompanying analysis.
Numerical algorithms are available which construct such a decomposition with bounds typically within a small factor of those stated above. The standard algorithm, described in Ref. cheng05, , proceeds in two steps. First, the pivoted QR process is applied to $A$, yielding a collection of $r$ columns of $A$ – corresponding to the pivot indices – which are, in a certain sense, as close as possible to being mutually orthogonal. These columns comprise the matrix $A_{(r)}$ in the ID. Next, a linear system is solved to determine the coefficients of the remaining columns of $A$ in the basis determined by $A_{(r)}$. These coefficients are stored in the matrix $P$. The cost of this algorithm is $O(mnr)$. If the rank is not known a priori, it is straightforward to apply this algorithm in a rank-revealing manner, so that given an input tolerance $\epsilon$ it yields an estimated $\epsilon$-rank $r$ and a rank-$r$ ID. Of course, the returned $\epsilon$-rank may be larger than the true $\epsilon$-rank, consistent with the suboptimality of the estimate (9) and the behavior of the singular values of $A$.
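For concreteness, the two-step procedure just described can be sketched with a minimal dense implementation in Python, built on SciPy's pivoted QR. The function name, rank-estimation heuristic, and test matrix are illustrative, not taken from the text.

```python
import numpy as np
from scipy.linalg import qr, solve_triangular

def interp_decomp(A, eps):
    """Column interpolative decomposition A ~= A[:, cols] @ P via pivoted QR.

    Returns the selected column indices and the projection matrix P.
    """
    Q, R, piv = qr(A, pivoting=True, mode='economic')
    # Step 1: estimate the eps-rank from the decay of the diagonal of R,
    # whose magnitudes are non-increasing under column pivoting
    diag = np.abs(np.diag(R))
    r = int(np.sum(diag > eps * diag[0]))
    cols = piv[:r]
    # Step 2: coefficients of the remaining columns in the selected basis.
    # From A[:, piv] = Q R we get  R11 @ T = R12,  so  T = R11^{-1} R12.
    T = solve_triangular(R[:r, :r], R[:r, r:], lower=False)
    P = np.zeros((r, A.shape[1]))
    P[:, piv[:r]] = np.eye(r)
    P[:, piv[r:]] = T
    return cols, P

# Example: a numerically low rank matrix of decaying exponentials
x = np.linspace(0, 1, 200)
A = np.exp(-np.outer(x, np.linspace(0, 50, 300)))
cols, P = interp_decomp(A, 1e-10)
err = np.linalg.norm(A - A[:, cols] @ P) / np.linalg.norm(A)
```

Note that the column space is represented by actual columns of $A$, so `cols` can be interpreted directly in terms of the underlying grid, which is exactly the property exploited in the DLR construction below.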
We remark that for several of the algorithms presented in this article – in particular for all the algorithms involving the DLR – we only ever need to perform the pivoted QR step of the ID to identify selected columns of a matrix, and in particular do not need to construct the full ID. Nevertheless, the presentation in terms of the ID is both conceptually and theoretically useful, and helps to unify our discussions of the DLR and the IR, so we adopt that language throughout. Our descriptions of algorithms in the text will make this point clear.
IV. Discrete Lehmann representation
The DLR basis functions are built by a two-step procedure. First, we discretize $K(\tau,\omega)$ on composite Chebyshev fine grids in $\tau$ and $\omega$, obtaining a matrix $A$ with entries $A_{ij} = K(\tau_i, \omega_j)$. Then, we obtain a small subset $\{\omega_k\}_{k=1}^{r}$ of the fine grid points in $\omega$ from the ID of this matrix, such that

$$K(\tau,\omega) \approx \sum_{k=1}^{r} K(\tau,\omega_k)\, \pi_k(\omega) \qquad (11)$$

holds to high accuracy uniformly in $\tau$ and $\omega$, for some coefficient functions $\pi_k$. The functions $K(\tau,\omega_k)$ are referred to as the DLR basis functions. Inserting (11) into the Lehmann representation (1) will establish the existence of the DLR. The discretization of $K$ will be discussed in Section IV.1, and the construction of the DLR basis in Section IV.2. In Sections IV.3 and IV.4, we will describe a stable method of constructing the DLR of a Green's function from samples of $G$ at only $r$ selected imaginary time and Matsubara frequency nodes, respectively. In Section IV.5 we will give a practical summary of the various procedures, and we will demonstrate the DLR with a few simple examples in Section IV.6. Throughout the discussion, except when describing specific physical examples, we will work in the nondimensionalized variables described at the beginning of Section II, with $\tau \in [0,1]$, $\omega \in [-\Lambda,\Lambda]$, and $K(\tau,\omega) = e^{-\omega\tau}/(1+e^{-\omega})$.
IV.1 Discretization of $K$
We discretize $K(\tau,\omega)$ by finding grids sufficient to resolve it in $\tau$ on $[0,1]$ for all fixed $\omega \in [-\Lambda,\Lambda]$, and in $\omega$ on $[-\Lambda,\Lambda]$ for all fixed $\tau \in [0,1]$. A closely related problem was considered in Ref. gimbutas20, , in which it is shown (Lemma 4.4) that all exponentials in the family $e^{-\omega\tau}$, $\omega \in [0,\Lambda]$, can be represented on $[0,1]$, to error uniformly less than $\epsilon$, in a small basis of exponentials chosen from the same family. As in their proof, we will make use of dyadically refined composite Chebyshev grids. A minor modification of their proof is sufficient to give a rigorous justification of our method, though we do not discuss the details here.
We begin with the first case, fixed $\omega \geq 0$, which gives $K(\tau,\omega) = C e^{-\omega\tau}$ for a constant $C$; a family of decaying exponentials. Consider the composite Chebyshev grid on $[0,1]$ dyadically refined towards the origin; that is, with intervals given by $[2^{-j}, 2^{-j+1}]$ for $j = 1, \ldots, n$, and $[0, 2^{-n}]$. We take $n \approx \log_2 \Lambda$ to resolve the smallest length scale $1/\Lambda$ in the family of exponentials, which appears for $\omega = \Lambda$. With this choice, the degree parameter $p$ can be chosen sufficiently large so that the resulting composite Chebyshev interpolant is uniformly accurate for any $\omega \in [0,\Lambda]$. Double precision machine accuracy can be achieved with a moderate choice of $p$, since the Chebyshev interpolants of the exponentials converge rapidly with $p$. The accuracy of the interpolants can be checked directly, and refined to convergence.
For $\omega < 0$, we observe that $K(\tau,-\omega) = K(1-\tau,\omega)$, revealing a symmetry in $\tau$ about $\tau = 1/2$. We therefore split the last interval $[1/2, 1]$ in our partition into a set of subintervals dyadically refined towards $\tau = 1$, in the same manner as above. The resulting composite Chebyshev grid is sufficient to resolve $K(\tau,\omega)$ for all $\omega \in [-\Lambda,\Lambda]$, and contains $O(p \log \Lambda)$ points. An example of such a grid is shown in Figure 2a.
We refer to the nodes of this composite Chebyshev grid as the set of fine grid points in $\tau$, and denote them by $\tau_i$, using the single-index notation for a composite Chebyshev grid. Here, $i = 1, \ldots, M$ with $M = p\, n_\tau$, where $n_\tau$ is the total number of subintervals in the partition of $[0,1]$. Thus we can ensure that for each fixed $\omega$, the composite Chebyshev interpolant on the fine grid in $\tau$ is uniformly accurate to $\epsilon$; using the notation defined in Section III.1, we have

$$K(\tau,\omega) = \sum_{i=1}^{M} K(\tau_i,\omega)\, \widetilde{\ell}_i(\tau) + O(\epsilon). \qquad (12)$$
We next consider $K(\tau,\omega)$ for fixed $\tau$, as a function of $\omega$. This is again a family of functions which are sharply peaked near the origin, and we discretize $[-\Lambda,\Lambda]$ by a composite Chebyshev grid with intervals dyadically refined towards the origin from the positive and negative directions until the smallest panels are of unit size, which again requires $O(\log \Lambda)$ intervals. A similar choice of $p$ is again sufficient to obtain accuracy $\epsilon$ for any $\tau \in [0,1]$. An example of this grid is shown in Figure 2b.
The resulting fine grid points in $\omega$ are denoted by $\omega_j$, $j = 1, \ldots, N$, and give a composite Chebyshev interpolant of $K(\tau,\omega)$ on $[-\Lambda,\Lambda]$ for each $\tau$ which is uniformly accurate to $\epsilon$; that is,

$$K(\tau,\omega) = \sum_{j=1}^{N} K(\tau,\omega_j)\, \widetilde{\ell}_j(\omega) + O(\epsilon), \qquad (13)$$

with $N = O(p \log \Lambda)$. We note an abuse of notation: $\widetilde{\ell}_i(\tau)$ refers to the truncated Lagrange polynomials for the fine grid in $\tau$, whereas $\widetilde{\ell}_j(\omega)$ refers to those for the fine grid in $\omega$. Combining (12) and (13), and possibly increasing $p$, we obtain

$$K(\tau,\omega) = \sum_{i=1}^{M} \sum_{j=1}^{N} K(\tau_i,\omega_j)\, \widetilde{\ell}_i(\tau)\, \widetilde{\ell}_j(\omega) + O(\epsilon). \qquad (14)$$
We summarize as follows. The kernel $K(\tau,\omega)$ may be represented by composite Chebyshev interpolants of $M$ and $N$ terms in $\tau$ and $\omega$, respectively, with subintervals chosen by dyadic subdivision. These representations can be constructed at a negligible cost, and directly checked for accuracy. We have $M = O(p \log \Lambda)$ and $N = O(p \log \Lambda)$; in practice, we find a moderate choice of $p$ to be sufficient to ensure double precision machine accuracy.
For simplicity of exposition, we will assume in the remainder of the article that the interpolation errors in (12), (13), and (14) are identically zero. Indeed, given these estimates, $K$ is indistinguishable from its interpolants to the machine precision, and we can just as well take the interpolants as our definition of $K$.
IV.2 The DLR basis
Define $A \in \mathbb{R}^{M \times N}$ with entries given by $A_{ij} = K(\tau_i,\omega_j)$. Figure 3a shows the singular values of $A$ for a few choices of $\Lambda$. Evidently, the singular values decay at least exponentially, so that for each fixed $\epsilon$, the $\epsilon$-rank of $A$ is small. Figure 3b shows that the rate of exponential decay is proportional to $1/\log \Lambda$. It follows that the $\epsilon$-rank is $O(\log(\Lambda)\log(1/\epsilon))$. A derivation and analysis of this bound will be given in a forthcoming publication. chen_inprep
Since the column space of $A$ characterizes the subspace of imaginary time Green's functions defined by (3), the low numerical rank of $A$ shows that this subspace is finite-dimensional to a good approximation. An equivalent observation is made in Ref. shinaoka17, , where it justifies using the left singular vectors of a discretization of $K$ as a compressed representation of imaginary time Green's functions. This is the IR basis, which we discuss in detail in Section V.
Here, we use the ID to build a basis for the column space of $A$. Let $\epsilon$ be a user-provided error tolerance. We can construct a rank-$r$ ID of $A$,

$$A = A_{(r)} P + E, \qquad (15)$$

for $A_{(r)} \in \mathbb{R}^{M \times r}$ containing $r$ selected columns of $A$, $P \in \mathbb{R}^{r \times N}$, and an error matrix $E$ with

$$\lVert E \rVert \lesssim \epsilon.$$
It follows from (9) and the rapid decay of the singular values of $A$ that $r$ will be at worst only slightly larger than the true $\epsilon$-rank of $A$. The discrepancy is shown in Figure 3b, with the blue points showing the true $\epsilon$-rank against $\Lambda$ for several $\epsilon$, and the orange points showing $r$ as obtained by the ID with the same choices of $\epsilon$, which we refer to as the DLR rank. This is a useful figure to refer to, as it shows the number of DLR basis functions required to represent any imaginary time Green's function obeying a high energy cutoff $\Lambda$ to a given accuracy.
Writing (15) entrywise gives

$$K(\tau_i,\omega_j) \approx \sum_{k=1}^{r} K(\tau_i,\omega_k)\, P_{kj}$$

for $\{\omega_k\}_{k=1}^{r}$ a subset of $\{\omega_j\}_{j=1}^{N}$. This subset corresponds to the selected columns in the ID, and we refer to it as the collection of DLR frequencies. Summing both sides against $\widetilde{\ell}_i(\tau)$ and $\widetilde{\ell}_j(\omega)$ gives

$$K(\tau,\omega) \approx \sum_{k=1}^{r} K(\tau,\omega_k)\, \pi_k(\omega),$$

with $\pi_k(\omega) = \sum_{j=1}^{N} P_{kj}\, \widetilde{\ell}_j(\omega)$ and $\{\omega_k\}$ the DLR frequencies.
with and . Inserting this into (3), we obtain
Letting $\widehat{g}_k = -\int_{-\Lambda}^{\Lambda} \pi_k(\omega)\, \rho(\omega)\, d\omega$ gives our first main result. The bound on the error term is proven in Appendix B.
The constant in the error bound is mild and computable. The functions $K(\tau,\omega_k)$ are referred to as the DLR basis functions, and are characterized solely by the DLR frequencies $\omega_k$ selected in the ID. An example of a set of DLR frequencies, selected from the fine grid shown in Figure 2b, is shown in Figure 2d.
We note that in practice, it is not necessary to form the full ID in order to obtain the DLR basis, since we do not use the projection matrix $P$. Rather, we only need to identify the DLR frequencies $\omega_k$. The selection of the DLR frequencies takes place in the pivoted QR step of the ID algorithm. Thus to construct the DLR basis, we simply apply the rank-revealing pivoted QR algorithm to the columns of $A$ with a tolerance $\epsilon$.
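The frequency-selection step can be sketched as follows. For brevity, the fine grids below are simple log-clustered stand-ins for the composite Chebyshev grids of Section IV.1, and the function name and grid sizes are illustrative; the kernel is evaluated using the symmetry $K(\tau,-\omega) = K(1-\tau,\omega)$ to avoid overflow.

```python
import numpy as np
from scipy.linalg import qr

def kernel(tau, omega):
    """Nondimensionalized fermionic kernel K = e^(-tau*omega)/(1 + e^(-omega)),
    evaluated stably for both signs of omega via K(tau, -omega) = K(1-tau, omega)."""
    t = np.where(omega < 0, 1.0 - tau, tau)
    om = np.abs(omega)
    return np.exp(-t * om) / (1.0 + np.exp(-om))

def dlr_frequencies(lam, eps, nt=300, nw=400):
    """Select DLR frequencies by rank-revealing pivoted QR on the columns of
    a discretized kernel matrix."""
    t = 0.5 * np.logspace(-np.log10(lam) - 2, 0, nt)   # tau clustered toward 0
    tau = np.concatenate((t, 1.0 - t[::-1]))           # reflect about tau = 1/2
    w = np.logspace(-2, np.log10(lam), nw)             # omega clustered toward 0
    omega = np.concatenate((-w[::-1], w))
    A = kernel(tau[:, None], omega[None, :])
    _, R, piv = qr(A, pivoting=True, mode='economic')
    d = np.abs(np.diag(R))
    r = int(np.sum(d > eps * d[0]))                    # estimated eps-rank
    return np.sort(omega[piv[:r]])                     # pivots = DLR frequencies

freqs = dlr_frequencies(lam=100.0, eps=1e-10)
```

Only the pivot indices are needed, in line with the remark above: the projection matrix is never formed.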
IV.3 The imaginary time DLR grid
In general, the spectral density $\rho$ is not known a priori, so we cannot find the coefficients $\widehat{g}_k$ in (17) using the construction above. Rather, we will identify a set of imaginary time interpolation nodes $\tau_k$, $k = 1, \ldots, r$, so that the expansion coefficients can be recovered from the values $G(\tau_k)$ by solving an interpolation problem using the basis functions $K(\tau,\omega_k)$.
Consider the matrix $A$ introduced above, with entries $A_{ij} = K(\tau_i,\omega_j)$. Forming the ID of the rows of $A$ gives

$$A = \Pi A_{(r)} + E, \qquad (18)$$

with $A_{(r)} \in \mathbb{R}^{r \times N}$ consisting of $r$ selected rows of $A$, and $\Pi \in \mathbb{R}^{M \times r}$ the associated projection matrix. The selected rows of $A$ correspond to a subset $\{\tau_k\}_{k=1}^{r}$ of the fine grid points in imaginary time, which we refer to as the imaginary time DLR grid. We have

$$\lVert E \rVert \lesssim \epsilon.$$
Writing (18) entrywise and summing over the truncated Lagrange polynomials in $\tau$, we obtain

$$K(\tau,\omega_j) \approx \sum_{k=1}^{r} \sigma_k(\tau)\, K(\tau_k,\omega_j), \qquad (20)$$

with $\sigma_k(\tau) = \sum_{i=1}^{M} \Pi_{ik}\, \widetilde{\ell}_i(\tau)$. Equation (20) tells us that the DLR basis functions can be recovered from their values at the imaginary time DLR grid points. It will follow that a Green's function can similarly be recovered from its values on this grid. An example of an imaginary time DLR grid, selected from the fine grid shown in Figure 2a, is shown in Figure 2c.
The recovery may be carried out in practice by computing the values $G(\tau_k)$ for $k = 1, \ldots, r$, solving the interpolation problem

$$\sum_{l=1}^{r} K(\tau_k,\omega_l)\, \widehat{g}_l = G(\tau_k), \quad k = 1, \ldots, r, \qquad (21)$$

for the DLR coefficients $\widehat{g}_l$, and using

$$G_{\mathrm{DLR}}(\tau) = \sum_{k=1}^{r} K(\tau,\omega_k)\, \widehat{g}_k \qquad (22)$$

as an approximation of $G$. Here, $G_{\mathrm{DLR}}$ denotes the recovered DLR expansion.
Although it is tempting to compare (22) with (17) and assume $G_{\mathrm{DLR}} \approx G$ holds to high accuracy, this is not guaranteed a priori. Indeed, if the interpolation nodes were not selected carefully, this would not be the case. However, the following stability result, proven in Appendix C, leads to an accuracy guarantee.
Suppose $f$ and $g$ are given by

$$f(\tau) = \sum_{k=1}^{r} \widehat{f}_k\, K(\tau,\omega_k) \quad \text{and} \quad g(\tau) = \sum_{k=1}^{r} \widehat{g}_k\, K(\tau,\omega_k),$$

respectively, with the DLR frequencies $\omega_k$ chosen as above. Let the vectors $\mathbf{f}, \mathbf{g} \in \mathbb{R}^{r}$ be given by $f_k = f(\tau_k)$, $g_k = g(\tau_k)$, with the imaginary time DLR grid $\{\tau_k\}$ determined by the selected rows of the ID (18). Then

$$\max_{\tau \in [0,1]} |f(\tau) - g(\tau)| \lesssim \lVert \Pi \rVert \max_{k} |f_k - g_k|.$$
The ID guarantees that $\lVert \Pi \rVert$ is controlled; in particular, we have the estimate (10). Since $\lVert \Pi \rVert$ is modest in practice, this factor in the estimate is small. With Lemma 1 in hand, we consider the following practical question: if a Green's function is sampled at the DLR grid points with some error, how accurate is the approximation given by (22), with the coefficients obtained by solving the interpolation problem (21)?
Let $G$ be a Green's function given by a truncated Lehmann representation (3). Let $\mathbf{g} \in \mathbb{R}^{r}$ be a vector of samples of $G$ at the imaginary time DLR grid points $\tau_k$, up to an error $\delta$: $|g_k - G(\tau_k)| \leq \delta$. Suppose $\widehat{\mathbf{g}}$ solves the corresponding interpolation problem (21) up to a residual error $\eta$. Let $G_{\mathrm{DLR}}$ be given by (22). Then

$$\max_{\tau \in [0,1]} |G_{\mathrm{DLR}}(\tau) - G(\tau)| \lesssim C\epsilon + \lVert \Pi \rVert\, (\delta + \eta),$$

with $C$ the constant from Theorem 1.
It is expected, and our numerical experiments confirm, that the residual $\eta$ is typically negligible. Thus the accuracy of the approximation (22) is indeed determined by the user-input error tolerance $\epsilon$, and is limited only by the accuracy to which $G$ can be evaluated. We remark that this holds true despite the fact that the matrix with entries $K(\tau_k,\omega_l)$ is ill-conditioned, and therefore that the computed DLR coefficients are not expected to be close to those appearing in (17). Indeed, this ill-conditioning reflects a fundamental non-uniqueness in the spectral density $\rho$. However, it will not prevent a standard linear solver from identifying a solution with small residual, and therefore does not imply any difficulty in accurately representing $G$.
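The whole pipeline of Sections IV.2 and IV.3 can be exercised end to end in a short script. As before, the fine grids are log-clustered stand-ins for the composite Chebyshev grids, and the test Green's function is an illustrative sum of two poles; none of the specific values are taken from the text.

```python
import numpy as np
from scipy.linalg import qr

def kernel(tau, omega):
    """Nondimensionalized fermionic kernel, stable for both signs of omega."""
    t = np.where(omega < 0, 1.0 - tau, tau)
    return np.exp(-t * np.abs(omega)) / (1.0 + np.exp(-np.abs(omega)))

lam, eps = 100.0, 1e-10

# Fine grids in tau and omega (illustrative stand-ins)
s = 0.5 * np.logspace(-6, 0, 400)
tau_fine = np.concatenate((s, 1.0 - s[::-1]))
w = np.logspace(-2, np.log10(lam), 400)
om_fine = np.concatenate((-w[::-1], w))
A = kernel(tau_fine[:, None], om_fine[None, :])

# Step 1: DLR frequencies from pivoted QR on the columns of A
_, R, piv = qr(A, pivoting=True, mode='economic')
d = np.abs(np.diag(R))
r = int(np.sum(d > eps * d[0]))
om_dlr = om_fine[piv[:r]]

# Step 2: imaginary time DLR grid from pivoted QR on the rows of A[:, selected]
_, _, piv_t = qr(A[:, piv[:r]].T, pivoting=True, mode='economic')
tau_dlr = tau_fine[piv_t[:r]]

# A "true" Green's function: two poles inside the cutoff, off the fine grid
def G(tau):
    return -0.5 * (kernel(tau, 7.3) + kernel(tau, -23.1))

# Step 3: solve the r x r interpolation problem (21). The matrix is
# ill-conditioned, but the residual, and hence the error of the resulting
# expansion, remains small.
coef = np.linalg.solve(kernel(tau_dlr[:, None], om_dlr[None, :]), G(tau_dlr))

# Step 4: evaluate the expansion (22) off the grid and check the error
tau_test = np.linspace(0.01, 0.99, 37)
err = np.max(np.abs(kernel(tau_test[:, None], om_dlr[None, :]) @ coef - G(tau_test)))
```

Consistent with the discussion above, the recovered coefficients need not resemble any particular spectral density, yet the pointwise error of the expansion tracks the requested tolerance.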
IV.4 DLR in the Matsubara frequency domain
A DLR can be transformed to the Matsubara frequency domain analytically. Indeed, we have

$$K(i\nu_n, \omega) \equiv \int_0^1 K(\tau,\omega)\, e^{i\nu_n \tau}\, d\tau = -\frac{1}{i\nu_n - \omega}, \qquad (23)$$

with $\nu_n = (2n+1)\pi$ a fermionic Matsubara frequency point in the nondimensionalized units. A DLR expansion therefore transforms to the Matsubara frequency domain as

$$G(i\nu_n) = \sum_{k=1}^{r} K(i\nu_n, \omega_k)\, \widehat{g}_k.$$
We can construct a set of Matsubara frequency interpolation nodes using the ID. As in the previous section, we simply apply the ID to the rows of the matrix with entries $K(i\nu_n, \omega_k)$, for $n = -n_{\max}, \ldots, n_{\max}-1$ and $k = 1, \ldots, r$. Here $n_{\max}$ is a chosen Matsubara frequency cutoff. This process returns $r$ selected Matsubara frequency interpolation nodes $i\nu_{n_k}$. As before, it is not necessary to form the full ID, but only to use the pivoted QR algorithm to identify the selected nodes. The DLR coefficients can be recovered by solving the interpolation problem

$$\sum_{l=1}^{r} K(i\nu_{n_k}, \omega_l)\, \widehat{g}_l = G(i\nu_{n_k}), \quad k = 1, \ldots, r, \qquad (24)$$

which is analogous to (21). One must ensure that the Matsubara frequency nodes have been converged with respect to $n_{\max}$, and in practice we find $n_{\max}$ on the order of $\Lambda$ is usually a sufficient cutoff.
for , which is analogous to (21). One must ensure that the Matsubara frequency nodes have been converged with respect to , and in practice we find is usually a sufficient cutoff.
This procedure requires carrying out the pivoted QR algorithm on the rows of a $2n_{\max} \times r$ matrix, and typically $2n_{\max} \gg M$. It is therefore more expensive than the procedure used to select the imaginary time DLR grid points, which applies the pivoted QR algorithm to an $M \times N$ matrix, with $M, N = O(p \log \Lambda)$. However, it is still quite fast in practice for moderate values of $\Lambda$. If it were to become a bottleneck, one could design a more efficient scheme to select the Matsubara frequency interpolation nodes from a smaller subset of the full Matsubara frequency grid.
IV.5 Summary of DLR algorithms
We pause to summarize the practical procedures we have described to build and work with the DLR.
Construction of the DLR basis
To construct the DLR basis for a given choice of $\Lambda$ and $\epsilon$, we first discretize the kernel $K(\tau,\omega)$ on a composite Chebyshev grid to obtain the matrix $A$ with entries $A_{ij} = K(\tau_i,\omega_j)$. We then apply the pivoted QR algorithm, with an error tolerance $\epsilon$, to the columns of $A$. The pivots correspond to a set of DLR frequencies $\omega_1, \ldots, \omega_r$, where $r$, the so-called DLR rank, is the number of basis functions required to represent the full subspace characterized by the truncated Lehmann integral operator to an accuracy approximately $\epsilon$. The DLR basis functions are simply given by $K(\tau,\omega_k)$.
DLR from imaginary time values
To obtain the imaginary time interpolation nodes $\tau_1, \ldots, \tau_r$, we simply apply the pivoted QR algorithm to the rows of the $M \times r$ matrix with entries $K(\tau_i,\omega_k)$. The pivots correspond to the interpolation nodes. To obtain the DLR coefficients of a Green's function $G$, we compute the values $G(\tau_k)$ and solve the interpolation problem (21).
DLR from Matsubara frequency values
To obtain the Matsubara frequency interpolation nodes $i\nu_{n_1}, \ldots, i\nu_{n_r}$, we apply the pivoted QR algorithm in the same manner to the rows of the matrix with entries $K(i\nu_n, \omega_k)$, where $|n| \leq n_{\max}$ for some choice of $n_{\max}$. In practice, we find $n_{\max}$ on the order of $\Lambda$ to be sufficient in most cases, but $n_{\max}$ can be increased until the selected Matsubara frequency nodes no longer change. To obtain the DLR expansion coefficients of a Green's function in the Matsubara frequency domain, we solve the interpolation problem (24).
Transforming between imaginary time and Matsubara frequency domains
The DLR coefficients for the representation of a given Green’s function in the imaginary time and Matsubara frequency domains are the same; one simply takes the Fourier transform of the DLR in imaginary time explicitly using (23) to obtain the DLR in Matsubara frequency, and inverts the transform explicitly to go in the opposite direction. Thus, having obtained DLR coefficients for a Green’s function, the representation can be evaluated in either domain.
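To illustrate that a single coefficient vector serves both domains, the following sketch (same assumed dimensionless conventions and stand-in grids as the earlier fragments) fits the DLR of a hypothetical single-pole Green's function from imaginary time samples, then evaluates it in the Matsubara domain by Fourier transforming each basis function in closed form, and compares against the known transform of the pole.

```python
import numpy as np
from scipy.linalg import qr

def K(t, w):
    # stable K(tau, w) = exp(-t*w) / (1 + exp(-w)), tau in [0, 1] (assumed convention)
    return np.where(w >= 0, np.exp(-t * w), np.exp((1.0 - t) * w)) / (1.0 + np.exp(-np.abs(w)))

def Kmat(n, w):
    # closed-form Fourier transform of K at i*nu_n = i*(2n + 1)*pi (beta = 1)
    return 1.0 / (w - 1j * (2 * n + 1) * np.pi)

lam, eps = 100.0, 1e-10
M, N = 500, 700
tau = 0.5 * (1.0 - np.cos(np.pi * np.arange(M) / (M - 1)))
omega = lam * np.sinh(6.0 * np.linspace(-1.0, 1.0, N)) / np.sinh(6.0)
A = K(tau[:, None], omega[None, :])
_, R, piv = qr(A, mode='economic', pivoting=True)
r = int(np.sum(np.abs(np.diag(R)) > eps * np.abs(R[0, 0])))
w_dlr = omega[piv[:r]]
_, _, rowpiv = qr(K(tau[:, None], w_dlr[None, :]).T, mode='economic', pivoting=True)
t_dlr = tau[rowpiv[:r]]

# One set of DLR coefficients, obtained from imaginary time samples ...
w0 = 7.3
c = np.linalg.solve(K(t_dlr[:, None], w_dlr[None, :]), K(t_dlr, w0))

# ... evaluates the SAME expansion in the Matsubara domain, since the Fourier
# transform of each basis function K(tau, w_l) is known explicitly.
ns = np.arange(-200, 201)
G_mats = Kmat(ns[:, None], w_dlr[None, :]) @ c
err = np.max(np.abs(G_mats - Kmat(ns, w0)))
print(f"max cross-domain error: {err:.2e}")
```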
A remark on the selection of Λ and ε
In our framework, both Λ and ε are user-determined parameters which control the accuracy of a given representation, and each choice of Λ and ε yields a basis of r functions, all of which should then be used. This differs from many typical methods, like orthogonal polynomial approximation, in which one simply converges a given calculation with respect to the number of basis functions directly. The inclusion of such a user-determined accuracy parameter is a desirable feature of many modern algorithms used in scientific computing, as it enables automatic data compression with an accuracy guarantee.
In practice, to obtain a desired accuracy with the smallest possible number of basis functions, one should choose ε according to that desired accuracy, and no smaller. One should then converge with respect to Λ, which describes the frequency content of the problem, and is therefore more analogous to the truncation parameter in the Legendre polynomial method. This process is illustrated, for example, by Figure 4, which is discussed in the next subsection.
iv.6 Numerical examples
We can test the algorithms described above by evaluating a known Green’s function G(τ) on the imaginary time or Matsubara frequency DLR grids, recovering the corresponding DLR coefficients, and measuring the accuracy of the resulting DLR expansion by computing its error against G.
We first test the imaginary time sampling approach using the Green’s function corresponding to the semicircular spectral density ρ(ω) = (2/π)√(1 − ω²). We fix the inverse temperature, and measure the error of the computed DLR for several choices of ε. Results for one such choice were already presented in Figure 1c, in which we plot the error against the number of basis functions obtained for each of several values of Λ. We observe rapid convergence with Λ, to an error at the level of ε, in each case.
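A minimal version of this experiment, under the same dimensionless conventions as the earlier sketches (β = 1, illustrative Λ and ε, stand-in fine grids), fits the DLR of the semicircular-density Green's function from its values at the imaginary time nodes and measures the maximum error on a dense grid; the reference G(τ) is computed with a Gauss–Chebyshev rule of the second kind, which absorbs the square-root factor in ρ.

```python
import numpy as np
from scipy.linalg import qr

def K(t, w):
    # stable K(tau, w) = exp(-t*w) / (1 + exp(-w)), tau in [0, 1] (assumed convention)
    return np.where(w >= 0, np.exp(-t * w), np.exp((1.0 - t) * w)) / (1.0 + np.exp(-np.abs(w)))

lam, eps = 10.0, 1e-10            # spectral support is [-1, 1], so a small Lambda suffices
M, N = 400, 500
tau = 0.5 * (1.0 - np.cos(np.pi * np.arange(M) / (M - 1)))
omega = lam * np.sinh(6.0 * np.linspace(-1.0, 1.0, N)) / np.sinh(6.0)
A = K(tau[:, None], omega[None, :])
_, R, piv = qr(A, mode='economic', pivoting=True)
r = int(np.sum(np.abs(np.diag(R)) > eps * np.abs(R[0, 0])))
w_dlr = omega[piv[:r]]
_, _, rowpiv = qr(K(tau[:, None], w_dlr[None, :]).T, mode='economic', pivoting=True)
t_dlr = tau[rowpiv[:r]]

# Reference G(tau) for the semicircular density rho(w) = (2/pi)*sqrt(1 - w^2),
# via Gauss-Chebyshev (2nd kind) quadrature, which absorbs the sqrt weight.
k = np.arange(1, 301)
x = np.cos(k * np.pi / 301.0)
wq = (np.pi / 301.0) * np.sin(k * np.pi / 301.0) ** 2
def G(t):
    return (2.0 / np.pi) * (K(np.atleast_1d(t)[:, None], x[None, :]) @ wq)

# Fit the DLR from samples on the nodes, then measure the error on a dense grid.
c = np.linalg.solve(K(t_dlr[:, None], w_dlr[None, :]), G(t_dlr))
t_test = np.linspace(0.0, 1.0, 2001)
err = np.max(np.abs(K(t_test[:, None], w_dlr[None, :]) @ c - G(t_test)))
print(f"max DLR error for the semicircular example: {err:.2e}")
```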
In Figures 4 and 5, respectively, we present similar plots for two other choices of ε. In Figures 4c and 5c, we plot the error against Λ directly. These plots demonstrate the method as it is used in practice: Λ and ε, not the number of basis functions r, are chosen directly by the user in our framework. It can be seen from Figures 4b and 5b that choosing ε smaller than the actual desired accuracy simply yields a larger basis than is needed, as was discussed in Section IV.5.
We next repeat the experiment using a second test Green’s function. The Green’s function is shown in Figure 6a, and the error versus Λ in Figure 6b. The results are similar to those for the previous example. We note that the same experiments with other choices of the parameters, adjusted accordingly, give the expected results.
To test the Matsubara frequency sampling approach, we repeat the same experiments, except that we recover the DLR coefficients from samples of the Green’s function on the Matsubara frequency DLR grid. As before, we measure the error in the imaginary time domain. Results for the first example are shown in Figure 7, and can be compared with Figure 1c. We observe only a mild loss of accuracy compared with the results obtained using imaginary time sampling, and we still achieve an accuracy near ε when Λ is increased beyond the known frequency cutoff. Results for the second example are shown in Figures 6c and 6d. We have tested other choices of n_max for both examples, with similar results.
V Intermediate representation
In this section, we rederive the intermediate representation (IR) presented in Ref. shinaoka17 using the tools we have introduced to construct the DLR. The IR uses an orthonormal basis obtained from the SVD of an appropriate discretization of the kernel K(τ, ω). It represents the same space as the DLR, but has the advantage of orthogonality, at the cost of more complicated basis functions. Our presentation of the IR differs from Refs. shinaoka17, chikano18, chikano19, and shinaoka21_2 in two ways.
First, we show that discretizing K on a composite grid like that introduced in Section IV.1 leads to an efficient construction of the IR basis. By contrast, in Ref. chikano18, an automatic adaptive algorithm with extended precision arithmetic is used. The authors report in Ref. chikano19 that this algorithm takes on the order of hours to build the IR basis for large Λ. To address this problem, the library irbasis contains precomputed basis functions for several values of Λ, and codes to work with them. chikano19 While this is a sufficient solution in many cases, it may be restrictive in others, for example in converging the IR with respect to Λ, or in selecting ε to achieve a given accuracy with the smallest possible number of basis functions. Our approach, presented in Section V.1, requires neither automatic adaptivity nor extended precision arithmetic. The IR basis is obtained by discretizing K on a well-chosen grid, as before, and computing a single SVD of a matrix whose dimension grows only logarithmically with Λ. As an illustration, Figure 8 contains plots of a few IR and DLR basis functions for a large value of Λ. Building each basis takes less than a second, despite the high resolution required.
Second, we show in Section V.2 that the interpolative decomposition of a matrix containing the IR basis functions naturally yields a set of sampling nodes for the IR, analogous to the interpolation grids for the DLR, and a transformation from the values of a Green’s function at these nodes to its IR coefficients. In previous works, the sparse sampling method was used to provide such a sampling grid for the IR. li20 The sparse sampling nodes are chosen based on a heuristic motivated by the relationship between orthogonal polynomials and their associated interpolation grids. While this heuristic appears to lead to a numerically stable algorithm, the procedure we have used to construct the DLR imaginary time and Matsubara frequency grids is automatic and offers robust accuracy guarantees.
v.1 The IR basis
The first step in constructing the IR basis is again to finely discretize K(τ, ω). Here, to ensure that we obtain a basis which is orthogonal in the L² inner product, we use composite Legendre grids rather than composite Chebyshev grids. The discussion in Section IV.1 holds equally well for composite Legendre grids, with Gauss–Legendre nodes used in place of Chebyshev nodes.
In particular, let τ_i and ω_j be the nodes of the composite Legendre fine grids in τ and ω, respectively, and let A be the matrix with entries A_ij = K(τ_i, ω_j). Let W and Q be diagonal matrices containing the quadrature weights associated with the composite Legendre grid points in τ and ω, respectively. The quadrature weights are obtained from the ordinary Gauss–Legendre quadrature weights, rescaled to account for the panel lengths.
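A composite Gauss–Legendre rule of this kind is easy to assemble from the ordinary Gauss–Legendre nodes and weights. The sketch below rescales the weights by each panel's half-length; the panel order p = 16 and the dyadic grading toward the endpoints of [0, 1] are illustrative choices of ours, not those of the text, and the rule is checked on a function with sharp boundary layers.

```python
import numpy as np

def composite_leggauss(p, edges):
    # p-point Gauss-Legendre rule on each panel [edges[i], edges[i+1]],
    # with nodes and weights rescaled by the panel half-length.
    x, w = np.polynomial.legendre.leggauss(p)
    nodes, weights = [], []
    for a, b in zip(edges[:-1], edges[1:]):
        nodes.append(0.5 * (b - a) * (x + 1.0) + a)
        weights.append(0.5 * (b - a) * w)
    return np.concatenate(nodes), np.concatenate(weights)

# Panels graded dyadically toward both endpoints of [0, 1], where the kernel
# has sharp boundary layers of width ~ 1/Lambda.
J = 10
left = [0.0] + [2.0 ** -j for j in range(J, 0, -1)]        # 0, 2^-10, ..., 1/2
edges = np.array(left + [1.0 - e for e in reversed(left[:-1])])
t, wt = composite_leggauss(16, edges)

# Sanity check: integrate a function with boundary layers at both endpoints.
f = np.exp(-50.0 * t) + np.exp(-50.0 * (1.0 - t))
exact = 2.0 * (1.0 - np.exp(-50.0)) / 50.0
print(f"quadrature error: {abs(wt @ f - exact):.2e}")
```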
Consider the SVD of the reweighted matrix Ã = W^(1/2) A Q^(1/2). Truncating the SVD at rank r gives

Ã = U Σ Vᵀ + E,
where Σ, U, and V contain the first r singular values, left singular vectors, and right singular vectors, respectively, and E is an error matrix. As before, we choose r so that the norm of E, given by the singular value σ_{r+1}, is at most ε, implying r is the ε-rank of Ã.
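The SVD construction can then be sketched as follows, with the same illustrative conventions as the earlier fragments (β = 1, assumed kernel form, our own panel grading): rescaling the kernel matrix by the square roots of the τ and ω quadrature weights makes the discrete singular vectors correspond to functions orthonormal in the continuum L² inner products, which the last lines verify.

```python
import numpy as np

def composite_leggauss(p, edges):
    # composite Gauss-Legendre nodes and weights over the given panel edges
    x, w = np.polynomial.legendre.leggauss(p)
    a, b = edges[:-1], edges[1:]
    t = (0.5 * (b - a)[:, None] * (x + 1.0) + a[:, None]).ravel()
    wt = (0.5 * (b - a)[:, None] * w).ravel()
    return t, wt

def K(t, w):
    # stable K(tau, w) = exp(-t*w) / (1 + exp(-w)), tau in [0, 1] (assumed convention)
    return np.where(w >= 0, np.exp(-t * w), np.exp((1.0 - t) * w)) / (1.0 + np.exp(-np.abs(w)))

lam, eps = 100.0, 1e-10
# Composite Legendre grids: tau panels graded toward both endpoints of [0, 1],
# omega panels graded toward 0 in [-Lambda, Lambda].
half = [2.0 ** -j for j in range(10, 0, -1)]
tau_edges = np.array([0.0] + half + [1.0 - h for h in reversed(half[:-1])] + [1.0])
om_edges = lam * np.array([-1.0] + [-h for h in reversed(half)] + [0.0] + half + [1.0])
t, wt = composite_leggauss(16, tau_edges)
om, wo = composite_leggauss(16, om_edges)

# Reweight by sqrt of the quadrature weights so that the discrete SVD yields
# functions orthonormal in the continuum L2 inner products.
At = np.sqrt(wt)[:, None] * K(t[:, None], om[None, :]) * np.sqrt(wo)[None, :]
U, S, Vt = np.linalg.svd(At, full_matrices=False)
r = int(np.sum(S > eps * S[0]))

# Samples of the IR basis functions U_l(tau) on the fine grid:
U_ir = U[:, :r] / np.sqrt(wt)[:, None]

# Orthonormality of U_l in the quadrature-discretized L2 inner product:
gram = (wt[:, None] * U_ir).T @ U_ir
print(f"IR rank r = {r}, max orthonormality defect = {np.max(np.abs(gram - np.eye(r))):.2e}")
```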
Note that the entries of each left singular vector, rescaled by W^(−1/2), can be interpreted as samples of a function U_l(τ) on the fine grid in τ, and similarly, the entries of each right singular vector, rescaled by Q^(−1/2), as samples of a function V_l(ω) on the fine grid in ω. Summing against the corresponding truncated Lagrange polynomials, we find

K(τ, ω) ≈ Σ_{l=1}^{r} σ_l U_l(τ) V_l(ω),
with U_l(τ) and V_l(ω) the continuous interpolants so obtained. Inserting this into the truncated Lehmann representation (3), we obtain an expansion of G(τ) in the functions U_l(τ), with coefficients given by integrals of ρ(ω) against σ_l V_l(ω).
This establishes the validity of the representation