# Split representation of adaptively compressed polarizability operator

The polarizability operator plays a central role in density functional perturbation theory and other perturbative treatments of first-principles electronic structure theories. The cost of computing the polarizability operator generally scales as O(N_e^4), where N_e is the number of electrons in the system. The recently developed adaptively compressed polarizability operator (ACP) formulation [L. Lin, Z. Xu and L. Ying, Multiscale Model. Simul. 2017] reduces this complexity to O(N_e^3) in the context of phonon calculations with a large basis set for the first time, and demonstrates its effectiveness for model problems. In this paper, we improve the performance of the ACP formulation by splitting the polarizability into a near singular component that is statically compressed, and a smooth component that is adaptively compressed. The new split representation maintains the O(N_e^3) complexity, and accelerates nearly all components of the ACP formulation, including the Chebyshev interpolation of energy levels, the iterative solution of the Sternheimer equations, and the convergence of the Dyson equation. For simulations of real materials, we discuss how to incorporate nonlocal pseudopotentials and finite temperature effects. We demonstrate the effectiveness of our method using one-dimensional model problems in insulating and metallic regimes, as well as its accuracy for real molecules and solids.


## 1 Introduction

Density functional perturbation theory (DFPT) [4, 17, 3, 9] studies the response of a quantum system under small perturbations, where the quantum system is described at the level of first-principles electronic structure theories such as Kohn-Sham density functional theory (KSDFT) [19, 23]. One important application of DFPT is the calculation of vibrational properties such as phonons, which can be further used to calculate many physical properties such as infrared spectroscopy, elastic neutron scattering, specific heat, heat conduction, and electron-phonon interaction related behaviors such as superconductivity (see [3] for a review). DFPT describes vibrational properties through a polarizability operator, which characterizes the linear response of the electron density with respect to the perturbation of the external potential. More specifically, in vibration calculations, the polarizability operator needs to be applied to dN_A perturbation vectors, where d is the spatial dimension (usually d = 3), N_A is the number of atoms, and N_e is the number of electrons. In general the complexity for solving KSDFT is O(N_e^3), while the complexity for solving DFPT is O(N_e^4). It is possible to reduce the computational complexity of DFPT calculations by "linear scaling methods" [16, 35, 6]. Such methods can be successful in reducing the computational cost for systems of large sizes with substantial band gaps, but this can be challenging for medium-sized systems with relatively small band gaps.

The term "phonon calculation" usually describes the calculation of vibrational properties of condensed matter systems. In this paper, we slightly abuse this term to refer to calculations of vibrational properties of general systems, including condensed matter systems as well as isolated molecular clusters, since such calculations share the same mathematical structure. In order to apply the polarizability operator to dN_A vectors, we need to solve O(N_e N_A) coupled Sternheimer equations. On the other hand, when a constant number of degrees of freedom per electron is used, the size of the Hamiltonian matrix is only O(N_e). Hence asymptotically there is room to obtain a set of only O(N_e) "compressed perturbation vectors", which encodes essentially all the information of the Sternheimer equations. The recently developed adaptively compressed polarizability operator (ACP) formulation [27] follows this route, and successfully reduces the computational complexity of phonon calculations to O(N_e^3) for the first time. The ACP formulation does not rely on exponential decay properties of the density matrix as in linear scaling methods, and its accuracy depends weakly on the size of the band gap. Hence the method can be used for phonon calculations of both insulators and semiconductors with small gaps.

There are three key ingredients of the ACP formulation. 1) The Sternheimer equations are equations for shifted Hamiltonians, where each shift corresponds to an energy level of an occupied band. Hence for a general right hand side vector, there are N_e possible energies (shifts). We use a Chebyshev interpolation procedure to disentangle this energy dependence, so that only a constant number of shifts, independent of N_e, is needed. 2) We disentangle the right hand side vectors in the Sternheimer equations using the recently developed interpolative separable density fitting procedure, which compresses the right hand side vectors. 3) We construct the polarizability operator by adaptive compression, so that the operator remains low rank as well as accurate when applied to a certain set of vectors. This makes fast computation of the matrix inversion possible using methods such as the Sherman-Morrison-Woodbury formula. In particular, the ACP method does not employ the "nearsightedness" property of electrons for insulating systems with substantial band gaps as in linear scaling methods [22]. Hence the ACP method can be applied to insulators as well as semiconductors with small band gaps.

In this paper, we introduce a generalization of the ACP formulation for efficient phonon calculations of real materials, called the split representation of ACP. In the split representation, the nonlocal pseudopotential is taken into account, as well as temperature effects, which are especially important for metallic systems. The new split representation maintains the O(N_e^3) complexity, and improves all key steps in the ACP formulation, including the Chebyshev interpolation of energy levels, the iterative solution of the Sternheimer equations, and the convergence of the Dyson equation.

The rest of the paper is organized as follows. Section 2 introduces the basic formulation of KSDFT and DFPT, and reviews the formulation of ACP. Section 3 describes the split representation of the ACP formulation. Numerical results are presented in section 4, followed by conclusion and discussion in section 5.

## 2 Preliminaries

### 2.1 Kohn-Sham density functional theory

For simplicity we consider a system of finite size with periodic boundary conditions. This can be used to model isolated molecular systems as well as solid state systems with the Gamma point sampling strategy of the Brillouin zone [30]. However, we do not explicitly take advantage of the fact that the Kohn-Sham orbitals can be chosen to be real, so that the formulation is applicable to both real space and Fourier space implementations, as commonly done in electronic structure software packages. The spatial dimension d = 3 is assumed in the treatment of e.g. the Coulomb interaction unless otherwise specified. Since our numerical results involve real materials and systems of both insulating and metallic characters, we include relevant technical details such as the nonlocal pseudopotential and temperature dependence in the discussion. Consider a system consisting of N_A nuclei and N_e electrons at temperature T, with inverse temperature β = 1/(k_B T), where k_B is the Boltzmann constant. In the Born-Oppenheimer approximation, for each set of nuclear positions {R_I}, the electrons are relaxed to their ground state. The ground state total energy is denoted by E_tot({R_I}), and can be computed in Kohn-Sham density functional theory [19, 23, 31] according to the minimization of the following Kohn-Sham-Mermin energy functional

$$
E_{\mathrm{KS}}(\{\psi_i\};\{R_I\}) = \frac{1}{2}\sum_{i=1}^{\infty} f_i \int |\nabla \psi_i(r)|^2\,dr + \sum_{i=1}^{\infty} f_i \iint \psi_i^*(r)\, V_{\mathrm{ion}}(r,r';\{R_I\})\, \psi_i(r')\,dr\,dr' + \frac{1}{2}\iint v_c(r,r')\,\rho(r)\rho(r')\,dr\,dr' + E_{\mathrm{xc}}[\rho] + E_{II}(\{R_I\}) + \frac{1}{\beta}\sum_{i=1}^{\infty}\bigl[f_i\log f_i + (1-f_i)\log(1-f_i)\bigr]. \tag{1}
$$

Here the minimization is with respect to the Kohn-Sham orbitals {ψ_i} satisfying the orthonormality condition ∫ψ_i^*(r)ψ_j(r) dr = δ_ij, as well as the occupation numbers {f_i} satisfying 0 ≤ f_i ≤ 1. In Eq. (1), ρ(r) = Σ_i f_i |ψ_i(r)|^2 defines the electron density with normalization condition ∫ρ(r) dr = N_e. In the discussion below we will omit the range of indices unless otherwise specified. In Eq. (1), v_c(r,r') = 1/|r − r'| defines the kernel for the Coulomb interaction in 3D, and the corresponding term is called the Hartree energy. V_ion is a potential characterizing the electron-ion interaction, and is independent of the electronic states {ψ_i}. More specifically, in a pseudopotential approximation [30], if we view V_ion as an integral operator, then the kernel of V_ion can be expressed as the summation of contributions from each atom

$$
V_{\mathrm{ion}}(r,r';\{R_I\}) = \sum_I V_{\mathrm{loc},I}(r-R_I)\,\delta(r-r') + \sum_I V_{\mathrm{nl},I}(r-R_I,\,r'-R_I). \tag{2}
$$

Here V_loc,I is called the local pseudopotential, and V_nl,I the nonlocal pseudopotential. In the Kleinman-Bylander form [20], each nonlocal pseudopotential is a low rank and symmetric operator with kernel

$$
V_{\mathrm{nl},I}(r-R_I,\,r'-R_I) = \sum_{l=1}^{L_I} \gamma_{I,l}\, b_{I,l}(r-R_I)\, b_{I,l}^*(r'-R_I). \tag{3}
$$

Here each γ_{I,l} is a weight factor, and each b_{I,l} is a real valued function. The function b_{I,l} is also localized, in the sense that it is compactly supported around R_I. The locality originates from the physical meaning of nonlocal pseudopotentials, i.e. they characterize the orthogonality of the valence electron orbitals with respect to the core electron orbitals, and hence the support of b_{I,l} is restricted by the span of the core orbitals. E_xc is the exchange-correlation energy, and here we assume semi-local functionals such as the local density approximation (LDA) [10, 39] and generalized gradient approximation (GGA) functionals [5, 24, 38] are used. E_II is the ion-ion Coulomb interaction energy. For isolated clusters in 3D, E_II({R_I}) = (1/2) Σ_{I≠J} Z_I Z_J / |R_I − R_J|, while for periodic systems the contribution from all the image charges should be properly taken into account via e.g. the Ewald summation technique [14]. The last term of Eq. (1) is the entropy term related to the temperature, and spin degeneracy is neglected for simplicity of the notation.

The Euler-Lagrange equations associated with the Kohn-Sham energy functional give rise to the Kohn-Sham equations

$$
H[\rho]\,\psi_i \equiv \Bigl(-\tfrac{1}{2}\Delta + V[\rho]\Bigr)\psi_i = \varepsilon_i \psi_i, \tag{4}
$$

$$
\int \psi_i^*(r)\psi_j(r)\,dr = \delta_{ij}, \qquad \rho(r) = \sum_{i=1}^{\infty} f_i |\psi_i(r)|^2, \qquad f_i = \frac{1}{1+e^{\beta(\varepsilon_i-\mu)}}. \tag{5}
$$

Here the eigenvalues {ε_i} are ordered non-decreasingly. Note that the occupation number f_i is given analytically by the Fermi-Dirac distribution with respect to the eigenvalue ε_i, and μ is a Lagrange multiplier (the chemical potential) enforcing the normalization condition of the electron density. The difference of the eigenvalues ε_g = ε_{N_e+1} − ε_{N_e} is called the energy gap. If ε_g is positive, then the system is called an insulating system. Otherwise it is a metallic system. For insulating systems, {ψ_i}_{i=1}^{N_e} are called the occupied orbitals, while {ψ_i}_{i=N_e+1}^{∞} are called the unoccupied orbitals. ψ_{N_e} is sometimes called the highest occupied molecular orbital (HOMO), and ψ_{N_e+1} the lowest unoccupied molecular orbital (LUMO).
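As a concrete illustration of Eq. (5), the following Python sketch (the helper names and the four-level toy spectrum are our own) evaluates the Fermi-Dirac occupations and finds the Lagrange multiplier μ by bisection, using the fact that the total occupation is monotonically increasing in μ:

```python
import numpy as np

def occupations(eps, mu, beta):
    """Fermi-Dirac occupations f_i = 1 / (1 + exp(beta * (eps_i - mu))),
    written via tanh so that large beta does not overflow."""
    return 0.5 * (1.0 - np.tanh(0.5 * beta * (eps - mu)))

def find_mu(eps, n_e, beta, lo=-10.0, hi=10.0):
    """Bisect for mu such that sum_i f_i = N_e."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if occupations(eps, mid, beta).sum() < n_e:
            lo = mid  # too few electrons: raise the chemical potential
        else:
            hi = mid
    return 0.5 * (lo + hi)

eps = np.array([-1.0, -0.5, 0.3, 0.8])  # toy eigenvalues, N_e = 2
mu = find_mu(eps, n_e=2, beta=40.0)
f = occupations(eps, mu, beta=40.0)     # nearly (1, 1, 0, 0): insulating regime
```

At low temperature (large β) the occupations sharpen to a step at μ, recovering the simplification f_i ∈ {0, 1} for insulating systems used in section 2.3.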

The effective potential V[ρ] depends on the electron density as

$$
V[\rho](r,r') = V_{\mathrm{ion}}(r,r') + \Bigl[\int v_c(r,r'')\,\rho(r'')\,dr'' + V_{\mathrm{xc}}[\rho](r)\Bigr]\delta(r-r'). \tag{6}
$$

Here V_xc[ρ] is the exchange-correlation potential, which is the functional derivative of the exchange-correlation energy with respect to the electron density. The Kohn-Sham Hamiltonian H[ρ] depends nonlinearly on the electron density ρ, and the electron density should be solved self-consistently. Note that both the Hartree potential and the exchange-correlation potential are local potentials. This plays an important role in simplifying the treatment of density functional perturbation theory.

When the Kohn-Sham energy functional achieves its minimum, the self-consistency of the electron density is simultaneously achieved. Then the total energy can be equivalently computed as [30]

$$
E_{\mathrm{tot}}(\{R_I\}) = \sum_{i=1}^{\infty} f_i \varepsilon_i - \frac{1}{2}\iint v_c(r,r')\,\rho(r)\rho(r')\,dr\,dr' + E_{\mathrm{xc}}[\rho] - \int V_{\mathrm{xc}}[\rho](r)\,\rho(r)\,dr + E_{II}(\{R_I\}) + \frac{1}{\beta}\sum_{i=1}^{\infty}\bigl[f_i\log f_i + (1-f_i)\log(1-f_i)\bigr]. \tag{7}
$$

Here Σ_i f_i ε_i is referred to as the band energy.

At this point, the atomic force can be given by the negative of the first order derivative of E_tot with respect to the atomic configuration, using the Hellmann-Feynman theorem, as

$$
F_I = -\frac{\partial E_{\mathrm{tot}}(\{R_I\})}{\partial R_I} = -\iint \frac{\partial V_{\mathrm{ion}}}{\partial R_I}(r,r';\{R_I\})\, P(r',r)\,dr\,dr' - \frac{\partial E_{II}(\{R_I\})}{\partial R_I}. \tag{8}
$$

Here P(r,r') is the density matrix defined as

$$
P(r,r') = \sum_{i=1}^{\infty} f_i\, \psi_i(r)\psi_i^*(r'). \tag{9}
$$

In particular, the diagonal of the density matrix is the electron density, i.e. P(r,r) = ρ(r). The derivative of the pseudopotential ∂V_ion/∂R_I does not depend on the electron density, and can be obtained semi-analytically. Hence the computation of the atomic force only involves a number of quadratures. The atomic force allows the performance of structural relaxation of the atomic configuration, by minimizing the total energy with respect to the atomic positions {R_I}. When the atoms are at their equilibrium positions, all atomic forces should be zero.

### 2.2 Density functional perturbation theory

In density functional perturbation theory (DFPT), we assume that the self-consistent ground state electron density has been computed, denoted by ρ_*. In this paper, we focus on phonon calculations using DFPT. Assume the system deviates from its equilibrium position by some small magnitude; then the change of the total energy is dominated by the Hessian matrix with respect to the atomic positions. The dynamical matrix D consists of d × d blocks of the form

$$
D_{I,J} = \frac{1}{\sqrt{M_I M_J}}\, \frac{\partial^2 E_{\mathrm{tot}}(\{R_I\})}{\partial R_I \partial R_J},
$$

where M_I is the mass of the I-th nucleus. The dimension of the dynamical matrix is dN_A × dN_A. The equilibrium atomic configuration is at a local minimum of the total energy, and all the eigenvalues of D are real and non-negative. Hence the eigen-decomposition of D is

$$
D u_k = \omega_k^2\, u_k,
$$

where u_k is called the k-th phonon mode, and ω_k ≥ 0 is called the k-th phonon frequency. The phonon spectrum is defined as the distribution of the eigenvalues {ω_k}, i.e.

$$
\varrho_D(\omega) = \frac{1}{dN_A}\sum_k \delta(\omega - \omega_k). \tag{10}
$$

Here δ is the Dirac-δ distribution. ϱ_D is also referred to as the density of states of D [30, 26].
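As a toy end-to-end illustration (not the algorithm of this paper), the snippet below builds the Hessian of a periodic harmonic chain with one degree of freedom per atom, mass-weights it into D, and reads off the phonon frequencies; for this model the analytic spectrum ω_k = 2√(κ/M)|sin(πk/N_A)| is available as a check:

```python
import numpy as np

def dynamical_matrix(hessian, masses):
    """D_{I,J} = Hess_{I,J} / sqrt(M_I M_J)."""
    inv_sqrt_m = 1.0 / np.sqrt(masses)
    return hessian * np.outer(inv_sqrt_m, inv_sqrt_m)

n_atoms, kappa, mass = 8, 1.0, 1.0
hess = np.zeros((n_atoms, n_atoms))
for i in range(n_atoms):                     # periodic harmonic chain
    hess[i, i] = 2.0 * kappa
    hess[i, (i + 1) % n_atoms] -= kappa
    hess[i, (i - 1) % n_atoms] -= kappa

D = dynamical_matrix(hess, mass * np.ones(n_atoms))
omega2 = np.linalg.eigvalsh(D)               # real and non-negative at equilibrium
omega = np.sqrt(np.clip(omega2, 0.0, None))  # phonon frequencies
```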

In order to compute the Hessian matrix, we obtain from Eq. (8) that

$$
\frac{\partial^2 E_{\mathrm{tot}}(\{R_I\})}{\partial R_I \partial R_J} = \iint \frac{\partial V_{\mathrm{ion}}}{\partial R_I}(r,r';\{R_I\})\, \frac{\partial P(r',r)}{\partial R_J}\,dr\,dr' + \iint \frac{\partial^2 V_{\mathrm{ion}}}{\partial R_I \partial R_J}(r,r';\{R_I\})\, P(r',r)\,dr\,dr' + \frac{\partial^2 E_{II}(\{R_I\})}{\partial R_I \partial R_J}. \tag{11}
$$

Similar to the force calculation, the second term of Eq. (11) can be readily computed with numerical integration, and the third term involves only the ion-ion interaction, which is independent of the electronic states. Hence the first term is the most challenging one, due to the response of the electron density with respect to the perturbation of atomic positions. Applying the chain rule, we have

$$
\iint \frac{\partial V_{\mathrm{ion}}}{\partial R_I}(r,r';\{R_I\})\, \frac{\partial P(r',r)}{\partial R_J}\,dr\,dr' = \int \frac{\partial V_{\mathrm{ion}}}{\partial R_I}(r,r';\{R_I\})\, \frac{\delta P(r',r)}{\delta V_{\mathrm{ion}}(r'',r''')}\, \frac{\partial V_{\mathrm{ion}}}{\partial R_J}(r'',r''';\{R_I\})\,dr\,dr'\,dr''\,dr'''. \tag{12}
$$

Here the Fréchet derivative X := δP/δV_ion is referred to as the reducible polarizability operator [36], which characterizes the self-consistent linear response of the density matrix at (r',r) with respect to an external nonlocal perturbation of V_ion at (r'',r'''). However, the computation of X must be obtained through a simpler quantity X_0, which is called the irreducible polarizability operator (a.k.a. the independent particle polarizability operator) [36].

The discussion using the notation (r,r';r'',r''') etc. will quickly become complicated. For simplicity, in the discussion below we will not distinguish the continuous and discretized representations of various quantities. In the case when a discretized representation is needed, we assume that the computational domain is uniformly discretized into N_g grid points. After discretization all quantities can be called tensors. For example, we will call ρ an order-1 tensor (or a vector), P an order-2 tensor (or a matrix), and X an order-4 tensor. Tensor slicing and tensor contraction can be denoted using either the continuous or the discrete notation. For example, X(r,r;r',r') denotes a sliced tensor which is an order-2 tensor. The tensor contraction between two order-1 tensors a and b should be interpreted as a^* b = ∫ a^*(r) b(r) dr. The tensor contraction between an order-2 tensor A and an order-1 tensor b (i.e. a matrix-vector product) should be interpreted as (Ab)(r) = ∫ A(r,r') b(r') dr'. Similarly the contraction between an order-2 tensor A and an order-2 tensor B (i.e. a matrix-matrix product) should be interpreted as (AB)(r,r') = ∫ A(r,r'') B(r'',r') dr'', and the contraction between an order-4 tensor X and an order-2 tensor g should be interpreted as

$$
(Xg)(r,r') = \iint X(r,r';r'',r''')\, g(r'',r''')\,dr''\,dr'''.
$$

We also define two operations involving order-1 and order-2 tensors. The Hadamard product of two order-1 tensors should be interpreted as (a ⊙ b)(r) = a(r) b(r). For an order-1 tensor a, we define an associated order-2 tensor diag[a] as diag[a](r,r') = a(r) δ(r − r'). It is easy to verify that the Hadamard product can be written as a ⊙ b = diag[a] b.
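On a grid of N_g points these conventions reduce to ordinary linear algebra; a minimal numpy check (illustrative only) of the identity a ⊙ b = diag[a] b:

```python
import numpy as np

rng = np.random.default_rng(0)
n_g = 6                                  # number of grid points
a, b = rng.standard_normal(n_g), rng.standard_normal(n_g)

diag_a = np.diag(a)                      # diag[a](r, r') = a(r) delta(r - r')
hadamard = a * b                         # (a ⊙ b)(r) = a(r) b(r)
contraction = diag_a @ b                 # order-2 / order-1 contraction diag[a] b
```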

Using the linear algebra type of notation above, the key difficulty of phonon calculations is the computation of the tensor contraction Xg, where g traverses the dN_A order-2 tensors of the form ∂V_ion/∂R_{I,α}, where R_{I,α} is the α-th component of the atomic position R_I (α = 1,…,d). According to Eq. (2), g can be split into a local component and a nonlocal component as

$$
g(r,r') = g_{\mathrm{loc}}(r)\,\delta(r-r') + g_{\mathrm{nl}}(r,r'), \tag{13}
$$

or equivalently g = diag[g_loc] + g_nl. For each g, only one atom I contributes to the order-1 tensor g_loc and the order-2 tensor g_nl. From the definition of the nonlocal pseudopotential in Eq. (3), we have

$$
g_{\mathrm{nl}}(r,r') = \frac{\partial}{\partial R_{I,\alpha}} V_{\mathrm{nl},I}(r-R_I,\,r'-R_I) = -\sum_{l=1}^{L_I} \gamma_{I,l}\Bigl[(\partial_\alpha b_{I,l})(r-R_I)\, b_{I,l}^*(r'-R_I) + b_{I,l}(r-R_I)\,(\partial_\alpha b_{I,l}^*)(r'-R_I)\Bigr]. \tag{14}
$$

We note that g_nl is a symmetric operator of rank 2L_I, where the factor 2 comes from the Leibniz formula. In the rest of the paper, we shall use g to hide the explicit dependence on the atom index I or the atomic positions {R_I}.

From the definition of V in Eq. (6), we apply the chain rule and obtain

$$
u = Xg = \frac{\delta P}{\delta V}\frac{\delta V}{\delta V_{\mathrm{ion}}}\, g = X_0 g + X_0 f_{\mathrm{hxc}} X g = X_0 g + X_0 f_{\mathrm{hxc}} u. \tag{15}
$$

In Eq. (15),

$$
f_{\mathrm{hxc}}(r,r';r'',r''') = \Bigl(v_c(r,r'') + \frac{\delta V_{\mathrm{xc}}[\rho_*](r)}{\delta \rho(r'')}\Bigr)\delta(r-r')\,\delta(r''-r''') := f_{\mathrm{hxc}}(r,r'')\,\delta(r-r')\,\delta(r''-r''') \tag{16}
$$

is an order-4 tensor, which is the kernel characterizing the dependence of the effective potential V with respect to the density matrix in the linear regime. Here f_hxc(r,r'') is called the Hartree-exchange-correlation kernel, which is a local operator in the LDA and GGA formulations of the exchange-correlation functionals. Therefore in Eq. (16), the factor δ(r − r') comes from the fact that the Hartree and exchange-correlation potentials are local, while the factor δ(r'' − r''') comes from the fact that the nonlinear term only depends on the electron density, i.e. the diagonal elements of the density matrix. Eq. (15) is called the Dyson equation, and the solution u should be solved self-consistently.

In order to solve the Dyson equation (15), we need to apply X_0 to order-2 tensors of the form g or f_hxc u. By means of the eigenfunctions {ψ_i}, the eigenvalues {ε_i}, and the occupation numbers {f_i}, X_0 can be expressed using the Adler-Wiser formula [1, 46]

$$
(X_0 g)(r,r') = \sum_{i,a=1}^{\infty} \frac{f_a - f_i}{\varepsilon_a - \varepsilon_i}\, \psi_a(r)\Bigl(\iint \psi_a^*(r'')\, g(r'',r''')\, \psi_i(r''')\,dr''\,dr'''\Bigr)\psi_i^*(r'), \tag{17}
$$

where the term (f_a − f_i)/(ε_a − ε_i) when ε_a = ε_i should be interpreted as the limit ε_a → ε_i. Using the linear algebra notation, Eq. (17) can be written as

$$
X_0 g = \sum_{i,a=1}^{\infty} \frac{f_a - f_i}{\varepsilon_a - \varepsilon_i}\, \psi_a\,(\psi_a^* g\, \psi_i)\,\psi_i^*. \tag{18}
$$

Since g is a Hermitian order-2 tensor, X_0 g is also a Hermitian order-2 tensor. If we truncate the infinite sum in Eq. (18) to a finite sum over a number of states, Eq. (18) and Eq. (15) can be solved together to obtain u, and therefore the Hessian matrix (11) can be evaluated.
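For a small dense model, the truncated sum in Eq. (18) can be evaluated directly in the eigenbasis. The sketch below (toy Hamiltonian and helper names of our own choosing) applies X_0 to a Hermitian g, taking the ε_a = ε_i term as the limit f'(ε_i) = −β f_i(1 − f_i):

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta, mu = 8, 5.0, 0.0
A = rng.standard_normal((n, n))
Hm = 0.5 * (A + A.T)                     # toy Hermitian Hamiltonian
eps, psi = np.linalg.eigh(Hm)            # eigenpairs (columns of psi)
f = 1.0 / (1.0 + np.exp(beta * (eps - mu)))

def apply_X0(g):
    """Adler-Wiser sum, Eq. (18), truncated to the n computed states."""
    G = psi.T @ g @ psi                  # matrix elements psi_a^* g psi_i
    de = eps[:, None] - eps[None, :]     # eps_a - eps_i
    df = f[:, None] - f[None, :]         # f_a - f_i
    with np.errstate(divide="ignore", invalid="ignore"):
        K = np.where(np.abs(de) > 1e-12, df / de, 0.0)
    K[np.diag_indices(n)] = -beta * f * (1.0 - f)  # limit eps_a -> eps_i
    return psi @ (K * G) @ psi.T

g = rng.standard_normal((n, n))
g = 0.5 * (g + g.T)                      # Hermitian perturbation
u = apply_X0(g)
```

Since every weight (f_a − f_i)/(ε_a − ε_i) is non-positive, the output is Hermitian and the quadratic form tr(g X_0 g) is non-positive, which can be verified directly on this model.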

In order to observe the computational complexity of DFPT for phonon calculations, let us first neglect the nonlocal pseudopotential, which simplifies the discussion. Since each g then only involves the local contribution, Eq. (11) only requires the diagonal part of Xg. Therefore one is only interested in computing

$$
u(r) = u(r,r) = \int X(r,r;r',r')\, g(r',r')\,dr' := \int \chi(r,r')\, g_{\mathrm{loc}}(r')\,dr'. \tag{19}
$$

Here we have introduced the notation χ(r,r') := X(r,r;r',r'), and used that the nonlocal component of g vanishes. Similarly we can define χ_0(r,r') := X_0(r,r;r',r'). We also consider insulating systems with a finite band gap. This allows us to neglect the temperature dependence of the occupation numbers, so that f_i = 1 if i ≤ N_e and f_i = 0 if i > N_e. As a result, Eq. (18) can be simplified as

$$
\chi_0\, g_{\mathrm{loc}} = \sum_{i=1}^{N_e} \operatorname{diag}[\psi_i^*] \sum_{a=N_e+1}^{\infty} \frac{1}{\varepsilon_i - \varepsilon_a}\, \psi_a\,(\psi_a^*\, \operatorname{diag}[g_{\mathrm{loc}}]\, \psi_i) + \mathrm{h.c.} \tag{20}
$$

Here h.c. means the Hermitian conjugate of the first term.

In order to overcome the difficulty of explicitly computing all the unoccupied orbitals, we first define the projection operator onto the unoccupied space Q = I − Σ_{i=1}^{N_e} ψ_i ψ_i^*. Then we can compute χ_0 g_loc as

$$
\chi_0\, g_{\mathrm{loc}} = \sum_{i=1}^{N_e} \operatorname{diag}[\psi_i^*]\, Q(\varepsilon_i - H)^{-1} Q\,(\operatorname{diag}[g_{\mathrm{loc}}]\,\psi_i) + \mathrm{h.c.} \tag{21}
$$

In principle, since Q commutes with H, the right hand side of Eq. (21) only requires one Q operator to be present. However, we choose this form to emphasize that the operator Q(ε_i − H)^{-1}Q is Hermitian. Letting ζ_i = Q(ε_i − H)^{-1}Q(diag[g_loc]ψ_i), the matrix inverse in Eq. (21) can be avoided by solving the Sternheimer equations

$$
Q(\varepsilon_i - H)Q\,\zeta_i = Q\,(\operatorname{diag}[g_{\mathrm{loc}}]\,\psi_i). \tag{22}
$$

This strategy has been used in a number of contexts involving the polarizability operator [17, 36, 44, 15, 34]. The Sternheimer equations can be solved using standard direct or iterative linear solvers. The choice of the solver can depend on practical matters such as the discretization scheme, and the availability of preconditioners. In practice for planewave discretization, we find that the use of the minimal residual method (MINRES) [37] gives the best numerical performance.
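A dense-model sketch of one Sternheimer solve (22) (toy Hamiltonian; `scipy.sparse.linalg.minres` suits the symmetric, possibly indefinite projected operator):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, minres

rng = np.random.default_rng(2)
n, n_occ = 40, 4
A = rng.standard_normal((n, n))
Hm = 0.5 * (A + A.T)                    # toy discretized Hamiltonian
eps, psi = np.linalg.eigh(Hm)
occ = psi[:, :n_occ]                    # occupied orbitals psi_1, ..., psi_Ne

def project_unocc(v):
    """Q v with Q = I - sum_{i<=Ne} psi_i psi_i^*."""
    return v - occ @ (occ.T @ v)

i = 0                                   # solve for the i-th occupied state
g_loc = rng.standard_normal(n)          # a local perturbation on the grid
rhs = project_unocc(g_loc * occ[:, i])  # Q (diag[g_loc] psi_i)

def matvec(v):
    w = project_unocc(v)
    return project_unocc(eps[i] * w - Hm @ w)  # Q (eps_i - H) Q v

op = LinearOperator((n, n), matvec=matvec)
zeta, info = minres(op, rhs)            # info == 0 on convergence
```

Since the right hand side lies in the range of Q, the Krylov iterates stay in the unoccupied subspace and the singularity of the projected operator on the occupied subspace is harmless.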

The complexity of phonon calculations can now be analyzed as follows. Even with the local pseudopotential only, and assuming that the Dyson equations always converge within a constant number of iterations independent of the system size, we need to apply χ_0 to O(dN_A) vectors of the form g_loc. Each application requires solving N_e Sternheimer equations (22), and the computational cost of applying the projection operator Q to a vector is O(N_e^2). Hence the overall complexity is O(N_e^4) [3]. This is significantly more expensive than solving the KSDFT, of which the computational complexity is typically O(N_e^3).

### 2.3 Adaptively compressed polarizability operator

In this section we briefly review the ACP formulation [27] in the context of phonon calculations for insulating systems using local pseudopotentials. If we label the dN_A possible g_loc using a single index j, the Sternheimer equations (22) can be written as

$$
Q(\varepsilon_i - H)Q\,\zeta_{ij} = Q\,(\psi_i \odot g_{\mathrm{loc},j}). \tag{23}
$$

Here we have used the relation diag[g_loc,j]ψ_i = ψ_i ⊙ g_loc,j to place ψ_i and g_loc,j on a more symmetric footing. Then the reduction of the computational complexity is achieved by means of reducing the O(N_e · dN_A) equations in Eq. (23) to O(N_e) equations with systematic control of the accuracy.

The compression of the right hand side vectors is performed via the interpolative separable density fitting method by Lu and Ying [29]. Let us denote by M the collection of right hand side vectors in Eq. (23) without the Q factor, i.e. M_{ij}(r) = ψ_i(r) g_{loc,j}(r). Here we have used (i,j) as a stacked column index for the matrix M. The dimension of M is N_g × (N_e · dN_A). Due to the large number of columns of M, we seek the following interpolative decomposition (ID) type of compression [12] for the matrix M, i.e.

$$
M_{ij}(r) \approx \sum_{\mu=1}^{N_\mu} \xi_\mu(r)\, M_{ij}(r_\mu) \equiv \sum_{\mu=1}^{N_\mu} \xi_\mu(r)\, \psi_i(r_\mu)\, g_{\mathrm{loc},j}(r_\mu). \tag{24}
$$

Here {r_μ}_{μ=1}^{N_μ} denotes a collection of selected row indices (see Fig. 1 in [27] for an illustration). Mathematically, the meaning of the indices is clear: Eq. (24) simply states that for any grid point r, the corresponding row vector can be approximately expressed as a linear combination of the selected rows {M_{ij}(r_μ)}. Since N_g = O(N_e), as N_e increases, the column dimension of M (which is N_e · dN_A = O(N_e^2)) can be larger than its row dimension (which is O(N_e)), and we can expect that the columns of M are approximately linearly dependent. Such near linear dependence has been observed in the electronic structure community under the name of density fitting or resolution of identity (RI) [45, 41, 13, 43, 40], and the numerical rank of the matrix M after truncation can be only O(N_e) with a relatively small pre-constant. This dimension reduction property has also been recently analyzed in [28]. In the context of the interpolative decomposition, our numerical results also indicate that it is sufficient to choose N_μ = O(N_e), and the pre-constant is small.

One possible way of finding the interpolative decomposition is to use a pivoted QR factorization [11, 18]. However, the computational complexity for compressing the dense matrix M using the interpolative decomposition directly is still large, due to the O(N_e^2) column dimension. The interpolative separable density fitting method [29] employs a two-step procedure to reduce this cost. The first step is to use a fast down-sampling procedure, such as a subsampled random Fourier transform (SRFT) [47], to transform the matrix M into a matrix ˜M of smaller column dimension cN_μ, with a relatively small constant c so that cN_μ is slightly larger than N_μ. The second step is to apply the pivoted QR decomposition to ˜M^*,

$$
\tilde{M}^* \tilde{\Pi} = \tilde{Q}\tilde{R}, \tag{25}
$$

where ˜Π is a permutation matrix whose leading columns encode the choice of the row indices {r_μ}. The interpolation vectors {ξ_μ} in Eq. (24) can also be computed from this pivoted QR decomposition. It should be noted that the down-sampling procedure does not affect the quality of the interpolative decomposition, while the cost of the pivoted QR factorization in Eq. (25) is now reduced to O(N_g N_μ^2). We refer readers to [29, 27] for a more detailed description of this procedure.
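The two-step procedure can be sketched on an exactly low-rank stand-in for M; here a Gaussian sketch plays the role of the SRFT, and `scipy.linalg.qr` with `pivoting=True` realizes Eq. (25) (all sizes illustrative):

```python
import numpy as np
from scipy.linalg import lstsq, qr

rng = np.random.default_rng(3)
n_g, n_cols, rank = 60, 200, 8     # grid points, stacked columns, numerical rank
M = rng.standard_normal((n_g, rank)) @ rng.standard_normal((rank, n_cols))

# Step 1: down-sample the large column dimension with a random sketch
# (standing in for the subsampled random Fourier transform).
M_tilde = M @ rng.standard_normal((n_cols, 2 * rank))

# Step 2: pivoted QR on M_tilde^* selects the interpolation grid points r_mu.
_, _, piv = qr(M_tilde.T, pivoting=True, mode="economic")
rows = piv[:rank]                  # selected indices {r_mu}

# Interpolation vectors xi_mu from least squares: M ≈ xi @ M[rows, :].
sol, *_ = lstsq(M[rows, :].T, M.T)
xi = sol.T
```

Because this toy M is exactly rank 8, selecting N_μ = 8 rows reproduces it to machine precision; for the true M of the text, N_μ = O(N_e) rows give a controlled approximation.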

Once the compressed representation (24) is obtained, we solve the following set of modified Sternheimer equations

$$
Q(\varepsilon_i - H)Q\,\zeta_{i\mu} = Q\,\xi_\mu, \qquad i = 1,\ldots,N_e, \quad \mu = 1,\ldots,N_\mu.
$$

Note that there are still N_e N_μ equations to solve, but this time the factor N_e arises from the energy dependence ε_i on the left hand side of the equation. If the band gap is positive, we can instead solve a set of N_c N_μ equations of the form

$$
Q(\tilde{\varepsilon}_c - H)Q\,\tilde{\zeta}_{c\mu} = Q\,\xi_\mu, \qquad c = 1,\ldots,N_c, \quad \mu = 1,\ldots,N_\mu, \tag{26}
$$

where the number of shifts N_c is independent of the system size N_e. For example, this can be achieved by choosing the shifts {˜ε_c} to be the Chebyshev points on the occupied band [ε_1, ε_{N_e}], and the number of Chebyshev points needed to achieve a certain error tolerance scales weakly with respect to the band gap as N_c ∼ log(ΔE/ε_g). Here ε_g is the band gap and ΔE = ε_{N_e} − ε_1 is the width of the occupied band [27].
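The effectiveness of a small number of shifts can be checked on a scalar caricature: for a level λ separated from the occupied band [ε_1, ε_{N_e}] by a gap, the resolvent weight 1/(ε − λ) is smooth on the band, so a Lagrange interpolant through a handful of Chebyshev shifts is accurate (parameters illustrative):

```python
import numpy as np

band_lo, band_hi, gap = -1.0, 0.0, 0.2
lam = band_hi + gap                       # an unoccupied level beyond the gap

n_c = 12                                  # number of Chebyshev shifts
k = np.arange(n_c)
shifts = 0.5 * (band_lo + band_hi) \
    + 0.5 * (band_hi - band_lo) * np.cos((2 * k + 1) * np.pi / (2 * n_c))

def lagrange_eval(x, nodes, values):
    """Evaluate the Lagrange interpolant through (nodes, values) at x."""
    total = 0.0
    for c, vc in zip(nodes, values):
        total += vc * np.prod([(x - nc) / (c - nc) for nc in nodes if nc != c])
    return total

values = 1.0 / (shifts - lam)             # "solves" at the N_c shifts
eps_test = np.linspace(band_lo, band_hi, 101)
interp = np.array([lagrange_eval(e, shifts, values) for e in eps_test])
max_err = np.max(np.abs(interp - 1.0 / (eps_test - lam)))
```

As the gap shrinks, the pole moves toward the band and more Chebyshev points are needed, consistent with the logarithmic scaling quoted above.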

Then define

$$
W_\mu = \sum_{i=1}^{N_e} \psi_i^* \odot \Bigl(\sum_{c=1}^{N_c} \tilde{\zeta}_{c\mu} \prod_{c'\neq c} \frac{\varepsilon_i - \tilde{\varepsilon}_{c'}}{\tilde{\varepsilon}_c - \tilde{\varepsilon}_{c'}}\Bigr)\psi_i(r_\mu) + \mathrm{h.c.}, \tag{27}
$$

and we can combine Eq. (27) with the compression (24) to compute χ_0 g_loc,j as

$$
\chi_0\, g_{\mathrm{loc},j} \approx \sum_{\mu=1}^{N_\mu} W_\mu\, g_{\mathrm{loc},j}(r_\mu). \tag{28}
$$

Formally, Eq. (28) can be further simplified by defining a matrix Π with N_μ columns, which consists of selected columns of a permutation matrix, i.e. Π is given by the first N_μ columns of the permutation matrix obtained from the pivoted QR decomposition. More specifically, Π = [e_{μ_1},…,e_{μ_{N_μ}}], where each column is a unit vector with only one nonzero entry, placed such that (Π^T g_{loc,j})_μ = g_{loc,j}(r_μ). Then

$$
\chi_0\, g_{\mathrm{loc},j} \approx W\,\Pi^T g_{\mathrm{loc},j} := \tilde{\chi}_0[\{g_{\mathrm{loc},j}\}]\, g_{\mathrm{loc},j}. \tag{29}
$$

Note that the notation ˜χ_0[{g_loc,j}] emphasizes the dependence on the set of vectors that ˜χ_0 applies to. In other words, ˜χ_0 is designed to only agree with χ_0 when applied to the vectors {g_loc,j}, and the difference between ˜χ_0 and χ_0 is not controlled in the space orthogonal to that spanned by these vectors. The rank of ˜χ_0 is only N_μ = O(N_e), while the singular values of χ_0 have a much slower decay rate.

In the case when only the local pseudopotential is used, the Dyson equation (15) simplifies to

$$
u_j = \chi\, g_{\mathrm{loc},j} = u_{0,j} + \chi_0 f_{\mathrm{hxc}}\, u_j. \tag{30}
$$

Here u_{0,j} := χ_0 g_{loc,j} is called the non-self-consistent response, and has been computed using the algorithm described above.

In order to solve Eq. (30), we not only need to evaluate χ_0 g_loc,j, but also the application of χ_0 to the self-consistent response f_hxc u_j, which is not known a priori. If we were to build a library of right hand side vectors so that the application of ˜χ_0 remains accurate throughout the iteration process of solving Eq. (30), the computational cost could quickly increase. Instead, it is much more efficient to adaptively compress the polarizability operator χ_0.

Note that for any given set of functions {v_j}, we can construct an operator ˜χ_0[{v_j}] so that ˜χ_0[{v_j}] agrees well with χ_0 when applied to the vectors {v_j}. The Dyson equation can then be rewritten as

$$
u_j = \bigl(I - \tilde{\chi}_0[\{f_{\mathrm{hxc}} u_j\}]\, f_{\mathrm{hxc}}\bigr)^{-1} u_{0,j}. \tag{31}
$$

Note that ˜χ_0[{f_hxc u_j}] f_hxc is a low rank operator, and the matrix inverse in Eq. (31) can be efficiently evaluated using the Sherman-Morrison-Woodbury formula.
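A generic Sherman-Morrison-Woodbury sketch (not code from [27]): if the low rank operator is available in factored form U V^T with k ≪ N_g columns, the large inverse in Eq. (31) reduces to a k × k solve:

```python
import numpy as np

def smw_solve(U, V, b):
    """Solve (I - U V^T) x = b via
    (I - U V^T)^{-1} = I + U (I_k - V^T U)^{-1} V^T,
    so only a k-by-k system is factorized."""
    k = U.shape[1]
    capacitance = np.eye(k) - V.T @ U
    return b + U @ np.linalg.solve(capacitance, V.T @ b)

rng = np.random.default_rng(4)
n, k = 200, 10
U = 0.01 * rng.standard_normal((n, k))   # low-rank factors of the operator
V = rng.standard_normal((n, k))
b = rng.standard_normal(n)

x = smw_solve(U, V, b)                   # O(n k^2) instead of O(n^3)
```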

Eq. (31) yields an iterative scheme

$$
u_{k+1} = \bigl(I - \tilde{\chi}_0[\{f_{\mathrm{hxc}} u_k\}]\, f_{\mathrm{hxc}}\bigr)^{-1} u_0. \tag{32}
$$

In this equation we have omitted the subindex j of u. The convergence of the modified fixed point iteration (32) can be understood as follows. At iteration step k, the scheme and the true solution u_* respectively satisfy

$$
u_{k+1} = u_0 + \tilde{\chi}_0[\{f_{\mathrm{hxc}} u_k\}]\, f_{\mathrm{hxc}}\, u_{k+1}, \qquad u_* = u_0 + \chi_0 f_{\mathrm{hxc}}\, u_*. \tag{33}
$$

Let e_k = u_k − u_* be the error at iteration step k. We have

$$
e_{k+1} = \tilde{\chi}_0[\{f_{\mathrm{hxc}} u_k\}] f_{\mathrm{hxc}} u_{k+1} - \chi_0 f_{\mathrm{hxc}} u_* = \tilde{\chi}_0[\{f_{\mathrm{hxc}} u_k\}] f_{\mathrm{hxc}} u_{k+1} - \chi_0 f_{\mathrm{hxc}} u_{k+1} + \chi_0 f_{\mathrm{hxc}} u_{k+1} - \chi_0 f_{\mathrm{hxc}} u_* = \eta_k + \chi_0 f_{\mathrm{hxc}}\, e_{k+1}. \tag{34}
$$

Here

$$
\eta_k = \bigl(\tilde{\chi}_0[\{f_{\mathrm{hxc}} u_k\}] - \chi_0\bigr) f_{\mathrm{hxc}}\, u_{k+1}, \tag{35}
$$

which characterizes the discrepancy between ˜χ_0 and χ_0 when applied to the unknown vector f_hxc u_{k+1}. Therefore the error at the (k+1)-th step satisfies

$$
e_{k+1} = (I - \chi_0 f_{\mathrm{hxc}})^{-1} \eta_k. \tag{36}
$$

Since χ_0 is negative semi-definite, the norm of (I − χ_0 f_hxc)^{-1} is bounded from above by one. Hence the error e_k goes to zero if the compression error η_k converges to 0.
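The convergence mechanism can be watched on a toy Dyson equation u = u_0 + A u, with A a negative semi-definite stand-in for χ_0 f_hxc of norm 1/2, and a rank-1 adaptive compression that agrees with A on the span of the current iterate (all choices illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50
B = rng.standard_normal((n, n))
A = -(B @ B.T)
A *= 0.5 / np.linalg.norm(A, 2)              # negative semi-definite, ||A|| = 1/2
u0 = rng.standard_normal(n)
u_star = np.linalg.solve(np.eye(n) - A, u0)  # exact self-consistent solution

u = u0.copy()
for _ in range(60):
    v = u                                    # the vector A_tilde must be exact on
    A_tilde = np.outer(A @ v, v) / (v @ v)   # rank-1 compression: A_tilde v = A v
    u = np.linalg.solve(np.eye(n) - A_tilde, u0)  # cheap via low rank in practice

err = np.linalg.norm(u - u_star)
```

For this A one can show ‖(I − A)^{-1}A‖ ≤ 1/3, which yields a guaranteed contraction of the error by at least 1/2 per step, mirroring the error relation (36).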

To summarize, the ACP formulation has three key ingredients: Compress the right hand side; Disentangle the energy dependence; Adaptively compress the polarizability operator.

## 3 Split representation of the adaptively compressed polarizability operator

In this section, we demonstrate how to generalize the ACP formulation in section 2.3 for efficient phonon calculations of real materials. To this end we need to treat the nonlocal pseudopotential, as well as temperature effects, especially for metallic systems. We demonstrate that the new split representation maintains the O(N_e^3) complexity, and improves all key steps in the ACP formulation, including the Chebyshev interpolation of energy levels, the iterative solution of the Sternheimer equations, and the convergence of the Dyson equation.

The split representation of the polarizability operator first chooses two energy cutoffs, corresponding to the indices N_cut and ˜N_cut with N_cut ≤ ˜N_cut, and splits the right hand side of Eq. (18) into two terms

$$
X_0 g \approx \Bigl[\Bigl(\sum_{i=1}^{N_{\mathrm{cut}}} \sum_{a=N_{\mathrm{cut}}+1}^{\tilde{N}_{\mathrm{cut}}} \frac{f_a - f_i}{\varepsilon_a - \varepsilon_i}\, \psi_a(\psi_a^* g \psi_i)\psi_i^* + \mathrm{h.c.}\Bigr) + \sum_{i=1}^{N_{\mathrm{cut}}} \sum_{a=1}^{N_{\mathrm{cut}}} \frac{f_a - f_i}{\varepsilon_a - \varepsilon_i}\, \psi_a(\psi_a^* g \psi_i)\psi_i^*\Bigr] + \Bigl[\sum_{i=1}^{N_{\mathrm{cut}}} \sum_{a=\tilde{N}_{\mathrm{cut}}+1}^{\infty} \frac{f_i}{\varepsilon_i - \varepsilon_a}\, \psi_a(\psi_a^* g \psi_i)\psi_i^* + \mathrm{h.c.}\Bigr] := X_0^{(s)} g + X_0^{(r)} g. \tag{37}
$$

Here the first and second brackets split X_0 into a singular component X_0^{(s)} and a regular component X_0^{(r)}, respectively. The Hermitian conjugate appears for the same reason as in Eq. (20) when treating insulating systems. X_0^{(s)} is called the singular component because for systems with small gaps, the ratio (f_a − f_i)/(ε_a − ε_i) can be as large as O(β). When the physical band gap is small, this term becomes numerically singular to treat in the iterative solution of the Sternheimer equations as well as in the Chebyshev interpolation. On the other hand, the ratio f_i/|ε_i − ε_a| in the regular component is bounded from above by 1/ε̃_g, where ε̃_g := ε_{˜N_cut+1} − ε_{N_cut} is called the effective gap. As the effective gap increases, the magnitude of X_0^{(r)} also decreases. In order to efficiently treat the singular part, we assume that the eigenfunctions {ψ_i}_{i=1}^{˜N_cut} have been computed using an iterative eigensolver. The cost for obtaining the additional eigenvectors is modest, given that the ground state DFT calculation already prepares the eigenvectors {ψ_i}_{i=1}^{N_cut}.

The approximation in Eq. (37) only comes from the fact that as ε_a increases above the chemical potential μ, the occupation number f_a decays exponentially. Then we can choose N_cut large enough so that f_a is sufficiently small for a > N_cut, and (f_a − f_i)/(ε_a − ε_i) can be approximated by f_i/(ε_i − ε_a) for a > ˜N_cut. For insulating systems we can simply choose N_cut = N_e. The second energy cutoff defines the effective gap ε̃_g, of which the role will be discussed later. The split representation requires the solution of the eigenpairs (ε_i, ψ_i) of H for i = 1,…,˜N_cut. Fig. 1 illustrates the position of the cutoff energies along the energy spectrum, with respect to the occupation number given by the Fermi-Dirac distribution.
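A scalar caricature of the bookkeeping in Eq. (37), keeping only the occupation/energy weights and dropping the orbital factors (the spectrum and cutoff choices are illustrative): the singular sum over indices below ˜N_cut, plus the regular sum with (f_a − f_i)/(ε_a − ε_i) replaced by f_i/(ε_i − ε_a), reproduces the full sum up to an error set by the discarded occupations:

```python
import numpy as np

n, beta, mu = 40, 20.0, -0.5
eps = np.linspace(-2.0, 2.0, n)                  # toy spectrum
f = 0.5 * (1.0 - np.tanh(0.5 * beta * (eps - mu)))

n_cut = int(np.sum(f > 1e-12))                   # f_i negligible beyond N_cut
nt_cut = n_cut + 5                               # second cutoff N~_cut (< n here)

def w(i, a):
    """Adler-Wiser weight (f_a - f_i) / (eps_a - eps_i), scalar toy model."""
    return (f[a] - f[i]) / (eps[a] - eps[i])

full = sum(w(i, a) for i in range(n) for a in range(n) if a != i)

# Singular part: all pairs of indices below the second cutoff.
singular = sum(w(i, a) for i in range(nt_cut) for a in range(nt_cut) if a != i)
# Regular part: occupied i against high levels a, with f_a dropped;
# the factor 2 accounts for the Hermitian-conjugate pairs (a, i).
regular = 2.0 * sum(f[i] / (eps[i] - eps[a])
                    for i in range(n_cut) for a in range(nt_cut, n))
```

Pairs with both indices above ˜N_cut carry exponentially small occupation differences and are dropped without harm, which is exactly what makes the regular component smooth.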

### 3.1 Compression of the regular component of the polarizability operator

One advantage of the split representation is that in the regular component, the contribution from f_a vanishes, and hence X_0^{(r)} g can be evaluated using Sternheimer equations to eliminate the need of computing all the unoccupied orbitals, as follows

$$
X_0^{(r)} g = \sum_{i=1}^{N_{\mathrm{cut}}} f_i\, Q_c(\varepsilon_i - H)^{-1} Q_c\,(g\,\psi_i)\,\psi_i^* + \mathrm{h.c.} \tag{38}
$$

Here the projection operator