Convergence analysis of adaptive DIIS algorithms with application to electronic ground state calculations
This paper deals with a general class of algorithms for the solution of fixed-point problems that we refer to as Anderson-Pulay acceleration. This family includes the DIIS technique and its variant sometimes called commutator-DIIS, both introduced by Pulay in the 1980s to accelerate the convergence of self-consistent field procedures in quantum chemistry, as well as the related Anderson acceleration, which dates back to the 1960s, and the wealth of methods it inspired. Such methods aim at accelerating the convergence of any fixed-point iteration method by combining several previous iterates in order to generate the next one at each step. The size of the set of stored iterates is characterised by its depth, a crucial parameter for the efficiency of the process, which in most applications is simply fixed to an empirical value. In the present work, we consider two parameter-driven mechanisms to let the depth vary along the iterations. One way to do so is to let the set grow until the stored iterates (save for the last one) are discarded and the method "restarts". Another way is to "adapt" the depth by eliminating some of the older, less relevant, iterates at each step. In an abstract and general setting, we prove under natural assumptions the local convergence and acceleration of these two types of Anderson-Pulay acceleration methods and show how a superlinear convergence rate can theoretically be achieved. We then investigate their behaviour in calculations with the Hartree-Fock method and the Kohn-Sham model of density functional theory. These numerical experiments show that the restarted and adaptive-depth variants exhibit a faster convergence than that of a standard fixed-depth scheme. This study is complemented by a review of known facts on the DIIS, in particular its link with the Anderson acceleration and some multisecant-type quasi-Newton methods.
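To make the mechanism concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of a DIIS-type acceleration for a generic fixed-point map `g`. The extrapolation coefficients are obtained from the classical Pulay linear system, and the optional `restart_ratio` parameter is an illustrative stand-in for a restart criterion; the parameter names and the specific discard rules are assumptions made for the example.

```python
import numpy as np

def diis_fixed_point(g, x0, tol=1e-10, max_iter=200, max_depth=8, restart_ratio=None):
    """Illustrative Anderson-Pulay (DIIS-type) acceleration for x = g(x).

    The next iterate is a linear combination of the stored g(x_i), with
    coefficients c_i minimising the norm of the combined residual under
    the constraint sum_i c_i = 1 (solved via a Lagrange multiplier).
    """
    iterates = [np.asarray(x0, dtype=float)]
    residuals = [g(iterates[0]) - iterates[0]]

    for _ in range(max_iter):
        R = np.column_stack(residuals)        # residual matrix, one column per stored iterate
        m = R.shape[1]

        # Pulay/DIIS linear system: [[R^T R, -1], [-1^T, 0]] [c; lam] = [0; -1].
        B = np.zeros((m + 1, m + 1))
        B[:m, :m] = R.T @ R
        B[:m, m] = -1.0
        B[m, :m] = -1.0
        rhs = np.zeros(m + 1)
        rhs[m] = -1.0
        coeffs = np.linalg.lstsq(B, rhs, rcond=None)[0][:m]

        # Extrapolated iterate: x_new = sum_i c_i * g(x_i), with g(x_i) = x_i + r_i.
        x_new = sum(c * (x + r) for c, x, r in zip(coeffs, iterates, residuals))
        r_new = g(x_new) - x_new
        if np.linalg.norm(r_new) < tol:
            return x_new

        iterates.append(x_new)
        residuals.append(r_new)

        if restart_ratio is not None:
            # "Restarted" variant (illustrative criterion): once the newest residual
            # is sufficiently smaller than the first one in the current window,
            # discard all stored iterates save for the last one.
            if np.linalg.norm(r_new) < restart_ratio * np.linalg.norm(residuals[0]):
                iterates, residuals = iterates[-1:], residuals[-1:]
        elif len(iterates) > max_depth:
            # Fixed-depth behaviour for comparison: drop the oldest stored pair.
            iterates.pop(0)
            residuals.pop(0)

    return iterates[-1]
```

For instance, calling `diis_fixed_point(g, x0, restart_ratio=0.1)` on a contractive map `g` accelerates the plain Picard iteration while keeping the depth bounded by the restart mechanism; an adaptive-depth variant would instead prune individual older iterates according to a relevance criterion at each step.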