Convergence analysis of adaptive DIIS algorithms with application to electronic ground state calculations

02/28/2020
by   Maxime Chupin, et al.

This paper deals with a general class of algorithms for the solution of fixed-point problems that we refer to as Anderson-Pulay acceleration. This family includes the DIIS technique and its variant sometimes called commutator-DIIS, both introduced by Pulay in the 1980s to accelerate the convergence of self-consistent field procedures in quantum chemistry, as well as the related Anderson acceleration, which dates back to the 1960s, and the wealth of methods it inspired. Such methods aim at accelerating the convergence of any fixed-point iteration by combining several previous iterates to generate the next one at each step. The size of the set of stored iterates is characterised by its depth, a parameter crucial to the efficiency of the process, which is nevertheless fixed to an empirical value in most applications. In the present work, we consider two parameter-driven mechanisms that let the depth vary along the iterations. One is to let the set grow until all stored iterates (save for the last one) are discarded and the method "restarts". The other is to "adapt" the depth by eliminating some of the older, less relevant, iterates at each step. In an abstract and general setting, we prove under natural assumptions the local convergence and acceleration of these two types of Anderson-Pulay acceleration methods, and we show how a superlinear convergence rate can theoretically be achieved. We then investigate their behaviour in calculations with the Hartree-Fock method and the Kohn-Sham model of density functional theory. These numerical experiments show that the restarted and adaptive-depth variants converge faster than a standard fixed-depth scheme. The study is complemented by a review of known facts about the DIIS, in particular its link with the Anderson acceleration and some multisecant-type quasi-Newton methods.
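To make the mechanism concrete, below is a minimal Python sketch of a fixed-depth DIIS/Anderson iteration for a generic fixed-point problem x = g(x). It is not the authors' implementation: the function name diis_fixed_point, the parameter values, and the linear test problem are illustrative assumptions. At each step the coefficients c, constrained to sum to one, minimise the norm of the combined residual, and the next iterate is the corresponding combination of the mapped iterates g(x_i). The restarted and adaptive-depth variants studied in the paper differ only in how the stored history is pruned (the line marked below).

```python
import numpy as np

def diis_fixed_point(g, x0, max_depth=5, tol=1e-10, max_iter=100):
    """Sketch of a fixed-depth DIIS/Anderson iteration for x = g(x).

    Keeps up to `max_depth` previous iterates and residuals. The DIIS
    coefficients solve min_c ||sum_i c_i r_i|| subject to sum_i c_i = 1,
    via the usual bordered (Lagrange-multiplier) linear system.
    """
    xs, rs = [], []          # stored mapped iterates g(x_i) and residuals r_i
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        gx = g(x)
        r = gx - x
        if np.linalg.norm(r) < tol:
            return x
        xs.append(gx)
        rs.append(r)
        if len(xs) > max_depth:
            # Fixed-depth pruning: drop the oldest iterate. The paper's
            # "restarted" variant would instead discard everything but the
            # last iterate when a criterion triggers, and the "adaptive"
            # variant would remove older, less relevant iterates each step.
            xs.pop(0)
            rs.pop(0)
        R = np.column_stack(rs)
        m = R.shape[1]
        # Bordered system [[R^T R, 1], [1^T, 0]] [c; lam] = [0; 1].
        B = np.zeros((m + 1, m + 1))
        B[:m, :m] = R.T @ R
        B[m, :m] = B[:m, m] = 1.0
        rhs = np.zeros(m + 1)
        rhs[m] = 1.0
        # lstsq instead of solve: R^T R grows ill-conditioned near
        # convergence, which is precisely what adaptive pruning mitigates.
        c = np.linalg.lstsq(B, rhs, rcond=None)[0][:m]
        x = sum(ci * xi for ci, xi in zip(c, xs))
    return x

# Usage on a contractive linear fixed-point map x -> A x + b
# (spectral radius of A roughly 0.5, so the plain iteration converges
# linearly and DIIS accelerates it).
rng = np.random.default_rng(0)
A = 0.5 * rng.standard_normal((20, 20)) / np.sqrt(20)
b = rng.standard_normal(20)
sol = diis_fixed_point(lambda x: A @ x + b, np.zeros(20))
print(np.linalg.norm(sol - (A @ sol + b)))  # residual at the returned point
```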


