Non-stationary Anderson acceleration with optimized damping

02/10/2022
by Kewang Chen et al.

Anderson acceleration (AA) has a long history of use and has attracted strong recent interest because of its ability to dramatically improve the linear convergence of fixed-point iterations. Most authors use and analyze the stationary version of Anderson acceleration (sAA) with a constant damping factor or with no damping at all, and little attention has been paid to non-stationary algorithms. Damping, however, can be useful and is sometimes crucial for simulations in which the underlying fixed-point operator is not globally contractive, and the role of the damping factor is not yet fully understood. In the present work, we consider a non-stationary Anderson acceleration algorithm with optimized damping (AAoptD): in each iteration, one extra inexpensive optimization is applied to choose the damping factor, further speeding up linear and nonlinear iterations. We analyze this procedure and develop an efficient, inexpensive implementation scheme. We also show that, compared with stationary Anderson acceleration with fixed window size sAA(m), optimizing the damping factors is related to dynamically combining sAA(m) and sAA(1) in each iteration (alternating the window size m is another way to produce a non-stationary AA). Moreover, extensive numerical experiments show that the proposed non-stationary Anderson acceleration with optimized damping often converges much faster than stationary AA with constant damping or without damping.
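To make the structure of such a method concrete, below is a minimal Python sketch of damped Anderson acceleration AA(m) in which the damping factor is chosen anew in each iteration by a cheap one-dimensional search. The function name aa_opt_damping, the candidate grid betas, and the residual-based selection rule are illustrative assumptions rather than the authors' AAoptD criterion or implementation; the mixing step follows the standard unconstrained (difference) least-squares formulation of AA(m).

```python
import numpy as np


def aa_opt_damping(g, x0, m=3, max_iter=50, tol=1e-10,
                   betas=np.linspace(0.1, 1.0, 10)):
    """Sketch of Anderson acceleration AA(m) with a damping factor chosen in
    every iteration by a cheap 1-D grid search.  The grid `betas` and the
    residual-based selection rule are illustrative assumptions, not the
    AAoptD criterion of the paper."""
    x = np.asarray(x0, dtype=float)
    X, G = [x], [g(x)]            # histories of iterates x_k and images g(x_k)
    F = [G[0] - X[0]]             # residuals f_k = g(x_k) - x_k
    for k in range(max_iter):
        mk = min(m, k) + 1        # number of points in the current window
        Fk = np.column_stack(F[-mk:])
        if mk == 1:
            alpha = np.array([1.0])          # first step: damped Picard
        else:
            # Unconstrained (difference) form of the AA least-squares problem:
            # minimize ||f_k - dF @ gamma||_2, then map gamma back to the
            # affine mixing weights alpha, which sum to one.
            dF = Fk[:, 1:] - Fk[:, :-1]
            gamma, *_ = np.linalg.lstsq(dF, Fk[:, -1], rcond=None)
            alpha = np.empty(mk)
            alpha[0] = gamma[0]
            alpha[1:-1] = gamma[1:] - gamma[:-1]
            alpha[-1] = 1.0 - gamma[-1]
        x_mix = np.column_stack(X[-mk:]) @ alpha
        g_mix = np.column_stack(G[-mk:]) @ alpha
        # Damped update x_{k+1} = (1 - beta) * x_mix + beta * g_mix, with beta
        # picked to minimize the fixed-point residual over the candidate grid.
        # This stands in for the paper's inexpensive optimization and costs
        # one extra evaluation of g per candidate.
        best_res, best_x = np.inf, None
        for beta in betas:
            x_trial = (1.0 - beta) * x_mix + beta * g_mix
            res = np.linalg.norm(g(x_trial) - x_trial)
            if res < best_res:
                best_res, best_x = res, x_trial
        x = best_x
        gx = g(x)
        X.append(x); G.append(gx); F.append(gx - x)
        if np.linalg.norm(F[-1]) < tol:
            break
    return x
```

As a quick sanity check, aa_opt_damping(np.cos, np.ones(5)) solves the componentwise fixed-point problem x = cos(x), whose solution is approximately 0.7391; passing betas=np.array([1.0]) recovers undamped AA(m), and m=0 reduces the scheme to a damped Picard iteration with the same per-iteration choice of the damping factor.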

Related research

Composite Anderson acceleration method with dynamic window-sizes and optimized damping (03/28/2022)
In this paper, we propose and analyze a set of fully non-stationary Ande...

Linear Asymptotic Convergence of Anderson Acceleration: Fixed-Point Analysis (09/29/2021)
We study the asymptotic convergence of AA(m), i.e., Anderson acceleratio...

Non-stationary Douglas-Rachford and alternating direction method of multipliers: adaptive stepsizes and convergence (01/11/2018)
We revisit the classical Douglas-Rachford (DR) method for finding a zero...

Quantifying the asymptotic linear convergence speed of Anderson Acceleration applied to ADMM (07/06/2020)
We explain how Anderson Acceleration (AA) speeds up the Alternating Dire...

Anderson Acceleration as a Krylov Method with Application to Asymptotic Convergence Analysis (09/29/2021)
Anderson acceleration is widely used for accelerating the convergence of...

Performance of Low Synchronization Orthogonalization Methods in Anderson Accelerated Fixed Point Solvers (10/19/2021)
Anderson Acceleration (AA) is a method to accelerate the convergence of ...

Approximate Linearization of Fixed Point Iterations: Error Analysis of Tangent and Adjoint Problems Linearized about Non-Stationary Points (04/06/2021)
Previous papers have shown the impact of partial convergence of discreti...
