Composite Anderson acceleration method with dynamic window-sizes and optimized damping

03/28/2022
by Kewang Chen, et al.

In this paper, we propose and analyze a set of fully non-stationary Anderson acceleration algorithms with dynamic window sizes and optimized damping. Although Anderson acceleration (AA) has been used for decades to speed up nonlinear solvers in many applications, most authors use and analyze only the stationary version (sAA), with a fixed window size and a constant damping factor; the behavior and potential of non-stationary variants remain an open question. Many efficient linear solvers are built from composable algorithmic components, and similar ideas can be applied to AA for nonlinear systems. Thus, to develop non-stationary Anderson acceleration algorithms, we first propose two systematic ways to vary the window size m by composition. The simpler way is an additive composite combination: in each iteration, apply sAA(m) and sAA(n) separately and average their results. The more important way is the multiplicative composite combination, in which sAA(m) is applied in the outer loop and sAA(n) in the inner loop; significant gains can be achieved this way. Secondly, to make AA fully non-stationary, we combine these strategies with our recent work on the non-stationary Anderson acceleration algorithm with optimized damping (AAoptD), another important route to non-stationary AA for which good performance gains have been observed. Moreover, we investigate the rate of convergence of these non-stationary AA methods under suitable assumptions. Finally, our numerical results show that some of the proposed non-stationary Anderson acceleration algorithms converge faster than the stationary sAA method and may significantly reduce the storage and time needed to find the solution in many cases.
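The abstract does not include code, so the following is only a minimal Python sketch of the ideas it describes: the stationary building block sAA(m) with constant damping, the additive composition (average the sAA(m) and sAA(n) updates), and the multiplicative composition (sAA(m) in the outer loop wrapping an sAA(n) inner loop). The function names (`anderson_step`, `saa`, `additive_composite`, `multiplicative_composite`), the 50/50 averaging weight, and the inner iteration count are all illustrative assumptions, not the authors' exact scheme.

```python
import numpy as np

def anderson_step(X, G, beta):
    """One Anderson extrapolation step (unconstrained least-squares form).

    X, G -- lists of the last m+1 iterates x_i and their images g(x_i);
    beta -- damping factor in (0, 1].
    """
    F = [g - x for x, g in zip(X, G)]            # residuals f_i = g(x_i) - x_i
    if len(X) == 1:                              # no history yet: damped Picard step
        return X[0] + beta * F[0]
    dX = np.column_stack([X[i + 1] - X[i] for i in range(len(X) - 1)])
    dF = np.column_stack([F[i + 1] - F[i] for i in range(len(F) - 1)])
    gamma, *_ = np.linalg.lstsq(dF, F[-1], rcond=None)   # min ||f_k - dF @ gamma||
    return X[-1] + beta * F[-1] - (dX + beta * dF) @ gamma

def saa(g, x0, m, beta=1.0, iters=20):
    """Stationary Anderson acceleration sAA(m) with constant damping beta."""
    X, G = [x0], [g(x0)]
    for _ in range(iters):
        x = anderson_step(X[-(m + 1):], G[-(m + 1):], beta)
        X.append(x)
        G.append(g(x))
    return X[-1]

def additive_composite(g, x0, m, n, beta=1.0, iters=20):
    """Additive composition: average the sAA(m) and sAA(n) updates each iteration."""
    X, G = [x0], [g(x0)]
    for _ in range(iters):
        xm = anderson_step(X[-(m + 1):], G[-(m + 1):], beta)
        xn = anderson_step(X[-(n + 1):], G[-(n + 1):], beta)
        x = 0.5 * (xm + xn)                      # assumed equal weighting
        X.append(x)
        G.append(g(x))
    return X[-1]

def multiplicative_composite(g, x0, m, n, beta=1.0, outer=10, inner=3):
    """Multiplicative composition: sAA(m) outer loop wrapping an sAA(n) inner loop.

    From the outer loop's viewpoint, the fixed-point map is 'run a few sAA(n)
    iterations on g'; the inner iteration count here is an illustrative choice.
    """
    h = lambda x: saa(g, x, n, beta, iters=inner)
    return saa(h, x0, m, beta, iters=outer)

if __name__ == "__main__":
    g = lambda x: np.cos(x)                      # fixed point near x* = 0.739085
    x0 = np.array([1.0])
    print("sAA(2):          ", saa(g, x0, m=2))
    print("additive (2, 4): ", additive_composite(g, x0, m=2, n=4))
    print("multiplicative:  ", multiplicative_composite(g, x0, m=2, n=1))
```

Note the design point the abstract emphasizes: in the multiplicative variant the outer sAA(m) never sees g directly, only the composed inner map, so the two window sizes act at different scales rather than being averaged.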


