Anderson acceleration with approximate calculations: applications to scientific computing

06/08/2022
by   Massimiliano Lupo Pasini, et al.

We provide rigorous theoretical bounds for Anderson acceleration (AA) that allow for efficient approximate calculations of the residual, reducing computational time and memory storage while maintaining convergence. Specifically, we propose a reduced variant of AA that projects the least-squares problem used to compute the Anderson mixing onto a subspace of reduced dimension. The dimensionality of this subspace adapts dynamically at each iteration, as prescribed by computable heuristic quantities guided by the theoretical error bounds. Using the heuristic to monitor the error introduced by the approximate calculations, combined with a check on the monotonicity of the convergence, ensures that the numerical scheme converges within a prescribed tolerance threshold on the residual. We numerically assess the performance of AA with approximate calculations on (i) linear deterministic fixed-point iterations arising from Richardson's scheme to solve linear systems with open-source benchmark matrices and various preconditioners, and (ii) non-linear deterministic fixed-point iterations arising from non-linear time-dependent Boltzmann equations.
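To make the setting concrete, here is a minimal textbook sketch of classic Anderson acceleration AA(m) for a fixed-point iteration x = g(x). It solves the full least-squares mixing problem at each step; the reduced variant proposed in the paper would instead project this problem onto a lower-dimensional subspace chosen adaptively. All names and parameters below are illustrative, not taken from the paper's implementation.

```python
import numpy as np

def anderson(g, x0, m=5, tol=1e-10, max_iter=200):
    """Classic Anderson acceleration AA(m) for the fixed-point problem x = g(x).

    At each step, a small least-squares problem over the last m residual
    differences determines the mixing coefficients. This is the full
    least-squares problem; a reduced variant would project it onto a
    subspace of smaller dimension.
    """
    X = [np.asarray(x0, dtype=float)]    # iterates x_k
    G = [g(X[0])]                        # evaluations g(x_k)
    F = [G[0] - X[0]]                    # residuals f_k = g(x_k) - x_k
    for _ in range(max_iter):
        if np.linalg.norm(F[-1]) < tol:
            break
        mk = len(F) - 1                  # history available, at most m
        if mk == 0:
            x_new = G[-1]                # plain fixed-point step
        else:
            # least squares: min_gamma || f_k - dF @ gamma ||_2
            dF = np.column_stack([F[-i] - F[-i - 1] for i in range(1, mk + 1)])
            gamma, *_ = np.linalg.lstsq(dF, F[-1], rcond=None)
            dG = np.column_stack([G[-i] - G[-i - 1] for i in range(1, mk + 1)])
            x_new = G[-1] - dG @ gamma   # Anderson-mixed iterate
        X.append(x_new)
        G.append(g(x_new))
        F.append(G[-1] - X[-1])
        if len(F) > m + 1:               # sliding window of m + 1 entries
            X.pop(0); G.pop(0); F.pop(0)
    return X[-1] + F[-1]                 # equals g(x_k) for the last iterate
```

For a contractive linear map such as `g = lambda x: np.array([0.5 * x[0] + 1.0, 0.3 * x[1] + 2.0])`, calling `anderson(g, np.zeros(2))` converges to the fixed point `[2.0, 2.0 / 0.7]` in far fewer iterations than the plain fixed-point scheme, which is the behavior AA is designed to provide.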


Related research

07/04/2020
On the Asymptotic Linear Convergence Speed of Anderson Acceleration, Nesterov Acceleration, and Nonlinear GMRES
We consider nonlinear convergence acceleration methods for fixed-point i...

11/02/2021
Practical error bounds for properties in plane-wave electronic structure calculations
We propose accurate computable error bounds for quantities of interest i...

07/03/2021
Stochastic Algorithms for Self-consistent Calculations of Electronic Structures
The convergence property of a stochastic algorithm for the self-consiste...

09/29/2021
Anderson Acceleration as a Krylov Method with Application to Asymptotic Convergence Analysis
Anderson acceleration is widely used for accelerating the convergence of...

08/25/2023
A Game of Bundle Adjustment – Learning Efficient Convergence
Bundle adjustment is the common way to solve localization and mapping. I...

04/06/2021
Approximate Linearization of Fixed Point Iterations: Error Analysis of Tangent and Adjoint Problems Linearized about Non-Stationary Points
Previous papers have shown the impact of partial convergence of discreti...

10/26/2021
Stable Anderson Acceleration for Deep Learning
Anderson acceleration (AA) is an extrapolation technique designed to spe...
