
Flexible Enlarged Conjugate Gradient Methods

by   Sophie M. Moufawad, et al.
American University of Beirut

Enlarged Krylov subspace methods and their s-step versions were introduced [7] with the aim of reducing communication when solving systems of linear equations Ax = b. These enlarged CG methods enlarge the Krylov subspace by at most t vectors per iteration, based on a domain decomposition of the graph of A. In the s-step versions, s iterations of the enlarged Conjugate Gradient method are merged into one iteration. The enlarged CG methods and their s-step versions converge in fewer iterations than classical CG, but at the expense of more memory storage. In this paper we therefore explore different options for reducing the memory requirements of these enlarged CG methods without significantly affecting their convergence.
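To illustrate the idea of enlarging the Krylov subspace, the following is a minimal numpy sketch, not the authors' algorithm: the residual is split into t subvectors by a splitting operator T (here a simple contiguous block partition is assumed in place of the graph-based domain decomposition from the paper), and each iteration adds up to t A-orthonormalized search directions. For simplicity this sketch uses full A-orthogonalization against all previous directions (a long recurrence), whereas the actual enlarged CG methods rely on short recurrences.

```python
import numpy as np

def split_residual(r, t):
    """T(r): split r into t subvectors using a contiguous block partition.
    (Assumption: the paper partitions based on the domain decomposition
    of the graph of A; contiguous blocks stand in for that here.)"""
    n = r.size
    R = np.zeros((n, t))
    bounds = np.linspace(0, n, t + 1).astype(int)
    for j in range(t):
        R[bounds[j]:bounds[j + 1], j] = r[bounds[j]:bounds[j + 1]]
    return R

def enlarged_cg_sketch(A, b, t=2, maxit=50, tol=1e-8):
    """Illustrative enlarged-CG-style solver for SPD A.
    Each iteration enlarges the search space by up to t directions."""
    n = b.size
    x = np.zeros(n)
    r = b.copy()
    W = np.zeros((n, 0))               # accumulated A-orthonormal directions
    for _ in range(maxit):
        Z = split_residual(r, t)       # t new candidate directions
        Z -= W @ (W.T @ (A @ Z))       # A-orthogonalize against previous ones
        G = Z.T @ A @ Z
        L = np.linalg.cholesky(G)      # assumes the new block has full rank
        P = Z @ np.linalg.inv(L.T)     # A-orthonormalize: P^T A P = I
        alpha = P.T @ r                # Galerkin step on the enlarged space
        x += P @ alpha
        r -= A @ (P @ alpha)
        W = np.hstack([W, P])
        if np.linalg.norm(r) < tol:
            break
    return x, r
```

With t = 1 and a single subdomain this reduces to a (fully reorthogonalized) CG-like iteration; larger t trades extra storage for W against fewer iterations, which is exactly the memory/convergence trade-off the paper addresses.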
