Minimizing convex quadratics with variable precision Krylov methods

07/17/2018
by S. Gratton, et al.

Iterative algorithms for the solution of convex quadratic optimization problems are investigated which exploit inaccurate matrix-vector products. Theoretical bounds on the performance of a Conjugate Gradient method and a Full Orthogonalization method are derived, the quantities occurring in these bounds are estimated, and new practical algorithms are obtained. Numerical experiments suggest that the new methods have significant potential, including in the increasingly important context of multi-precision computations.
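To make the idea concrete, here is a minimal Python sketch of a Conjugate Gradient iteration in which each matrix-vector product is computed only to a per-iteration accuracy eta_k. The helper inexact_matvec and the schedule eta_schedule are illustrative stand-ins for whatever variable-precision arithmetic is actually available; this is not the authors' algorithm or accuracy model, only the general mechanism the abstract refers to.

```python
import numpy as np

rng = np.random.default_rng(0)

def inexact_matvec(A, v, eta):
    """Illustrative stand-in for a variable precision product:
    return A @ v perturbed by a relative error of size eta."""
    exact = A @ v
    noise = rng.standard_normal(exact.shape)
    return exact + eta * np.linalg.norm(exact) * noise / np.linalg.norm(noise)

def inexact_cg(A, b, eta_schedule, tol=1e-8, max_iter=500):
    """Conjugate Gradient for min 0.5 x'Ax - b'x with A symmetric
    positive definite, where the product A p is formed inexactly."""
    x = np.zeros_like(b)
    r = b.copy()                      # recurred residual, exact at x = 0
    p = r.copy()
    rho = r @ r
    for k in range(max_iter):
        Ap = inexact_matvec(A, p, eta_schedule(k))
        alpha = rho / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap            # drifts from b - A @ x as eta grows
        rho_new = r @ r
        if np.sqrt(rho_new) <= tol * np.linalg.norm(b):
            break
        p = r + (rho_new / rho) * p
        rho = rho_new
    return x

# Small SPD test problem; the product accuracy is relaxed geometrically,
# mimicking schedules that loosen the products as the iteration proceeds.
M = rng.standard_normal((50, 50))
A = M @ M.T + 50.0 * np.eye(50)
b = rng.standard_normal(50)
x = inexact_cg(A, b, eta_schedule=lambda k: min(1e-2, 1e-12 * 4.0**k))
print("true relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```

Choosing how fast eta_k may grow without destroying convergence is precisely what theoretical bounds of the kind derived in the paper are meant to inform; the geometric schedule above is only a placeholder.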

Related research

07/24/2018 · On the Randomized Complexity of Minimizing a Convex Quadratic Function
Minimizing a convex, quadratic objective is a fundamental problem in mac...

06/17/2021 · Error bounds for Lanczos-based matrix function approximation
We analyze the Lanczos method for matrix function approximation (Lanczos...

08/25/2016 · Minimizing Quadratic Functions in Constant Time
A sampling-based optimization method for quadratic functions is proposed...

12/09/2018 · A note on solving nonlinear optimization problems in variable precision
This short note considers an efficient variant of the trust-region algor...

05/18/2021 · Approximate solutions of convex semi-infinite optimization problems in finitely many iterations
We develop two adaptive discretization algorithms for convex semi-infini...

12/23/2019 · Krylov type methods exploiting the quadratic numerical range
The quadratic numerical range W^2(A) is a subset of the standard numeric...

08/29/2023 · Limited memory gradient methods for unconstrained optimization
The limited memory steepest descent method (Fletcher, 2012) for unconstr...
