
Minimizing convex quadratic with variable precision Krylov methods

07/17/2018
by S. Gratton, et al. (ENSEEIHT, UNamur)

Iterative algorithms for the solution of convex quadratic optimization problems that exploit inaccurate matrix-vector products are investigated. Theoretical bounds on the performance of the Conjugate Gradient and Full Orthogonalization methods are derived, the quantities occurring in these bounds are estimated, and new practical algorithms are obtained. Numerical experiments suggest that the new methods have significant potential, including in the steadily more important context of multi-precision computations.
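The abstract alone does not specify the algorithms, but the core idea can be illustrated with a minimal sketch: a standard conjugate gradient iteration in which every matrix-vector product is deliberately perturbed, with an accuracy schedule that may be relaxed as the residual shrinks, following the usual inexact-Krylov relaxation heuristic. The perturbation model, the names `inexact_matvec` and `cg_variable_precision`, and the schedule below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def inexact_matvec(A, v, eps, rng):
    """Return A @ v plus a random perturbation of relative size eps,
    emulating a product evaluated in reduced precision (assumed model)."""
    exact = A @ v
    noise = rng.standard_normal(exact.shape)
    return exact + eps * np.linalg.norm(exact) * noise / np.linalg.norm(noise)

def cg_variable_precision(A, b, eps_schedule, tol=1e-8, maxit=500, seed=0):
    """Conjugate gradients for min_x 0.5 x^T A x - b^T x (A SPD), where each
    matrix-vector product is only accurate to eps_schedule(k, rnorm)."""
    rng = np.random.default_rng(seed)
    x = np.zeros_like(b)
    r = b.copy()               # residual b - A x at x = 0
    p = r.copy()
    rho = r @ r
    bnorm = np.linalg.norm(b)
    for k in range(maxit):
        eps = eps_schedule(k, np.sqrt(rho))
        Ap = inexact_matvec(A, p, eps, rng)
        alpha = rho / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap        # recurred residual; drifts from the true one
        rho_new = r @ r
        if np.sqrt(rho_new) <= tol * bnorm:
            break
        p = r + (rho_new / rho) * p
        rho = rho_new
    return x, k + 1
```

A possible usage, with a schedule that demands accurate products early and allows cruder (cheaper, lower-precision) ones as the residual norm decreases:

```python
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((200, 200)))
A = Q @ np.diag(np.linspace(1.0, 1e4, 200)) @ Q.T    # SPD test matrix
b = rng.standard_normal(200)
# Relaxation heuristic: eps_k ~ tol / ||r_k|| (illustrative choice).
x, its = cg_variable_precision(A, b, lambda k, rn: min(1e-2, 1e-8 / rn))
```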


Related research

07/24/2018 · On the Randomized Complexity of Minimizing a Convex Quadratic Function
Minimizing a convex, quadratic objective is a fundamental problem in mac...

06/17/2021 · Error bounds for Lanczos-based matrix function approximation
We analyze the Lanczos method for matrix function approximation (Lanczos...

08/25/2016 · Minimizing Quadratic Functions in Constant Time
A sampling-based optimization method for quadratic functions is proposed...

12/09/2018 · A note on solving nonlinear optimization problems in variable precision
This short note considers an efficient variant of the trust-region algor...

05/18/2021 · Approximate solutions of convex semi-infinite optimization problems in finitely many iterations
We develop two adaptive discretization algorithms for convex semi-infini...

12/23/2019 · Krylov type methods exploiting the quadratic numerical range
The quadratic numerical range W^2(A) is a subset of the standard numeric...

07/04/2022 · Approximate Vanishing Ideal Computations at Scale
The approximate vanishing ideal of a set of points X = {𝐱_1, …, 𝐱_m} ⊆ [0...