
Local convergence of alternating low-rank optimization methods with overrelaxation

by Ivan V. Oseledets et al.

The local convergence of alternating optimization methods with overrelaxation for low-rank matrix and tensor problems is established. The analysis is based on the linearization of the method, which takes the form of an SOR iteration for a positive semidefinite Hessian and can be studied in the corresponding quotient geometry of equivalent low-rank representations. In the matrix case, the optimal relaxation parameter for accelerating the local convergence can be determined from the convergence rate of the standard method. This result relies on a version of Young's SOR theorem for positive semidefinite 2 × 2 block systems.
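To make the matrix case concrete: alternating optimization for the factored problem min ||A - X Y^T||_F solves the least-squares subproblems in X and Y in turn, and overrelaxation replaces each exact solve by the blend new = old + omega * (solution - old), with omega = 1 recovering the standard method. The NumPy sketch below is illustrative only, assuming this model problem and this blending form; the function names als_overrelaxed and young_optimal_omega are hypothetical, not taken from the paper.

    import numpy as np

    def als_overrelaxed(A, r, omega=1.0, iters=200, seed=0):
        # Alternating least squares for min ||A - X @ Y.T||_F at fixed rank r,
        # with overrelaxation: each exact subproblem solution is blended with
        # the previous iterate. omega = 1.0 gives plain ALS.
        rng = np.random.default_rng(seed)
        m, n = A.shape
        X = rng.standard_normal((m, r))
        Y = rng.standard_normal((n, r))
        errors = []
        for _ in range(iters):
            X_ls = np.linalg.solve(Y.T @ Y, (A @ Y).T).T    # argmin over X for fixed Y
            X = X + omega * (X_ls - X)
            Y_ls = np.linalg.solve(X.T @ X, (A.T @ X).T).T  # argmin over Y for fixed X
            Y = Y + omega * (Y_ls - Y)
            errors.append(np.linalg.norm(A - X @ Y.T))      # Frobenius error
        return X, Y, errors

    def young_optimal_omega(rho):
        # Young's formula for a consistently ordered 2 x 2 block system:
        # if rho is the convergence rate of the standard (omega = 1,
        # Gauss-Seidel-type) iteration, the SOR rate is minimized at
        #     omega* = 2 / (1 + sqrt(1 - rho)).
        return 2.0 / (1.0 + np.sqrt(1.0 - rho))

For example, a standard-method rate of rho = 0.64 gives omega* = 2 / (1 + 0.6) = 1.25, so one can estimate rho from the error decay of an omega = 1 run and restart with young_optimal_omega(rho); this is the acceleration pattern the abstract refers to.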



