On Asymptotic Linear Convergence of Projected Gradient Descent for Constrained Least Squares

12/22/2021
by Trung Vu et al.

Many recent problems in signal processing and machine learning, such as compressed sensing, image restoration, matrix/tensor recovery, and non-negative matrix factorization, can be cast as constrained optimization. Projected gradient descent (PGD) is a simple yet efficient method for solving such problems. Local convergence analysis furthers our understanding of its asymptotic behavior near the solution, offering sharper bounds on the convergence rate than global convergence analysis. However, local guarantees often appear scattered across problem-specific areas of machine learning and signal processing. This manuscript presents a unified framework for the local convergence analysis of PGD in the context of constrained least squares. The proposed analysis offers insights into pivotal local convergence properties: the conditions for linear convergence, the region of convergence, the exact asymptotic rate of convergence, and a bound on the number of iterations needed to reach a given level of accuracy. To demonstrate the applicability of the approach, we present a recipe for the convergence analysis of PGD and apply it, beginning to end, to four fundamental problems: linearly constrained least squares, sparse recovery, least squares with a unit-norm constraint, and matrix completion.
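At each iteration, PGD takes a gradient step on the least-squares objective f(x) = (1/2)||Ax - b||^2 and projects back onto the constraint set C: x_{k+1} = P_C(x_k - η A^T(A x_k - b)). The following minimal Python sketch illustrates this template; it is not code from the paper, the names pgd_least_squares and project are ours, and the step size η = 1/||A||_2^2 is a conventional 1/L choice rather than the paper's prescription. The usage example targets the unit-norm constraint, one of the four problems analyzed, whose Euclidean projection is simply normalization.

    import numpy as np

    def pgd_least_squares(A, b, project, step=None, x0=None, tol=1e-10, max_iter=1000):
        """Projected gradient descent for min_x 0.5*||Ax - b||^2 subject to x in C.

        `project` maps a point to its Euclidean projection onto C.
        Illustrative sketch only; the step size and stopping rule are conventional defaults.
        """
        _, n = A.shape
        x = np.zeros(n) if x0 is None else x0.astype(float)
        if step is None:
            # Conventional 1/L step with L = ||A||_2^2 (largest eigenvalue of A^T A).
            step = 1.0 / np.linalg.norm(A, 2) ** 2
        for _ in range(max_iter):
            grad = A.T @ (A @ x - b)           # gradient of the least-squares objective
            x_new = project(x - step * grad)   # gradient step followed by projection
            if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x)):
                return x_new
            x = x_new
        return x

    # Unit-norm constraint: projection onto the unit sphere is normalization (for z != 0).
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 20))
    b = rng.standard_normal(50)
    x_hat = pgd_least_squares(A, b, lambda z: z / np.linalg.norm(z))
    print(np.linalg.norm(x_hat))  # prints 1.0 up to floating-point error

Near a solution satisfying the paper's conditions, the distance of such iterates to the solution contracts by an asymptotically constant factor per iteration; that contraction factor is the linear rate the analysis characterizes.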


