Matrix Completion via Nonconvex Regularization: Convergence of the Proximal Gradient Algorithm

03/02/2019
by Fei Wen, et al.

Matrix completion has attracted much interest in machine learning and computer vision over the past decade. For low-rank promotion in matrix completion, the nuclear norm penalty is convenient due to its convexity but suffers from a bias problem. Recently, various algorithms using nonconvex penalties have been proposed, among which the proximal gradient descent (PGD) algorithm is one of the most efficient and effective. For the nonconvex PGD algorithm, whether it converges to a local minimizer, and at what rate, has remained unclear. This work provides a nontrivial analysis of the PGD algorithm in the nonconvex case. Besides convergence to a stationary point for a generalized nonconvex penalty, we provide a deeper analysis of a popular and important class of nonconvex penalties whose thresholding functions are discontinuous. For such penalties, we establish finite rank convergence, convergence to a restricted strictly local minimizer, and an eventually linear convergence rate of the PGD algorithm. Moreover, convergence to a local minimizer is proved for the hard-thresholding penalty. Our result is the first to show that nonconvex regularized matrix completion only has restricted strictly local minimizers, and that the PGD algorithm can converge to such minimizers at an eventually linear rate under certain conditions. Experiments illustrating the PGD algorithm are also provided. Code is available at https://github.com/FWen/nmc.
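To make the iteration concrete, below is a minimal Python sketch of PGD for matrix completion with the hard-thresholding (l0-type) penalty on the singular values, whose proximal map is singular value hard thresholding. This is an illustrative sketch under standard assumptions (step size mu <= 1, threshold sqrt(2*mu*lam)), not the authors' implementation; see https://github.com/FWen/nmc for the reference code. The names pgd_matrix_completion, svd_hard_threshold, mask, lam, and mu are hypothetical.

```python
import numpy as np

def svd_hard_threshold(Z, tau):
    """Proximal map of the hard-thresholding (l0-type) singular-value penalty:
    singular values at or below tau are set exactly to zero."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U * np.where(s > tau, s, 0.0)) @ Vt  # U @ diag(thresholded s) @ Vt

def pgd_matrix_completion(M_obs, mask, lam=1.0, mu=1.0, n_iter=500, tol=1e-8):
    """PGD sketch for: minimize 0.5*||P_Omega(X - M)||_F^2 + lam * (l0 penalty).

    M_obs : array holding the observed entries (values off the mask are ignored)
    mask  : boolean array, True on the observed set Omega
    lam   : penalty weight
    mu    : step size; mu <= 1 suffices since P_Omega has Lipschitz constant 1
    """
    X = np.where(mask, M_obs, 0.0)         # start from the zero-filled observation
    tau = np.sqrt(2.0 * mu * lam)          # hard-threshold level for step size mu
    for _ in range(n_iter):
        grad = np.where(mask, X - M_obs, 0.0)           # gradient of the data term
        X_new = svd_hard_threshold(X - mu * grad, tau)  # proximal (thresholding) step
        if np.linalg.norm(X_new - X) <= tol * max(1.0, np.linalg.norm(X)):
            return X_new
        X = X_new
    return X

# Tiny usage example on a synthetic rank-2 matrix with about 50% observed entries.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M = rng.standard_normal((60, 2)) @ rng.standard_normal((2, 60))
    mask = rng.random(M.shape) < 0.5
    X_hat = pgd_matrix_completion(M, mask, lam=0.05, mu=1.0)
    print("relative error:", np.linalg.norm(X_hat - M) / np.linalg.norm(M))
```

The hard-thresholding map used here is discontinuous at the threshold, which is exactly the class of penalties covered by the finer analysis above: singular values below the threshold are set exactly to zero, so the iterates have finite rank.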


Related research

01/24/2018 · Matrix Completion with Nonconvex Regularization: Spectral Operators and Scalable Algorithms
In this paper, we study the popularly dubbed matrix completion problem, ...

04/08/2019 · Binary matrix completion with nonconvex regularizers
Many practical problems involve the recovery of a binary matrix from par...

12/29/2021 · On Local Convergence of Iterative Hard Thresholding for Matrix Completion
Iterative hard thresholding (IHT) has gained in popularity over the past...

04/15/2016 · Positive Definite Estimation of Large Covariance Matrix Using Generalized Nonconvex Penalties
This work addresses the issue of large covariance matrix estimation in h...

09/23/2021 · Low Rank Vectorized Hankel Lift for Matrix Recovery via Fast Iterative Hard Thresholding
We propose a VHL-FIHT algorithm for matrix recovery in blind super-resol...

01/21/2022 · LRSVRG-IMC: An SVRG-Based Algorithm for Low-Rank Inductive Matrix Completion
Low-rank inductive matrix completion (IMC) is currently widely used in I...

03/16/2017 · Accelerated and Inexact Soft-Impute for Large-Scale Matrix and Tensor Completion
Matrix and tensor completion aim to recover a low-rank matrix / tensor f...
