Double Weighted Truncated Nuclear Norm Regularization for Low-Rank Matrix Completion

01/07/2019
by   Shengke Xue, et al.
Matrix completion focuses on recovering a matrix from a small subset of its observed elements, and has gained considerable attention in computer vision. Many previous approaches formulate this issue as a low-rank matrix approximation problem. Recently, the truncated nuclear norm has been presented as a surrogate of the traditional nuclear norm, for better estimation of the rank of a matrix. The truncated nuclear norm regularization (TNNR) method is applicable in real-world scenarios. However, it is sensitive to the choice of the number of truncated singular values and requires numerous iterations to converge. Hence, this paper proposes a revised approach called double weighted truncated nuclear norm regularization (DW-TNNR), which assigns different weights to the rows and columns of a matrix separately, to accelerate convergence with acceptable performance. DW-TNNR is more robust to the number of truncated singular values than TNNR. Instead of the iterative updating scheme in the second step of TNNR, this paper devises an efficient strategy based on gradient descent in a concise form, with a theoretical guarantee in optimization. Extensive experiments conducted on real visual data show that DW-TNNR performs well and is superior in both speed and accuracy for matrix completion.
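To make the rank surrogate concrete, the following is a minimal NumPy sketch of the truncated nuclear norm that TNNR-style methods minimize: the sum of all but the r largest singular values, so the r dominant singular values go unpenalized. The function name and the toy matrix are illustrative choices, not from the paper.

```python
import numpy as np

def truncated_nuclear_norm(X, r):
    """Sum of all but the r largest singular values of X.

    ||X||_r = sum_{i=r+1}^{min(m,n)} sigma_i(X). Minimizing this
    surrogate drives the tail singular values toward zero while
    leaving the r leading ones free, giving a better proxy for
    rank than the full nuclear norm (which is the r = 0 case).
    """
    s = np.linalg.svd(X, compute_uv=False)  # singular values, descending
    return s[r:].sum()

# Toy example: singular values of diag(3, 2, 1) are exactly 3, 2, 1.
X = np.diag([3.0, 2.0, 1.0])
print(truncated_nuclear_norm(X, 0))  # full nuclear norm: 6.0
print(truncated_nuclear_norm(X, 1))  # tail beyond the top value: 3.0
```

Note that a larger r shrinks the penalty toward zero, which is why TNNR is sensitive to how many singular values are truncated; DW-TNNR's row and column weighting is aimed at reducing that sensitivity.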


