Geometry of Factored Nuclear Norm Regularization

04/05/2017
by Qiuwei Li, et al.

This work investigates the geometry of a nonconvex reformulation of minimizing a general convex loss function f(X) regularized by the matrix nuclear norm ‖X‖_*. Nuclear-norm regularized matrix inverse problems are at the heart of many applications in machine learning, signal processing, and control. The statistical performance of nuclear norm regularization has been studied extensively in the literature using convex analysis techniques. Despite its optimal statistical performance, the resulting optimization problem has high computational complexity when solved with standard or even tailored fast convex solvers. To develop faster and more scalable algorithms, we follow the Burer-Monteiro proposal and factor the matrix variable X into the product of two smaller rectangular matrices, X = UV^T, replacing the nuclear norm ‖X‖_* with (‖U‖_F^2 + ‖V‖_F^2)/2. Despite the nonconvexity of the factored formulation, we prove that when the convex loss function f(X) is (2r,4r)-restricted well-conditioned, each critical point of the factored problem either corresponds to the optimal solution X^⋆ of the original convex optimization or is a strict saddle point at which the Hessian has a strictly negative eigenvalue. This geometric structure of the factored formulation allows many local search algorithms to converge to the global optimum from random initializations.
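To make the factored formulation concrete, below is a minimal sketch (not the authors' code) of plain gradient descent on the factored objective f(UV^T) + λ(‖U‖_F^2 + ‖V‖_F^2)/2. The loss f is taken, purely for illustration, to be a least-squares matrix-sensing loss f(X) = ‖A vec(X) − y‖^2/2 with a random Gaussian measurement matrix; the dimensions, regularization weight, and step size are arbitrary choices for the demo and are not taken from the paper.

```python
# Minimal sketch (assumptions: a least-squares matrix-sensing loss, random
# Gaussian measurements, and hand-picked sizes/step size -- none of these
# come from the paper) of gradient descent on the factored objective
#     minimize_{U,V}  f(U V^T) + (lam/2) * (||U||_F^2 + ||V||_F^2),
# which replaces the nuclear-norm-regularized convex problem in X.
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 12, 12, 2                 # matrix size and target rank
p = 5 * n * m                       # number of linear measurements
lam = 0.1                           # regularization weight (illustrative)

# Ground-truth low-rank matrix and noiseless measurements y = A vec(X_true)
X_true = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))
A = rng.standard_normal((p, n * m)) / np.sqrt(p)
y = A @ X_true.ravel()

def grad_f(X):
    """Gradient of the illustrative loss f(X) = 0.5 * ||A vec(X) - y||^2."""
    return (A.T @ (A @ X.ravel() - y)).reshape(n, m)

# Random initialization of the factors U (n x r) and V (m x r)
U = rng.standard_normal((n, r))
V = rng.standard_normal((m, r))

step = 5e-3
for _ in range(3000):
    G = grad_f(U @ V.T)             # gradient of f at X = U V^T
    dU = G @ V + lam * U            # d/dU of the factored objective
    dV = G.T @ U + lam * V          # d/dV of the factored objective
    U -= step * dU
    V -= step * dV

print("relative error:",
      np.linalg.norm(U @ V.T - X_true) / np.linalg.norm(X_true))
```

Under the paper's (2r,4r)-restricted well-conditionedness assumption, every critical point of this nonconvex problem is either globally optimal or a strict saddle, which is why a simple random initialization as above is expected to suffice for local search to reach the global optimum.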

