Bilinear Parameterization For Differentiable Rank-Regularization

11/27/2018
by Marcus Valtonen Örnhag, et al.

Low rank approximation is a commonly occurring problem in many computer vision and machine learning applications. There are two common ways of optimizing the resulting models. Either the set of matrices of a given rank is explicitly parameterized using a bilinear factorization, or low rank is implicitly enforced through regularization terms that penalize non-zero singular values. While the former results in differentiable problems that can be efficiently optimized using local quadratic approximation, the latter are typically not differentiable (and sometimes even discontinuous) and require splitting methods such as the Alternating Direction Method of Multipliers (ADMM). It is well known that while ADMM makes rapid progress during the first few iterations, convergence to the exact minimizer can be tediously slow. On the other hand, regularization formulations can in certain cases come with theoretical optimality guarantees. In this paper we show how many non-differentiable regularization methods can be reformulated into smooth objectives using bilinear parameterization. This opens up the possibility of using second-order methods, such as Levenberg-Marquardt (LM) and Variable Projection (VarPro), to achieve accurate solutions for ill-conditioned problems. We show on several real and synthetic experiments that our second-order formulation converges to substantially more accurate solutions than ADMM formulations provide in a reasonable amount of time.
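The nuclear-norm special case illustrates the reformulation: the classical identity ||X||_* = min over factorizations X = B C^T of (1/2)(||B||_F^2 + ||C||_F^2) turns the non-smooth penalty into a smooth function of the factors B and C, so the whole objective becomes a sum of squares that a Levenberg-Marquardt solver can minimize. The following minimal Python sketch is not the authors' code: the matrix sizes, mask W, and weight lam are illustrative, SciPy's least_squares with method="lm" stands in for the tailored LM/VarPro solvers discussed in the paper, and the paper itself treats more general penalties on the singular values.

# Sketch: replace the non-smooth objective
#   || W .* (X - M) ||_F^2 + lam * ||X||_*
# with the smooth bilinear surrogate
#   || W .* (B C^T - M) ||_F^2 + (lam/2) * (||B||_F^2 + ||C||_F^2)
# and minimize it with a Levenberg-Marquardt-type solver.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
m, n, r = 30, 20, 3
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # ground-truth low-rank matrix
W = (rng.random((m, n)) < 0.7).astype(float)                   # observation mask
lam = 0.1

def residuals(x):
    B = x[:m * r].reshape(m, r)
    C = x[m * r:].reshape(n, r)
    # Data term: only observed entries contribute.
    data = (W * (B @ C.T - M)).ravel()
    # Smooth surrogate for lam * ||B C^T||_*, written as extra residuals
    # whose squared sum equals (lam/2) * (||B||_F^2 + ||C||_F^2).
    reg = np.sqrt(lam / 2) * x
    return np.concatenate([data, reg])

x0 = rng.standard_normal((m + n) * r)
res = least_squares(residuals, x0, method="lm")  # Levenberg-Marquardt
B = res.x[:m * r].reshape(m, r)
C = res.x[m * r:].reshape(n, r)
print("masked fit error:", np.linalg.norm(W * (B @ C.T - M)))

At a minimizer, B @ C.T plays the role of the low-rank estimate X, and the surrogate value (lam/2)(||B||_F^2 + ||C||_F^2) coincides with lam * ||B C^T||_* at an optimal factorization, which is what makes the smooth problem equivalent to the regularized one on matrices of rank at most r.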


Related research:

- Accurate Optimization of Weighted Nuclear Norm for Non-Rigid Structure from Motion (03/23/2020): Fitting a matrix of a given rank to data in a least squares sense can be...
- Adaptive Relaxed ADMM: Convergence Theory and Practical Implementation (04/10/2017): Many modern computer vision and machine learning applications rely on so...
- Structured Low-Rank Matrix Factorization with Missing and Grossly Corrupted Observations (09/03/2014): Recovering low-rank and sparse matrices from incomplete or corrupted obs...
- Extended Gauss-Newton and Gauss-Newton-ADMM Algorithms for Low-Rank Matrix Optimization (06/10/2016): We develop a generic Gauss-Newton (GN) framework for solving a class of ...
- Second Order Accurate Hierarchical Approximate Factorization of Sparse SPD Matrices (07/01/2020): We describe a second-order accurate approach to sparsifying the off-diag...
- Oracle Complexity of Second-Order Methods for Finite-Sum Problems (11/15/2016): Finite-sum optimization problems are ubiquitous in machine learning, and...
