- Efficient Low-Rank Matrix Learning by Factorizable Nonconvex Regularization
- Krylov Methods for Low-Rank Regularization
- Double Weighted Truncated Nuclear Norm Regularization for Low-Rank Matrix Completion
- A Rank-Corrected Procedure for Matrix Completion with Fixed Basis Coefficients
- Matrix Completion with Nonconvex Regularization: Spectral Operators and Scalable Algorithms
- A New Low-Rank Tensor Model for Video Completion
- Efficiently Using Second Order Information in Large l1 Regularization Problems
Efficient and Practical Stochastic Subgradient Descent for Nuclear Norm Regularization
We describe novel subgradient methods for a broad class of matrix optimization problems involving nuclear norm regularization. Unlike existing approaches, our method executes very cheap iterations by combining low-rank stochastic subgradients with efficient incremental SVD updates, made possible by highly optimized and parallelizable dense linear algebra operations on small matrices. Our practical algorithms always maintain a low-rank factorization of iterates that can be conveniently held in memory and efficiently multiplied to generate predictions in matrix completion settings. Empirical comparisons confirm that our approach is highly competitive with several recently proposed state-of-the-art solvers for such problems.
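The abstract leaves the iteration itself to the paper. The following is a minimal sketch of how such a method could be organized, assuming a squared loss on observed entries, min over X of (1/2) * sum over (i,j) in Omega of (X_ij - M_ij)^2 + lam * ||X||_*, and a Brand-style (2006) additive incremental SVD for folding each minibatch update into the factored iterate. All function names, parameter choices, and the specific update recipe are illustrative assumptions, not the authors' code.

```python
# A sketch, not the paper's implementation: stochastic subgradient descent for
#     min_X  (1/2) * sum_{(i,j) in Omega} (X_ij - M_ij)^2  +  lam * ||X||_*
# The iterate is kept in factored form X = U @ diag(s) @ V.T, and each
# minibatch step is absorbed via a Brand-style incremental SVD, so only
# small dense matrices are ever decomposed.
import numpy as np

def incremental_svd_update(U, s, V, A, B, rank):
    """Thin SVD of U @ diag(s) @ V.T + A @ B.T, truncated to `rank`.

    Orthogonalize A against U and B against V, take the SVD of a small
    (r + c) x (r + c) core matrix, and rotate the enlarged bases.
    """
    UtA = U.T @ A
    QA, RA = np.linalg.qr(A - U @ UtA)   # new column-space directions
    VtB = V.T @ B
    QB, RB = np.linalg.qr(B - V @ VtB)   # new row-space directions

    r, c = s.size, A.shape[1]
    K = np.zeros((r + c, r + c))
    K[:r, :r] = np.diag(s)
    K += np.vstack([UtA, RA]) @ np.vstack([VtB, RB]).T

    Uk, sk, Vkt = np.linalg.svd(K)       # SVD of a small matrix only
    k = min(rank, sk.size)
    U_new = np.hstack([U, QA]) @ Uk[:, :k]
    V_new = np.hstack([V, QB]) @ Vkt.T[:, :k]
    return U_new, sk[:k], V_new

def sgd_nuclear_norm(rows, cols, vals, shape, lam=0.1, rank=10,
                     step=0.05, batch=50, epochs=20, seed=0):
    """Stochastic subgradient loop over observed entries (rows, cols, vals)."""
    rng = np.random.default_rng(seed)
    m, n = shape
    U, s, V = np.eye(m, rank), np.full(rank, 1e-3), np.eye(n, rank)

    n_obs = vals.size
    for _ in range(epochs * (n_obs // batch)):
        idx = rng.integers(0, n_obs, size=batch)
        i, j = rows[idx], cols[idx]
        # Predictions for sampled entries, computed from the factors alone.
        pred = np.einsum('bk,k,bk->b', U[i], s, V[j])
        resid = pred - vals[idx]

        # A subgradient of lam * ||X||_* at X = U diag(s) V.T is lam * U V.T,
        # so that part of the step is just a shrinkage of singular values.
        s = np.maximum(s - step * lam, 1e-12)

        # The data term -step * sum_b resid_b * e_{i_b} e_{j_b}^T is a sparse,
        # rank-<=batch update A @ B.T; fold it in with the incremental SVD.
        A = np.zeros((m, batch)); A[i, np.arange(batch)] = -step * resid
        B = np.zeros((n, batch)); B[j, np.arange(batch)] = 1.0
        U, s, V = incremental_svd_update(U, s, V, A, B, rank)

    return U, s, V
```

Under these assumptions, each iteration touches only a batch-sized sparse update and decomposes matrices of size about rank + batch, which is one way the cheap-iteration claim in the abstract can be realized; the nuclear-norm subgradient U V^T never needs to be formed explicitly, since at a factored iterate it acts as a uniform shrinkage of the singular values.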