Efficient Low-Rank Matrix Learning by Factorizable Nonconvex Regularization

08/14/2020 ∙ by Yaqing Wang, et al.

Matrix learning is at the core of many machine learning problems. To encourage a low-rank solution, a popular recent trend is to go beyond the traditional convex nuclear norm regularizer and use nonconvex regularizers that adaptively penalize singular values. These offer better recovery performance, but require computing an expensive singular value decomposition (SVD) in each iteration. To remove this bottleneck, we consider the "nuclear norm minus Frobenius norm" regularizer. Besides having theoretical recovery and shrinkage properties similar to those of other nonconvex regularizers, it can be reformulated into a factored form that gradient-based algorithms can optimize efficiently while avoiding SVD computations altogether. Extensive low-rank matrix completion experiments on synthetic and real-world data sets show that the proposed method obtains state-of-the-art recovery performance and is much more efficient than existing convex/nonconvex low-rank regularization and matrix factorization algorithms.
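
To make the "factored form" idea concrete, here is a minimal NumPy sketch (not the authors' code). It assumes the factored matrix-completion objective takes the form min_{U,V} 0.5*||P_Omega(U V^T - O)||_F^2 + lam*(0.5*(||U||_F^2 + ||V||_F^2) - ||U V^T||_F), which uses the standard identity ||X||_* = min_{X = U V^T} 0.5*(||U||_F^2 + ||V||_F^2); the function name, step size, and other parameters below are hypothetical choices for illustration only.

```python
# Minimal sketch (under the assumptions stated above) of matrix completion
# with a "nuclear norm minus Frobenius norm" regularizer in factored form.
# Every update uses only small matrix products; no SVD is ever computed.
import numpy as np

def factored_nmf_complete(O, mask, rank=5, lam=0.1, lr=1e-3, iters=20000, seed=0):
    """Plain gradient descent on the factored objective.
    O: observed matrix (unobserved entries can be anything),
    mask: 1 where O is observed, 0 elsewhere."""
    rng = np.random.default_rng(seed)
    m, n = O.shape
    U = 0.1 * rng.standard_normal((m, rank))
    V = 0.1 * rng.standard_normal((n, rank))
    for _ in range(iters):
        X = U @ V.T
        R = mask * (X - O)                   # residual on observed entries
        fro = np.linalg.norm(X) + 1e-12      # ||U V^T||_F (guard against 0)
        # Gradients of the data-fitting term plus the factored regularizer:
        #   d/dU 0.5*(||U||_F^2 + ||V||_F^2) = U,  d/dU ||U V^T||_F = U (V^T V)/fro
        grad_U = R @ V + lam * (U - (U @ (V.T @ V)) / fro)
        grad_V = R.T @ U + lam * (V - (V @ (U.T @ U)) / fro)
        U -= lr * grad_U
        V -= lr * grad_V
    return U @ V.T

# Toy usage: recover a rank-3 matrix from ~30% of its entries.
rng = np.random.default_rng(1)
M = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))
mask = (rng.random(M.shape) < 0.3).astype(float)
X_hat = factored_nmf_complete(mask * M, mask, rank=3, lam=0.05)
print("relative error:", np.linalg.norm(X_hat - M) / np.linalg.norm(M))
```

Because the objective is expressed in the factors U (m x k) and V (n x k), each iteration costs only a few m x k and n x k matrix multiplications, which is what lets the factored formulation avoid the per-iteration SVD required by other nonconvex spectral regularizers.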
