
Compact Factorization of Matrices Using Generalized Round-Rank

by Pouya Pezeshkpour, et al.
University of California, Irvine
University of Washington

Matrix factorization is a well-studied task in machine learning for compactly representing large, noisy data. In our approach, instead of using the traditional concept of matrix rank, we define a new notion of link-rank based on a non-linear link function applied within the factorization. In particular, by applying the round function to a factorization to obtain ordinal-valued matrices, we introduce generalized round-rank (GRR). We show not only that there are many full-rank matrices that have low GRR, but further, that these matrices cannot be approximated well by low-rank linear factorization. We establish uniqueness conditions for this formulation and present gradient-descent-based algorithms. Finally, we present experiments on real-world datasets to demonstrate that GRR-based factorization is significantly more accurate than linear factorization, while converging faster and using lower-rank representations.
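The abstract's central claim, that full-rank matrices can have low GRR, can be illustrated with a small sketch (not the paper's code; the clipped-round construction below is an assumed reading of the round function mapping reals to the ordinal range). The lower-triangular all-ones matrix has full linear rank, yet it equals the elementwise round of a rank-2 matrix clipped to {0, 1}, so its GRR is at most 2:

```python
import numpy as np

# Illustrative sketch: a full-rank matrix with GRR <= 2.
# The lower-triangular all-ones matrix B (B[i, j] = 1 iff i >= j)
# is triangular with unit diagonal, hence has full linear rank n.
n = 8
i = np.arange(n)[:, None]
j = np.arange(n)[None, :]

# X = i - j + 0.6 has rank at most 2 (sum of two outer products:
# i * 1 and 1 * (-j + 0.6)).
X = i - j + 0.6

# Elementwise round, then clip to the ordinal range {0, 1}:
# i >= j gives X >= 0.6 -> 1; i < j gives X <= -0.4 -> 0.
B = np.clip(np.rint(X), 0, 1)

lin_rank = np.linalg.matrix_rank(B)  # full rank despite GRR <= 2
print(int(lin_rank))  # prints 8
```

By construction B is recovered exactly from a rank-2 factorization through the round link, while no rank-2 linear factorization can reproduce it, which is the gap the GRR formulation exploits.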



