Learnable Graph-regularization for Matrix Decomposition

10/16/2020
by Penglong Zhai, et al.

Low-rank approximation models of data matrices have become important machine learning and data mining tools in many fields, including computer vision, text mining, and bioinformatics. They embed high-dimensional data into low-dimensional spaces, which mitigates the effects of noise and uncovers latent relations. To make the learned representations inherit the structure of the original data, graph-regularization terms are often added to the loss function. However, graphs constructed in advance often fail to reflect the true network connectivity and the intrinsic relationships, and many graph-regularized methods do not take the dual spaces into account. Probabilistic models are often used to model the distribution of the representations, but most previous methods assume, for simplicity, that the hidden variables are independent and identically distributed. To this end, we propose a learnable graph-regularization model for matrix decomposition (LGMD), which builds a bridge between graph-regularized methods and probabilistic matrix decomposition models. LGMD learns two graphical structures (i.e., two precision matrices) on the fly, in an iterative manner, via sparse precision matrix estimation, and is more robust to noise and missing entries. Extensive numerical results and comparisons with competing methods demonstrate its effectiveness.
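The alternating scheme the abstract describes can be sketched in a few lines of NumPy/SciPy. This is a minimal illustration under stated assumptions, not the authors' implementation: the `sparse_precision` helper is a crude ridge-plus-soft-threshold stand-in for a proper sparse precision estimator (the paper uses sparse precision matrix estimation, e.g. a graphical-lasso-style solver), and each factor update solves its graph-regularized least-squares subproblem as a Sylvester equation. All names, shapes, and parameter values are illustrative.

```python
import numpy as np
from scipy.linalg import solve_sylvester

def sparse_precision(M, eps=1e-2, thresh=0.05):
    """Crude sparse precision estimate from a factor matrix M:
    invert a ridge-regularized covariance, then soft-threshold the
    off-diagonal entries. (Stand-in for the sparse precision
    estimation step described in the paper.)"""
    cov = M @ M.T / M.shape[1] + eps * np.eye(M.shape[0])
    theta = np.linalg.inv(cov)
    sparse = np.sign(theta) * np.maximum(np.abs(theta) - thresh, 0.0)
    np.fill_diagonal(sparse, np.diag(theta))
    return sparse

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 20))   # data matrix, rows = samples
k, lam = 5, 0.1                     # latent rank, regularization weight

U = rng.standard_normal((30, k))    # row (sample) factors
V = rng.standard_normal((20, k))    # column (feature) factors

for _ in range(10):
    # (1) Re-learn the two graphs (precision matrices) from the
    #     current factors -- the "learnable" part of LGMD.
    theta_row = sparse_precision(U)   # 30x30 graph over rows
    theta_col = sparse_precision(V)   # 20x20 graph over columns

    # (2) Update U: minimize ||X - U V^T||_F^2 + lam * tr(U^T Theta_row U).
    #     Setting the gradient to zero gives the Sylvester equation
    #     lam * Theta_row @ U + U @ (V^T V) = X V.
    U = solve_sylvester(lam * theta_row, V.T @ V, X @ V)
    #     Symmetric update for V with the column graph.
    V = solve_sylvester(lam * theta_col, U.T @ U, X.T @ U)

err = np.linalg.norm(X - U @ V.T) / np.linalg.norm(X)
print(f"relative reconstruction error: {err:.3f}")
```

Re-estimating the precision matrices inside the loop is what distinguishes this scheme from classical graph-regularized factorization, where the graph Laplacians are fixed before optimization begins.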
