There and Back Again: A General Approach to Learning Sparse Models

06/25/2017
by Vatsal Sharan, et al.

We propose a simple and efficient approach to learning sparse models. Our approach consists of (1) projecting the data into a lower dimensional space, (2) learning a dense model in the lower dimensional space, and then (3) recovering the sparse model in the original space via compressive sensing. We apply this approach to Non-negative Matrix Factorization (NMF), tensor decomposition, and linear classification, showing that it achieves 10× compression with negligible loss in accuracy on real data, along with up to 5× speedups. Our main theoretical contribution is to show the following result for NMF: if the original factors are sparse, then their projections are the sparsest solutions to the projected NMF problem. This explains why our method works for NMF and reveals an interesting new property of random projections: they can preserve the solutions of non-convex optimization problems such as NMF.
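To make the three-step pipeline concrete, here is a minimal sketch applied to sparse linear classification. The dimensions, the Gaussian projection matrix, the use of logistic regression in the projected space, and orthogonal matching pursuit for the compressive-sensing recovery step are all illustrative assumptions, not the paper's exact choices.

```python
# Sketch of the abstract's pipeline for sparse linear classification:
# (1) project the data, (2) learn a dense model in the projected space,
# (3) recover a sparse model in the original space via compressive sensing.
import numpy as np
from sklearn.linear_model import LogisticRegression, orthogonal_mp

rng = np.random.default_rng(0)
n, d, k, s = 2000, 1000, 100, 20   # samples, original dim, projected dim, sparsity (illustrative)

# Synthetic data labeled by a sparse ground-truth classifier w_true.
w_true = np.zeros(d)
w_true[rng.choice(d, s, replace=False)] = rng.normal(size=s)
X = rng.normal(size=(n, d))
y = (X @ w_true > 0).astype(int)

# (1) Project the data into k dimensions with a random Gaussian matrix.
Phi = rng.normal(size=(k, d)) / np.sqrt(k)
X_proj = X @ Phi.T                  # shape (n, k)

# (2) Learn a dense model in the lower dimensional space.
clf = LogisticRegression(C=1.0, max_iter=1000).fit(X_proj, y)
v = clf.coef_.ravel()               # dense weights of length k; roughly v ≈ Phi @ w

# (3) Recover a sparse weight vector w_hat with Phi @ w_hat ≈ v,
# here via orthogonal matching pursuit (one standard compressive-sensing solver).
w_hat = orthogonal_mp(Phi, v, n_nonzero_coefs=s)

# Use the recovered sparse classifier directly on the original features.
acc = np.mean((X @ w_hat > 0).astype(int) == y)
print(f"nonzeros in w_hat: {np.count_nonzero(w_hat)}, train accuracy: {acc:.3f}")
```

The point of the sketch is the shape of the computation: the expensive learning step touches only k-dimensional data, and the only d-dimensional work is one random projection and one sparse-recovery solve, which is where the compression and speedup in the abstract come from.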
