Grouped sparse projection

12/09/2019
by Nicolas Gillis, et al.

As evidenced by deep learning, very large models bring improvements in training dynamics and representation power. Yet, smaller models have the benefits of energy efficiency and interpretability. To get the best of both ends of the spectrum, one often encourages sparsity in the model. Unfortunately, most existing approaches do not offer a controllable way to request a desired sparsity level through an interpretable parameter. In this paper, we design a new sparse projection method for a set of vectors that achieves a desired average level of sparsity, measured using the ratio of the ℓ_1 and ℓ_2 norms. Most existing methods project each vector individually to achieve a target sparsity, hence the user has to choose a sparsity level for each vector (e.g., impose that all vectors have the same sparsity). Instead, we project all vectors together to achieve an average target sparsity, where the sparsity levels of the individual vectors are tuned automatically. We also propose a generalization of this projection using a new notion of weighted sparsity, measured using the ratio of a weighted ℓ_1 norm and the ℓ_2 norm. These projections can be used, in particular, to sparsify the columns of a matrix, which we use to compute sparse nonnegative matrix factorizations and to learn sparse deep networks.
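For context, sparsity measured via the ratio of the ℓ_1 and ℓ_2 norms is commonly normalized as in Hoyer (2004): sparsity(x) = (√n − ‖x‖_1/‖x‖_2)/(√n − 1), which is 0 for a vector with equal-magnitude entries and 1 for a vector with a single nonzero entry. Below is a minimal NumPy sketch of this measure and of the average column sparsity that a grouped projection would target; it illustrates the quantity being controlled, not the authors' projection algorithm, and the function names are illustrative.

    import numpy as np

    def hoyer_sparsity(x, eps=1e-12):
        """Hoyer (2004) sparsity of a vector, based on the l1/l2 ratio.

        Returns 0 for a vector with equal-magnitude entries and 1 for a
        vector with a single nonzero entry.
        """
        x = np.asarray(x, dtype=float)
        n = x.size
        l1 = np.abs(x).sum()
        l2 = np.sqrt((x ** 2).sum())
        return (np.sqrt(n) - l1 / (l2 + eps)) / (np.sqrt(n) - 1)

    def average_column_sparsity(W):
        """Average Hoyer sparsity over the columns of a matrix W.

        This is the quantity a grouped projection constrains: the mean
        sparsity over all columns, with per-column levels left free.
        """
        return np.mean([hoyer_sparsity(W[:, j]) for j in range(W.shape[1])])

    # Example: one dense column and one maximally sparse column.
    W = np.column_stack([np.ones(100), np.eye(100)[:, 0]])
    print(average_column_sparsity(W))  # ~0.5: the average, not per-column, level

The point of the grouped formulation is visible in the example: a per-vector projection would have to pick a sparsity level for each column separately, whereas the grouped projection only fixes the average (here 0.5) and lets the columns settle at different levels.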


Related research

06/29/2020 · Binary Random Projections with Controllable Sparsity Patterns
Random projection is often used to project higher-dimensional vectors on...

06/13/2020 · Sparse Separable Nonnegative Matrix Factorization
We propose a new variant of nonnegative matrix factorization (NMF), comb...

08/08/2022 · Controlled Sparsity via Constrained Optimization or: How I Learned to Stop Tuning Penalties and Love Constraints
The performance of trained neural networks is robust to harsh levels of ...

11/22/2020 · A Homotopy-based Algorithm for Sparse Multiple Right-hand Sides Nonnegative Least Squares
Nonnegative least squares (NNLS) problems arise in models that rely on a...

12/08/2020 · Sparse encoding for more-interpretable feature-selecting representations in probabilistic matrix factorization
Dimensionality reduction methods for count data are critical to a wide r...

03/08/2019 · Understanding Sparse JL for Feature Hashing
Feature hashing and more general projection schemes are commonly used in...

05/12/2018 · Randomization Approaches for Reducing PAPR with Partial Transmit Sequences and Semidefinite Relaxation
To reduce peak-to-average power ratio, we propose a method to choose a s...
