Deep Double Sparsity Encoder: Learning to Sparsify Not Only Features But Also Parameters

08/23/2016
by   Zhangyang Wang, et al.
This paper emphasizes the significance of jointly exploiting the problem structure and the parameter structure in the context of deep modeling. As a specific and interesting example, we describe the deep double sparsity encoder (DDSE), which is inspired by the double sparsity model for dictionary learning. DDSE simultaneously sparsifies the output features and the learned model parameters under one unified framework. In addition to its intuitive model interpretation, DDSE also possesses a compact model size and low complexity. Extensive simulations compare DDSE with several carefully designed baselines, and verify the consistently superior performance of DDSE. We further apply DDSE to the novel application domain of brain encoding, with promising preliminary results.
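The "double sparsity" idea in the abstract, sparse output features plus sparse model parameters, can be illustrated with a minimal NumPy sketch. This is a hedged illustration only: the function names, thresholds, and the simple recurrent-encoder structure below are hypothetical conveniences, not the paper's exact unfolded formulation.

```python
import numpy as np

def soft_threshold(x, theta):
    # Elementwise soft-thresholding: the proximal operator of the l1 norm,
    # the standard way to induce sparsity in sparse-coding encoders.
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def ddse_forward(x, W, theta_feat, theta_param, n_layers=3):
    # Sketch of a double-sparsity encoder pass (hypothetical structure):
    # 1) the parameter matrix W is itself sparsified by soft-thresholding
    #    before use ("sparse parameters");
    # 2) features are soft-thresholded after every layer ("sparse features").
    Ws = soft_threshold(W, theta_param)      # sparsified parameters
    z = soft_threshold(Ws @ x, theta_feat)   # first-layer sparse code
    for _ in range(n_layers - 1):
        z = soft_threshold(Ws @ z, theta_feat)
    return z
```

With large thresholds, both the effective weights `Ws` and the output code `z` become mostly zero, which is the source of the compact model size and low complexity the abstract claims.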


Related research:

- 06/23/2020, "Embedding Differentiable Sparsity into Deep Neural Network": In this paper, we propose embedding sparsity into the structure of deep ...
- 01/31/2016, "Trainlets: Dictionary Learning in High Dimensions": Sparse representations has shown to be a very powerful model for real wo...
- 06/17/2022, "Sparse Double Descent: Where Network Pruning Aggravates Overfitting": People usually believe that network pruning not only reduces the computa...
- 01/16/2014, "Learning ℓ_1-based analysis and synthesis sparsity priors using bi-level optimization": We consider the analysis operator and synthesis dictionary learning prob...
- 08/31/2023, "The Quest of Finding the Antidote to Sparse Double Descent": In energy-efficient schemes, finding the optimal size of deep learning m...
- 07/07/2019, "Deep Exponential-Family Auto-Encoders": We consider the problem of learning recurring convolutional patterns fro...
