Embedding Differentiable Sparsity into Deep Neural Network

06/23/2020
by   Yongjin Lee, et al.

In this paper, we propose embedding sparsity into the structure of deep neural networks, so that model parameters can become exactly zero during training with stochastic gradient descent. The network thus learns its sparsified structure and its weights simultaneously. The proposed approach can learn structured as well as unstructured sparsity.
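One common way to let parameters reach exactly zero while remaining trainable with SGD is a soft-threshold reparameterization of the weights. The sketch below is only an illustration of that general idea, not the paper's exact formulation; the module name SparseLinear, the threshold logit s, and the penalty coefficient are hypothetical.

```python
# Illustrative sketch (assumed soft-threshold reparameterization, not the authors' method).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.theta = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        # One learnable threshold per output unit gives structured (row-wise) sparsity;
        # a per-weight threshold of the same shape as theta would give unstructured sparsity.
        self.s = nn.Parameter(torch.zeros(out_features, 1))
        self.bias = nn.Parameter(torch.zeros(out_features))

    def weight(self):
        # relu(|theta| - sigmoid(s)) is exactly zero whenever |theta| <= sigmoid(s),
        # yet the expression is differentiable almost everywhere, so the thresholds
        # and the surviving weights are learned jointly by SGD.
        return torch.sign(self.theta) * F.relu(self.theta.abs() - torch.sigmoid(self.s))

    def forward(self, x):
        return F.linear(x, self.weight(), self.bias)

# Usage: train as usual, optionally with a sparsity-promoting penalty on the effective weights.
layer = SparseLinear(128, 64)
x = torch.randn(32, 128)
out = layer(x)
loss = out.pow(2).mean() + 1e-3 * layer.weight().abs().sum()
loss.backward()
print((layer.weight() == 0).float().mean())  # fraction of weights that are exactly zero
```

Because the zeros arise from the reparameterization itself rather than from post-hoc pruning, the sparsity pattern and the remaining weights are optimized in the same training loop, which is the behavior the abstract describes.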


Related research

10/08/2019 · Differentiable Sparsification for Deep Neural Networks
A deep neural network has relieved the burden of feature engineering by ...

08/27/2018 · Sparsity in Deep Neural Networks - An Empirical Investigation with TensorQuant
Deep learning is finding its way into the embedded world with applicatio...

08/23/2016 · Deep Double Sparsity Encoder: Learning to Sparsify Not Only Features But Also Parameters
This paper emphasizes the significance to jointly exploit the problem st...

01/07/2019 · GASL: Guided Attention for Sparsity Learning in Deep Neural Networks
The main goal of network pruning is imposing sparsity on the neural netw...

03/30/2021 · Training Sparse Neural Network by Constraining Synaptic Weight on Unit Lp Sphere
Sparse deep neural networks have shown their advantages over dense model...

12/07/2020 · DiffPrune: Neural Network Pruning with Deterministic Approximate Binary Gates and L_0 Regularization
Modern neural network architectures typically have many millions of para...

12/04/2017 · Learning Sparse Neural Networks through L_0 Regularization
We propose a practical method for L_0 norm regularization for neural net...
