Layer Sparsity in Neural Networks

06/28/2020
by Mohamed Hebiri et al.

Sparsity has become popular in machine learning because it can save computational resources, facilitate interpretations, and prevent overfitting. In this paper, we discuss sparsity in the framework of neural networks. In particular, we formulate a new notion of sparsity that concerns the networks' layers and, therefore, aligns particularly well with the current trend toward deep networks. We call this notion layer sparsity. We then introduce corresponding regularization and refitting schemes that can complement standard deep-learning pipelines to generate more compact and accurate networks.
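As a rough illustration of how a layer-wise sparsity penalty might be wired into a standard training pipeline, the sketch below adds a group-style penalty on each hidden layer's weight matrix of a small residual network, so that entire layers can be driven toward zero. The network, penalty form, and hyperparameters are assumptions made for illustration, not the exact scheme proposed in the paper.

```python
# Minimal sketch (assumption): a layer-wise group penalty added to the usual
# training loss, so that whole layers can be driven toward zero during training.
import torch
import torch.nn as nn

class ResidualMLP(nn.Module):
    """Toy network whose hidden layers sit behind identity shortcuts,
    so a layer driven to zero simply passes its input through."""
    def __init__(self, width: int = 64, depth: int = 6):
        super().__init__()
        self.layers = nn.ModuleList(nn.Linear(width, width) for _ in range(depth))
        self.head = nn.Linear(width, 1)

    def forward(self, x):
        for layer in self.layers:
            x = x + torch.relu(layer(x))  # identity shortcut keeps a zeroed layer harmless
        return self.head(x)

def layer_sparsity_penalty(model: ResidualMLP) -> torch.Tensor:
    """Sum of per-layer Frobenius norms: a group-lasso-style term over whole layers."""
    return sum(layer.weight.norm() for layer in model.layers)

# Hypothetical training step combining the data loss with the layer penalty.
model = ResidualMLP()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
lam = 1e-3  # regularization strength (assumed value)

x, y = torch.randn(32, 64), torch.randn(32, 1)
optimizer.zero_grad()
loss = nn.functional.mse_loss(model(x), y) + lam * layer_sparsity_penalty(model)
loss.backward()
optimizer.step()
```

In the spirit of the refitting scheme mentioned above, one could then drop any layer whose weight norm falls below a small threshold and retrain the remaining, shallower network; the specific pruning criterion and schedule here are assumptions rather than the paper's prescription.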


