Structured Bayesian Pruning via Log-Normal Multiplicative Noise

05/20/2017
by Kirill Neklyudov, et al.

Dropout-based regularization methods can be regarded as injecting random noise of pre-defined magnitude into different parts of the neural network during training. It was recently shown that the Bayesian dropout procedure not only improves generalization but also leads to extremely sparse neural architectures by automatically setting an individual noise magnitude per weight. However, this sparsity can hardly be used for acceleration since it is unstructured. In this paper, we propose a new Bayesian model that takes into account the computational structure of neural networks and provides structured sparsity, e.g., removes neurons and/or convolutional channels in CNNs. To do this, we inject noise into the neurons' outputs while keeping the weights unregularized. We establish a probabilistic model with a proper truncated log-uniform prior over the noise and a truncated log-normal variational approximation, which ensures that the KL-term in the evidence lower bound can be computed in closed form. The model leads to structured sparsity by removing elements with a low SNR from the computation graph and provides significant acceleration on a number of deep neural architectures. The model is easy to implement as it can be formulated as a separate dropout-like layer.
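To make the idea concrete, below is a minimal PyTorch-style sketch of such a dropout-like layer with per-unit log-normal multiplicative noise. It is not the authors' implementation: for brevity the noise here is untruncated (the paper uses a truncated log-uniform prior and a truncated log-normal posterior so that the KL-term is closed-form), the KL penalty is omitted, and names such as LogNormalNoise and snr_threshold are illustrative assumptions.

import torch
import torch.nn as nn


class LogNormalNoise(nn.Module):
    """Dropout-like layer: multiplies each unit's output by log-normal noise."""

    def __init__(self, num_units, snr_threshold=1.0):
        super().__init__()
        # Variational parameters of log(theta) ~ N(mu, sigma^2), one pair per unit.
        self.mu = nn.Parameter(torch.zeros(num_units))
        self.log_sigma = nn.Parameter(torch.full((num_units,), -3.0))
        self.snr_threshold = snr_threshold

    def snr(self):
        # For an (untruncated) log-normal variable,
        # E[theta] / std[theta] = 1 / sqrt(exp(sigma^2) - 1).
        sigma2 = torch.exp(2.0 * self.log_sigma)
        return 1.0 / torch.sqrt(torch.expm1(sigma2) + 1e-8)

    def forward(self, x):
        # x: (batch, num_units) or (batch, channels, H, W); noise is shared per unit/channel.
        shape = (1, -1) + (1,) * (x.dim() - 2)
        if self.training:
            # Reparameterization trick; one noise sample per unit, shared across
            # the batch (a simplification of the per-object sampling in the paper).
            eps = torch.randn_like(self.mu)
            theta = torch.exp(self.mu + torch.exp(self.log_sigma) * eps)
        else:
            # At test time: use the posterior mean and prune low-SNR units.
            theta = torch.exp(self.mu + 0.5 * torch.exp(2.0 * self.log_sigma))
            theta = theta * (self.snr() > self.snr_threshold).float()
        return x * theta.view(shape)

In this sketch, a unit whose multiplicative noise has low signal-to-noise ratio contributes mostly noise to the output and is zeroed out at test time; removing it together with its incoming and outgoing weights is what yields structured sparsity. During training one would also add the variational KL penalty (closed-form in the truncated case described in the paper) to the loss.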


