1 Introduction
Training deep neural networks (DNNs) under multiplicative noise, by introducing a random variable into the inner product between a hidden layer and a weight matrix, has led to significant improvements in predictive accuracy. Typically the noise is drawn from a Bernoulli distribution, which is equivalent to randomly dropping neurons from the network during training, and hence the practice has been termed dropout [10, 17]. Recent work [17, 20] suggests equivalent, if not better, performance using Beta or Gaussian distributions for the multiplicative noise. Thus, in this paper we consider multiplicative noise regularization broadly, not limiting our focus just to the Bernoulli distribution.
Despite its empirical success, regularization by way of multiplicative noise is not well understood theoretically, especially for DNNs. The multiplicative noise term eludes analysis as a result of being buried within the DNN's composition of nonlinear functions. In this paper, by adopting a Bayesian perspective, we show that we can develop closed-form analytical expressions that describe the effect of training with multiplicative noise in DNNs and other models. When a zero-mean Gaussian prior is placed on the weights of the DNN, the multiplicative noise variable induces a Gaussian scale mixture (GSM), i.e. the variance of the Gaussian prior becomes a random variable whose distribution is determined by the multiplicative noise model. Conveniently, GSMs can be represented hierarchically with the scale mixing variable, in this case the multiplicative noise, becoming a hyperprior. This allows us to circumvent the problematic coupling of the noise and likelihood through reparameterization, making them conditionally independent. Once in this form, a type-II maximum likelihood procedure yields closed-form updates for the multiplicative noise term and hence makes the regularization mechanism explicit.
While the GSM reparameterization and learning procedure are not novel in their own right, employing them to understand multiplicative noise in neural networks is new. Moreover, the analysis is not restricted by the network's depth or activation functions, as previous attempts at understanding dropout have been. We show that regularization via multiplicative noise has a dual nature, forcing weights to become either sparse or invariant to rescaling. This result is consistent with, but also expands upon, previously-derived adaptive regularization penalties for linear and logistic regression [22].

As for its practical implications, our analysis suggests a new criterion for principled model compression. The closed-form regularization penalty isolated herein naturally suggests a new weight pruning strategy. Interestingly, our new rule is in stark disagreement with the commonly used signal-to-noise ratio (SNR) [7, 5]. The SNR is quick to prune weights with large variances, deeming them noisy, but our approach finds large variances to be an essential characteristic of robust, well-fit weights. Experimental results on well-known predictive modeling tasks show that our weight pruning mechanism is not only superior to the SNR criterion by a wide margin, but also competitive with retraining on soft-targets produced by the full network [11, 2]. In each experiment our method was able to prune at least 20% more of the model's parameters than SNR before seeing a vertical asymptote in test error. Furthermore, in two of these experiments, models pruned with our method matched or improved on the error rate of the retrained networks until a 50% reduction was reached.
2 Dropout Training and Previous Work
Below we establish notation for training under multiplicative noise (MN) and review some relevant previous work on dropout. In general, matrices are denoted by bold, uppercase variables, vectors by bold, lowercase, and scalars by both upper and lowercase. Consider a neural network with
total layers ($L-1$ of them hidden). Forward propagation consists of recursively computing
$$\mathbf{h}_{l} = f_{l}\left(\mathbf{W}_{l}\mathbf{h}_{l-1}\right), \tag{1}$$
where $\mathbf{h}_{l}$ is the $D_{l}$-dimensional vector of hidden units located at layer $l$, $\mathbf{h}_{l-1}$ is the $D_{l-1}$-dimensional vector of hidden units located at the previous layer $l-1$, $f_{l}$ is some (usually nonlinear) element-wise activation function associated with layer $l$, and $\mathbf{W}_{l}$ is the $D_{l}\times D_{l-1}$-dimensional weight matrix. If $l=1$, then $\mathbf{h}_{0}=\mathbf{x}_{i}$, a vector of input features corresponding to the $i$th training example out of $N$, and if $l=L$, then $\mathbf{h}_{L}=\hat{y}_{i}$, the class prediction for the $i$th example. For notational simplicity, we assume the bias term is absorbed into the weight matrix and a constant is appended to $\mathbf{h}_{l-1}$. Training a neural network consists of minimizing the negative log-likelihood
$$\mathcal{L} = -\sum_{i=1}^{N}\log p\left(y_{i}\mid \mathbf{x}_{i}, \mathbf{W}_{1:L}\right),$$
where $p(y_{i}\mid \mathbf{x}_{i}, \mathbf{W}_{1:L})$ is a conditional distribution parameterized by the neural network. $\mathbf{W}_{1:L}$ is learned through the backpropagation algorithm.
2.1 Training with Multiplicative Noise
Training with multiplicative noise (MN) is a regularization procedure implemented through slightly modifying Equation (1). It causes the intermediate representation to become stochastically corrupted by introducing random variables into the inner product $\mathbf{W}_{l}\mathbf{h}_{l-1}$. Rewriting Equation (1) with MN, we have
$$\mathbf{h}_{l} = f_{l}\left(\mathbf{W}_{l}\boldsymbol{\Lambda}_{l}\mathbf{h}_{l-1}\right), \tag{2}$$
where $\boldsymbol{\Lambda}_{l}$ is a diagonal $D_{l-1}\times D_{l-1}$-dimensional matrix of random variables drawn independently from some noise distribution $p(\lambda)$. Dropout corresponds to a Bernoulli distribution on $\lambda$ [10, 17].
Training proceeds by sampling a new $\boldsymbol{\Lambda}_{l}$ matrix for every forward propagation through the network. Backpropagation is done as usual using the corrupted values. We can view the sampling as Monte Carlo integration over the noise distribution, and therefore the MN loss function can be written as
$$\mathcal{L}_{MN} = -\sum_{i=1}^{N}\mathbb{E}_{p(\boldsymbol{\lambda})}\left[\log p\left(y_{i}\mid \mathbf{x}_{i}, \mathbf{W}_{1:L}, \boldsymbol{\lambda}\right)\right], \tag{3}$$
where the expectation is taken with respect to the noise distribution $p(\boldsymbol{\lambda})$. At test time, the bias introduced by the noise is corrected; for instance, the weights would be multiplied by $\pi$ if we trained with Bernoulli($\pi$) noise.
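As a minimal illustration of Equation (2) and the test-time bias correction, the sketch below runs a forward pass under Bernoulli multiplicative noise; the layer sizes, activation, and keep probability are arbitrary choices for illustration, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_mn(x, weights, p=0.5, train=True):
    """Forward pass with Bernoulli multiplicative noise (dropout).

    At train time each incoming unit is kept with probability p; at
    test time the weights are scaled by p to correct the noise's bias.
    """
    h = x
    for i, W in enumerate(weights):
        if train:
            lam = rng.binomial(1, p, size=h.shape)  # diagonal of Lambda
            h = W @ (lam * h)
        else:
            h = (p * W) @ h  # E[lambda] = p folded into the weights
        if i < len(weights) - 1:
            h = np.tanh(h)  # element-wise activation on hidden layers
    return h

# For a single linear layer, averaging many noisy passes recovers the
# rescaled test-time pass, since E[W Lambda h] = p * W h.
W = [rng.standard_normal((3, 4))]
x = rng.standard_normal(4)
noisy_avg = np.mean([forward_mn(x, W, train=True) for _ in range(50000)], axis=0)
clean = forward_mn(x, W, train=False)
```

The agreement between `noisy_avg` and `clean` holds exactly only for a linear layer; with nonlinear activations the test-time rescaling is a heuristic approximation of the expectation.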
2.2 ClosedForm Regularization Penalties
Direct analysis of Equation (3) for neural networks with nonlinear activation functions is currently an open problem. Nevertheless, analysis of dropout has received a significant amount of attention in the recent literature, and progress has been made by considering second-order approximations [22], asymptotic assumptions [23], linear networks [3, 24], generative models of the data [21], and convex proxy loss functions [8].
Since this paper is primarily concerned with interpreting MN regularization as a closedform penalty, we summarize below the results of [22]
, which had similar goals, in order to build on them later. A closed-form regularization penalty can be derived exactly for linear regression and approximately for logistic regression. For linear regression, training under MN (with the noise scaled to have mean one) is equivalent to training with the following penalized likelihood [22, 23, 3]:
$$\mathcal{L}_{MN} = \sum_{i=1}^{N}\left(y_{i}-\mathbf{w}^{T}\mathbf{x}_{i}\right)^{2} + \mathrm{Var}\left[\lambda\right]\sum_{j}w_{j}^{2}\sum_{i=1}^{N}x_{i,j}^{2}. \tag{4}$$
The second term can be viewed as data-driven regularization in that the weights are being penalized not by just their squared value but also by the sum of the squared features in the corresponding dimension. Similarly, an approximate closed-form objective can be found for logistic regression via a second-order Taylor expansion around the mean of the noise [22]:
$$\mathcal{L}_{MN} \approx \sum_{i=1}^{N}\log\left(1+e^{-y_{i}\mathbf{w}^{T}\mathbf{x}_{i}}\right) + \frac{\mathrm{Var}\left[\lambda\right]}{2}\sum_{j}w_{j}^{2}\sum_{i=1}^{N}x_{i,j}^{2}\,\hat{p}_{i}\left(1-\hat{p}_{i}\right), \tag{5}$$
where $\hat{p}_{i}$ is the model's current predicted probability for example $i$. Again we find an $\ell_{2}$ penalty adjusted to the data and, in this case, the model's current predictions. However, Helmbold and Long [8] have suggested that this approximation can substantially underestimate the error.
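The linear-regression equivalence can be checked numerically: the expected squared-error loss under mean-one multiplicative noise should equal the plain squared error plus a data-driven $\ell_2$ penalty, $\mathrm{Var}[\lambda]\sum_j w_j^2 \sum_i x_{i,j}^2$. The sketch below uses Gaussian noise with mean one (so no bias correction is needed); the data sizes and noise variance are arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(1)
N, D, S = 20, 4, 200000
X = rng.standard_normal((N, D))
y = rng.standard_normal(N)
w = rng.standard_normal(D)

# Gaussian multiplicative noise with mean 1, variance var_lam
var_lam = 0.3
lam = rng.normal(1.0, np.sqrt(var_lam), size=(S, D))

# Monte Carlo estimate of the expected squared-error loss under the noise.
# Noise on the input dimensions is equivalent to noise on the weights here.
noisy_pred = (lam * w) @ X.T                      # shape (S, N)
mc_loss = np.mean(np.sum((y - noisy_pred) ** 2, axis=1))

# Closed-form equivalent: plain squared error plus the data-driven penalty
# weighting each w_j^2 by the sum of squared features in dimension j.
penalty = var_lam * np.sum(w ** 2 * np.sum(X ** 2, axis=0))
closed = np.sum((y - X @ w) ** 2) + penalty
```

With enough noise samples `mc_loss` converges to `closed`, since the cross terms vanish in expectation and the quadratic term contributes exactly the variance-weighted penalty.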
3 Multiplicative Noise as an Induced Gaussian Scale Mixture
In this section we go beyond prior work to show that analysis of multiplicative noise (MN) regularization can be made tractable by adopting a Bayesian perspective. The key observation is that if we assume the weights to be Gaussian random variables, the product $\lambda w$, where $\lambda$ is the noise and $w$ is a weight, defines a Gaussian scale mixture (GSM). GSMs can be represented hierarchically with the scale mixing variable, in this case the noise $\lambda$, becoming a hyperprior. The reparameterization works even for deep neural networks (DNNs) regardless of their size or activation functions.
3.1 Gaussian Scale Mixtures
First we define a Gaussian scale mixture. A random variable $z$ is a Gaussian scale mixture (GSM) if and only if it can be expressed as the product of a zero-mean Gaussian random variable $\epsilon$ with some variance $\sigma_{0}^{2}$ and an independent scalar random variable $\lambda$ [1, 4]:
$$z \stackrel{d}{=} \lambda\epsilon, \tag{6}$$
where $\stackrel{d}{=}$ denotes equality in distribution. While it may not be obvious from (6) that $z$ is a scale mixture, the result follows from the Gaussian's closure under linear transformations, resulting in the following marginal density of $z$:
$$p(z) = \int \mathcal{N}\left(z\mid 0, \lambda^{2}\sigma_{0}^{2}\right)p(\lambda)\,d\lambda, \tag{7}$$
where $p(\lambda)$ is the mixing distribution. Super-Gaussian distributions, such as the Student-t, can be represented as GSMs, and this hierarchical formulation is often used when employing these distributions as robust priors [18].
Now that we have defined GSMs, we demonstrate how MN can give rise to them. Consider the addition of a Gaussian prior to the MN training objective given in Equation (3):
$$\mathcal{L}_{MN} = -\sum_{i=1}^{N}\mathbb{E}_{p(\boldsymbol{\lambda})}\left[\log p\left(y_{i}\mid \mathbf{x}_{i}, \mathbf{W}_{1:L}, \boldsymbol{\lambda}\right)\right] - \log p\left(\mathbf{W}_{1:L}\right),$$
where, for a DNN, $p(\mathbf{W}_{1:L})=\prod_{l}\prod_{j}\prod_{k}\mathcal{N}\left(w_{l,j,k}\mid 0,\sigma_{0}^{2}\right)$, i.e., an independent Gaussian prior on each weight coefficient with some constant variance $\sigma_{0}^{2}$. Next recall the inter-layer computation defined in Equation (2): $\mathbf{W}_{l}\boldsymbol{\Lambda}_{l}\mathbf{h}_{l-1}$ is a $D_{l}$-dimensional vector whose $j$th element can be written in summation notation as
$$\sum_{k=1}^{D_{l-1}} w_{j,k}\,\lambda_{k}\,h_{k}.$$
Notice that $w_{j,k}\sim\mathcal{N}(0,\sigma_{0}^{2})$ and $\lambda_{k}\sim p(\lambda)$, thereby making the product $\lambda_{k}w_{j,k}$ the definition of a GSM given in (6). The result follows just from application of the definition, but for a more intuitive explanation, consider the case of a constant $a$ multiplied by a Gaussian random variable as above. The product $a\,w_{j,k}$ is distributed as $\mathcal{N}(0, a^{2}\sigma_{0}^{2})$ due to the Gaussian's closure under scalar transformation. The definition of a GSM (6) says that the same result holds even if $a$ is a random variable, the only difference being that the variance is now random itself. See [1] and [4] for rigorous treatments.
3.2 The Hierarchical Parameterization for DNNs
Here we introduce a key insight: the product between the weights of a DNN and the noise can be represented hierarchically, as given in Equation (7), making the intractable likelihood conditionally independent of the noise. Again, the reparameterization follows from the definition, and it can be seen graphically in Figure 1. To elaborate, it is equivalent (in distribution) to replace the product $\lambda_{k}w_{j,k}$ with a new conditionally Gaussian random variable $\tilde{w}_{j,k}\sim\mathcal{N}(0,\lambda_{k}^{2}\sigma_{0}^{2})$, with $\lambda_{k}$ drawn from the noise distribution and $\sigma_{0}^{2}$ the constant prior variance. The random rescaling that the noise explicitly applied to the weight is still present yet collapsed into the distribution from which $\tilde{w}_{j,k}$ is drawn (just as, in the previous example, multiplying a Gaussian weight by a constant is equivalently represented by a single Gaussian with rescaled variance). Because this interaction occurs entirely within the activation function, the complexities it introduces do not come into play. The only dependence that needs to be accounted for when reparameterizing is the shared scale of all weights multiplied by the same noise variable $\lambda_{k}$ (due to the noise being sampled once for each hidden unit). This poses no serious complications and is actually a desirable property, as we discuss later. From here forward, the product form of a GSM is referred to as the unidentifiable parameterization (since only the product can be identified in the likelihood) and the hierarchical form as the identifiable parameterization.
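The equivalence of the two parameterizations can be demonstrated by simulation. The sketch below uses a hypothetical Beta(2, 2) noise distribution (an arbitrary continuous choice for illustration) and checks that the product form and the hierarchical form produce samples with matching moments:

```python
import numpy as np

rng = np.random.default_rng(2)
S, sigma0 = 500000, 1.5

# A hypothetical continuous noise distribution on [0, 1]
lam = rng.beta(2.0, 2.0, size=S)

# Unidentifiable (product) parameterization: z = lambda * epsilon
z_prod = lam * rng.normal(0.0, sigma0, size=S)

# Identifiable (hierarchical) parameterization: z | lambda ~ N(0, lambda^2 sigma0^2)
z_hier = rng.normal(0.0, lam * sigma0)

# Both have Var(z) = sigma0^2 * E[lambda^2]; for Beta(2, 2),
# E[lambda^2] = 0.25 + 0.05 = 0.3, so Var(z) = 2.25 * 0.3 = 0.675.
```

Either way the same marginal distribution results; the hierarchical form is simply the one in which the noise appears as a hyperprior on the Gaussian's scale rather than inside the likelihood.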
3.3 Dropout’s Corresponding Prior
We now turn to the case of Bernoulli($\pi$) noise, the most widely used noise distribution. Moving the Bernoulli random variable to the Gaussian random variable's scale reveals the classic prior for Bayesian variable selection, the Spike and Slab [15, 6]:
$$p\left(\tilde{w}_{j,k}\right) = \pi\,\mathcal{N}\left(\tilde{w}_{j,k}\mid 0,\sigma_{0}^{2}\right) + \left(1-\pi\right)\delta_{0}\left(\tilde{w}_{j,k}\right), \tag{8}$$
where $\delta_{0}$ is the delta function placed at zero. Interestingly, the unidentifiable parameterization has been used previously for linear regression in the work of Kuo and Mallick [12]. They placed the Bernoulli indicators directly in the likelihood as follows,
$$y_{i} = \sum_{j}\lambda_{j}w_{j}x_{i,j} + \epsilon_{i},$$
where $\lambda_{j}\sim\text{Bernoulli}(\pi)$, essentially defining dropout for linear regression over a decade before it was proposed for neural networks. However, Kuo and Mallick were interested in the marginal posterior inclusion probabilities $p(\lambda_{j}=1\mid\mathcal{D})$ rather than predictive performance.

4 Type-II ML for the Hierarchical Parameterization
Having established that the product $\lambda w$ can be written as $\tilde{w}\sim\mathcal{N}(0,\lambda^{2}\sigma_{0}^{2})$, we next wish to isolate the characteristics of the weights encouraged by multiplicative noise (MN) regularization. Our aim is to write $\lambda$ as a function of the weights so we can explicitly see the interplay between the noise and parameters. To do this, we learn $\lambda$ from the data via a type-II maximum likelihood procedure (a form of empirical Bayes). Note that this is hard to do in the unidentifiable parameterization due to explaining away [16]. The identifiable (hierarchical) parameterization, on the other hand, allows for an Expectation-Maximization (EM) formulation, as described in [19]. (We actually perform an equivalent minimization, rather than maximization, in the M-step to keep notation consistent with earlier equations.) The derivation of the EM updates is as follows:
$$\hat{\boldsymbol{\lambda}} = \operatorname*{arg\,max}_{\boldsymbol{\lambda}}\; \mathbb{E}_{p(\tilde{\mathbf{W}}\mid\mathcal{D},\hat{\boldsymbol{\lambda}}_{\text{old}})}\left[\log p\left(\mathcal{D},\tilde{\mathbf{W}},\boldsymbol{\lambda}\right)\right]. \tag{9}$$
We make two simplifying assumptions to make working with the posterior manageable. The first is that, following [19], we use a point estimate $\hat{\boldsymbol{\lambda}}$, which corresponds to approximating the joint posterior $p(\tilde{\mathbf{W}},\boldsymbol{\lambda}\mid\mathcal{D})$ with $p(\tilde{\mathbf{W}}\mid\mathcal{D},\hat{\boldsymbol{\lambda}})\,\delta_{\hat{\boldsymbol{\lambda}}}(\boldsymbol{\lambda})$. The second assumption is that the posterior $p(\tilde{\mathbf{W}}\mid\mathcal{D},\hat{\boldsymbol{\lambda}})$ factorizes over its dimensions.
Hence, the E-step is computing
$$Q\left(\boldsymbol{\lambda}\right) = \mathbb{E}_{p(\tilde{\mathbf{W}}\mid\mathcal{D},\hat{\boldsymbol{\lambda}}_{\text{old}})}\left[\log p\left(\tilde{\mathbf{W}}\mid\boldsymbol{\lambda}\right)\right] + \log p\left(\boldsymbol{\lambda}\right), \tag{10}$$
where the likelihood was dropped since it doesn't depend on $\boldsymbol{\lambda}$, and the M-step is
$$\hat{\boldsymbol{\lambda}} = \operatorname*{arg\,min}_{\boldsymbol{\lambda}}\; -Q\left(\boldsymbol{\lambda}\right). \tag{11}$$
In our case, $p(\tilde{\mathbf{W}}\mid\boldsymbol{\lambda})$ is a fully-factorized Gaussian, so for the scale $\lambda_{k}$ shared by the $D$ weights $\tilde{w}_{1,k},\ldots,\tilde{w}_{D,k}$ the gradient is
$$\frac{\partial\left(-Q\right)}{\partial\lambda_{k}} = \frac{D}{\lambda_{k}} - \frac{1}{\lambda_{k}^{3}\sigma_{0}^{2}}\sum_{j=1}^{D}\mathbb{E}\left[\tilde{w}_{j,k}^{2}\right] - \frac{\partial\log p\left(\lambda_{k}\right)}{\partial\lambda_{k}}. \tag{12}$$
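As a concrete check of the M-step, the sketch below assumes a $\mathcal{N}(0,\lambda^{2}\sigma_{0}^{2})$ prior on each of $D$ weights sharing one noise scale, a fully factorized Gaussian posterior, and a flat hyperprior on $\lambda$ (so its derivative term vanishes); under those assumptions the stationary point of $-Q$ is the row-averaged posterior second moment. All numbers are hypothetical:

```python
import numpy as np

# Hypothetical posterior moments for the D weights sharing one noise scale
mu = np.array([0.8, -1.2, 0.1, 2.0])   # posterior means
s2 = np.array([0.3, 0.5, 0.05, 1.1])   # posterior variances
sigma0_sq = 1.0                        # fixed prior variance constant

def neg_Q(lam):
    """-E_q[log p(w | lam)] with a flat hyperprior on lam."""
    return np.sum(0.5 * np.log(2 * np.pi * lam**2 * sigma0_sq)
                  + (mu**2 + s2) / (2 * lam**2 * sigma0_sq))

# Candidate closed-form update: lam^2 = average posterior second moment
lam_hat = np.sqrt(np.mean(mu**2 + s2) / sigma0_sq)

# Numerical check that the M-step gradient vanishes at lam_hat
eps = 1e-4
grad_at_hat = (neg_Q(lam_hat + eps) - neg_Q(lam_hat - eps)) / (2 * eps)
```

The `mu**2 + s2` term is the posterior second moment $\mathbb{E}[\tilde{w}^2]$, which is all the network contributes to the update; everything else is fixed constants or the (here flat) hyperprior.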
Unfortunately, the EM formulation cannot handle discrete noise distributions (and by extension, discrete mixtures) since we cannot calculate $\partial\log p(\lambda)/\partial\lambda$ if $\lambda$ is not a continuous random variable. While this does not allow us to address Bernoulli noise (i.e. dropout) exactly, it is not a severe limitation for a few reasons. Firstly, as discussed later, the noise distribution encourages particular values for $\lambda$ but does not fundamentally change the nature of the regularization being applied to the DNN's weights. Secondly, empirical observations support that our conclusions apply to Bernoulli noise as well. Lastly, a Beta distribution with matching mean can serve as a continuous proxy for the Bernoulli.

5 Analysis of the Regularization Mechanism
Equation (12) provides an important window into the effect of multiplicative noise (MN) by revealing the properties of the weights that influence the regularization. Below we analyze Equation (12) in detail, showing that multiplicative noise results in weights becoming either sparse or invariant to rescaling. We start by setting (12) to zero, making the substitution $\mathbb{E}[\tilde{w}_{j,k}^{2}] = \mathbb{E}[\tilde{w}_{j,k}]^{2} + \mathrm{Var}[\tilde{w}_{j,k}]$, and rearranging to solve for the variance term:
$$\hat{\lambda}_{k}^{2} = \frac{1}{D\sigma_{0}^{2}}\sum_{j=1}^{D}\mathbb{E}\left[\tilde{w}_{j,k}\right]^{2} + \frac{1}{D\sigma_{0}^{2}}\sum_{j=1}^{D}\mathrm{Var}\left[\tilde{w}_{j,k}\right] + \frac{\hat{\lambda}_{k}^{3}}{D}\frac{\partial\log p\left(\hat{\lambda}_{k}\right)}{\partial\lambda_{k}}. \tag{13}$$
The first term is the squared posterior mean, and the second is the posterior variance. Both are averaged across weights emanating from the same unit due to the dependence discussed in Section 3.2. The third term is the derivative of the noise distribution. Moreover, notice that this last term does not contain the DNN's parameters and therefore only serves as a prior expressing which values of $\lambda$ are preferred. The regularization pertinent to the network's parameters is contained in the first two terms only.
In light of this observation, we discard the noise distribution term for the time being and work with just the first two empirical Bayesian terms. We can substitute them into the variance of the Gaussian prior on the weights to see what regularization penalty MN is applying, in effect, to the weights:
$$\Omega\left(\tilde{\mathbf{W}}\right) = \sum_{k}\frac{\sum_{j=1}^{D}\tilde{w}_{j,k}^{2}}{\frac{2}{D}\sum_{j=1}^{D}\left(\mathbb{E}\left[\tilde{w}_{j,k}\right]^{2} + \mathrm{Var}\left[\tilde{w}_{j,k}\right]\right)}. \tag{14}$$
Given the Gaussian prior assumption, what results is a sparsity-inducing penalty whose strength is inversely proportional to two factors: the squared mean and the variance of the weight under the posterior. The posterior mean can be thought of as signal, the strength of the weight, and the variance can be thought of as robustness, the scale invariance of the weight.
To further analyze the properties of (14), let us assume the current values of the weights are near their posterior means: $\tilde{w}_{j,k}\approx\mathbb{E}[\tilde{w}_{j,k}]$. This assumption simplifies (14) to a sum of per-weight penalties of the form
$$\frac{1}{2\left(1 + \mathrm{Var}\left[\tilde{w}_{j,k}\right]/\mathbb{E}\left[\tilde{w}_{j,k}\right]^{2}\right)}. \tag{15}$$
The fractional term in the denominator, $\mathrm{Var}[\tilde{w}_{j,k}]/\mathbb{E}[\tilde{w}_{j,k}]^{2}$, represents two alternative paths the weights can take to reduce the penalty during training. The DNN must either send $\mathbb{E}[\tilde{w}_{j,k}]\rightarrow 0$ or $\mathrm{Var}[\tilde{w}_{j,k}]\rightarrow\infty$. The former occurs when weights become sparse, and the latter occurs when weights are robust to rescaling (i.e. they do not have to be finely calibrated). Hence, we observe a dual effect not seen in traditional sparsity penalties. MN allows weights to grow without restraint just so long as they are invariant to rescaling. If not, they are shrunk to zero.
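To make the dual behavior concrete, here is a toy evaluation of a per-weight penalty of the form $1/\left(2\left(1+\mathrm{Var}[\tilde{w}]/\mathbb{E}[\tilde{w}]^{2}\right)\right)$; the specific moment values are illustrative, not fitted:

```python
def mn_penalty(mean_sq, var):
    """Per-weight MN penalty: 1 / (2 * (1 + var / mean^2))."""
    return 1.0 / (2.0 * (1.0 + var / mean_sq))

# Two escape routes from the penalty, and one weight that takes neither:
sparse  = mn_penalty(mean_sq=1e-8, var=0.1)   # mean -> 0: penalty vanishes
robust  = mn_penalty(mean_sq=9.0,  var=90.0)  # large variance: penalty vanishes
brittle = mn_penalty(mean_sq=9.0,  var=0.01)  # large but finely tuned: near-max penalty
```

The penalty is bounded by 1/2 and approaches that bound only for weights that are both large and precisely calibrated, which is exactly the configuration MN punishes.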
Thinking back to how MN regularization is usually carried out in practice (namely, by Monte Carlo sampling within the likelihood), we see that training in this way is essentially finding the invariant weights by brute force. The only way the negative log likelihood can reliably be decreased is by pruning weights that cannot withstand being tested at random scales. Dropout obscures this fact to some degree by being a discrete mixture over just two scales, zero and one. The superior performance of continuous distributions, observed both in [17, 20] and further supported in our supplemental materials, may be due to searching over a richer, infinite scale space.
On a final note, the closed-form dropout penalties from Equations (4) and (5) can be recovered from (14) by (1) assuming the Gaussian prior necessary for our analysis to be diffuse and therefore negligible, and (2) taking the posterior mean to be the same as the prior mean, which is necessary because Wager et al. [22] perform the Taylor expansion around the mean. This removes the squared-mean term from the denominator of (14). Interestingly, this modification results in (14) becoming
$$\frac{\tilde{w}_{j,k}^{2}}{2\,\mathrm{Var}\left[\tilde{w}_{j,k}\right]}, \tag{16}$$
which is the inverse of the term we isolated in Equation (15) as capturing the nature of MN regularization. The resulting behavior is the same since we found the term in the denominator. See the supplementary material for the details of the derivation. Wager et al. interpreted their findings as an $\ell_{2}$ penalty scaled by the inverse diagonal Fisher information. Yet, via the Cramér-Rao lower bound, their result could also be seen as an $\ell_{2}$ penalty scaled by the inverse asymptotic variance of the weights. A notion of variance, then, is just as integral to their frequentist derivation as it is to our Bayesian one.
6 Experiments: Weight Pruning
We conducted a number of experiments to empirically investigate whether our results suggest new directions for algorithmic improvements in training DNNs. We implemented the EM algorithm derived in Section 4 using Langevin Dynamics [25], an efficient stochastic gradient technique for collecting posterior samples, to calculate the posterior moments needed for the M-step. We found that we could not outperform Monte Carlo MN regularization for any of the deep architectures with which we experimented (see supplemental materials). We conjecture that the practical issue of computing the posterior moments was likely the bottleneck, which is to be expected given that developing efficient Bayesian learning algorithms for DNNs is a challenging and open problem in and of itself [9, 5].

However, we did find immediate and practical benefits in the context of model compression [2, 11]. Our conclusions about how MN regularizes DNNs conspicuously differ from the signal-to-noise ratio (SNR) criterion for weight pruning tasks, as used by [7] and more recently by [5]. With this in mind, we carried out a series of weight pruning experiments for the dual purpose of validating our analysis and providing a novel weight pruning rule (which turns out to be superior to the SNR).
The SNR heuristic is defined by the following inequality:
$$\frac{\left|\mathbb{E}\left[\tilde{w}_{j}\right]\right|}{\sqrt{\mathrm{Var}\left[\tilde{w}_{j}\right]}} < \tau,$$
where $|\mathbb{E}[\tilde{w}_{j}]|$ is the absolute value of the posterior mean of weight $j$, $\sqrt{\mathrm{Var}[\tilde{w}_{j}]}$ is the posterior standard deviation of the same weight, and $\tau$ is some positive constant. Pruning is carried out by setting to zero all weights for which the inequality holds (i.e. the ratio is below the threshold $\tau$). Blundell et al. [5] ran experiments using the SNR and stated it "is in fact related to test performance."

Now consider our alternative method. Recall that the terms in the denominator of Equation (14) are the squared posterior mean and the posterior variance. Our analysis shows that MN deems weights with large means and large variances to be high quality, turning off the sparsity penalty applied to them. This conclusion conflicts with the SNR, since pruning by SNR removes weights with large variances first. Thus we propose the following competing heuristic, which we call signal-plus-robustness (SPR):
$$\left|\mathbb{E}\left[\tilde{w}_{j}\right]\right| + \sqrt{\mathrm{Var}\left[\tilde{w}_{j}\right]} < \tau, \tag{17}$$
where the terms are defined the same as above.
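The two rules can be contrasted on a toy example. Here SPR scores each weight by the sum of its absolute posterior mean and posterior standard deviation (our reading of the rule above); the four weights are hypothetical:

```python
import numpy as np

def prune_order(mu, sd, rule):
    """Return weight indices sorted from first-pruned to last-pruned."""
    if rule == "snr":
        score = np.abs(mu) / sd        # signal-to-noise ratio
    else:
        score = np.abs(mu) + sd        # signal-plus-robustness
    return np.argsort(score)           # lowest score is pruned first

# Four hypothetical weights: (small, certain), (large, high-variance),
# (large, precise), and (small, high-variance)
mu = np.array([0.05, 2.0, 2.0, 0.05])
sd = np.array([0.05, 3.0, 0.01, 2.0])

snr_rank = prune_order(mu, sd, "snr")  # prunes high-variance weights early
spr_rank = prune_order(mu, sd, "spr")  # keeps high-variance weights longest
```

Note the disagreement on weight 1 (large mean, large variance): SNR flags it as noisy and prunes it early, while SPR treats its variance as evidence of scale robustness and keeps it until last.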
We experimentally compared both pruning rules on three datasets, each with very different characteristics. The first is the well-known MNIST digit classification dataset, the second is the large IMDB movie review dataset for sentiment classification [14], and the third is a prediction (regression) task using features preprocessed from the Million Song Dataset (MSD) [13]. We trained the networks with Bernoulli MN and, when convergence was reached, switched to Langevin Dynamics (with no MN) to collect 10,000 samples from the posterior weight distribution of each network [25], injecting Gaussian noise with variance equal to the learning rate into each gradient step. A polynomial decay schedule for the learning rate was set by validation set performance.
We ordered the weights of each network by SNR and SPR and then removed weights (i.e. set them to zero) in increasing order according to the two rules. Plots showing test error (number of errors, error rate, mean RMSE) vs. percentage of weights removed can be seen in panels (a), (b), and (c) of Figure 2. For another source of comparison, we also show the performance of a network (completely) retrained on the soft-targets [11] produced by the full network. (No soft-target results are shown for (c), the MSD year prediction task, as we found training with soft-targets does not have the same benefits for regression that it does for classification.) To make the comparison fair, the retrained networks had the same depth as the one on which pruning was done, splitting the parameters equally between the layers.
We see that our rule, SPR, is clearly superior to SNR. We were able to remove at least 20% more of the weights in each case before seeing a catastrophic increase in test error. The most drastic difference is seen for the IMDB dataset in (b), which we believe is due to the sparsity of the features (word counts) exaggerating SNR's preference for overdetermined weights. Our method, SPR, even outperformed retraining with soft-targets until at least a 50% reduction in parameters was reached. Finally, as further empirical support of our findings, a scatter plot showing the first two moments of each weight for two networks, one trained with Bernoulli MN and the other without MN, can be seen in panel (d) of Figure 2. We produce the figure to show that although our closed-form penalty technically does not hold for discrete noise distributions (due to the need to compute the gradient of the log noise density), the analysis (sparsity vs. scale robustness) most likely extends to discrete mixtures.
7 Conclusions
This paper improves our understanding of how multiplicative noise regularizes the weights of deep neural networks. We show that multiplicative noise can be interpreted as a Gaussian scale mixture (under mild assumptions). This perspective not only holds for neural networks regardless of their depth or activation function but also allows us to isolate, in closed form, the weight properties encouraged by multiplicative noise. From this penalty we see that under multiplicative noise, the network's weights become either sparse or invariant to rescaling. We demonstrated the utility of our findings by showing that a new weight pruning rule, naturally derived from our analysis, is significantly more effective than the previously proposed signal-to-noise ratio and is even competitive with retraining on soft-targets.
References

[1]
David F Andrews and Colin L Mallows.
Scale mixtures of normal distributions.
Journal of the Royal Statistical Society. Series B (Methodological), pages 99–102, 1974.

[2] Jimmy Ba and Rich Caruana. Do deep nets really need to be deep? In Advances in Neural Information Processing Systems, pages 2654–2662, 2014.
 [3] Pierre Baldi and Peter J Sadowski. Understanding dropout. In Advances in Neural Information Processing Systems, pages 2814–2822, 2013.
 [4] EML Beale, CL Mallows, et al. Scale mixing of symmetric distributions with zero means. The Annals of Mathematical Statistics, 30(4):1145–1151, 1959.
 [5] Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty in neural networks. arXiv preprint arXiv:1505.05424, 2015.
 [6] Edward I George and Robert E McCulloch. Variable selection via Gibbs sampling. Journal of the American Statistical Association, 88(423):881–889, 1993.
 [7] Alex Graves. Practical variational inference for neural networks. In Advances in Neural Information Processing Systems, pages 2348–2356, 2011.
 [8] David P. Helmbold and Philip M. Long. On the inductive bias of dropout. CoRR, abs/1412.4736, 2014.
 [9] José Miguel Hernández-Lobato and Ryan P Adams. Probabilistic backpropagation for scalable learning of Bayesian neural networks. arXiv preprint arXiv:1502.05336, 2015.
 [10] Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.
 [11] Geoffrey E Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. In NIPS 2014 Deep Learning Workshop, 2014.
 [12] Lynn Kuo and Bani Mallick. Variable selection for regression models. Sankhyā: The Indian Journal of Statistics, Series B, pages 65–81, 1998.

[13]
M. Lichman.
UCI machine learning repository, 2013.

[14]
Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and
Christopher Potts.
Learning word vectors for sentiment analysis.
In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142–150, 2011.

[15] Toby J Mitchell and John J Beauchamp. Bayesian variable selection in linear regression. Journal of the American Statistical Association, 83(404):1023–1032, 1988.
 [16] Kevin P Murphy. Machine learning: a probabilistic perspective. MIT press, 2012.
 [17] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958, 2014.

[18]
Mark FJ Steel et al.
Bayesian regression analysis with scale mixtures of normals.
Econometric Theory, 16(01):80–101, 2000.

[19] Michael E Tipping. Sparse Bayesian learning and the relevance vector machine. The Journal of Machine Learning Research, 1:211–244, 2001.
 [20] Jakub M Tomczak. Prediction of breast cancer recurrence using classification restricted boltzmann machine with dropping. arXiv preprint arXiv:1308.6324, 2013.
 [21] Stefan Wager, William Fithian, Sida Wang, and Percy S Liang. Altitude training: Strong bounds for singlelayer dropout. In Advances in Neural Information Processing Systems, pages 100–108, 2014.
 [22] Stefan Wager, Sida Wang, and Percy S Liang. Dropout training as adaptive regularization. In Advances in Neural Information Processing Systems, pages 351–359, 2013.
 [23] Sida Wang and Christopher Manning. Fast dropout training. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pages 118–126, 2013.
 [24] David Warde-Farley, Ian J Goodfellow, Aaron Courville, and Yoshua Bengio. An empirical analysis of dropout in piecewise linear networks. arXiv preprint arXiv:1312.6197, 2013.
 [25] Max Welling and Yee W Teh. Bayesian learning via stochastic gradient Langevin dynamics. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 681–688, 2011.