Dropout with Expectation-linear Regularization

09/26/2016 · Xuezhe Ma et al., Carnegie Mellon University

Dropout, a simple and effective way to train deep neural networks, has led to a number of impressive empirical successes and spawned many recent theoretical investigations. However, the gap between dropout's training and inference phases, introduced due to tractability considerations, has largely remained under-appreciated. In this work, we first formulate dropout as a tractable approximation of a latent variable model, leading to a clean view of parameter sharing and enabling further theoretical analysis. Then, we introduce (approximate) expectation-linear dropout neural networks, whose inference gap we are able to formally characterize. Algorithmically, we show that our proposed measure of the inference gap can be used to regularize the standard dropout training objective, resulting in explicit control of the gap. Our method is as simple and efficient as standard dropout. We further prove upper bounds on the loss in accuracy due to expectation-linearization, and describe classes of input distributions that expectation-linearize easily. Experiments on three image classification benchmark datasets demonstrate that reducing the inference gap can indeed improve performance consistently.


1 Introduction

Deep neural networks (DNNs, e.g., LeCun et al., 2015; Schmidhuber, 2015), if trained properly, have been demonstrated to significantly improve benchmark performance in a wide range of application domains. As neural networks go deeper and deeper, their model complexity naturally increases quickly, hence the pressing need to reduce overfitting when training DNNs. A number of techniques have emerged over the years to address this challenge, among which dropout (Hinton et al., 2012; Srivastava, 2013) has stood out for its simplicity and effectiveness. In a nutshell, dropout randomly “drops” neural units during training as a means to prevent feature co-adaptation—a sign of overfitting (Hinton et al., 2012). Simple as it appears to be, dropout has led to several record-breaking performances (Hinton et al., 2012; Ma & Hovy, 2016), and has thus spawned much recent interest in analyzing and justifying dropout from the theoretical perspective, and in further improving it from the algorithmic and practical perspective.

In their pioneering work, Hinton et al. (2012) and Srivastava et al. (2014) interpreted dropout as an extreme form of model combination (a.k.a. model ensemble) with extensive parameter/weight sharing, and they proposed to learn the combination through minimizing an appropriate expected loss. Interestingly, they also pointed out that for a single logistic neural unit, the output of dropout is in fact the geometric mean of the outputs of the model ensemble with shared parameters. Subsequently, many theoretical justifications of dropout have been explored, and we can only mention a few here due to space limits. Building on the weight sharing perspective, Baldi & Sadowski (2013, 2014) analyzed the ensemble averaging property of dropout in deep non-linear logistic networks, and supported the view that dropout is equivalent to applying stochastic gradient descent on some regularized loss function. Wager et al. (2013) treated dropout as an adaptive regularizer for generalized linear models (GLMs). Helmbold & Long (2016) discussed the differences between dropout and traditional weight decay regularization. In terms of statistical learning theory, Gao & Zhou (2014) studied the Rademacher complexity of different types of dropout, showing that dropout is able to reduce the Rademacher complexity polynomially for shallow neural networks (with one or no hidden layers) and exponentially for deep neural networks. This latter work (Gao & Zhou, 2014) formally demonstrated that dropout, due to its regularizing effect, contributes to reducing the inherent model complexity, in particular the variance component in the generalization error.

Since dropout can be seen as a model combination technique, it is intuitive that it contributes to reducing the variance of the model's performance. Surprisingly, dropout has also been shown to play some role in reducing model bias. For instance, Jain et al. (2015) studied the ability of dropout training to escape local minima, hence leading to reduced model bias. Other studies (Chen et al., 2014; Helmbold & Long, 2014; Wager et al., 2014) focus on the effect of dropout noise on models with shallow architectures. We note in passing that there is also some work (Kingma et al., 2015; Gal & Ghahramani, 2015, 2016) trying to understand dropout from the Bayesian perspective.

In this work, we first formulate dropout as a tractable approximation of a latent variable model, and give a clean view of weight sharing (§3). Then, we focus on an inference gap in dropout that has been somewhat under-appreciated: in the inference phase, for computational tractability, the model ensemble generated by dropout is approximated by a single model with scaled weights, resulting in a gap between training and inference, and rendering many previous theoretical findings inapplicable. In general, this inference gap can be very large and, to the best of our knowledge, no attempt has been made to control it. We make three contributions in bridging this gap: Theoretically, we introduce expectation-linear dropout neural networks, through which we are able to explicitly quantify the inference gap (§4). In particular, our theoretical results explain why the max-norm constraint on the network weights, a standard practice in training DNNs, can lead to a small inference gap and hence potentially improve performance. Algorithmically, we propose to add a sampled version of the inference gap to regularize the standard dropout training objective (expectation-linearization), hence allowing explicit control of the inference gap, and we analyze the interaction between expectation-linearization and model accuracy (§5). Experimentally, on three benchmark datasets we show that our regularized dropout is not only as simple and efficient as standard dropout but also consistently leads to improved performance (§6).

2 Dropout Neural Networks

In this section we set up the notations, review the dropout neural network model, and discuss the inference gap in standard dropout training that we will attempt to study in the rest of the paper.

2.1 DNNs and Notations

Throughout we use uppercase letters for random variables (and occasionally for matrices as well), and lowercase letters for realizations of the corresponding random variables. Let X ∈ 𝒳 be the input of the neural network, Y ∈ 𝒴 be the desired output, and D = {(x_1, y_1), ..., (x_N, y_N)} be our training sample, where x_i (resp. y_i) are usually i.i.d. samples of X (resp. Y).

Let M denote a deep neural network with L hidden layers, indexed by l ∈ {1, ..., L}. Let h^(l) denote the output vector from layer l. As usual, h^(0) = x is the input, and h^(L) is the output of the neural network. Denote θ = {θ_1, ..., θ_L} as the set of parameters in the network M, where θ_l assembles the parameters in layer l. With dropout, we need to introduce a set of dropout random variables S = {S^(1), ..., S^(L)}, where S^(l) is the dropout random variable for layer l. Then the deep neural network M can be described as:

h^(l) = f_l(h^(l−1) ⊙ s^(l); θ_l),   l = 1, ..., L    (1)

where ⊙ is the element-wise product and f_l is the transformation function of layer l. For example, if layer l is a fully connected layer with weight matrix W_l, bias vector b_l, and sigmoid activation function σ, then f_l(h; θ_l) = σ(W_l h + b_l). We will also use h^(l)(x, s; θ) to denote the output of layer l with input x and dropout value s, under parameter θ.

In the simplest form of dropout, which is also called standard dropout, S^(l) is a vector of independent Bernoulli random variables, each of which has probability p_l of being 1 and 1 − p_l of being 0. This corresponds to dropping each unit of layer l independently with probability 1 − p_l.
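As a concrete illustration (our own minimal sketch, not code from the paper), the following Python snippet samples a standard-dropout mask for one layer with retain probability p and applies it to the layer's output by an element-wise product:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_mask(shape, p, rng):
    # Each entry is 1 (keep the unit) with probability p and 0 (drop it)
    # with probability 1 - p, independently across units.
    return rng.binomial(n=1, p=p, size=shape).astype(float)

y = rng.standard_normal(5)                 # output of some layer
s = dropout_mask(y.shape, p=0.5, rng=rng)  # sampled dropout configuration
y_dropped = y * s                          # element-wise product, as in Eq. (1)
```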

2.2 Dropout Training

Standard dropout neural networks can be trained using stochastic gradient descent (SGD), with a sub-network sampled by dropping neural units for each training instance in a mini-batch. The forward and backward passes for that training instance are performed only on the sampled sub-network. Intuitively, dropout aims at, simultaneously and jointly, training an ensemble of exponentially many neural networks (one for each configuration of dropped units) while sharing the same weights/parameters.

The goal of the stochastic training procedure of dropout can be understood as minimizing an expected loss function, after marginalizing out the dropout variables (Srivastava, 2013; Wang & Manning, 2013). In the context of maximum likelihood estimation, dropout training can be formulated as:

minimize over θ:   −(1/N) Σ_{i=1}^{N} E_{S_i}[ log p(y_i | x_i, S_i; θ) ]    (2)

where recall that D = {(x_i, y_i)}_{i=1}^{N} is the training sample, S_i is the dropout variable (one for each training instance), and log p(y | x, s; θ) is the (conditional) log-likelihood function defined by the conditional distribution of the output y given the input x, under parameter θ and dropout variable s. Throughout we use the notation E_S[·] to denote the conditional expectation where all random variables except S are conditioned on.

Dropout has also been shown to work well with regularization, such as L2 weight decay (Tikhonov, 1943), Lasso (Tibshirani, 1996), KL-sparsity (Bradley & Bagnell, 2008; Hinton, 2010), and max-norm regularization (Srebro et al., 2004), among which the max-norm regularization — that constrains the norm of the incoming weight matrix to be bounded by some constant — was found to be especially useful for dropout (Srivastava, 2013; Srivastava et al., 2014).

2.3 Dropout Inference and Gap

As mentioned before, dropout is effectively training an ensemble of neural networks with weight sharing. Consequently, at test time, the output of each network in the ensemble should be averaged to deliver the final prediction. This averaging over exponentially many sub-networks is, however, intractable, and standard dropout typically implements an approximation by introducing a deterministic scaling factor for each layer to replace the random dropout variable:

E_S[ h^(L)(x, S; θ) ]  ≈  h^(L)(x, E[S]; θ)    (3)

where the right-hand side is the output of a single deterministic neural network whose weights are scaled to match the expected number of active hidden units on the left-hand side. Importantly, the right-hand side can be easily computed since it only involves a single deterministic network.
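To make the gap in (3) concrete, the following sketch (ours, with an arbitrary one-hidden-layer network and made-up weights) estimates the left-hand side by Monte Carlo averaging over dropout masks and compares it with the scaled deterministic pass on the right-hand side:

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

W = rng.standard_normal((20, 10))   # hidden-layer weights (hypothetical)
V = rng.standard_normal((5, 20))    # output-layer weights (hypothetical)
x = rng.standard_normal(10)
p = 0.5                             # retain probability on the input units

def forward(x, s):
    # Dropout is applied before the nonlinearity, so the expectation over
    # masks does not commute with the sigmoid.
    return V @ sigmoid(W @ (x * s))

# Left-hand side of (3): ensemble average, estimated with Monte Carlo.
masks = rng.binomial(1, p, size=(10000, 10))
mc_average = np.mean([forward(x, s) for s in masks], axis=0)

# Right-hand side of (3): one deterministic pass with the mask replaced
# by its expectation (equivalently, scaled weights).
deterministic = forward(x, p * np.ones(10))

print(np.linalg.norm(mc_average - deterministic))  # the inference gap
```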

Bulò et al. (2016) combined dropout with knowledge distillation methods (Hinton et al., 2015) to better approximate the averaging process on the left-hand side. However, the quality of the approximation in (3) is largely unknown, and to the best of our knowledge, no attempt has been made to explicitly control this inference gap. The main goal of this work is to explicitly quantify, algorithmically control, and experimentally demonstrate the inference gap in (3), in the hope of eventually improving the generalization performance of DNNs. To this end, in the next section we first present a latent variable model interpretation of dropout, which will greatly facilitate our later theoretical analysis.

3 Dropout as Latent Variable Models

With the end goal of studying the inference gap in (3) in mind, in this section, we first formulate dropout neural networks as a latent variable model (LVM) in § 3.1. Then, we point out the relation between the training procedure of the LVM and that of standard dropout in § 3.2. The advantage of formulating dropout as an LVM is that we need only deal with a single model (with latent structure), instead of an ensemble of exponentially many different models (with weight sharing). This much simplified view of dropout enables us to understand and analyze the model parameters in a much more straightforward and intuitive way.

3.1 An LVM Formulation of Dropout

A latent variable model consists of two types of variables: the observed variables that represent the empirical (observed) data and the latent variables that characterize the hidden (unobserved) structure. To formulate dropout as a latent variable model, the input x and output y are regarded as observed variables, while the dropout variable S, representing the sub-network structure, is hidden. Then, upon fixing the input space 𝒳, the output space 𝒴, and the latent space 𝒮 for dropout variables, the conditional probability of y given x under parameter θ can be written as

p(y | x; θ) = ∫_𝒮 p(y | x, s; θ) p(s) dμ(s)    (4)

where p(y | x, s; θ) is the conditional distribution modeled by the neural network with dropout configuration s (the same as in Eq. (2)), p(s) is the distribution of the dropout variable (e.g. Bernoulli), here assumed to be independent of the input x, and μ is the base measure on the space 𝒮.

3.2 LVM Dropout training vs. Standard Dropout Training

Building on the above latent variable model formulation (4) of dropout, we are now ready to point out a simple relation between the training procedure of the LVM and that of standard dropout. Given an i.i.d. training sample D = {(x_i, y_i)}_{i=1}^{N}, computing the maximum likelihood estimate for the LVM formulation of dropout in (4) is equivalent to minimizing the following negative log-likelihood function:

L(θ; D) = −(1/N) Σ_{i=1}^{N} log p(y_i | x_i; θ)    (5)

where p(y_i | x_i; θ) is given in Eq. (4). Recall the dropout training objective in Eq. (2). We have the following theorem as a simple consequence of Jensen’s inequality (details in Appendix A):

Theorem 1.

The expected loss function of standard dropout (Eq. (2)) is an upper bound of the negative log-likelihood of LVM dropout (Eq. (5)):

−(1/N) Σ_{i=1}^{N} E_{S_i}[ log p(y_i | x_i, S_i; θ) ]  ≥  −(1/N) Σ_{i=1}^{N} log p(y_i | x_i; θ)    (6)

Theorem 1, in a rigorous sense, justifies dropout training as a convenient and tractable approximation of the LVM formulation in (4). Indeed, since directly minimizing the marginalized negative log-likelihood in (5) may not be easy, a standard practice is to replace the marginalized (conditional) likelihood in (4) with its empirical Monte Carlo average through drawing samples from the dropout variable S. The dropout training objective in (2) corresponds exactly to this Monte Carlo approximation when a single dropout sample s_i is drawn for each training instance (x_i, y_i). Importantly, we note that the above LVM formulation involves only a single network parameter θ, which largely simplifies the picture and facilitates our subsequent analysis.
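Theorem 1 is, per training instance, just Jensen's inequality applied to the concave log function; the toy computation below (ours, with made-up per-mask likelihood values) illustrates that the Monte Carlo form of the dropout objective (2) upper-bounds the Monte Carlo form of the LVM objective (5):

```python
import numpy as np

# Hypothetical values of p(y | x, s) for one training instance,
# one value per sampled dropout configuration s.
p_y_given_x_s = np.array([0.9, 0.4, 0.7, 0.2])

dropout_loss = -np.mean(np.log(p_y_given_x_s))   # Monte Carlo form of Eq. (2)
lvm_loss     = -np.log(np.mean(p_y_given_x_s))   # Monte Carlo form of Eq. (5)

assert dropout_loss >= lvm_loss                  # Theorem 1 (Jensen's inequality)
```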

4 Expectation-Linear Dropout Neural Networks

Building on the latent variable model formulation in § 3, we introduce in this section the notion of expectation-linearity that essentially measures the inference gap in (3). We then characterize a general class of neural networks that exhibit expectation-linearity, either exactly or approximately over a distribution on the input space.

We start with defining expectation-linearity in the simplest single-layer neural network, then we extend the notion into general deep networks in a natural way.

Definition 1 (Expectation-linear Layer).

A network layer f(x ⊙ s; θ) is expectation-linear with respect to a set 𝒳 if for all x ∈ 𝒳 we have

E_S[ f(x ⊙ S; θ) ] = f(x ⊙ E[S]; θ)    (7)

In this case we say that 𝒳 is expectation-linearizable, and θ is expectation-linearizing w.r.t. 𝒳.

Obviously, the condition in (7) will guarantee no gap in the dropout inference approximation (3)—an admittedly strong condition that we will relax below. Clearly, if f(·; θ) is an affine function, then we can choose 𝒳 to be the whole input space and expectation-linearity is trivial. Note that expectation-linearity depends on the network parameter θ and the dropout distribution p(s).

Expectation-linearity, as defined in (7), is overly strong: under standard regularity conditions, essentially the transformation function f has to be affine over the set 𝒳, ruling out for instance the popular sigmoid or tanh activation functions. Moreover, in practice, downstream uses of DNNs are usually robust to small errors resulting from approximate expectation-linearity (hence the empirical success of dropout), so it makes sense to define an inexact extension. We note also that the definition in (7) is uniform over the set 𝒳, while in a statistical setting it is perhaps more meaningful to have expectation-linearity “on average,” since inputs from low-density regions are not going to play a significant role anyway. Taking into account the aforementioned motivations, we arrive at the following inexact extension:

Definition 2 (Approximately Expectation-linear Layer).

A network layer f(x ⊙ s; θ) is δ-approximately expectation-linear with respect to a distribution p(x) over 𝒳 if

E_{X∼p}[ ‖ E_S[ f(X ⊙ S; θ) ] − f(X ⊙ E[S]; θ) ‖ ] ≤ δ    (8)

In this case we say that p is δ-approximately expectation-linearizable, and θ is δ-approximately expectation-linearizing.

To appreciate the power of cutting some slack from exact expectation-linearity, we remark that even non-affine activation functions often have approximately linear regions. For example, the logistic function, a commonly used non-linear activation function in DNNs, is approximately linear around the origin. Naturally, we can ask whether it is sufficient for a target distribution to be well-approximated by an approximately expectation-linearizable one. We begin by providing an appropriate measurement of the quality of this approximation.
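As an illustration of Definition 2 (our own sketch, with a hypothetical sigmoid layer and a standard normal input distribution), the per-layer gap can be estimated by Monte Carlo and averaged over inputs; with small weights, inputs stay in the near-linear region of the sigmoid and the estimated level δ is small:

```python
import numpy as np

rng = np.random.default_rng(2)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

W = 0.1 * rng.standard_normal((8, 8))   # small weights: near-linear regime
p = 0.8                                  # retain probability

def layer(x, s):
    return sigmoid(W @ (x * s))

def gap(x, n_masks=5000):
    # Monte Carlo estimate of || E_S[f(x . S)] - f(x . E[S]) || for one input.
    masks = rng.binomial(1, p, size=(n_masks, x.shape[0]))
    lhs = np.mean([layer(x, s) for s in masks], axis=0)
    rhs = layer(x, p * np.ones_like(x))
    return np.linalg.norm(lhs - rhs)

# Average over inputs drawn from p(x), matching the "on average" form of (8).
delta_hat = np.mean([gap(rng.standard_normal(8)) for _ in range(20)])
print(delta_hat)
```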

Definition 3 (Closeness; Andreas et al., 2015).

A distribution p is δ-close to a set 𝒳 if

E_{X∼p}[ sup_{s∈𝒮} inf_{x′∈𝒳} ‖ (X − x′) ⊙ s ‖ ] ≤ δ    (9)

where recall that 𝒮 is the (bounded) space that the dropout variable S lives in.

Intuitively, p is δ-close to a set 𝒳 if a random sample from p is no more than a distance δ from 𝒳 in expectation and under the worst “dropout perturbation”. For example, a standard normal distribution is δ-close to an interval [−c, c] centered at the origin, for some constant δ depending on c. Our definition of closeness is similar to that in Andreas et al. (2015), who used this notion to analyze self-normalized log-linear models.

We are now ready to state our first major result that quantifies approximate expectation-linearity of a single-layered network (proof in Appendix B.1):

Theorem 2.

Given a network layer f(x ⊙ s; θ), where θ is expectation-linearizing w.r.t. a set 𝒳. Suppose the input distribution p is δ-close to 𝒳 and ‖∇f(x; θ)‖ ≤ γ for all x, where ‖·‖ is the usual operator norm. Then, p is approximately expectation-linearizable, with an approximation level on the order of γδ.

Roughly, Theorem 2 states that input distributions that place most of their mass on regions close to expectation-linearizable sets are approximately expectation-linearizable on a similar scale. The bounded operator norm assumption on the derivative is satisfied by most commonly used layers. For example, for a fully connected layer with weight matrix W, bias vector b, and activation function σ, ‖∇f‖ is bounded by ‖W‖ times the supremum of |σ′| (1/4 when σ is the sigmoid and 1 when σ is tanh).

Next, we extend the notion of approximate expectation-linearity to deep dropout neural networks.

Definition 4 (Approximately Expectation-linear Network).

A deep neural network M with L layers (cf. Eq. (1)) is Δ-approximately expectation-linear with respect to a distribution p(x) over 𝒳 if

E_{X∼p}[ ‖ E_S[ h^(L)(X, S; θ) ] − h^(L)(X, E[S]; θ) ‖ ] ≤ Δ    (10)

where h^(L)(x, E[S]; θ) is the output of the deterministic neural network used in standard dropout inference.

Lastly, we relate the level of approximate expectation-linearity of a deep neural network to the level of approximate expectation-linearity of each of its layers:

Theorem 3.

Given an L-layer neural network as in Eq. (1), suppose that each layer l is δ_l-approximately expectation-linear w.r.t. the distribution of its input, has expected variance bounded by σ_l², satisfies ‖∇f_l‖ ≤ γ_l, and has a dropout variable with mean parameter μ_l (e.g. the retain probability p_l for Bernoulli dropout). Then the network is Δ-approximately expectation-linear with

(11)

From Theorem 3 (proof in Appendix B.2) we observe that the level of approximate expectation-linearity of the network mainly depends on four factors: the level of approximate expectation-linearity of each layer (δ_l), the expected variance of each layer (σ_l²), the operator norm of the derivative of each layer’s transformation function (γ_l), and the mean of each layer’s dropout variable (μ_l). In practice, the product γμ is often a constant less than or equal to 1.

According to the theorem, the operator norm of the derivative of each layer’s transformation function is an important factor in the level of approximate expectation-linearity: the smaller the operator norm is, the better the approximation. Interestingly, the operator norm of a layer often depends on the norm of the layer’s weights (e.g. ‖∇f‖ ≤ ‖W‖ · sup|σ′| for fully connected layers). Therefore, adding max-norm constraints to regularize dropout neural networks can lead to better approximate expectation-linearity, hence a smaller inference gap and often improved model performance.
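For reference, the max-norm constraint mentioned above is typically enforced by projecting the incoming weight vector of each unit back onto a ball of radius c after every gradient update; a minimal sketch (ours, with a hypothetical radius) is:

```python
import numpy as np

def max_norm_project(W, c):
    # Enforce ||W[i, :]|| <= c for every row (the incoming weights of one unit),
    # rescaling only the rows that violate the constraint.
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.minimum(1.0, c / np.maximum(norms, 1e-12))
    return W * scale

# Typical use inside a training loop: W = max_norm_project(W, c=3.0) after each step.
```

Capping the norm of W caps the operator norm of the layer's transformation (since ‖∇f‖ ≤ ‖W‖ · sup|σ′|), which by the discussion above tightens the bound on the inference gap.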

It should also be noted that when γμ < 1, the approximation error Δ tends to a constant as the network becomes deeper. When γμ = 1, Δ grows linearly with the depth L, and when γμ > 1, the growth of Δ becomes exponential. Thus, it is essential to keep γμ ≤ 1 to achieve a good approximation, particularly for deep neural networks.

5 Expectation-Linear Regularized Dropout

In the previous section we have managed to bound the approximate expectation-linearity, hence the inference gap in (3), of dropout neural networks. In this section, we first prove a uniform deviation bound of the sampled approximate expectation-linearity measure from its mean, which motivates adding the sampled (hence computable) expectation-linearity measure as a regularization scheme to standard dropout, with the goal of explicitly controlling the inference gap of the learned parameter and hence potentially improving performance. Then we give upper bounds on the loss in accuracy due to expectation-linearization and describe classes of distributions that expectation-linearize easily.

5.1 A Uniform Deviation Bound for the Sampled Expectation-linear Measure

We now show that an expectation-linear network can be found by expectation-linearizing the network on the training sample. To this end, we prove a uniform deviation bound between the empirical expectation-linearization measure using i.i.d. samples (Eq. (12)) and its mean (Eq. (13)).

Theorem 4.

Let ℋ denote a space of L-layer dropout neural networks indexed by the parameter θ ∈ Θ, where Θ is the parameter space and 𝒮 is the space that the dropout variable S lives in. Suppose that the neural networks in ℋ satisfy the constraints: 1) …; 2) … and …; 3) … . Denote the empirical expectation-linearization measure and its mean as:

V̂_N(θ) = (1/N) Σ_{i=1}^{N} ‖ E_S[ h^(L)(x_i, S; θ) ] − h^(L)(x_i, E[S]; θ) ‖    (12)
V(θ) = E_X[ ‖ E_S[ h^(L)(X, S; θ) ] − h^(L)(X, E[S]; θ) ‖ ]    (13)

Then, with probability at least 1 − ν, we have

(14)

From Theorem 4 (proof in Appendix C.1) we observe that the deviation bound decreases exponentially with the number of layers when the operator norm of the derivative of each layer’s transformation function (γ) is less than 1 (and the converse when γ > 1). Importantly, the square-root dependence on the number of samples (N) is standard and cannot be improved without significantly stronger assumptions.

It should be noted that Theorem 4 per se does not imply anything about the relation between expectation-linearization and model accuracy (i.e. how well the expectation-linearized neural network actually models the data). A formal study of this relation is provided in § 5.3. In addition, we provide some experimental evidence in § 6 on how improved approximate expectation-linearity (equivalently, a smaller inference gap) does lead to better empirical performance.

5.2 Expectation-Linearization as Regularization

The uniform deviation bound in Theorem 4 motivates the possibility of obtaining an approximately expectation-linear dropout neural network by adding the empirical measure (12) as a regularization scheme to the standard dropout training objective, as follows:

θ̂ = argmin_θ  L(θ; D) + λ V(θ; D)    (15)

where L(θ; D) is the negative log-likelihood defined in Eq. (5), λ > 0 is a regularization constant, and V(θ; D) measures the level of approximate expectation-linearity:

V(θ; D) = (1/N) Σ_{i=1}^{N} ‖ E_S[ h^(L)(x_i, S; θ) ] − h^(L)(x_i, E[S]; θ) ‖    (16)

To solve (15), we can minimize L(θ; D) via stochastic gradient descent as in standard dropout, and approximate V(θ; D) using Monte Carlo:

V(θ; D) ≈ (1/N) Σ_{i=1}^{N} ‖ h^(L)(x_i, s_i; θ) − h^(L)(x_i, E[S]; θ) ‖    (17)

where s_i is the same dropout sample as in the dropout forward pass for each training instance in a mini-batch. Thus, the only additional computational cost comes from the deterministic term h^(L)(x_i, E[S]; θ). Overall, our regularized dropout (15), in its Monte Carlo approximate form, is as simple and efficient as standard dropout.
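Putting the pieces together, the sketch below (ours, in PyTorch, with an arbitrary architecture and with the gap measured as a squared Euclidean distance at the network output, both simplifying choices of our own) implements one training step of the Monte Carlo form of (15): the dropout forward pass is reused for both the likelihood term and the sampled gap, and only one extra deterministic forward pass is added.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DropoutMLP(nn.Module):
    """A small dropout network used only to illustrate the regularizer;
    the architecture and sizes are arbitrary, not those used in the paper."""
    def __init__(self, d_in=784, d_hidden=256, n_classes=10, p_drop=0.5):
        super().__init__()
        self.fc1 = nn.Linear(d_in, d_hidden)
        self.fc2 = nn.Linear(d_hidden, n_classes)
        self.p_drop = p_drop

    def forward(self, x, stochastic):
        h = torch.relu(self.fc1(x))
        # stochastic=True samples a sub-network; stochastic=False is the
        # deterministic pass used at test time (PyTorch's inverted dropout
        # is equivalent to the weight scaling on the right-hand side of (3)).
        h = F.dropout(h, p=self.p_drop, training=stochastic)
        return self.fc2(h)

def el_regularized_loss(model, x, y, lam):
    out_dropout = model(x, stochastic=True)    # one sampled dropout configuration
    out_det = model(x, stochastic=False)       # deterministic, scaled network
    nll = F.cross_entropy(out_dropout, y)      # standard dropout training loss
    gap = ((out_dropout - out_det) ** 2).sum(dim=1).mean()  # sampled gap, cf. (17)
    return nll + lam * gap

# One illustrative SGD step on random data.
model = DropoutMLP()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
loss = el_regularized_loss(model, x, y, lam=1.0)
opt.zero_grad(); loss.backward(); opt.step()
```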

5.3 On the accuracy of Expectation-linearized Models

So far our discussion has concentrated on the problem of finding expectation-linear neural network models, without any concerns on how well they actually perform at modeling the data. In this section, we characterize the trade-off between maximizing “data likelihood” and satisfying an expectation-linearization constraint.

To achieve the characterization, we measure the likelihood gap between the classical maximum likelihood estimator (MLE) and the MLE subject to an expectation-linearization constraint. Formally, given training data D, we define

θ̂_MLE = argmin_θ  L(θ; D)    (18)
θ̂_δ = argmin_{θ : V(θ; D) ≤ δ}  L(θ; D)    (19)

where L(θ; D) is the negative log-likelihood defined in Eq. (5), and V(θ; D) is the level of approximate expectation-linearity in Eq. (16).

We would like to control the loss of model accuracy by obtaining a bound on the likelihood gap defined as:

ΔL(δ) = L(θ̂_δ; D) − L(θ̂_MLE; D)    (20)

In the following, we focus on neural networks with a softmax output layer for classification tasks:

p(y = k | x, s; θ) = h_k^(L)(x, s; θ),   with   h^(L) = softmax(W_L (h^(L−1) ⊙ s^(L)) + b_L)    (21)

where W_L and b_L are the output-layer weight matrix and bias, and k ranges over the output classes. We claim:

Theorem 5.

Given an L-layer neural network with the softmax output layer in (21), with parameter θ, dropout variable S, input x and target y. Suppose that for every x and s, the network makes a unique best prediction—that is, for each x, there exists a unique class k such that p(y = k | x, s; θ) > p(y = k′ | x, s; θ) for all k′ ≠ k. Suppose additionally that … and … . Then

(22)

where the two constants appearing in (22) are distribution-dependent.

From Theorem 5 (proof in Appendix C.2) we observe that, at one extreme, distributions close to deterministic can be expectation-linearized with little loss of likelihood.

What about the other extreme — distributions “as close to the uniform distribution as possible”? With suitable assumptions about the form of the network and the conditional distribution, we can achieve an accuracy-loss bound for distributions that are close to uniform:

Theorem 6.

Suppose that … . Additionally, for each …, … . Then, asymptotically as … :

(23)

Theorem 6 (proof in Appendix C.3) indicates that uniform distributions are also an easy class for expectation-linearization.

The next question is whether there exist any classes of conditional distributions for which all distributions are provably hard to expectation-linearize. It remains an open problem and might be an interesting direction for future work.

6 Experiments

In this section, we evaluate the empirical performance of the proposed regularized dropout in (15) on a variety of network architectures for the classification task on three benchmark datasets—MNIST, CIFAR-10 and CIFAR-100. We applied the same data preprocessing procedure as in Srivastava et al. (2014). To make a thorough comparison and provide experimental evidence on how expectation-linearization interacts with the predictive power of the learned model, we also perform experiments with Monte Carlo (MC) dropout, which approximately computes the final prediction (the left-hand side of (3)) via Monte Carlo sampling, with and without the proposed regularizer. In the case of MC dropout, we average predictions over 100 randomly sampled dropout configurations. In addition, the network architectures and hyper-parameters for each experimental setup are the same as those in Srivastava et al. (2014), unless we explicitly state otherwise. Following previous work, for each dataset we held out 10,000 random training images for validation to tune the hyper-parameters, including λ in Eq. (15). Once the hyper-parameters are fixed, we train the final models with all the training data, including the validation data. A more detailed description of the conducted experiments is provided in Appendix D. For each experiment, we report the mean test errors with corresponding standard deviations over 5 repetitions.

6.1 MNIST

The MNIST dataset (LeCun et al., 1998) consists of 70,000 handwritten digit images of size 28×28, where 60,000 images are used for training and the rest for testing. The task is to classify the images into 10 digit classes. For the purpose of comparison, we train 6 neural networks with different architectures. The experimental results are shown in Table 1.

6.2 CIFAR-10 and CIFAR-100

The CIFAR-10 and CIFAR-100 datasets (Krizhevsky, 2009) consist of 60,000 color images of size 32×32, drawn from 10 and 100 categories, respectively. 50,000 images are used for training and the rest for testing. The neural network architecture we used for these two datasets has 3 convolutional layers, followed by two fully-connected (dense) hidden layers (again, the same as in Srivastava et al. (2014)). The experimental results are also recorded in Table 1.

From Table 1 we can see that on MNIST data, dropout network training with expectation-linearization outperforms standard dropout on all 6 neural architectures. On CIFAR data, expectation-linearization reduces error rate from 12.82% to 12.20% for CIFAR-10, achieving 0.62% improvement. For CIFAR-100, the improvement in terms of error rate is 0.97% with reduction from 37.22% to 36.25%.

From the results we see that, with or without expectation-linearization, the MC dropout networks achieve similar results. This illustrates that by achieving expectation-linear neural networks, the predictive power of the learned models has not degraded significantly. Moreover, it is interesting to see that with the regularization, on the MNIST dataset, standard dropout networks achieve even better accuracy than MC dropout. This may be because, with expectation-linearization, standard dropout inference achieves a better approximation of the final prediction than MC dropout with (only) 100 samples. On the CIFAR datasets, MC dropout networks achieve better accuracy than the ones with the regularization. But, obviously, MC dropout requires much more inference time than standard dropout (MC dropout with k samples requires about k times the inference time of standard dropout).

Data       Architecture                    w/o EL                   w/ EL
                                           Standard     MC          Standard     MC
MNIST      3 dense,1024,logistic           1.23±0.03    1.06±0.02   1.07±0.02    1.06±0.03
           3 dense,1024,relu               1.19±0.02    1.04±0.02   1.03±0.02    1.05±0.03
           3 dense,1024,relu+max-norm      1.05±0.03    1.02±0.02   0.98±0.03    1.02±0.02
           3 dense,2048,relu+max-norm      1.07±0.02    1.00±0.02   0.94±0.02    0.97±0.03
           2 dense,4096,relu+max-norm      1.03±0.02    0.92±0.03   0.90±0.02    0.93±0.02
           2 dense,8192,relu+max-norm      0.99±0.02    0.96±0.02   0.87±0.02    0.92±0.03
CIFAR-10   3 conv+2 dense,relu+max-norm    12.82±0.10   12.16±0.12  12.20±0.14   12.21±0.15
CIFAR-100  3 conv+2 dense,relu+max-norm    37.22±0.22   36.01±0.21  36.25±0.12   36.10±0.18

Table 1: Comparison of classification error percentage on test data with and without expectation-linearization (EL) on MNIST, CIFAR-10 and CIFAR-100, under different network architectures (with standard deviations over 5 repetitions).

6.3 Effect of Regularization Constant

In this section, we explore the effect of varying the regularization constant λ in Eq. (15). We train the network architectures in Table 1 with the value of λ ranging from 0.1 to 10.0. Figure 1 shows the test errors obtained as a function of λ on the three datasets. In addition, Figure 1, middle and right panels, also shows the empirical expectation-linearization risk of Eq. (12) with varying λ on CIFAR-10 and CIFAR-100, where the risk is computed using Monte Carlo with 100 independent samples.

From Figure 1 we can see that as λ increases, better expectation-linearity is achieved (i.e. the empirical risk decreases). The model accuracy, however, does not keep improving with increasing λ, showing that in practice the trade-off between model expectation-linearity and accuracy needs to be considered.

Figure 1: Error rate and empirical expectation-linearization risk relative to the regularization constant λ.
Data       Network   Standard     MC           w/ EL        Distillation
CIFAR-10   AllConv   11.18±0.11   10.58±0.21   10.86±0.08   10.81±0.14
CIFAR-100  AllConv   35.50±0.23   34.43±0.25   35.10±0.13   35.07±0.20

Table 2: Comparison of test errors using standard dropout, Monte Carlo dropout, standard dropout with our proposed expectation-linearization, and the recently proposed dropout distillation on CIFAR-10 and CIFAR-100 under AllConv (with standard deviations over 5 repetitions).

6.4 Comparison with Dropout Distillation

To make a thorough empirical comparison with the recently proposed Dropout Distillation method (Bulò et al., 2016), we also evaluate our regularization method on CIFAR-10 and CIFAR-100 datasets with the All Convolutional Network (Springenberg et al., 2014) (AllConv). To facilitate comparison, we adopt the originally reported hyper-parameters and the same setup for training.

Table 2 compares the classification error percentages on test data under AllConv using standard dropout, Monte Carlo dropout, standard dropout with our proposed expectation-linearization, and the recently proposed dropout distillation on CIFAR-10 and CIFAR-100. (We obtained results similar to those reported in Table 1 of Bulò et al. (2016) on the CIFAR-10 corpus, while we could not reproduce comparable results on CIFAR-100, being around 3% worse.) According to Table 2, our proposed expectation-linear regularization method achieves performance comparable to dropout distillation.

7 Conclusions

In this work, we attempted to establish a theoretical basis for the understanding of dropout, motivated by controlling the gap between dropout’s training and inference phases. Through formulating dropout as a latent variable model and introducing the notion of (approximate) expectation-linearity, we have formally studied the inference gap of dropout, and introduced an empirical measure as a regularization scheme to explicitly control the gap. Experiments on three benchmark datasets demonstrate that reducing the inference gap can indeed improve the end performance. In the future, we intend to formally relate the inference gap to the generalization error of the underlying network, hence providing further justification of regularized dropout.

Acknowledgements

This research was supported in part by DARPA grant FA8750-12-2-0342 funded under the DEFT program. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA.

References

Appendix: Dropout with Expectation-linear Regularization

Appendix A LVM Dropout training vs. Standard Dropout Training

Proof of Theorem 1

Proof.

Because log is a concave function, Jensen's inequality gives, for each training instance (x_i, y_i),

E_{S_i}[ log p(y_i | x_i, S_i; θ) ] ≤ log E_{S_i}[ p(y_i | x_i, S_i; θ) ] = log p(y_i | x_i; θ).

Thus, averaging over the training instances and negating both sides yields Eq. (6).

Appendix B Expectation-Linear Dropout Neural Networks

B.1 Proof of Theorem 2

Proof.

Let , and

Let , and . Then,

In the following, we omit the parameter for convenience. Moreover, we denote

By Taylor's theorem, there exist some intermediate points such that

where we denote . Then,

Since , we have

Then,

Then,

Since , and from Jensen’s inequality and property of operator norm,

Finally we have,

B.2 Proof of Theorem 3

Proof.

We use induction on the number of layers L. As before, we omit the parameter θ.
Initial step: when L = 1, the statement is obviously true.
Induction step: suppose that the statement is true for neural networks with L − 1 layers.
Now we prove the case of L layers. From the inductive assumption, we have

(1)

where the relevant expectation is over the dropout random variables for the first L − 1 layers, and

In addition, since layer L is δ_L-approximately expectation-linear, we have:

(2)

Let , and let and be short for and , respectively, when there is no ambiguity. Moreover, we denote

for convenience. Then,

From Eq. 2 and Jensen’s inequality, we have

(3)

and

(4)

Using Jensen’s inequality, property of operator norm and , we have

(5)

From Eq. 1

(6)

Finally, combining Eq. 3, Eq. 4, Eq. 5, and Eq. 6, we have

Appendix C Expectation-Linearization

C.1 Proof of Theorem 4: Uniform Deviation Bound

Before proving Theorem 4, we first define the notations.

Let {x_1, ..., x_N} be a set of i.i.d. samples of the input X. For a function space ℱ, we use R̂_N(ℱ) to denote the empirical Rademacher complexity of ℱ,

and the Rademacher complexity is defined as R_N(ℱ) = E[R̂_N(ℱ)].

In addition, we import the definition of dropout Rademacher complexity from Gao & Zhou (2014):

where ℱ is a function space defined on the input space 𝒳 and the dropout variable space 𝒮, and R̂_N^d(ℱ) and R_N^d(ℱ) are the empirical dropout Rademacher complexity and dropout Rademacher complexity, respectively. We further denote …

Now, we define the following function spaces:

Then, the function space of