Information Dropout: Learning Optimal Representations Through Noisy Computation

11/04/2016 ∙ by Alessandro Achille, et al.

The cross-entropy loss commonly used in deep learning is closely related to the defining properties of optimal representations, but does not enforce some of the key properties. We show that this can be solved by adding a regularization term, which is in turn related to injecting multiplicative noise in the activations of a Deep Neural Network, a special case of which is the common practice of dropout. We show that our regularized loss function can be efficiently minimized using Information Dropout, a generalization of dropout rooted in information theoretic principles that automatically adapts to the data and can better exploit architectures of limited capacity. When the task is the reconstruction of the input, we show that our loss function yields a Variational Autoencoder as a special case, thus providing a link between representation learning, information theory and variational inference. Finally, we prove that we can promote the creation of disentangled representations simply by enforcing a factorized prior, a fact that has been observed empirically in recent work. Our experiments validate the theoretical intuitions behind our method, and we find that information dropout achieves a comparable or better generalization performance than binary dropout, especially on smaller models, since it can automatically adapt the noise to the structure of the network, as well as to the test sample.


1 Introduction

We call “representation” any function of the data that is useful for a task. An optimal representation is most useful (sufficient), parsimonious (minimal), and minimally affected by nuisance factors (invariant). Do deep neural networks approximate such sufficient invariants?

The cross-entropy loss most commonly used in deep learning does indeed enforce the creation of sufficient representations, but the other defining properties of optimal representations do not seem to be explicitly enforced by the commonly used training procedures. However, we show that this can be done by adding a regularizer, which is related to the injection of multiplicative noise in the activations, with the surprising result that noisy computation facilitates the approximation of optimal representations. In this paper we establish connections between the theory of optimal representations for classification tasks, variational inference, dropout and disentangling in deep neural networks. Our contributions can be summarized in the following steps:

  1. We define optimal representations using established principles of statistical decision and information theory: sufficiency, minimality, invariance (cf. [1, 2]) (Section 3).

  2. We relate the defining properties of optimal representations for classification to the loss function most commonly used in deep learning, but with an added regularizer (Section 4, eq. 3).

  3. We show that, counter-intuitively, injecting multiplicative noise into the computation improves the properties of a representation and results in a better approximation of an optimal one (Section 6).

  4. We relate such multiplicative noise to the regularizer, and show that in the special case of Bernoulli noise, regularization reduces to dropout [3], thus establishing a connection to information theoretic principles. We also provide a more efficient alternative, called Information Dropout, that makes better use of limited capacity, adapts to the data, and is related to Variational Dropout [4] (Section 6).

  5. We show that, when the task is reconstruction, the procedure above yields a generalization of the Variational Autoencoder, which is instead derived from a Bayesian inference perspective [5]. This establishes a connection between information-theoretic and Bayesian representations, where the former explains the use of a multiplier $\beta$ that is common in practice but unexplained by Bayesian theory (Section 7).

  6. We show that “disentanglement of the hidden causes,” an often-cited but seldom formalized desideratum for deep networks, can be achieved by assuming a factorized prior for the components of the optimal representation. Specifically, we prove that computing the regularizer term under the simplifying assumption of an independent prior has the effect of minimizing the total correlation of the components, a phenomenon previously observed empirically by [6] (Section 5).

  7. We validate the theory with several experiments, including: improved insensitivity/invariance to nuisance factors with Information Dropout on (a) Cluttered MNIST [7] and (b) MNIST+CIFAR, a newly introduced dataset to test sensitivity to occlusion phenomena critical in vision applications; (c) improved efficiency of Information Dropout compared to regular dropout for limited-capacity networks; (d) evidence that Information Dropout favors disentangled representations; and (e) evidence that Information Dropout adapts to the data and allows different amounts of information to flow between different layers of a deep network (Section 8).

In the next section we introduce the basic formalism to make the above statements more precise, which we do in subsequent sections.

2 Preliminaries

In the general supervised setting, we want to learn the conditional distribution $p(y|x)$ of some random variable $y$, which we refer to as the task, given (samples of the) input data $x$. In typical applications, $x$ is often high dimensional (for example an image or a video), while $y$ is low dimensional, such as a label or a coarsely-quantized location. In such cases, a large part of the variability in $x$ is actually due to nuisance factors that affect the data, but are otherwise irrelevant for the task [1]. Since by definition these nuisance factors are not predictive of the task, they should be disregarded during the inference process. However, it often happens that modern machine learning algorithms, in part due to their high flexibility, fit spurious correlations, present in the training data, between the nuisances and the task, thus leading to poor generalization performance.

In view of this, [8] argue that the success of deep learning is in part due to the capability of neural networks to build incrementally better representations that expose the relevant variability, while at the same time discarding nuisances. This interpretation is intriguing, as it establishes a connection between machine learning, probabilistic inference, and information theory. However, common training practice does not seem to stem from this insight, and indeed deep networks may maintain, even in the top layers, dependencies on easily ignorable nuisances (see for example Figure 2).

To bring the practice in line with the theory, and to better understand these connections, we introduce a modified cost function that can be seen as an approximation of the Information Bottleneck Lagrangian of [2], and that encourages the creation of representations of the data which are increasingly disentangled and insensitive to the action of nuisances. We show that this loss can be minimized using a new layer, which we call Information Dropout, that allows the network to selectively introduce multiplicative noise in the layer activations and thus to control the flow of information. As we show in various experiments, this method improves generalization by building better representations and preventing overfitting, and it considerably improves over binary dropout on smaller models since, unlike dropout, Information Dropout adapts the noise to the structure of the network and to the individual sample at test time.

Apart from the practical interest of Information Dropout, one of our main results is that it can be seen as a generalization of several existing dropout methods, providing a unified framework to analyze them, together with some additional insights on empirical results. As we discuss in Section 3, the introduction of noise to prevent overfitting has already been studied from several points of view. For example, the original formulation of dropout of [3], which introduces binary multiplicative noise, was motivated as a way of efficiently training an ensemble of exponentially many networks that would be averaged at testing time. [4] introduce Variational Dropout, a dropout method which closely resembles ours and is instead derived from a Bayesian analysis of neural networks. Information Dropout gives an alternative information-theoretic interpretation of those methods.

As we show in Section 7, besides being very closely related to Variational Dropout, Information Dropout directly yields a variational autoencoder as a special case when the task is the reconstruction of the input. This result is in part expected, since our loss function seeks an optimal representation of the input for the task of reconstruction, and the representation given by the latent variables of a variational autoencoder fits the criteria. However, it still raises the question of exactly how deep the links between information theory, representation learning, variational inference and nuisance invariance run. This work can be seen as a small step towards answering this question.

3 Related work

The main contribution of our work is to establish how two seemingly different areas of research, namely dropout methods to prevent overfitting, and the study of optimal representations, can be linked through the Information Bottleneck principle.

Dropout was introduced by Srivastava et al. [3]. The original motivation was that by randomly dropping the activations during training, we effectively train an ensemble of exponentially many networks, which are then averaged during testing, thereby reducing overfitting. Wang et al. [9] suggested that dropout can be seen as performing a Monte-Carlo approximation of an implicit loss function, and that instead of multiplying the activations by binary noise, as in the original dropout, multiplicative Gaussian noise with mean 1 can be used as a way of better approximating this implicit loss function. This led to comparable performance but faster training than binary dropout.

Kingma et al. [4] take a similar view of dropout as introducing multiplicative (Gaussian) noise, but study the problem from a Bayesian point of view. In this setting, given a training dataset $\mathcal{D}$ and a prior distribution $p(w)$ over the weights $w$ of the network, we want to compute the posterior distribution $p(w|\mathcal{D})$. As is customary in variational inference, the true posterior can be approximated by an auxiliary distribution $q_\theta(w)$ that minimizes the negative variational lower bound of the marginal log-likelihood of the data,

(1)   $\mathcal{L}(\theta) = -\mathbb{E}_{w\sim q_\theta(w)}[\log p(\mathcal{D}\,|\,w)] + \mathrm{KL}\big(q_\theta(w)\,\|\,p(w)\big).$

This minimization is difficult to perform, since it requires repeatedly sampling new weights for each sample of the dataset. As an alternative, [4] suggest that the uncertainty about the weights expressed by the posterior distribution can equivalently be encoded as multiplicative noise in the activations of the layers (the so-called local reparametrization trick). As we will see in the following sections, this loss function closely resembles that of Information Dropout, which however is derived from a purely information-theoretic argument based on the Information Bottleneck principle. One difference is that we allow the parameters of the noise to change on a per-sample basis (which, as we show in the experiments, can be useful to deal with nuisances), and that we allow a scaling constant $\beta$ in front of the KL-divergence term, which can be changed freely. Interestingly, even if the Bayesian derivation does not allow a rescaling of the KL-divergence, [4] notice that choosing a different scale for the KL-divergence term can indeed lead to improvements in practice. A related method, derived from an information-theoretic perspective, was suggested previously by [10].
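To make the local reparametrization trick concrete, here is a minimal NumPy sketch (ours, not the authors' code; the function name `local_reparam_forward` and the toy shapes are illustrative): for a layer with independent Gaussian weights, the pre-activations are themselves Gaussian, so we can sample them directly instead of sampling a full weight matrix for every example.

```python
import numpy as np

def local_reparam_forward(x, w_mean, w_logvar, rng):
    """Local reparametrization trick (sketch): for b = x @ W with independent
    Gaussian weights W ~ N(w_mean, exp(w_logvar)), the pre-activation b is
    Gaussian with mean x @ w_mean and variance (x**2) @ exp(w_logvar), so we
    sample it directly, one noise draw per activation instead of per weight."""
    act_mean = x @ w_mean                       # (batch, out)
    act_var = (x ** 2) @ np.exp(w_logvar)       # (batch, out)
    eps = rng.standard_normal(act_mean.shape)
    return act_mean + np.sqrt(act_var) * eps

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))                 # toy batch
w_mean = 0.1 * rng.standard_normal((8, 3))
w_logvar = np.full((8, 3), -4.0)                # small weight variance
print(local_reparam_forward(x, w_mean, w_logvar, rng).shape)   # (4, 3)
```

Sampling in activation space requires one noise draw per activation rather than per weight, which is what makes this estimator cheap and low-variance.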

The interpretation of deep neural network as a way of creating successively better representations of the data has already been suggested and explored by many. Most recently, Tishby et al. [8] put forth an interpretation of deep neural networks as creating sufficient representations of the data that are increasingly minimal. In parallel simultaneous work, [11] approximate the information bottleneck similarly to us, but focus on empirical analysis of robustness to adversarial perturbations rather than tackling disentanglement, invariance and minimality analytically. Some have focused on creating representations that are maximally invariant to nuisances, especially when they have the structure of a (possibly infinite-dimensional) group acting on the data, like [12], or, when the nuisance is a locally compact group acting on each layer, by successive approximations implemented by hierarchical convolutional architectures, like [13] and [14]. In these cases, which cover common nuisances such as translations and rotations of an image (affine group), or small diffeomorphic deformations due to a slight change of point of view (group of diffeomorphisms), the representation is equivalent to the data modulo the action of the group. However, when the nuisances are not a group, as is the case for occlusions, it is not possible to achieve such equivalence, that is, there is a loss. To address this problem, [1] defined optimal representations not in terms of maximality, but in terms of sufficiency, and characterized representations that are both sufficient and invariant. They argue that the management of nuisance factors common in visual data, such as changes of viewpoint, local deformations, and changes of illumination, is directly tied to the specific structure of deep convolutional networks, where local marginalization of simple nuisances at each layer results in marginalization of complex nuisances in the network as a whole.

Our work fits in this last line of thinking, where the goal is not equivalence to the data up to the action of (group) nuisances, but instead sufficiency for the task. Our main contribution in this sense is to show that injecting noise into the layers, and therefore using a non-deterministic function of the data, can actually simplify the theoretical analysis and lead to disentangling and improved insensitivity to nuisances. This is an alternate explanation to that put forth by the references above.

4 Optimal representations and the Information Bottleneck loss

Given some input data $x$, we want to compute some (possibly nondeterministic) function of $x$, called a representation $z$, that has some desirable properties in view of the task $y$, for instance by being more convenient to work with, exposing relevant statistics, or being easier to store. Ideally, we want this representation to be as good as the original data for the task, and not to squander resources modeling parts of the data that are irrelevant to the task. Formally, this means that we want to find a random variable $z$ satisfying the following conditions:

  1. $z$ is a representation of $x$; that is, its distribution depends only on $x$, as expressed by the Markov chain $y \to x \to z$;

  2. $z$ is sufficient for the task $y$, that is $I(z;y) = I(x;y)$, as expressed by the Markov chain $x \to z \to y$;

  3. among all random variables satisfying these requirements, the mutual information $I(x;z)$ is minimal. This means that $z$ discards all variability in the data that is not relevant to the task.

Using the identity $I(z;y) = H(y) - H(y|z)$, where $H$ denotes the entropy and $I$ the mutual information, it is easy to see that the above conditions are equivalent to finding a distribution $p(z|x)$ which solves the optimization problem

$\min_{p(z|x)} \; I(x;z) \quad \text{s.t.} \quad H(y|z) = H(y|x).$

The minimization above is difficult in general. For this reason, Tishby et al. have introduced a generalization known as the Information Bottleneck Principle and the associated Lagrangian to be minimized [2],

(2)   $\mathcal{L}\big(p(z|x)\big) = H(y|z) + \beta\, I(x;z),$

where $\beta$ is a positive constant that manages the trade-off between sufficiency (the performance on the task, as measured by the first term) and minimality (the complexity of the representation, measured by the second term). It is easy to see that, in the limit $\beta \to 0$, this is equivalent to the original problem, where $z$ is a minimal sufficient statistic. When all random variables are discrete and $z$ is a deterministic function of $x$, the algorithm proposed by [2] can be used to minimize the IB Lagrangian efficiently. However, no algorithm is known to minimize the IB Lagrangian for non-Gaussian, high-dimensional continuous random variables.
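For intuition, the IB Lagrangian $\mathcal{L} = H(y|z) + \beta\, I(x;z)$ can be evaluated exactly for small discrete variables. The sketch below (ours; the toy joint distribution and the two encoders are made up for illustration) compares the identity representation $z = x$ with a noisier encoder that discards part of $x$.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def ib_lagrangian(p_xy, p_z_given_x, beta):
    """Evaluate L = H(y|z) + beta * I(x;z) for discrete x, y, z.
    p_xy: (nx, ny) joint distribution; p_z_given_x: (nx, nz) encoder."""
    p_x = p_xy.sum(axis=1)
    p_xyz = p_xy[:, :, None] * p_z_given_x[:, None, :]   # p(x, y, z)
    p_yz = p_xyz.sum(axis=0)
    p_xz = p_xyz.sum(axis=1)
    p_z = p_yz.sum(axis=0)
    h_y_given_z = entropy(p_yz.ravel()) - entropy(p_z)    # H(y|z) = H(y,z) - H(z)
    i_xz = entropy(p_x) + entropy(p_z) - entropy(p_xz.ravel())
    return h_y_given_z + beta * i_xz

# toy task: y is a bit, x is a noisy observation of it
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
identity = np.eye(2)                    # z = x: keeps all of x
noisy = np.array([[0.9, 0.1],
                  [0.1, 0.9]])          # a stochastic encoder discarding part of x
for name, enc in [("z = x", identity), ("noisy z", noisy)]:
    print(name, ib_lagrangian(p_xy, enc, beta=0.5))
```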

One of our key results is that, when we restrict attention to the family of distributions obtained by injecting noise into one layer of a neural network, we can efficiently approximate and minimize the IB Lagrangian. (Since we restrict the family of distributions, there is no guarantee that the resulting representation will be optimal. We can, however, iterate the process to obtain incrementally improved approximations.) As we will show, this process can be effectively implemented through a generalization of the dropout layer that we call Information Dropout.

To set the stage, we rewrite the IB Lagrangian as a per-sample loss function. Let $p(x,y)$ denote the true distribution of the data, from which the training set $\{(x_i, y_i)\}_{i=1}^N$ is sampled, and let $p_\theta(z|x)$ and $p_\theta(y|z)$ denote the unknown distributions that we wish to estimate, parametrized by $\theta$. Then, we can write the two terms in the IB Lagrangian as

$H(y|z) = \mathbb{E}_{(x,y)\sim p(x,y)}\, \mathbb{E}_{z\sim p_\theta(z|x)}\big[-\log p_\theta(y|z)\big], \qquad I(x;z) = \mathbb{E}_{x\sim p(x)}\big[\mathrm{KL}\big(p_\theta(z|x)\,\|\,p_\theta(z)\big)\big],$

where $\mathrm{KL}$ denotes the Kullback-Leibler divergence. We can therefore approximate the IB Lagrangian empirically as

(3)   $\mathcal{L}(\theta) \approx \frac{1}{N}\sum_{i=1}^N \Big( \mathbb{E}_{z\sim p_\theta(z|x_i)}\big[-\log p_\theta(y_i|z)\big] + \beta\, \mathrm{KL}\big(p_\theta(z|x_i)\,\|\,p_\theta(z)\big) \Big).$

Notice that the first term is simply the average cross-entropy, which is the most commonly used loss function in deep learning. The second term can then be seen as a regularization term. In fact, many classical regularizers, like the $L_2$ penalty, can be expressed in the form of eq. 3 (see also [15]). In this work, we interpret the KL term as a regularizer that penalizes the transfer of information from $x$ to $z$. In the next section, we discuss ways to control such information transfer through the injection of noise.

[Deterministic vs. stochastic representations] Aside from being easier to work with, stochastic representations can attain a lower value of the IB Lagrangian than any deterministic representation. For example, consider the task of reconstructing a single random bit $y$ given a noisy observation $x$ of it. The only deterministic representations are equivalent either to the noisy observation itself or to the trivial constant map. It is not difficult to check that, for appropriate values of $\beta$ and of the noise, neither realizes the optimal trade-off reached by a suitable stochastic representation.

[Approximate sufficiency] The quantity $H(y|z) - H(y|x) = I(x;y\,|\,z)$ can be seen as a measure of the distance between the joint distribution $p(x,y,z)$ and the closest distribution for which $x \to z \to y$ is a Markov chain. Therefore, by minimizing eq. 2 we find representations that are increasingly “more sufficient”, meaning that they are closer to an actual Markov chain.

5 Disentanglement

In addition to sufficiency and minimality, “disentanglement of hidden factors” is often cited as a desirable property of a representation, but is seldom formalized. We can quantify disentanglement by measuring the total correlation, or multivariate mutual information, defined as

$TC(z) = \mathrm{KL}\Big( p(z) \,\Big\|\, \prod\nolimits_i p(z_i) \Big).$

Notice that the components $z_i$ of $z$ are mutually independent if and only if $TC(z)$ is zero. Adding this as a penalty in the IB Lagrangian, with a factor $\gamma$, yields

$\mathcal{L} = H(y|z) + \beta\, I(x;z) + \gamma\, TC(z).$

In general, minimizing this augmented loss is intractable, since computing both the KL term and the total correlation requires the marginal distribution $p_\theta(z)$, which is not easily computable. However, the following proposition, which we prove in Appendix B, shows that if we choose $\gamma = \beta$, then the problem simplifies and can be easily solved by adding an auxiliary variable. The minimization problem

$\min_{p_\theta(z|x)} \; H(y|z) + \beta\,\big[\, I(x;z) + TC(z) \,\big]$

is equivalent to the following minimization in two variables,

$\min_{p_\theta(z|x),\, q(z)} \; H(y|z) + \beta\; \mathbb{E}_{x\sim p(x)}\big[ \mathrm{KL}\big( p_\theta(z|x) \,\|\, q(z) \big) \big], \qquad q(z) = \prod\nolimits_i q(z_i).$

In other words, minimizing the standard IB Lagrangian under the assumption that the activations are independent, i.e. using a factorized prior $q(z) = \prod_i q(z_i)$, is equivalent to enforcing disentanglement of the hidden factors. It is interesting to note that this independence assumption is already adopted often by practitioners on grounds of simplicity, since the actual marginal $p_\theta(z)$ is often incomputable. That using a factorized model results in “disentanglement” was also observed empirically by [6], which, however, introduced an ad-hoc metric based on classifiers of low VC-dimension, rather than the more natural total correlation adopted here.

In view of the previous proposition, from now on we will assume that the activations are independent and ignore the total correlation term.
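The identity behind this proposition, $\mathbb{E}_x[\mathrm{KL}(p_\theta(z|x)\,\|\,\prod_i p(z_i))] = I(x;z) + TC(z)$ when the factorized prior is the product of the marginals, can be checked numerically. The following sketch (ours; the random encoder and the variable sizes are arbitrary) does so for a small discrete example.

```python
import numpy as np

def kl(p, q):
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

rng = np.random.default_rng(1)
nx = 3                                             # values of x
p_x = np.array([0.5, 0.3, 0.2])
p_z_given_x = rng.random((nx, 2, 2))               # z = (z1, z2), both binary
p_z_given_x /= p_z_given_x.sum(axis=(1, 2), keepdims=True)

p_xz = p_x[:, None, None] * p_z_given_x            # p(x, z1, z2)
p_z = p_xz.sum(axis=0)                             # p(z1, z2)
p_z1, p_z2 = p_z.sum(axis=1), p_z.sum(axis=0)      # marginals
q_fact = np.outer(p_z1, p_z2)                      # factorized prior = product of marginals

lhs = sum(p_x[i] * kl(p_z_given_x[i], q_fact) for i in range(nx))  # E_x KL(p(z|x) || prod p(z_i))
i_xz = sum(p_x[i] * kl(p_z_given_x[i], p_z) for i in range(nx))    # I(x;z)
tc_z = kl(p_z, q_fact)                                             # TC(z)
print(lhs, i_xz + tc_z)    # the two values coincide
```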

6 Information Dropout

Guided by the analysis in the previous sections, and to emphasize the role of stochasticity, we consider representations $z$ obtained by computing a deterministic map $f(x)$ of the data (for instance a sequence of convolutional and/or fully-connected layers of a neural network), and then multiplying the result component-wise by a random sample $\varepsilon$ drawn from a parametric noise distribution $p_{\alpha(x)}(\varepsilon)$ with unit mean and a variance that depends on the input $x$:

$z = \varepsilon \odot f(x),$

where “$\odot$” denotes the element-wise product. Notice that, if $p_{\alpha(x)}(\varepsilon)$ is a Bernoulli distribution rescaled to have mean 1, this reduces exactly to the classic binary dropout layer. As we discussed in Section 3, there are also variants of dropout that use different distributions.

A natural choice for the distribution $p_{\alpha(x)}(\varepsilon)$, which also simplifies the theoretical analysis, is the log-normal distribution $p_{\alpha(x)}(\varepsilon) = \log\mathcal{N}(0, \alpha^2_\theta(x))$. Once we fix this noise distribution, given the above expression for $z$, we can easily compute the distribution $p_\theta(z|x)$ that appears in eq. 3. However, to be able to compute the KL-divergence term, we still need to fix a prior distribution $p(z)$. The choice of this prior largely depends on the expected distribution of the activations $f(x)$. Recall that, by Section 5, we can assume that all activations are independent, thus simplifying the computation. Now, we concentrate on two of the most common activation functions: the rectified linear unit (ReLU), which is easy to compute and works well in practice, and the Softplus function, which can be seen as a strictly positive and differentiable approximation of ReLU.

A network implemented using only ReLU activations and a final Softmax layer has the remarkable property of being scale-invariant, meaning that multiplying all weights, biases, and activations by a constant does not change the final result. Therefore, from a theoretical point of view, it would be desirable to use a scale-invariant prior. The only such prior is the improper log-uniform, $p(\log z) = \mathrm{const}$, or equivalently $p(z) \propto 1/z$, which was also suggested by [4], but as a prior for the weights of the network rather than for the activations. Since the ReLU activations are frequently zero, we also assume $p(z=0) = q$ for some constant $q$. Therefore, the final prior has the form $p(z) = q\,\delta_0(z) + c/z$, where $\delta_0$ is the Dirac delta at zero. In Figure 1(a), we compare this prior distribution with the actual empirical distribution of a network with ReLU activations.

In a network implemented using Softplus activations, a log-normal is a good fit of the distribution of the activations. This is to be expected, especially when using batch normalization, since the pre-activations will approximately follow a normal distribution with zero mean, and the Softplus approximately resembles a scaled exponential near zero. Therefore, in this case we suggest using a log-normal distribution $p(z) = \log\mathcal{N}(\mu, \sigma^2)$ as our prior. In Figure 1(b), we compare this prior with the empirical distribution of a network with Softplus activations.

(a) Histogram of ReLU activations
(b) Histogram of Softplus activations
Figure 1: Comparison of the empirical distribution of the post-noise activations with our proposed prior when using: (subfig:relu) ReLU activations, for which we propose a log-uniform prior, and (subfig:softplus) Softplus activations, for which we propose a log-normal prior. In both cases, the empirical distribution approximately follows the proposed prior. Both histograms were obtained from the last dropout layer of the All-CNN-32 network described in Table II, trained on CIFAR-10.

Using these priors, we can finally compute the KL divergence term in eq. 3 for both ReLU activations and Softplus activations. We prove the following two propositions in Appendix A.

[Information dropout cost for ReLU] Let $z = \varepsilon \cdot f(x)$, where $\varepsilon \sim p_{\alpha(x)}(\varepsilon)$, and assume the scale-invariant prior $p(z) \propto 1/z$. Then, assuming $f(x) > 0$, we have

$\mathrm{KL}\big( p_\theta(z|x) \,\|\, p(z) \big) = -H\big(p_{\alpha(x)}(\varepsilon)\big) + \mathrm{const}.$

In particular, if $p_{\alpha(x)}(\varepsilon)$ is chosen to be the log-normal distribution $\log\mathcal{N}(0, \alpha^2_\theta(x))$, we have

(4)   $\mathrm{KL}\big( p_\theta(z|x) \,\|\, p(z) \big) = -\log \alpha_\theta(x) + \mathrm{const}.$

If instead $f(x) = 0$, we have $z = 0$, and the KL-divergence reduces to the constant $-\log q$.

[Information dropout cost for Softplus] Let $z = \varepsilon \cdot f(x)$, where $\varepsilon \sim \log\mathcal{N}(0, \alpha^2_\theta(x))$, and assume the log-normal prior $p(z) = \log\mathcal{N}(\mu, \sigma^2)$. Then, we have

(5)   $\mathrm{KL}\big( p_\theta(z|x) \,\|\, p(z) \big) = \frac{\alpha^2_\theta(x) + (\log f(x) - \mu)^2}{2\sigma^2} + \log\frac{\sigma}{\alpha_\theta(x)} - \frac{1}{2}.$

Substituting the expression for the KL divergence in eq. 4 into eq. 3, and ignoring for simplicity the special case $f(x) = 0$, we obtain the following loss function for ReLU activations,

(6)   $\mathcal{L}(\theta) \approx \frac{1}{N}\sum_{i=1}^N \Big( \mathbb{E}_{\varepsilon\sim p_{\alpha(x_i)}(\varepsilon)}\big[-\log p_\theta\big(y_i \,\big|\, \varepsilon \odot f(x_i)\big)\big] - \beta \log \alpha_\theta(x_i) \Big),$

and a similar expression for Softplus. Notice that the first expectation can be approximated by sampling (in the experiments we use a single sample, as is customary for dropout), and is just the average cross-entropy term that is typical in deep learning. The second term, which is new, penalizes the network for choosing a low variance for the noise, i.e. for letting more information pass through to the next layer. This loss can be optimized easily using stochastic gradient descent and the reparametrization trick of [5] to back-propagate the gradient through the sampling operation.
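As a concrete illustration, here is a minimal NumPy sketch of one Information Dropout forward pass (ours, not the reference implementation; the names `information_dropout` and `max_alpha` are our own, and the clipping constant mirrors the constraint discussed in the experiments section). It multiplies the deterministic activations by log-normal noise and returns the per-unit KL penalty of eq. (4) for the ReLU / log-uniform-prior case.

```python
import numpy as np

def softplus(x):
    return np.logaddexp(0.0, x)

def information_dropout(f_x, alpha_x, rng, max_alpha=0.7):
    """One Information Dropout layer (sketch).
    f_x:     deterministic activations f(x), shape (batch, units)
    alpha_x: learned, input-dependent noise scale alpha(x), same shape
    Returns z = eps * f(x) with eps ~ logN(0, alpha(x)^2) and the per-unit KL
    penalty of eq. (4), which up to an additive constant is -log alpha(x)
    (the special case f(x) = 0 is ignored here, as in eq. (6))."""
    alpha = np.clip(alpha_x, 1e-3, max_alpha)               # cap the noise variance
    eps = np.exp(alpha * rng.standard_normal(f_x.shape))    # log-normal multiplicative noise
    z = eps * f_x
    kl = -np.log(alpha)
    return z, kl

rng = np.random.default_rng(0)
f_x = np.maximum(rng.standard_normal((4, 5)), 0.0)   # stand-in for ReLU activations
alpha_x = 0.3 * softplus(rng.standard_normal((4, 5)))
z, kl = information_dropout(f_x, alpha_x, rng)
print(z.shape, float(kl.mean()))    # kl.mean() is the term weighted by beta in eq. (6)
```

In a full network, $\alpha(x)$ would be produced by a small learned head on the same features, and the averaged penalty would be added to the cross-entropy with weight $\beta$ as in eq. (6).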

7 Variational autoencoders and Information Dropout

In this section, we outline the connection between variational autoencoders [5] and Information Dropout. A variational autoencoder (VAE) aims to reconstruct, given a training dataset $\{x_i\}_{i=1}^N$, a latent random variable $z$ such that the observed data $x$ can be thought of as being generated by the, usually simpler, variable $z$ through some unknown generative process $p_\theta(x|z)$. In practice, this is done by minimizing the negative variational lower bound to the marginal log-likelihood of the data,

$\mathcal{L}(\theta) = \frac{1}{N}\sum_{i=1}^N \Big( \mathbb{E}_{z\sim p_\theta(z|x_i)}\big[-\log p_\theta(x_i|z)\big] + \mathrm{KL}\big( p_\theta(z|x_i) \,\|\, p(z) \big) \Big),$

which can be optimized easily using the SGVB method of [5]. Interestingly, when the task is reconstruction, that is when $y = x$, the IB loss function in eq. 3 reduces to

(7)   $\mathcal{L}(\theta) \approx \frac{1}{N}\sum_{i=1}^N \Big( \mathbb{E}_{z\sim p_\theta(z|x_i)}\big[-\log p_\theta(x_i|z)\big] + \beta\, \mathrm{KL}\big( p_\theta(z|x_i) \,\|\, p_\theta(z) \big) \Big).$

Therefore, by letting $\beta = 1$ in the previous expression, we obtain exactly the loss function of a variational autoencoder; that is, the representation $z$ computed by the Information Dropout layer coincides with the latent variable computed by the VAE. This is in part to be expected, since the objective of Information Dropout is to create a representation of the data that is minimal sufficient for the task of reconstruction, and the latent variables of a VAE can be thought of as such a representation. The term $\beta$ in this case can be seen as managing the trade-off between the fidelity of the reconstruction of the input from the representation (measured by the cross-entropy) and the compression factor (complexity) of the representation (measured by the KL-divergence). As anticipated, Bayesian theory would prescribe $\beta = 1$, whereas it has been observed empirically that other choices can yield better performance. In the IB framework, the choice of $\beta$ is left to the designer or to a model selection algorithm.
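The following sketch (ours; the Bernoulli decoder and the toy inputs are only illustrative) spells out the $\beta$-weighted objective of eq. (7) for a Gaussian latent, using the standard closed-form KL to a unit Gaussian; setting $\beta = 1$ recovers the usual VAE bound.

```python
import numpy as np

def gaussian_kl_to_standard_normal(mu, logvar):
    """KL( N(mu, exp(logvar)) || N(0, I) ), summed over latent dimensions."""
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar, axis=-1)

def beta_vae_loss(x, x_recon_prob, mu, logvar, beta=1.0):
    """Eq. (7) with a Bernoulli decoder: reconstruction cross-entropy plus
    beta times the KL of the Gaussian latent posterior from its prior."""
    eps = 1e-7
    rec = -np.sum(x * np.log(x_recon_prob + eps)
                  + (1 - x) * np.log(1 - x_recon_prob + eps), axis=-1)
    return np.mean(rec + beta * gaussian_kl_to_standard_normal(mu, logvar))

rng = np.random.default_rng(0)
x = (rng.random((4, 10)) > 0.5).astype(float)     # toy binary data
x_recon_prob = rng.random((4, 10))                # placeholder decoder outputs
mu, logvar = rng.standard_normal((4, 3)), np.zeros((4, 3))
print(beta_vae_loss(x, x_recon_prob, mu, logvar, beta=3.0))
```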

Taking inspiration from experimental evidence in neuroscience, contemporary work by Higgins et al. [6] also suggests the use of the loss function in eq. (7) to train a VAE. They show experimentally that, for higher values of $\beta$, the resulting representation is increasingly disentangled. This result is compatible with our observation in Section 5, and in Section 8 we present related experimental results in a more general setting.

8 Experiments

The goal of our experiments is to validate the theory, by showing that increasing the noise level indeed yields reduced dependency on nuisance factors and a more disentangled representation, and that by adapting the noise level to the data we can better exploit architectures of limited capacity.

To this end, we first compare Information Dropout with the dropout baseline on several standard benchmark datasets using different network architectures, and highlight a few key properties. All the models were implemented using TensorFlow [16]. As [4] also notice, letting the variance of the noise grow excessively leads to poor generalization. To avoid this problem, we constrain $\alpha_\theta(x) \le 0.7$, so that the maximum variance of the log-normal error distribution is approximately 1, the same as binary dropout with a drop probability of 0.5. In all experiments we divide the KL-divergence term by the number of training samples, so that for $\beta = 1$ the scaling of the KL-divergence term is similar to the one used by Variational Dropout (see Section 3).

Figure 2: Plot of the total KL-divergence at each spatial location in the first three Information Dropout layers (respectively of sizes 48x48, 24x24 and 12x12) of All-CNN-96 (see Table II) trained on Cluttered MNIST with different values of $\beta$. This measures how much information from each part of the image the Information Dropout layer is letting flow to the next layer. While for low values of $\beta$ information about the nuisances is still transmitted to the next layers, for higher values of $\beta$ the Information Dropout layers drop the information as soon as the receptive field is big enough to recognize it as a nuisance. The resulting representation is therefore more robust to nuisances, and provides better generalization performance. Unlike in classical dropout or Variational Dropout, the noise added by Information Dropout is tailored to the specific sample, to the point that the KL-divergence alone provides enough information to localize the digit. We provide more plots for different input samples in Appendix C.
(a) MNIST
(b) CIFAR-10
Figure 3: (subfig:MNIST) Average classification error on MNIST over 3 runs of several dropout methods applied to a fully connected network with three hidden layers and ReLU activations. Information dropout outperforms binary dropout, especially on smaller networks, possibly because dropout severely reduces the already limited capacity of the network, while Information Dropout can adapt the amount of noise to the data and the size of the network. Information dropout also outperforms a dropout layer that uses constant log-normal noise with the same variance, confirming the benefits of adaptive noise. (subfig:CIFAR) Classification error on CIFAR-10 for several dropout methods applied to the All-CNN-32 network (see Table II) using Softplus activations.
Figure 4: A few samples from our Occluded CIFAR dataset and the plot of the testing error on the main task (classifying the CIFAR image) and on the nuisance task (classifying the occluding MNIST digit) as $\beta$ varies. For both tasks, we use the same representation of the data trained for the main task using Information Dropout. For larger values of $\beta$ the representation is increasingly more invariant to nuisances, making the nuisance classification task harder, but improving the performance on the main task by preventing overfitting. For the nuisance task, we test using the learned noisy representation of the data, since we are interested specifically in the effects of the noise. For the main task, we show the result both using the noisy representation (N), and the deterministic representation (D) obtained by disabling the noise at testing time.

Cluttered MNIST. To visually assess the ability of Information Dropout to create a representation that is increasingly insensitive to nuisance factors, we train the All-CNN-96 network (Table II) for classification on a Cluttered MNIST dataset [7], consisting of images containing a single MNIST digit together with 21 distractors. The dataset is divided into 50,000 training images and 10,000 testing images. As shown in Figure 2, for small values of $\beta$, the network lets through to the upper layers both the objects of interest (digits) and the distractors. By increasing the value of $\beta$, we force the network to disregard the least discriminative components of the data, thereby building a better representation for the task. This behavior depends on the ability of Information Dropout to learn the structure of the nuisances in the dataset, which, unlike other methods, is facilitated by the ability to select the noise level on a per-sample basis.

Occluded CIFAR. Occlusions are a fundamental phenomenon in vision, for which it is difficult to hand-design invariant representations. To assess whether the approximate minimal sufficient representation produced by Information Dropout has this invariance property, we created a new dataset by occluding images from CIFAR-10 with digits from MNIST (Figure 4). We train the All-CNN-32 network (Table II) to classify the CIFAR image. The information relative to the occluding MNIST digit is then a nuisance for the task, and therefore should be excluded from the final representation. To test this, we train a secondary network to classify the nuisance MNIST digit using only the representation learned for the main task. When training with small values of $\beta$, the network has very little pressure to limit the effect of nuisances in the representation, so we expect the nuisance classifier to perform better. On the other hand, increasing the value of $\beta$, we expect its performance to degrade, since the representation will become increasingly minimal, and therefore invariant to nuisances. The results in Figure 4 confirm this intuition.

MNIST and CIFAR-10. Similarly to [4], to see the effect of Information Dropout on different network sizes and architectures, we train on MNIST a network with 3 fully connected hidden layers with a variable number of hidden units, and we train on CIFAR-10 [17] the All-CNN-32 convolutional network described in Table II, using a variable percentage of the filters. The fully connected network was trained for 80 epochs, using stochastic gradient descent with momentum with initial learning rate 0.07, dropping the learning rate by a factor of 0.1 at 30 and 70 epochs. The CNN was trained for 200 epochs with initial learning rate 0.05, dropping the learning rate by a factor of 0.1 at 80, 120 and 160 epochs. We show the results in Figure 3. Information Dropout is comparable to or outperforms binary dropout, especially on smaller networks. A possible explanation is that dropout severely reduces the already limited capacity of the network, while Information Dropout can adapt the amount of noise to the data and to the size of the network so that the relevant information can still flow to the successive layers. Figure 6 shows how the amount of transmitted information also adapts to the size and hierarchical level of the layer.

Disentangling. As we saw in Section 6, in the case of Softplus activations the logarithm of the activations approximately follows a normal distribution. We can then approximate the total correlation using the associated covariance matrix $\Sigma$. Precisely, we have

$TC(z) \approx \frac{1}{2}\Big( \sum\nolimits_i \log \sigma_i^2 - \log \det \Sigma \Big),$

where $\sigma_i^2 = \Sigma_{ii}$ is the variance of the $i$-th marginal distribution. In Figure 5 we plot, for different values of $\beta$, the testing error and the total correlation of the representation learned by All-CNN-32 on CIFAR-10 when using 25% of the filters. As predicted, when $\beta$ increases the total correlation diminishes, that is, the representation becomes disentangled, and the testing error improves, since we prevent overfitting. When $\beta$ is too large, the information flow is insufficient and the testing error rapidly increases.
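This Gaussian approximation of the total correlation can be computed directly from an empirical covariance of the log-activations. The sketch below (ours, on synthetic samples) shows that correlated components give a clearly positive value, while independent components give a value near zero.

```python
import numpy as np

def gaussian_total_correlation(samples):
    """Estimate TC under a Gaussian approximation of the samples:
    TC = 0.5 * (sum_i log var(z_i) - log det Cov(z))."""
    cov = np.cov(samples, rowvar=False)
    sign, logdet = np.linalg.slogdet(cov)
    assert sign > 0, "covariance must be positive definite"
    return 0.5 * (np.sum(np.log(np.diag(cov))) - logdet)

rng = np.random.default_rng(0)
mixing = rng.standard_normal((5, 5))
entangled = rng.standard_normal((10000, 5)) @ mixing.T   # correlated components
disentangled = rng.standard_normal((10000, 5))           # independent components
print(gaussian_total_correlation(entangled))      # clearly positive
print(gaussian_total_correlation(disentangled))   # close to zero
```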

Figure 5: Plot of the test error and total correlation of the final layer of the All-CNN-32 network with Softplus activations trained on CIFAR-10 with 25% of the filters, for different values of $\beta$. Increasing $\beta$, the test error decreases (we prevent overfitting) and the representation becomes increasingly disentangled. When $\beta$ is too large, it prevents information from passing through, jeopardizing sufficiency and causing a drastic increase in error.

VAE. To validate Section 7, we replicate the basic variational autoencoder of [5], implementing it both with Gaussian latent variables, as in the original, and with an Information Dropout layer. We trained both implementations for 300 epochs, dropping the learning rate by a factor of 0.1 at 30 and 120 epochs. We report the results in Table I. The Information Dropout implementation has performance similar to the original, confirming that a variational autoencoder can be considered a special case of Information Dropout.

(a)
(b)
Figure 6: Plots of (subfig:total_information) the total information transmitted through the two dropout layers of a All-CNN-32 network with Softplus activations trained on CIFAR and (subfig:mean_information) the average quantity of information transmitted through each unit in the two layers. From (subfig:total_information) we see that the total quantity of information transmitted does not vary much with the number of filters and that, as expected, the second layer transmits less information than the first layer, since prior to it more nuisances have been disentangled and discarded. In (subfig:mean_information) we see that when we decrease the number of filters, we force each single unit to let more information flow (i.e. we apply less noise), and that the units in the top dropout layer contain on average more information relevant to the task than the units in the bottom dropout layer.
Configuration   Gaussian   Information Dropout
1               -98.8      -100.0
2               -99.0      -99.1
3               -98.7      -99.1
Table I: Average variational lower bound on the testing dataset for a simple VAE, for three configurations of the size of the latent variable and of the number of hidden units in the encoder/decoder. The latent variable is implemented either using a Gaussian vector or using Information Dropout. Both methods achieve a similar performance.

Input 32x32
3x3 conv 96 ReLU
3x3 conv 96 ReLU
3x3 conv 96 ReLU stride 2
dropout
3x3 conv 192 ReLU
3x3 conv 192 ReLU
3x3 conv 192 ReLU stride 2
dropout
3x3 conv 192 ReLU
1x1 conv 192 ReLU
1x1 conv 10 ReLU
spatial average
softmax
(a) All-CNN-32
Input 96x96
3x3 conv 32 ReLU
3x3 conv 32 ReLU
3x3 conv 32 ReLU stride 2
dropout
3x3 conv 64 ReLU
3x3 conv 64 ReLU
3x3 conv 64 ReLU stride 2
dropout
3x3 conv 96 ReLU
3x3 conv 96 ReLU
3x3 conv 96 ReLU stride 2
dropout
3x3 conv 192 ReLU
3x3 conv 192 ReLU
3x3 conv 192 ReLU stride 2
dropout
3x3 conv 192 ReLU
1x1 conv 192 ReLU
1x1 conv 10 ReLU
spatial average
softmax
(b) All-CNN-96
Table II: Structure of the networks used in the experiments. The design of the networks is based on [18], but we also add batch normalization before the activations of each layer. Depending on the experiment, the ReLU activations are replaced by Softplus activations, and the dropout layer is implemented with binary dropout, with Information Dropout, or completely removed.

9 Discussion

We relate the Information Bottleneck principle and its associated Lagrangian to seemingly unrelated practices and concepts in deep learning, including dropout, disentanglement, and variational autoencoding. For classification tasks, we show how an optimal representation can be achieved by injecting multiplicative noise into the activation functions, and therefore into the gradient computation during learning.

A special case of noise (Bernoulli) results in dropout, which is standard practice originally motivated by ensemble averaging rather than by information-theoretic considerations. Better (adaptive) noise models result in better exploitation of limited capacity, leading to a method we call Information Dropout. We also establish connections with variational inference and variational autoencoding, and show that “disentangling of the hidden causes” can be measured by total correlation and achieved simply by enforcing independence of the components in the representation prior.

So, what may be done out of necessity in some computational systems (noisy computation) turns out to be beneficial towards achieving invariance and minimality. Analogously, what has been done for convenience (assuming a factorized prior) turns out to be beneficial towards achieving “disentanglement.”

Another interpretation of Information Dropout is as a way of biasing the network towards constructing representations of the data that are compatible with a Markov chain generative model, making it more suited to data coming from hierarchical models; in this sense it is complementary to architectural constraints, such as convolutions, that instead bias the model toward geometric tasks.

It should be noted that injecting multiplicative noise into the activations can be thought of as a particular choice of the class of minimizers of the loss function, but it can also be interpreted as a regularization term added to the cost function, or as a particular procedure used to carry out the optimization. The same operation can thus be interpreted as acting on any of the three key ingredients of the optimization: the function to be minimized, the family over which to minimize, and the procedure with which to minimize. This highlights the intimate interplay between the choice of models and algorithms in deep learning.

Acknowledgments

Work supported by ARO, ONR, AFOSR.

References

  • [1] S. Soatto and A. Chiuso, “Visual representations: Defining properties and deep approximations,” Proceedings of the International Conference on Learning Representations (ICLR); ArXiv: 1411.7676, May 2016.
  • [2] N. Tishby, F. C. Pereira, and W. Bialek, “The information bottleneck method,” in The 37th annual Allerton Conference on Communication, Control, and Computing, 1999, pp. 368–377.
  • [3] N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: a simple way to prevent neural networks from overfitting.” Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929–1958, 2014.
  • [4] D. P. Kingma, T. Salimans, and M. Welling, “Variational dropout and the local reparameterization trick,” in Proceedings of the 28th International Conference on Neural Information Processing Systems, ser. NIPS’15, 2015, pp. 2575–2583.
  • [5] D. P. Kingma and M. Welling, “Auto-encoding variational bayes,” in Proceedings of the 2nd International Conference on Learning Representations (ICLR), no. 2014, 2013.
  • [6] I. Higgins, L. Matthey, A. Pal, C. Burgess, X. Glorot, M. Botvinick, S. Mohamed, and A. Lerchner, “beta-vae: Learning basic visual concepts with a constrained variational framework,” in Proceedings of the International Conference on Learning Representations (ICLR), 2017.
  • [7] V. Mnih, N. Heess, A. Graves et al., “Recurrent models of visual attention,” in Advances in Neural Information Processing Systems, 2014, pp. 2204–2212.
  • [8] N. Tishby and N. Zaslavsky, “Deep learning and the information bottleneck principle,” in Information Theory Workshop (ITW), 2015 IEEE.   IEEE, 2015, pp. 1–5.
  • [9] S. Wang and C. Manning, “Fast dropout training,” in Proceedings of the 30th International Conference on Machine Learning (ICML), 2013, pp. 118–126.
  • [10] G. E. Hinton and D. Van Camp, “Keeping the neural networks simple by minimizing the description length of the weights,” in Proceedings of the 6th Annual Conference on Computational Learning Theory.   ACM, 1993, pp. 5–13.
  • [11] A. A. Alemi, I. Fischer, J. V. Dillon, and K. Murphy, “Deep variational information bottleneck,” arXiv preprint arXiv:1612.00410, 2016.
  • [12] G. Sundaramoorthi, P. Petersen, V. S. Varadarajan, and S. Soatto, “On the set of images modulo viewpoint and contrast changes,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, June 2009.
  • [13] F. Anselmi, L. Rosasco, and T. Poggio, “On invariance and selectivity in representation learning,” Information and Inference, 2016.
  • [14] J. Bruna and S. Mallat, “Classification with scattering operators,” in Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition, ser. CVPR ’11, 2011, pp. 1561–1566.
  • [15] Y. Gal and Z. Ghahramani, “Bayesian convolutional neural networks with bernoulli approximate variational inference,” arXiv preprint arXiv:1506.02158, 2015.
  • [16] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: Large-scale machine learning on heterogeneous systems,” 2015, software available from tensorflow.org. [Online]. Available: http://tensorflow.org/
  • [17] A. Krizhevsky and G. Hinton, “Learning multiple layers of features from tiny images,” Technical report, University of Toronto, 2009.
  • [18] J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller, “Striving for simplicity: The all convolutional net,” arXiv preprint arXiv:1412.6806, 2014.

Appendix A Computations

[Information dropout cost for ReLU activations] Let $z = \varepsilon \cdot f(x)$, where $\varepsilon \sim p_{\alpha(x)}(\varepsilon)$, and assume the prior $p(z) = q\,\delta_0(z) + c/z$. Then, assuming $f(x) > 0$, we have

$\mathrm{KL}\big( p_\theta(z|x) \,\|\, p(z) \big) = -H\big(p_{\alpha(x)}(\varepsilon)\big) + \mathrm{const}.$

In particular, if $p_{\alpha(x)}(\varepsilon)$ is chosen to be the log-normal distribution $\log\mathcal{N}(0, \alpha^2_\theta(x))$, we have

$\mathrm{KL}\big( p_\theta(z|x) \,\|\, p(z) \big) = -\log \alpha_\theta(x) + \mathrm{const}.$

If instead $f(x) = 0$, we have $\mathrm{KL}\big( p_\theta(z|x) \,\|\, p(z) \big) = -\log q$.

Proof.

If $f(x) > 0$, then we also have $z = \varepsilon \cdot f(x) > 0$. Since the KL-divergence is invariant under parameter transformations, we can compute it for $\log z$ instead of $z$; the prior density of $\log z$ is the constant $c$, so

$\mathrm{KL}\big( p_\theta(z|x) \,\|\, p(z) \big) = \mathbb{E}_{z\sim p_\theta(z|x)}\big[ \log p_\theta(\log z \,|\, x) - \log c \big] = -H\big( p_\theta(\log z \,|\, x) \big) + \mathrm{const}.$

For the second part, notice that by definition $\log z = \log \varepsilon + \log f(x)$ with $\log \varepsilon \sim \mathcal{N}(0, \alpha^2_\theta(x))$, and the entropy is invariant under translations, so

$-H\big( p_\theta(\log z \,|\, x) \big) = -H\big( \mathcal{N}(0, \alpha^2_\theta(x)) \big) = -\log \alpha_\theta(x) - \tfrac{1}{2}\log(2\pi e) = -\log \alpha_\theta(x) + \mathrm{const}.$

Finally, if $f(x) = 0$, then also $z = 0$, so $p_\theta(z|x) = \delta_0(z)$. It is then easy to see that $\mathrm{KL}\big( p_\theta(z|x) \,\|\, p(z) \big) = -\log q$. ∎

[Information dropout cost for Softplus activations] Let $z = \varepsilon \cdot f(x)$, where $\varepsilon \sim \log\mathcal{N}(0, \alpha^2_\theta(x))$, and assume the prior $p(z) = \log\mathcal{N}(\mu, \sigma^2)$. Then, we have

$\mathrm{KL}\big( p_\theta(z|x) \,\|\, p(z) \big) = \frac{\alpha^2_\theta(x) + (\log f(x) - \mu)^2}{2\sigma^2} + \log\frac{\sigma}{\alpha_\theta(x)} - \frac{1}{2}.$

Proof.

Since the KL divergence is invariant under reparametrizations, the divergence between two log-normal distributions is equal to the divergence between the corresponding normal distributions. Therefore, using the known formula for the KL divergence of normals, we get the desired result. ∎
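As a sanity check of the closed form above, the short sketch below (ours; the parameter values are arbitrary) compares it against a Monte Carlo estimate of the divergence, working in the $\log z$ domain where the Jacobian terms of the two log-normal densities cancel.

```python
import numpy as np

def kl_lognormal(m1, a1, m2, a2):
    """Closed-form KL( logN(m1, a1^2) || logN(m2, a2^2) ): equal to the KL
    between the corresponding normal distributions."""
    return np.log(a2 / a1) + (a1 ** 2 + (m1 - m2) ** 2) / (2 * a2 ** 2) - 0.5

def kl_monte_carlo(m1, a1, m2, a2, n=1_000_000, seed=0):
    """Estimate the same KL by sampling log z ~ N(m1, a1^2) and averaging
    log p1(log z) - log p2(log z); the shared constants cancel."""
    rng = np.random.default_rng(seed)
    logz = m1 + a1 * rng.standard_normal(n)
    logp1 = -0.5 * ((logz - m1) / a1) ** 2 - np.log(a1)
    logp2 = -0.5 * ((logz - m2) / a2) ** 2 - np.log(a2)
    return np.mean(logp1 - logp2)

# z = eps * f(x) with eps ~ logN(0, alpha^2) gives log z ~ N(log f(x), alpha^2)
log_fx, alpha, mu, sigma = 0.3, 0.4, 0.0, 1.0
print(kl_lognormal(log_fx, alpha, mu, sigma), kl_monte_carlo(log_fx, alpha, mu, sigma))
```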

Appendix B Disentanglement

In this appendix, we show that the minimization problem

$\min_{p_\theta(z|x)} \; H(y|z) + \beta\,\big[\, I(x;z) + TC(z) \,\big],$

which is difficult in general since we do not have access to the joint (marginal) distribution $p_\theta(z)$ of the components of $z$, is equivalent to the following simpler optimization problem in two variables,

$\min_{p_\theta(z|x),\, q(z)} \; H(y|z) + \beta\; \mathbb{E}_{x\sim p(x)}\big[ \mathrm{KL}\big( p_\theta(z|x) \,\|\, q(z) \big) \big], \qquad q(z) = \prod\nolimits_i q(z_i).$

In the following proposition, for simplicity, we concentrate on discrete random variables.

Let $z$ be a discrete random variable, let $p_\theta(z|x)$ be a generic probability distribution, and let $q(z) = \prod_i q(z_i)$ be a factorized prior distribution. Then, for any function $f$, a minimization problem of the form

$\min_{p_\theta(z|x),\, q(z)} \; f\big(p_\theta(z|x)\big) + \beta\; \mathbb{E}_{x\sim p(x)}\big[ \mathrm{KL}\big( p_\theta(z|x) \,\|\, q(z) \big) \big]$

is equivalent to

$\min_{p_\theta(z|x)} \; f\big(p_\theta(z|x)\big) + \beta\,\big[\, I(x;z) + TC(z) \,\big],$

where $I(x;z)$ is the mutual information and $TC(z)$ is the total correlation of $z$, assuming $\beta > 0$.

Proof.

To prove the proposition, we just need to minimize with respect to $q(z)$ and substitute the solution back. Adding Lagrange multipliers for the normalization constraints on the factors $q(z_i)$, taking the derivative with respect to $q(z_i)$ and setting it to zero, we obtain $q(z_i) = p(z_i)$; that is, the optimal factorized prior is the product of the marginals of $p_\theta(z)$. Substituting it back in the second term (the only one containing $q$), we obtain

$\mathbb{E}_{x\sim p(x)}\Big[ \mathrm{KL}\Big( p_\theta(z|x) \,\Big\|\, \prod\nolimits_i p(z_i) \Big) \Big] = I(x;z) + TC(z),$

which concludes the proof. ∎

Appendix C Additional plots

Figure 7: For two more input samples, plot of the total KL-divergence at each spatial location for the first three dropout layers. See Section 8 and Figure 2 for a detailed description.