At the end of training a deep neural network, all that is left of past experience is a set of values stored in its weights. So, studying what “information” they contain seems like a natural starting point to understand how deep networks learn.
But how is the information in a deep neural network even defined? The weights are not a random variable, and the network output is a deterministic function of its input, with degenerate (infinite) Shannon mutual information between the two. This presents a challenge for theories of Deep Learning based on Shannon Information saxe2018information. Several frameworks have been developed to reason about information in fixed sets of values, for instance by Fisher and Kolmogorov, but they either do not relate directly to relevant concepts in Deep Learning, such as generalization and invariance, or cannot be estimated in practice for modern deep neural networks (DNNs).
Beyond how they define information, existing theories of Deep Learning are limited by whose information they address: Most approaches focus on information of the activations of the network – the output of its layers – rather than their parameters, or weights. The weights are a representation of past data (the training set of inputs and outputs), trained for predicting statistics of the training set itself (e.g., the output), relative to prior knowledge. The activations are a representation of (possibly unseen) future inputs (test set), ideally sufficient to predict future outputs, and invariant to nuisance variability in the data that should not affect the output. We have no access to future data, and the Shannon Information their representation contains does not account for the finite training set, hence missing a link to generalization.
But how are these properties of sufficiency and invariance achieved through the training process? Sufficiency alone is trivial — any invertible function of the data is, in theory, sufficient — but it comes at the expense of complexity (or minimality) and invariance of the representation. Invariance alone is similarly trivial – any constant function is invariant. A learning criterion therefore must trade off accuracy, complexity and invariance. The best achievable complexity trade-off is what we define as Information for the task. The challenge is that we wish to characterize sufficiency and invariance of representations of the test data, while we only have access to the training set.
So, throughout this paper, we discuss four distinct concepts: (1) Sufficiency of the weights, captured by a training loss (e.g., empirical cross-entropy); (2) minimality of the weights, captured by the information they contain; (3) sufficiency of the activations, captured by the test loss which we cannot compute, but can bound using the Information in the Weights; (4) invariance of the activations, a property of the test data, which is not explicitly present in the formulation of the learning process when training a deep neural network. To do all that, we first need to formally define both the information in the weights and the information in the activations.
1.1 Summary of Contributions and Related Work
Our first contribution is to measure the Information in the Weights of a deep neural network as the trade-off between the amount of noise we could add to the weights (measured by its entropy relative to a prior), and the performance the network would achieve in the task at hand. Informally, given an encoding algorithm, this is the number of bits needed to encode the weights in order to solve the task at some level of precision. The optimal trade-off traces a curve that depends on the task and the architecture, and solutions along the curve can be found by optimizing an Information Lagrangian. The Information Lagrangian is in the general form of an Information Bottleneck (IB) tishby2000information , but is fundamentally different from the IB used in most prior work in deep learning tishby2015deep , which refers to the activations, rather than the weights. Our second contribution is to derive a relation between the two (Section 4), where we show that, surprisingly, the Information Lagrangian of the weights of deep networks bounds the Information Bottleneck of the activations, but not vice-versa. This is important, as the IB of the activations is degenerate when computed on the training set, hence cannot be used at training time to enforce properties. On the other hand, the Information Lagrangian of the weights remains well defined, and through our bound it controls invariance at test time.
Our method requires specifying a parametrized noise distribution, as well as a prior, to measure information. While this may seem undesirable, we believe it is essential and key to the flexibility of the method, as it allows us to compute concrete quantities, tailored to DNNs, that relate generalization and invariance in novel ways. Of all possible choices of noise and prior to compute the Information in the Weights, there are a few canonical ones: An uninformative
uniform prior yields the Fisher Information of the weights. A prior obtained by averaging training over all relevant datasets yields the Shannon mutual information between the dataset (now a random variable) and the weights. A third important choice is the noise distribution induced by stochastic gradient descent (SGD) during the training process.
As it turns out, all three resulting notions of information are important to understand learning in deep networks: Shannon’s relates closely to generalization, via the PAC-Bayes Bound (Section 3.1). Fisher’s relates closely to invariance in the representation of test data (activations) as we show in Section 4. The noise distribution of SGD is what connects the two, and establishes the link between invariance and generalization. Although it is possible to minimize Fisher or Shannon Information independently, we show that when the weights are learned using SGD, the two are related. This is our third contribution, which is made possible by the flexibility of our framework (Section 3.3). Finally, in Section 5 we discuss open problems and further relations with prior work.
2 Preliminaries and Notation
We denote with $x$ an input (e.g., an image), and with $y$ a “task” random variable which we are trying to infer, e.g., a label $y \in \{1,\dots,C\}$. A dataset is a finite collection of samples $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^N$. A DNN model trained with the cross-entropy loss encodes a conditional distribution $p_w(y|x)$, parametrized by the weights $w$, meant to approximate the posterior of the task variable $y$ given the input $x$. The Kullback-Leibler, or KL-divergence, is the relative entropy between $p(x)$ and $q(x)$: $\mathrm{KL}(p\,\|\,q) = \mathbb{E}_{x \sim p}\big[\log\big(p(x)/q(x)\big)\big]$. It is always non-negative, and zero if and only if $p = q$. It measures the (asymmetric) similarity between two distributions. Given a family of conditional distributions $p_w(y|x)$ parametrized by a vector $w$, we can ask how much perturbing the parameter by a small amount $\delta w$ will change the distribution, as measured by the KL-divergence. To second order, this is given by
$$\mathbb{E}_x\,\mathrm{KL}\big(p_w(y|x)\,\|\,p_{w+\delta w}(y|x)\big) = \tfrac{1}{2}\,\delta w^T F\, \delta w + O(\|\delta w\|^3),$$
where $F$ is the Fisher Information Matrix (or simply “Fisher”), defined by
$$F = \mathbb{E}_{x \sim p(x)}\,\mathbb{E}_{y \sim p_w(y|x)}\big[\nabla_w \log p_w(y|x)\,\nabla_w \log p_w(y|x)^T\big].$$
For its relevant properties see martens2014new. It is important to notice that the Fisher depends on the ground-truth data distribution only through the domain variable $x$, not the task variable $y$, since $y$ is sampled from the model distribution when computing the Fisher. This property will be used later.
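To make the definition concrete, the following sketch (in Python/NumPy, with a made-up logistic model and synthetic inputs) estimates the Fisher by Monte Carlo, sampling the labels from the model distribution rather than taking them from data, and compares the estimate to the closed form. It also illustrates the point above: the true labels never enter the computation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy model p_w(y=1|x) = sigmoid(w . x) on a fixed batch of inputs x.
w = np.array([0.5, -1.0])
X = rng.normal(size=(20_000, 2))
p = sigmoid(X @ w)

# Analytic Fisher for this model: F = E_x[ p(1-p) x x^T ]
# (the expectation over the label y is carried out exactly).
F_exact = (X * (p * (1 - p))[:, None]).T @ X / len(X)

# Monte Carlo Fisher: sample y from the *model* distribution -- not from
# data labels -- and average the outer product of the score (y - p) x.
y = rng.binomial(1, p)
score = (y - p)[:, None] * X
F_mc = score.T @ score / len(X)

err = np.abs(F_mc - F_exact).max()
print(err)  # small: the Monte Carlo estimate matches the analytic Fisher
```

The same Monte Carlo recipe (sample outputs from the model, average the score outer products) is how the Fisher is typically estimated for models without a closed form.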
Given two random variables $x$ and $z$, their Shannon mutual information is defined as
$$I(x; z) = \mathbb{E}_{x \sim p(x)}\,\mathrm{KL}\big(p(z|x)\,\|\,p(z)\big),$$
that is, the expected divergence between the distribution of $z$ after an observation of $x$, and the prior distribution of $z$. It is positive, symmetric, zero if and only if the variables are independent cover2012elements, and measured in Nats when using the natural logarithm.
In supervised classification one is usually interested in finding weights $w$ that minimize the cross-entropy loss $L_{\mathcal{D}}(w) = \frac{1}{N}\sum_{i=1}^N -\log p_w(y_i|x_i)$ on the training set $\mathcal{D}$. The loss is usually minimized using stochastic gradient descent (SGD), which updates the weights with an estimate of the gradient computed from a small number of samples (mini-batch). That is, $w_{k+1} = w_k - \eta\,\hat{\nabla} L_{\xi_k}(w_k)$, where $\xi_k$ are the indices of a randomly sampled mini-batch and $\hat{\nabla} L_{\xi_k}(w_k) = \frac{1}{|\xi_k|}\sum_{i \in \xi_k} -\nabla_w \log p_w(y_i|x_i)$. Notice that $\mathbb{E}_{\xi_k}\big[\hat{\nabla} L_{\xi_k}(w_k)\big] = \nabla L_{\mathcal{D}}(w_k)$, so we can think of the mini-batch gradient as a noisy version of the real gradient. Using this intuition we can write:
$$w_{k+1} = w_k - \eta\,\nabla L_{\mathcal{D}}(w_k) + \sqrt{\eta}\,\epsilon_k, \tag{1}$$
where $\epsilon_k = \sqrt{\eta}\,\big(\nabla L_{\mathcal{D}}(w_k) - \hat{\nabla} L_{\xi_k}(w_k)\big)$ is a zero-mean noise term whose covariance scales inversely with the mini-batch size.
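The unbiasedness of the mini-batch gradient can be checked directly. A minimal sketch with made-up scalar quadratic losses: enumerating every mini-batch of a fixed size, each sample appears in the same number of batches, so the average of the mini-batch gradients equals the full-batch gradient exactly.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

# Per-sample losses l_i(w) = 0.5 * (w - a_i)^2, so grad l_i(w) = w - a_i.
a = rng.normal(size=10)
w = 1.7
full_grad = np.mean(w - a)

# Average the mini-batch gradient over all batches of size 3: by symmetry
# each sample is counted equally often, so the average is the full gradient.
batch_grads = [np.mean(w - a[list(idx)]) for idx in combinations(range(10), 3)]
gap = abs(np.mean(batch_grads) - full_grad)
print(gap)  # ~0 up to floating point: the mini-batch gradient is unbiased
```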
3 Information in the Weights
We could define the Information in the Weights as their coding length after training. This, however, would not be meaningful, as only a small subset of the weights matters: If, given a weight configuration $w$, we perturb a certain weight and observe no change in the cross-entropy loss, then arguably that weight contains “no information” about the task. For the purpose of storing the trained model, that weight could be pruned or randomized with no performance loss. On the other hand, imagine slightly perturbing a weight and noticing a large increase in the loss: One could argue that weight is very “informative,” so it is useful to store its value with high precision. But what perturbations should we consider (e.g., additive or multiplicative)? And how “small” should they be? What distribution should we draw them from? To address these issues, we introduce the following definition:
Definition 3.1 (Information in the Weights).
The complexity of the task $\mathcal{D}$ at level $\beta$, using the posterior $Q$ and the prior $P$, is
$$C_\beta(\mathcal{D}; P, Q) = \mathbb{E}_{w \sim Q}\big[L_{\mathcal{D}}(w)\big] + \beta\,\mathrm{KL}(Q\,\|\,P), \tag{2}$$
where $\mathbb{E}_{w \sim Q}[L_{\mathcal{D}}(w)]$ is the (expected) reconstruction error of the label under the “noisy” weight distribution $Q$; $\mathrm{KL}(Q\,\|\,P)$ measures the entropy of $Q$ relative to the prior $P$. If $Q^*$ minimizes eq. 2 for a given $\beta$, we call $\mathrm{KL}(Q^*\,\|\,P)$ the Information in the Weights for the task at level $\beta$.
Note that the definition of information is based on the loss on the training set, which depends on the number of samples in $\mathcal{D}$, and does not require access to the underlying data distribution. We call $Q$ a “posterior distribution” as it is decided after seeing the dataset $\mathcal{D}$, but there is no implied Bayesian interpretation, as $Q$ can be any distribution. Similarly, $P$ is a “prior” because it is picked before the dataset is seen, but is otherwise arbitrary.
Variants of (2) are well known, including for the case $\beta = 1$, when eq. 2 formally coincides with the evidence lower-bound (ELBO) used to train Bayesian Neural Networks. However, while the ELBO assumes the existence of a Bayesian posterior of which $Q$ is an approximation, we require no such assumption. Closer to our viewpoint is hinton1993keeping, which shows that, for $\beta = 1$, eq. 2 is the cost to encode the labels in $\mathcal{D}$ together with the weights of the network. This justifies considering, for any choice of $P$ and $Q$, the term $\mathrm{KL}(Q\,\|\,P)$ as the coding length of the weights using some algorithm, although this is true only if they are encoded together with the dataset. A drawback of these approaches is that they lead to non-trivial results only if $\mathrm{KL}(Q\,\|\,P)$ is much smaller than the coding length of the labels in the dataset. Unfortunately, this is not the case with typical deep neural networks.
Rather than focusing on a particular value of the coding length, we focus on how it changes as a function of $\beta$ for a given noise model, tracing a Pareto-optimal curve which defines the Information in the Weights we have proposed above.
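The trade-off curve can be traced explicitly in a scalar toy case. The sketch below (all constants made up) uses a quadratic approximation of the training loss around a minimum, a Gaussian posterior, and a broad Gaussian prior: for each $\beta$ it minimizes eq. 2 over the posterior variance and records the resulting (information, loss) pair on the Pareto curve.

```python
import numpy as np

# Quadratic approximation of the training loss around a minimum w*:
# L(w) ~ L* + 0.5 * h * (w - w*)^2. Posterior N(w*, lam), broad prior N(0, s2).
h, w_star, s2 = 4.0, 2.0, 1e4
lam = np.linspace(1e-4, 5.0, 4000)           # candidate posterior variances

excess_loss = 0.5 * h * lam                   # E_{w~Q}[L(w)] - L*
kl = 0.5 * (lam / s2 + w_star**2 / s2 - 1 + np.log(s2 / lam))  # KL(Q || P)

curve = []
for beta in [0.1, 0.5, 1.0, 2.0]:
    i = np.argmin(excess_loss + beta * kl)    # minimize eq. 2 at this beta
    curve.append((kl[i], excess_loss[i]))     # (information, loss) on the curve

info, loss = zip(*curve)
print(info, loss)  # larger beta: fewer Nats stored, higher training loss
```

Sweeping $\beta$ walks along the curve: a larger multiplier trades information (Nats stored in the weights) for training loss, which is exactly the Pareto-optimal trade-off described above.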
3.1 Information in the Weights Controls Generalization
Equation 2 defines a notion of information that, while related to the learning task, does not immediately relate to generalization error or invariance of the representation. Throughout the rest of this work, we build such connections leveraging existing work. We start by using the PAC-Bayes bound mcallester2013pac to connect the information that the weights retain about the training set to performance on the test data.
Theorem 3.2 (mcallester2013pac, Theorems 2–4).
Assume the dataset $\mathcal{D}$ is sampled i.i.d. from a distribution $\mu$, and assume that the per-sample loss used for training is bounded by $L_{\max}$ (we can reduce to this case by clipping and rescaling the loss). For any fixed $\beta > 1/2$, prior $P$, and weight distribution $Q$, with probability at least $1 - \delta$ over the sample of $\mathcal{D}$, we have:
$$L_{\mathrm{test}}(Q) \;\leq\; \frac{1}{1 - \frac{1}{2\beta}}\Big[\mathbb{E}_{w \sim Q}\big[L_{\mathcal{D}}(w)\big] + \frac{\beta L_{\max}}{N}\Big(\mathrm{KL}(Q\,\|\,P) + \log\tfrac{1}{\delta}\Big)\Big], \tag{3}$$
where $L_{\mathrm{test}}(Q)$ is the expected per-sample test error that the model incurs using the weight distribution $Q$. Moreover, given a distribution over the datasets, we have the following bound in expectation over all possible datasets:
$$\mathbb{E}_{\mathcal{D}}\big[L_{\mathrm{test}}(Q)\big] \;\leq\; \frac{1}{1 - \frac{1}{2\beta}}\,\mathbb{E}_{\mathcal{D}}\Big[\mathbb{E}_{w \sim Q}\big[L_{\mathcal{D}}(w)\big] + \frac{\beta L_{\max}}{N}\,\mathrm{KL}(Q\,\|\,P)\Big]. \tag{4}$$
Hence, minimizing the complexity can be interpreted as minimizing an upper-bound on the test error, rather than merely minimizing the training error. In dziugaite2017computing , a non-vacuous generalization bound is computed for DNNs, using a (non-centered and non-isotropic) Gaussian prior and Gaussian posterior distributions.
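To build intuition for how the complexity term behaves numerically, here is a small sketch. It uses a different but also standard square-root form of the McAllester bound (for losses in $[0,1]$), with made-up values for the KL term and the sample size; the qualitative behavior (the gap shrinks with $N$ and grows with the information in the weights) matches the theorem above.

```python
import numpy as np

def pac_bayes_gap(kl, n, delta=0.05):
    # A standard McAllester-style complexity term for losses in [0, 1]:
    # test <= train + sqrt( (KL(Q||P) + log(2*sqrt(n)/delta)) / (2n) ).
    return np.sqrt((kl + np.log(2 * np.sqrt(n) / delta)) / (2 * n))

gaps = [pac_bayes_gap(kl=100.0, n=n) for n in (1_000, 10_000, 100_000)]
print(gaps)  # the generalization gap shrinks as the dataset grows
print(pac_bayes_gap(kl=10.0, n=10_000) < pac_bayes_gap(kl=1000.0, n=10_000))
```

The second comparison is the point made in the text: at fixed sample size, weights that store fewer Nats about the training set come with a tighter guarantee on test error.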
3.2 Shannon vs. Fisher Information in the Weights
Definition 3.1 depends on an arbitrary choice of the noise distribution and of the prior. While this may appear cumbersome, it captures the fact that to properly measure the information in a deep network we need to adapt the choice of noise to the model. Suppose some information is encoded, using some algorithm, in the weights of the network: if this does not affect the training or the classification accuracy, we can consider that variability as noise, rather than information, for the network. In this section, we show how different priors and posteriors result in known definitions of information, in particular Shannon’s and Fisher’s. This section is inspired by achille2019information , who derive these relations in the even more general setting of Kolmogorov Complexity.
In some cases, there may be an actual distribution over the possible training sets, so we may aim to find the prior that minimizes the expected test error bound in eq. 4, which we call adapted prior. The following proposition shows that the information measure that minimizes the bound in expectation is the Shannon Mutual Information between weights and dataset.
Proposition 3.3 (Shannon Information in the Weights).
Assume the dataset $\mathcal{D}$ is sampled from a distribution $p(\mathcal{D})$, and let the outcome of training on a sampled dataset be described by a distribution $Q(w|\mathcal{D})$. Then the prior minimizing the expected complexity $\mathbb{E}_{\mathcal{D}}[C_\beta]$ is the marginal $P(w) = \mathbb{E}_{\mathcal{D}}\big[Q(w|\mathcal{D})\big]$, and the expected Information in the Weights is given by
$$\mathbb{E}_{\mathcal{D}}\,\mathrm{KL}\big(Q(w|\mathcal{D})\,\|\,P(w)\big) = I(w; \mathcal{D}). \tag{5}$$
Here $I(w; \mathcal{D})$ is Shannon’s mutual information between the weights and the dataset, where the weights are seen as a (stochastic) function of the dataset given by the training algorithm (SGD).
Note that, in this case, the prior is optimal given the choice of the training algorithm (i.e., the map $Q(w|\mathcal{D})$) and the distribution of training datasets $p(\mathcal{D})$. Using this prior we have the following expression for the expectation over $\mathcal{D}$ of eq. 2:
$$\mathbb{E}_{\mathcal{D}}\big[C_\beta\big] = \mathbb{E}_{\mathcal{D}}\,\mathbb{E}_{w \sim Q(w|\mathcal{D})}\big[L_{\mathcal{D}}(w)\big] + \beta\, I(w; \mathcal{D}). \tag{6}$$
Notice that this is the general form of an Information Bottleneck tishby2000information. However, the use of the IB in Deep Learning has focused on the activations shwartz2017opening, which are the bottleneck between the input $x$ and the output $y$. Instead, the Information Lagrangian eq. 6 concerns the weights of the network, which are the bottleneck between the training dataset and inference on the future test distribution. Hence, it relates directly to the training process, accounts for the finite nature of the dataset, and can yield bounds on future performance. The Information Bottleneck for the weights was first proposed by achille2018emergence, but derived in a more limited setting.
While the adapted prior of Proposition 3.3 allows us to compute an optimal generalization bound, it requires averaging with respect to all possible datasets, which requires knowledge of the task distribution and is, in general, aspirational. At the other extreme, we can consider an uninformative (uniform) prior, and obtain the (log-determinant of the) Fisher as a measure of information.
Proposition 3.4 (Fisher Information in the Weights).
Assume an isotropic Gaussian prior $P = N(0, \sigma^2 I)$ and a Gaussian posterior $Q = N(w^*, \Lambda)$, where $w^*$ is a global minimum of the cross-entropy loss. Then, for $\sigma \to \infty$, that is, as the prior becomes uniform, we have that:
(1) For small $\beta$, the covariance $\Lambda^*$ that minimizes eq. 2 tends to $\Lambda^* = \beta\, H^{-1}$, in accordance with the Cramér–Rao bound, where $H$ is the Hessian of the cross-entropy loss at $w^*$, which at a global minimum coincides with $F$, the Fisher Information Matrix;
(2) The Information in the Weights is given by
$$\mathrm{KL}(Q^*\,\|\,P) = \tfrac{1}{2}\log|F| + C, \tag{7}$$
where the constant $C$ collects terms depending only on $\beta$, $\sigma$ and the number of weights. Note that the constant does not depend on $w^*$ and hence can be ignored when comparing weight configurations.
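The limiting covariance can be checked numerically in one dimension. In this scalar sketch (all constants made up), the loss is quadratic with curvature $h$, the posterior is $N(w^*, \lambda)$, and the prior is a very broad Gaussian; minimizing eq. 2 over $\lambda$ by grid search recovers $\lambda^* \approx \beta/h$, i.e., $\beta H^{-1}$.

```python
import numpy as np

# Scalar version: quadratic loss with curvature h, posterior N(w*, lam),
# prior N(0, s2) with s2 large. Objective: 0.5*h*lam + beta * KL(Q || P).
h, beta, s2, w_star = 3.0, 0.5, 1e6, 1.0

lam = np.linspace(1e-5, 2.0, 200_000)
obj = 0.5 * h * lam + beta * 0.5 * (lam / s2 + w_star**2 / s2 - 1
                                    + np.log(s2 / lam))
lam_opt = lam[np.argmin(obj)]
print(lam_opt, beta / h)  # numeric argmin vs. the beta * H^{-1} prediction
```

The closed-form stationarity condition for this objective is $\lambda^* = \beta/(h + \beta/\sigma^2)$, which tends to $\beta/h$ as the prior flattens; the grid search confirms it.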
The above proposition assumes that the configuration of the weights to which we are adding noise is a global minimum, in which case the Hessian and the Fisher matrix coincide. In fact, we have the following decomposition of the Hessian (martens2014new, eq. 6 and Sect. 9.2):
$$H = \frac{1}{N}\sum_{i=1}^N J_i^T\, H_{\ell_i}\, J_i \;+\; \frac{1}{N}\sum_{i=1}^N \sum_{j} [\nabla_{f}\, \ell_i]_j\; \nabla^2_w f_j(x_i), \tag{8}$$
where $f(x_i, w)$ is the output of the network for input $x_i$, $J_i = \nabla_w f(x_i, w)$, $\ell_i$ is the cross-entropy loss for the $i$-th sample, $H_{\ell_i} = \nabla^2_f\, \ell_i$, and $\nabla^2_w f_j(x_i)$ is the Hessian of the $j$-th component of the output. The first (Gauss–Newton) term coincides with the Fisher for the cross-entropy loss. If most training samples are predicted correctly, then $\nabla_f\, \ell_i \approx 0$ and $H \approx F$. Otherwise, there is no guarantee that $H$ will be positive definite, making the second-order approximation used in Proposition 3.4 invalid, since it suggests that adding noise along the negative directions can decrease the loss unboundedly. Following martens2014new, we use a more robust second-order approximation by ignoring the second part of eq. 8, hence using the Fisher as a stable positive semi-definite approximation of the curvature. In this setting, eq. 7 remains valid at all points.
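The decomposition can be verified in the simplest case. For logistic regression the output is linear in the weights, so the second term of eq. 8 vanishes identically and the Hessian of the cross-entropy loss equals the Gauss–Newton/Fisher term exactly, at any weight configuration and for any labels. The sketch below (made-up data) checks this by comparing a finite-difference Hessian against the Fisher formula.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Logistic regression: f(x, w) = w . x is linear in w, so the term of eq. 8
# involving the Hessian of f w.r.t. w is identically zero, and H = F exactly.
X = rng.normal(size=(500, 3))
y = rng.binomial(1, 0.5, size=500)      # arbitrary labels
w = rng.normal(size=3)                  # arbitrary (not fitted) weights
p = sigmoid(X @ w)

def grad(w):                            # gradient of the cross-entropy loss
    return X.T @ (sigmoid(X @ w) - y) / len(X)

# Finite-difference Hessian of the loss.
eps = 1e-5
H_fd = np.column_stack([(grad(w + eps * e) - grad(w - eps * e)) / (2 * eps)
                        for e in np.eye(3)])

# Fisher / Gauss-Newton term: (1/N) sum_i p_i (1 - p_i) x_i x_i^T.
F = (X * (p * (1 - p))[:, None]).T @ X / len(X)

err = np.abs(H_fd - F).max()
print(err)  # ~0: for a linear-in-w model the Hessian and the Fisher coincide
```

For deep networks the residual term is nonzero, which is exactly why the Fisher is preferred as a stable curvature proxy away from well-fit points.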
3.3 Information in the Learning Dynamics
In Section 3.1 we have seen that the Shannon Information of the weights controls generalization. In Section 4 we will see that the Fisher controls invariance of the activations. Can we just pick one measure of information and use it to characterize both generalization and invariance?
In principle, the Fisher can also be used in Theorem 3.2 to obtain generalization bounds; however, it is likely to give a vacuous bound if used directly, as it is usually much larger than the optimal Shannon Information. In this section, we argue that, for a deep network trained with stochastic gradient descent on a given domain, Fisher and Shannon go hand-in-hand. This hinges on the fact that: (i) The Fisher depends on the domain, but not on the labels, hence all tasks sharing the same domain share the same Fisher, (ii) SGD implicitly minimizes the Fisher, hence, (iii) SGD tends to concentrate the solutions in a restricted area of low Fisher solutions, hence minimizing the Shannon Information.
While (i) follows directly from the definition of the Fisher Information Matrix (Section 2), (ii) is not immediate, as SGD does not explicitly minimize the Fisher. The result hinges on the fact that, by adding noise to the optimization process, SGD will tend to escape sharp minima and hence, since the Fisher is a measure of the curvature of the loss function, it will evade solutions with high Fisher. We can formalize this reasoning using a slight reformulation of the Eyring–Kramers law berglund2011kramers for stochastic processes in the form of eq. 1.
Proposition 3.6 (berglund2011kramers, eq. 1.9).
Let $w^*$ be a local minimum of the loss function $L_{\mathcal{D}}(w)$. Consider the path joining $w^*$ with any other minimum which has the least increase in the loss function, and let $w^s$, the point with the highest loss along the path, be a saddle point (the relevant saddle) with a single negative eigenvalue. Then, in the limit of small step size $\eta$, and assuming isotropic gradient noise, the expected time before SGD escapes the minimum is given by
$$\mathbb{E}[\tau] \;\propto\; \exp\!\Big(\frac{f(w^s) - f(w^*)}{T}\Big),$$
where we have defined the free energy $f(w) = L_{\mathcal{D}}(w) + \frac{T}{2}\log|F_w|$, where $F_w$ is the Fisher computed at $w$, and $T = \eta/(2b)$, where $b$ is the batch size. In particular, increasing $T$ (the “temperature” of SGD) makes SGD more likely to avoid minima with high Fisher Information.
We can informally summarize the above statement as saying that SGD, rather than minimizing directly the loss function, minimizes the free energy $f(w) = L_{\mathcal{D}}(w) + \frac{T}{2}\log|F_w|$. Hence, surprisingly, the (Fisher) Information in the Weights controls the dynamics by slowing down learning when more information needs to be stored.
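The escape behavior is easy to observe in one dimension. The sketch below (all constants made up) runs a noisy gradient descent on a loss with two minima of nearly equal depth but very different curvature, starting inside the sharp one; with Langevin-style noise standing in for SGD's mini-batch noise, the iterate spends most of its time in the flat (low-Fisher) minimum.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy loss with two minima of (nearly) equal depth: a sharp well near x = -1
# (curvature ~20) and a flat well near x = +1 (curvature ~2), plus a weak
# confining term 0.05 * x^2. All constants are chosen for illustration only.
def grad(x):
    g  = 0.1 * x
    g += (x + 1) / 0.05 * np.exp(-(x + 1) ** 2 / (2 * 0.05))   # sharp well
    g += (x - 1) / 0.50 * np.exp(-(x - 1) ** 2 / (2 * 0.50))   # flat well
    return g

eta, T, steps = 0.01, 0.2, 600_000
noise = np.sqrt(2 * T * eta) * rng.normal(size=steps)  # Langevin-style noise

x, in_flat = -1.0, 0                    # start inside the *sharp* minimum
for k in range(steps):
    x = x - eta * grad(x) + noise[k]
    in_flat += x > 0

frac_flat = in_flat / steps
print(frac_flat)  # well above 1/2: the noise drives the iterate to the flat well
```

Raising the temperature $T$ (in SGD: a larger learning rate or smaller batch) strengthens this preference, matching the free-energy picture above.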
We can now finally prove (iii), connecting the Fisher Information with the Shannon, which is at face value unrelated. The proof hinges on an approximation of the mutual information using the Fisher Information presented in brunel1998mutual .
Proposition 3.7.
Assume the space of datasets admits a differentiable parametrization (for example, by parametrizing the labels by the weights of an overfitting model, and sampling through a differentiable sampling algorithm). Assume that $p(\mathcal{D})$ is concentrated along a single dataset (i.e., the dataset that was used to train). Then, we have the approximation:
$$I(\mathcal{D}; w) \;\approx\; H(\mathcal{D}) + \mathbb{E}_{\mathcal{D}}\Big[\tfrac{1}{2}\log\big|J^T F\, J\big|\Big] - \tfrac{d}{2}\log(2\pi e),$$
where $H(\mathcal{D})$ is the entropy of the dataset distribution and $d$ the dimension of its parametrization; $w^*_{\mathcal{D}}$ are the weights obtained at the end of training on dataset $\mathcal{D}$, and we assume that the Fisher $F$ computed at $w^*_{\mathcal{D}}$ does not change much if we perturb the dataset slightly (this assumption is mainly to keep the expression uncluttered, and a similar result can be derived without this additional hypothesis). The term $J = \nabla_{\mathcal{D}}\, w^*_{\mathcal{D}}$ is the Jacobian of the final point with respect to changes of the training set.
Notice that the norm of the Jacobian $J$ can be interpreted as a measure of the stability of SGD, that is, how much the final solution changes if the dataset is perturbed hardt2015train. Hence, reducing the Fisher of the final weights found by SGD (i.e., the flatness of the minimum), or making SGD more stable, i.e., reducing $\|J\|$, both reduce the mutual information $I(\mathcal{D}; w)$, and hence improve generalization per the PAC-Bayes bound.
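The stability interpretation can be illustrated with a training map that has a closed form. In the sketch below (made-up data), the "training algorithm" is ridge regression; perturbing a single training label and measuring how far the learned weights move shows that a more strongly regularized, hence more stable, training map has a smaller Jacobian with respect to the training set.

```python
import numpy as np

rng = np.random.default_rng(0)

# A closed-form "training algorithm": ridge regression. Its sensitivity to the
# training targets shrinks as the regularization strength alpha grows.
X = rng.normal(size=(50, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=50)

def train(y, alpha):
    return np.linalg.solve(X.T @ X + alpha * np.eye(5), X.T @ y)

y_pert = y.copy()
y_pert[0] += 1.0                        # perturb a single training label

diffs = [np.linalg.norm(train(y, a) - train(y_pert, a))
         for a in (0.1, 1.0, 10.0, 100.0)]
print(diffs)  # monotonically decreasing: a more stable map moves less
```

In the language of the proposition, each entry of `diffs` is a finite-difference probe of $\|J\|$ along one direction of dataset perturbation.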
4 The Role of Information in the Invariance of the Representation
Thus far we have seen that training a DNN using SGD recovers weights that are a sufficient (they minimize the training loss) and minimal (they have low Information, either Shannon’s or Fisher’s) representation of the training dataset $\mathcal{D}$. The PAC-Bayes bound guarantees that, on average, sufficiency of the weights – a representation of the training set – implies sufficiency of the activations – a representation of the input datum at test time. What we are missing is a guarantee that, in addition to being sufficient, the representation of the test datum is also minimal, that is, that the information in the activations is also minimized. This is the missing link to invariance since, as shown by achille2018emergence, a sufficient representation is invariant if and only if it is minimal. In this section, we derive this missing link.
As we mentioned in the introduction, there is an unresolved controversy in how to define and measure the information in the activations which, after training is complete, are a deterministic function of the input: shwartz2017opening argue that DNNs work by compressing the inputs, but the experimental setup has been challenged saxe2018information and no consensus has been reached chelombiev2019adaptive, as using a deterministic map to compress a continuous random variable presents technical challenges. achille2018emergence raises the point that the weights of a DNN should be considered stochastic, where the stochasticity is identified by the amount of information they store, and proves that the mutual information between activations and inputs is in fact upper-bounded by the Information in the Weights, which can be considered as a noisy communication channel. A similar point of view is adopted later by goldfeld2018estimating, who estimate mutual information under the hypothesis of inputs with isotropic noise. Both achille2018emergence and shwartz2017opening suggest connecting the noise in the weights and/or activations with the noise of SGD, although no formal connection has been established thus far.
Our main contribution, developed in this section, is to establish the connection between minimality of the weights and invariance of the activations, which resolves the conflicting points of view. First, for a fixed deterministic DNN, without stochasticity, we introduce the notion of effective information in the activations which, rather than measuring the information that an optimal decoder could extract from the activations, measures the information that the network effectively uses in order to classify. Using this definition, we show that the Fisher Information in the Weights bounds both the Fisher and Shannon Information in the activations. Notice that we already related the Fisher Information to the noise of SGD (Proposition 3.6).
4.1 Induced Stochasticity and Effective Information in the Activations
We denote with $z = f_w(x)$ the activations of a generic intermediate layer of a DNN, a deterministic function of the input $x$. According to the definition of Information in the Weights, small perturbations of uninformative weights cause small perturbations in the loss. Hence, information in the activations that is not preserved by such perturbations is not used by the classifier. This suggests the following definition.
Definition 4.1 (Effective Information in the Activations). Let $w$ be the value of the weights, and let $w' = w + \delta w$, with $\delta w \sim N(0, \Lambda^*)$, be the weights perturbed by the optimal Gaussian noise minimizing eq. 2 at level $\beta$ for a uniform prior (Proposition 3.4). We call effective information (at noise level $\beta$) the amount of information about $x$ that is not destroyed by the added noise:
$$I_{\mathrm{eff}}(x; z) := I(x; z_{w'}),$$
where $z_{w'} = f_{w'}(x)$ are the activations computed by the perturbed weights $w'$.
Using this definition, we obtain the following characterization of the information in the activations.
Proposition 4.2. For small values of $\beta$ we have:
(1) The Fisher Information of the activations w.r.t. the inputs is:
$$F_{z|x} \;=\; \nabla_x z^T\, \Sigma_z^{-1}\, \nabla_x z, \qquad \Sigma_z = \beta\, \nabla_w z\; F^{-1}\, \nabla_w z^T,$$
where $\nabla_x z$ is the Jacobian of the representation given the input, and $\nabla_w z$ is the Jacobian of the representation with respect to the weights. In particular, the Fisher of the activations goes to zero when the Fisher of the weights goes to zero.
(2) Under the hypothesis that, for any representation $z$, the distribution of inputs that could generate it concentrates around its maximum, we have:
$$I_{\mathrm{eff}}(x; z) \;\approx\; H(x) + \mathbb{E}_x\Big[\tfrac{1}{2}\log\frac{|F_{z|x}|}{(2\pi e)^{d}}\Big],$$
hence, by the previous point, when the Fisher Information of the weights decreases, the effective mutual information between inputs and activations also decreases.
Hence, surprisingly, decreasing the Fisher Information that the weights have about the training set (which can be done by increasing the noise of SGD) decreases the apparently unrelated effective information between inputs and activations at test time. Moreover, making $\nabla_x z$ small, i.e., reducing the Lipschitz constant of the network, also reduces the effective information.
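The mechanism can be seen in the smallest possible case: a single linear unit. In the sketch below (made-up weights and inputs), Gaussian weight noise turns the deterministic activation into a Gaussian whose spread grows with the noise level; we use the separation statistic $d'$ between the output distributions of two nearby inputs as a crude, hedged proxy for how much effective information about the input survives.

```python
import numpy as np

rng = np.random.default_rng(0)

# One linear unit z = w . x. With weight noise N(0, lam * I), the activation
# for input x is Gaussian with std sqrt(lam) * ||x||, so two nearby inputs
# remain distinguishable only while the weight noise is small.
w = np.array([1.0, -2.0])
x1, x2 = np.array([0.50, 0.3]), np.array([0.55, 0.3])

def dprime(lam, n=200_000):
    # d' = |mean separation| / spread of the two activation distributions.
    z1 = (w + rng.normal(scale=np.sqrt(lam), size=(n, 2))) @ x1
    z2 = (w + rng.normal(scale=np.sqrt(lam), size=(n, 2))) @ x2
    return abs(z1.mean() - z2.mean()) / np.sqrt(0.5 * (z1.var() + z2.var()))

ds = [dprime(l) for l in (1e-4, 1e-2, 1.0)]
print(ds)  # decreasing: more weight noise, less effective information in z
```

Shrinking either the input sensitivity (the Jacobian $\nabla_x z$, here $x_1 - x_2$ times $w$) or increasing the weight noise collapses the two output distributions together, which is the content of the proposition above.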
We say that a random variable $n$ is a nuisance for the task $y$ if $n$ affects the input $x$ but is not informative of $y$, i.e., $I(n; y) = 0$. We say that a representation $z$ is maximally invariant to $n$ if $I(n; z)$ is minimal among the sufficient representations, that is, the representations that capture all the information about the task contained in the input, $I(y; z) = I(y; x)$. The following claim in achille2018emergence connects invariance to compression (minimality):
Proposition 4.3 (achille2018emergence, Proposition 3.1).
A representation $z$ is maximally invariant to all nuisances at the same time if and only if $I(x; z)$ is minimal among the sufficient representations.
Together with Proposition 4.2, this shows that a network that has minimal complexity (i.e., minimal Information in the Weights) is forced to learn a representation that is effectively invariant to nuisances; that is, invariance emerges naturally during training by reducing the amount of information stored in the weights.
As a side note, we may wonder what distribution of the inputs would maximize the effective mutual information $I_{\mathrm{eff}}(x; z)$, that is, what distribution the network is maximally adapted to represent brunel1998mutual. Maximizing with respect to $p(x)$ we obtain: $p^*(x) \propto \sqrt{|F_{z|x}|}$. Using this, we obtain the following bound on the mutual information for any input distribution:
$$I_{\mathrm{eff}}(x; z) \;\leq\; \log \int \sqrt{\frac{|F_{z|x}|}{(2\pi e)^{d}}}\; dx.$$
Intuitively, this can be interpreted as the volume of the representation space, that is, how many well separated representations can be obtained mapping inputs $x$, taking into account that, because of a small Lipschitz constant of the network, or because of noise, multiple inputs may be mapped to similar representations.
5 Discussion

Once trained, deep neural networks are deterministic functions of their input, and it is not clear what “information” they retain, what they discard, and how they process unseen data. Ideally, we would like them to process future data by retaining all that matters for the task (sufficiency) and discarding all that does not (nuisance variability), leading to invariance. But we do not have access to the test data, and there is no rigorous or even formal connection with properties of the training set.
This paper extends and develops results presented informally by achille2018emergence and is, to the best of our knowledge, the first to define the information in a deep network, which is in the weights that represent the training set, in a way that connects it to generalization and invariance, which are properties of the activations of the test data. This Information in the Weights is neither Shannon’s (used in achille2018emergence ) nor Fisher’s, but a more general one that encompasses them as special cases.
We leverage several existing results in the literature: the Information Lagrangian is introduced in achille2018emergence , but we extend it beyond Shannon Information, which presents some challenges when the source of stochasticity is not explicit. We draw on Fisher’s Information, that formalizes a notion of sensitivity of a set of parameters, and is not tied to a particular assumption of generative model. We leverage the PAC-Bayes bound to connect sufficiency of the weights to sufficiency of the activations, and provide the critical missing link to connect minimality of the weights – that arises from inductive bias of SGD when training deep networks – with minimality of the activations.
We put the emphasis on the distinction between Information in the Weights, which thus far only achille2018emergence have studied, and information in the activations, which all other information-theoretic approaches to Deep Learning refer to. One pertains to representations of past data, which we measure. The other pertains to desirable properties of future data, that we cannot measure, but we can bound. We provide the first measurable bound, exploiting the Fisher Information, which enables reasoning about “effective stochasticity” even if a network is a deterministic function.
Our results connect to generalization bounds through PAC-Bayes, and account for the finite nature of the training set, unlike other information-theoretic approaches to Deep Learning that only provide results in expectation. In the Appendix, we provide proofs of the claims.
-  Alessandro Achille, Giovanni Paolini, Glen Mbeng, and Stefano Soatto. The Information Complexity of Learning Tasks, their Structure and their Distance. arXiv e-prints, page arXiv:1904.03292, Apr 2019.
-  Alessandro Achille, Matteo Rovere, and Stefano Soatto. Critical learning periods in deep networks. In International Conference on Learning Representations, 2019.
-  Alessandro Achille and Stefano Soatto. Emergence of invariance and disentanglement in deep representations. Journal of Machine Learning Research, 19(1):1947–1980, 2018.
-  Nils Berglund. Kramers’ law: Validity, derivations and generalisations. arXiv preprint arXiv:1106.5799, 2011.
-  Nicolas Brunel and Jean-Pierre Nadal. Mutual information, fisher information, and population coding. Neural computation, 10(7):1731–1757, 1998.
-  Ivan Chelombiev, Conor Houghton, and Cian O’Donnell. Adaptive estimators show information compression in deep neural networks. arXiv preprint arXiv:1902.09037, 2019.
-  Thomas M Cover and Joy A Thomas. Elements of information theory. John Wiley & Sons, 2012.
-  Gintare Karolina Dziugaite and Daniel M Roy. Computing nonvacuous generalization bounds for deep (stochastic) neural networks with many more parameters than training data. Proceedings of the 33rd Conference on Uncertainty in Artificial Intelligence, 2017.
-  Ziv Goldfeld, Ewout van den Berg, Kristjan Greenewald, Igor Melnyk, Nam Nguyen, Brian Kingsbury, and Yury Polyanskiy. Estimating information flow in neural networks. arXiv preprint arXiv:1810.05728, 2018.
-  Moritz Hardt, Benjamin Recht, and Yoram Singer. Train faster, generalize better: Stability of stochastic gradient descent. arXiv preprint arXiv:1509.01240, 2015.
-  Geoffrey E Hinton and Drew Van Camp. Keeping the neural networks simple by minimizing the description length of the weights. Proceedings of the sixth annual conference on Computational learning theory, pages 5–13. ACM, 1993.
-  Qianxiao Li, Cheng Tai, and Weinan E. Stochastic modified equations and adaptive stochastic gradient algorithms. International Conference on Machine Learning, page 2101–2110, 2017.
-  James Martens. New insights and perspectives on the natural gradient method. arXiv preprint arXiv:1412.1193, 2014.
-  David McAllester. A pac-bayesian tutorial with a dropout bound. arXiv preprint arXiv:1307.2118, 2013.
-  Andrew Michael Saxe, Yamini Bansal, Joel Dapello, Madhu Advani, Artemy Kolchinsky, Brendan Daniel Tracey, and David Daniel Cox. On the information bottleneck theory of deep learning. International Conference of Learning Representations, 2018.
-  Ravid Shwartz-Ziv and Naftali Tishby. Opening the black box of deep neural networks via information. arXiv preprint arXiv:1703.00810, 2017.
-  Naftali Tishby, Fernando C Pereira, and William Bialek. The information bottleneck method. The 37th annual Allerton Conference on Communication, Control, and Computing, pages 368–377, 1999.
-  Naftali Tishby and Noga Zaslavsky. Deep learning and the information bottleneck principle. In Information Theory Workshop (ITW), 2015 IEEE, pages 1–5. IEEE, 2015.
Appendix A Empirical validation
Relation between Fisher and Shannon Information
In general, computing the Shannon Information between a dataset and the parameters of a model is not tractable. However, here we show an example of a simple model that can be trained with SGD, replicates some aspects typical of the loss landscape of DNNs, and for which both the Shannon and the Fisher Information can be estimated easily. In accordance with our predictions in Section 3.3, Figure 1 shows that (center) increasing the temperature of SGD, for example by reducing the batch size, reduces both the Fisher and the Shannon Information of the weights (Proposition 3.7), and that (left) this happens because, as the temperature increases, the solutions discovered by SGD concentrate in areas of the loss landscape with low Fisher Information (Proposition 3.6).
We now describe the toy model. The dataset is generated by first sampling a mean, and then sampling the data points around it. The task is to regress the mean of the dataset by minimizing a squared loss, where the model parameters (weights) enter the loss through some fixed parametrization. To simulate the over-parametrization and complex loss landscape of DNNs, we pick the parametrization shown in Figure 1 (right). Notice in particular that multiple values of the parameter yield the same output: this ensures that the loss function has many equivalent minima. However, these minima have different sharpness, and hence different Fisher Information, since the parametrization is sharper near the origin. Proposition 3.6 suggests that SGD is more likely to converge to the minima with low Fisher Information, which is confirmed in Figure 1 (left), showing the marginal distribution of the end point of SGD over all datasets and trainings. Having found this marginal over all trainings and datasets, we can compute the Shannon Information. Note that we model the weights recovered by SGD at the end of training as centered at the discovered minimum, with the minimum variance of the estimation, which is given by the Cramér-Rao bound. The (log-determinant of the) Fisher Information can instead easily be computed in closed form from the loss function. In Figure 1 (center) we show how these quantities change as the batch size varies. Notice that when the batch size equals the size of the dataset, the algorithm reduces to standard gradient descent, which retains the largest amount of information in the weights.
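The toy experiment can be sketched as follows. This is a minimal sketch: the non-injective parametrization `w(u) = sin(u*u)`, the learning rate, and all dataset sizes are illustrative assumptions, not the values used for Figure 1; the printed averages let one compare the sharpness (closed-form Fisher Information) of the minima reached at two SGD temperatures.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical non-injective parametrization: several values of u yield the
# same w(u), giving equivalent minima with different sharpness. (The actual
# parametrization of Figure 1 differs; this choice is only an illustration.)
def w(u):
    return np.sin(u * u)

def dw(u):
    return 2.0 * u * np.cos(u * u)

def run_sgd(mu, batch_size, n=500, sigma=1.0, lr=0.005, steps=3000):
    """Regress the dataset mean mu by minimizing (w(u) - x_i)^2 with SGD."""
    x = rng.normal(mu, sigma, size=n)            # dataset
    u = rng.uniform(1.3, 2.8)                    # random initialization
    for _ in range(steps):
        xb = rng.choice(x, size=batch_size)      # mini-batch
        u -= lr * 2.0 * (w(u) - xb.mean()) * dw(u)
    return u

def log_fisher(u, sigma=1.0):
    """Closed-form (scalar) log Fisher Information of the Gaussian model in u."""
    return np.log(dw(u) ** 2 / sigma ** 2 + 1e-12)

# Compare the sharpness of the end points at two SGD temperatures
# (small batch = high temperature, large batch = low temperature).
for batch_size in (1, 100):
    ends = [run_sgd(mu=0.3, batch_size=batch_size) for _ in range(10)]
    print(batch_size, np.mean([log_fisher(u) for u in ends]))
```

Averaging the end points over many runs and datasets gives the marginal needed for the Shannon Information estimate described above.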
The value of the Fisher Information cannot be directly compared to the Shannon Information, since it is defined only up to an additive constant due to the improper uniform prior. However, using the proper Gaussian prior that leads to the lowest expected value, we obtain a value of the “Gaussian” Information in the Weights between 4000 and 5000 nats, versus the much smaller Shannon Information: surprisingly, even though SGD minimizes a much larger (Fisher) bound, it still implicitly minimizes the optimal Shannon bound.
Fisher Information for CIFAR-10
To validate our predictions on a larger-scale architecture, we train an off-the-shelf ResNet-18 on CIFAR-10 with SGD (momentum 0.9, weight decay 0.0005, annealing the learning rate by a factor of 0.97 per epoch).
First, we compute the Fisher Information (more precisely, its trace) during training for different values of the batch size (and hence of the “temperature” of SGD). In accordance with Proposition 3.6, Figure 2 (left) shows that after 30 epochs of training the networks trained with a small batch size have a much lower Fisher Information.
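The trace of the Fisher Information can be estimated by sampling labels from the model's own predictive distribution. Below is a minimal sketch of this Monte-Carlo estimator on a linear softmax model rather than a ResNet-18; all sizes, names, and the model itself are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fisher_trace(W, X, n_samples=10):
    """Monte-Carlo estimate of tr F for a softmax classifier p(y|x) = softmax(Wx).

    tr F = E_x E_{y ~ p(y|x)} ||grad_W log p(y|x)||^2, where the labels y
    are sampled from the model's own predictive distribution, not the data.
    """
    total = 0.0
    for x in X:
        p = softmax(W @ x)
        for _ in range(n_samples):
            y = rng.choice(len(p), p=p)          # sample label from the model
            onehot = np.eye(len(p))[y]
            grad = np.outer(onehot - p, x)       # grad_W log p(y|x)
            total += (grad ** 2).sum()
    return total / (len(X) * n_samples)

# Toy data: 3 classes in 5 dimensions, random linear classifier.
X = rng.normal(size=(50, 5))
W = rng.normal(size=(3, 5)) * 0.1
print(fisher_trace(W, X))
```

For a deep network the same estimator applies, with `grad` replaced by the gradient of the log-likelihood with respect to all weights.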
Then, to check whether the Fisher Information correlates with the amount of information contained in the dataset, we train using only 2, 3, 4, and up to all 10 classes of CIFAR-10. Intuitively, the dataset with only 2 classes should contain less information than the dataset with 10 classes, and correspondingly the Fisher Information in the Weights of the network should be smaller. We confirm this prediction in Figure 2 (right).
Fisher Information and dynamics of feature learning
To see whether changes in the Fisher Information correspond to the network learning features of increasing complexity, in Figure 3 we train a 3-layer fully connected network on a simple classification problem of 2D points, and plot both the Fisher Information and the classification boundaries during training. Since the network is relatively small, in this experiment we compute the Fisher Information Matrix exactly from its definition. As different features are learned, we observe corresponding “bumps” in the Fisher Information. This is compatible with the hypothesis advanced by , whereby feature learning may correspond to crossing narrow bottlenecks in the loss landscape, followed by a compression phase as the network moves toward flatter areas of the loss landscape.
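When the model is small, the Fisher Information Matrix can be computed exactly from its definition by taking the exact expectation over the labels instead of sampling them. A minimal sketch on a softmax model small enough to differentiate by hand (the 3-layer network of Figure 3 is not reproduced here; all sizes are illustrative assumptions):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def exact_fisher(W, X):
    """Exact Fisher Information Matrix of a softmax classifier, from the
    definition F = (1/N) sum_x sum_y p(y|x) g(x,y) g(x,y)^T, where
    g = grad_theta log p(y|x) and theta = vec(W)."""
    d = W.size
    k = W.shape[0]
    F = np.zeros((d, d))
    for x in X:
        p = softmax(W @ x)
        for y in range(k):                        # exact expectation over labels
            g = np.outer(np.eye(k)[y] - p, x).ravel()
            F += p[y] * np.outer(g, g)
    return F / len(X)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 4))
W = rng.normal(size=(3, 4)) * 0.5
F = exact_fisher(W, X)
print(np.linalg.eigvalsh(F).min() >= -1e-9)   # PSD by construction → True
```

Tracking this matrix (or its trace) at regular intervals during training produces curves like those in Figure 3.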
Appendix B Proofs
Proof of Proposition 3.3.
For a fixed training algorithm , we want to find the prior that minimizes the expected complexity of the data:
Notice that only the second term depends on . Let be the marginal distribution of , averaged over all possible training datasets. We have
Since the KL divergence is always non-negative, the optimal “adapted” prior is given by the marginal distribution of the weights over all datasets. Finally, by the definition of Shannon mutual information, we get
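In reconstructed notation, with $q(w|\mathcal{D})$ the distribution of weights produced by the training algorithm, $P(w)$ the prior, and $q(w) = \mathbb{E}_{\mathcal{D}}[q(w|\mathcal{D})]$ the marginal over datasets, the decomposition underlying this argument reads:

```latex
\mathbb{E}_{\mathcal{D}}\big[\mathrm{KL}\big(q(w\,|\,\mathcal{D})\,\|\,P(w)\big)\big]
  = \underbrace{\mathbb{E}_{\mathcal{D}}\big[\mathrm{KL}\big(q(w\,|\,\mathcal{D})\,\|\,q(w)\big)\big]}_{=\,I(w;\,\mathcal{D})}
  \;+\; \mathrm{KL}\big(q(w)\,\|\,P(w)\big).
```

Only the last term depends on the prior, and it vanishes exactly when $P(w) = q(w)$, leaving the mutual information $I(w; \mathcal{D})$ as the best achievable expected complexity.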
Proof of Proposition 3.4.
Since both and are Gaussian distributions, the KL divergence can be written in the standard closed form, where is the number of components of .
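For reference, the standard closed form for the KL divergence between two multivariate Gaussians $q = \mathcal{N}(\mu_q, \Sigma_q)$ and $p = \mathcal{N}(\mu_p, \Sigma_p)$, with $k$ the number of components, is:

```latex
\mathrm{KL}(q\,\|\,p) = \frac{1}{2}\Big[
  \mathrm{tr}\big(\Sigma_p^{-1}\Sigma_q\big)
  + (\mu_p - \mu_q)^{\top}\Sigma_p^{-1}(\mu_p - \mu_q)
  - k
  + \log\frac{\det \Sigma_p}{\det \Sigma_q}
\Big].
```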
Let be a local minimum of the cross-entropy loss , and let be the Hessian of in . Set . Assuming that a quadratic approximation holds in a sufficiently large neighborhood, we obtain
The gradient with respect to is
Setting it to zero, we obtain the minimizer .
Recall that the Hessian of the cross-entropy loss coincides with the Fisher Information Matrix at , since is a critical point. Since the loss, and hence its Hessian, is not normalized by the number of samples , the exact relation is . Taking the limit for , we obtain the desired result. ∎
Proof of Proposition 3.7.
It is shown in  that, given two random variables and , and assuming that is concentrated around its MAP, the following approximation holds:
where is the Fisher Information that has about , and . We want to apply this approximation to , using the distribution . Hence, we need to compute the Fisher Information that the dataset has about the weights. Recall that, for a normal distribution, the Fisher Information is given by
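In reconstructed notation, the standard Fisher Information of a Gaussian family $x \sim \mathcal{N}(\mu(\theta), \Sigma(\theta))$ is:

```latex
F_{ij} = \frac{\partial \mu}{\partial \theta_i}^{\!\top} \Sigma^{-1} \frac{\partial \mu}{\partial \theta_j}
  \;+\; \frac{1}{2}\,\mathrm{tr}\!\Big( \Sigma^{-1} \frac{\partial \Sigma}{\partial \theta_i}\,
  \Sigma^{-1} \frac{\partial \Sigma}{\partial \theta_j} \Big),
```

where the second term, involving derivatives of the covariance, is the part that can be ignored under our assumptions.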
Using this expression in our case, and noticing that by our assumptions we can ignore the second part, we obtain:
which we can insert in eq. 11 to obtain:
Proof of Proposition 4.2.
(1) We need to compute the Fisher Information between and , that is:
In the limit of small , and hence small , we expand to first order about as follows:
where is the Jacobian of seen as a function of , with . Hence, given that , we obtain that given approximately follows the distribution .
We can now plug this into the expression for and compute: