A representation is a statistic of the data that is "useful". Classical information theory builds compressed representations that are easier to store or transmit; the goal is always to decode the representation and recover the original data. If we are given images and their labels, we could instead learn a representation that is useful for predicting the correct labels. Such a representation is a statistic of the data sufficient for the task of classification. If it is also minimal—say in its size—it discards information in the data that is not correlated with the labels. Such a representation is tailored to the chosen task: it would perform poorly at predicting other labels that are correlated with the discarded information. If instead the representation retained redundant information about the data, it could potentially predict other labels correlated with this extra information.
The premise of this paper is our desire to characterize the information discarded in the representation when it is fit on a task. We want to do so in order to learn representations that can be transferred easily to other tasks.
Our main idea is to choose a canonical task—in this paper, we pick reconstruction of the original data—as a way to measure the discarded information. Although one can use any canonical task, reconstruction is special. It is a "capture-all" task in the sense that achieving perfect reconstruction entails that the representation is lossless; the information discarded by the original task can therefore be measured as the information that helps solve the canonical task. This leads to the study of a Lagrangian, similar to the Information Bottleneck of Tishby et al. (2000), that combines three functionals: the rate R, an upper bound on the mutual information between the representation z learnt by the encoder e(z|x) and the input data x; the distortion D, which measures the quality of the reconstruction produced by the decoder d(x|z); and the classification loss C of the classifier c(y|z). The distortion and the classification loss are weighted by Lagrange multipliers λ and γ respectively. As Alemi and Fischer (2018) show, this Lagrangian can be formally connected to ideas in thermodynamics. We heavily exploit and specialize this point of view, as summarized next.
1.1 Summary of contributions
Our main technical observation is that this Lagrangian can be interpreted as a free-energy: a stochastic learning process that minimizes the corresponding Hamiltonian converges to the optimal free-energy. This corresponds to an "equilibrium surface" of the information-theoretic functionals R, D and C, and a surface of the model parameters at convergence. We prove that the equilibrium surface is convex and that its dual, the free-energy F(λ, γ), is concave. The free-energy is a function only of the Lagrange multipliers λ and γ, the family of model parameters, and the task, and is therefore invariant to the learning dynamics.
Second, we design a quasi-static stochastic process, akin to an equilibrium process in thermodynamics, that keeps the model parameters on the equilibrium surface. Such a process allows us to travel to any feasible values of the functionals (R, D, C) while ensuring that the parameters of the model remain on the equilibrium surface. We focus on one such process, the "iso-classification process", which automatically trades off the rate and the distortion to keep the classification loss constant.
We prescribe a quasi-static process that allows for a controlled transfer of learnt representations. It adapts the model parameters as the task is changed from some source dataset to a target dataset while keeping the classification loss constant. Such a process is in stark contrast to current techniques in transfer learning which do not provide any guarantees on the quality of the model on the target dataset.
We provide extensive experimental results which realize the theory developed in this paper.
2 Theoretical setup
This section introduces notation and preliminaries that form the building blocks of our approach.
Consider an encoder e(z|x) that encodes data x into a latent code z, and a decoder d(x|z) that decodes z back into the original data. If the true distribution of the data is p(x), we may define the following functionals:
H = E_{x∼p}[ −log p(x) ],   D = E_{x∼p} E_{z∼e(z|x)}[ −log d(x|z) ],   R = E_{x∼p}[ KL( e(z|x) ‖ m(z) ) ].   (1)
We denote expectation over the data using the notation E_{x∼p}[·]. The first functional, H, is the Shannon entropy of the true data distribution; it quantifies the complexity of the data. The distortion D measures the quality of the reconstruction through its log-likelihood. The rate R is a Kullback-Leibler (KL) divergence; it measures the average excess bits used to encode samples from e(z|x) using a code that was built for m(z), our approximation of the true marginal on the latent factors.
2.2 Rate-Distortion curve
The functionals in Eq. 1 come together to give the inequality H − D ≤ R (Eq. 2); the gap involves the KL-divergence between the learnt encoder e(z|x) and the true (unknown) conditional of the latent factors. This inequality forms the basis for a large body of literature on Evidence Lower Bounds (ELBO, see Kingma and Welling, 2013). Consider Fig. 1(a): if the capacity of our candidate distributions e, d and m is infinite, we can achieve the equality H − D = R. This is the thick black line in Fig. 1(a).
For variational families of finite capacity, say parameterized by weights θ and denoted by e_θ(z|x), d_θ(x|z) and m_θ(z) respectively, one obtains, as Alemi et al. (2017) argue, a convex rate-distortion (RD) curve (shown in red in Fig. 1(a)) corresponding to the Lagrangian in Eq. 3 that combines the rate and the distortion. This Lagrangian is a relaxation of the idea that, given a fixed variational family and data distribution p(x), there exists an optimal value of, say, the rate that best sandwiches Eq. 2; the optimal Lagrange multiplier is the one evaluated at that desired value of the rate.
2.3 Incorporating the classification loss
Let us create a classifier c(y|z) that uses the learnt representation z as its input, and define the classification loss as the negative log-likelihood of the prediction, C = E_{x∼p} E_{z∼e(z|x)}[ −log c(y|z) ], where y is the label of the datum x.
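To make the three functionals concrete, the following sketch (in PyTorch, with names of our choosing) estimates the rate R, the distortion D and the classification loss C on a mini-batch, assuming a Gaussian encoder e(z|x), a unit-variance Gaussian decoder d(x|z) and a standard-normal variational marginal m(z); these distributional choices are illustrative assumptions, not requirements of the theory.

import torch
import torch.nn.functional as F

def estimate_functionals(x, y, encoder, decoder, classifier):
    """Mini-batch estimates of the rate R, distortion D and classification loss C.
    encoder(x) is assumed to return the mean and log-variance of a Gaussian e(z|x);
    the variational marginal m(z) is taken to be a standard normal."""
    mu, logvar = encoder(x)
    z = mu + (0.5 * logvar).exp() * torch.randn_like(mu)          # sample z ~ e(z|x)
    # D = E[-log d(x|z)]: squared error for a unit-variance Gaussian decoder (up to a constant)
    D = 0.5 * ((decoder(z) - x) ** 2).sum(dim=-1).mean()
    # R = E[KL(e(z|x) || m(z))], in closed form for a Gaussian encoder and standard-normal m(z)
    R = 0.5 * (mu ** 2 + logvar.exp() - logvar - 1.0).sum(dim=-1).mean()
    # C = E[-log c(y|z)]: cross-entropy of the classifier evaluated on the latent code
    C = F.cross_entropy(classifier(z), y)
    return R, D, C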
If the parameters of the model—which now consists of the encoder e, the decoder d and the classifier c—are denoted by θ, the training process induces a distribution on θ given a finite dataset. In addition to R, D and C, the authors in Alemi and Fischer (2018) define a fourth functional S (Eq. 5), which is the relative entropy of the distribution on the parameters after training with respect to a prior distribution of our choosing. Using an argument very similar to that of Section 2.2, the four functionals R, D, C and S form a convex three-dimensional surface in the RDCS phase space; a schematic for a fixed value of S is shown in Fig. 1(b). We can again consider a Lagrange relaxation of this surface (Eq. 6).
Remark 1 (The “First Law” of learning).
Alemi and Fischer (2018) draw formal connections between the Lagrangian in Eq. 6 and the theory of thermodynamics. Just as the first law of thermodynamics is a statement about the conservation of energy in physical processes, the fact that the four functionals R, D, C and S are tied together in a smooth constraint leads to a relation among their differentials, which indicates that information in learning processes is conserved. The information in the latent representation is kept either to reconstruct the original data or to predict the labels; the former is captured by the encoder-decoder pair, the latter by the classifier.
Remark 2 (Setting the entropy term aside).
The distribution on the parameters is a posterior given the dataset. While this distribution is well-defined under minor technical conditions, e.g., ergodicity of the training process, performing computations with it is difficult. We therefore only consider the case where the term S is set aside in the sequel and leave the general case for future work.
The following lemma (proved in Appendix B) shows that the constraint surface connecting the information-theoretic functionals R, D and C is convex, and that its dual, the Lagrangian, is concave.
Lemma 3 (The constraint surface is convex).
The constraint surface is convex and the Lagrangian is concave.
A similar proof shows that the entire surface joining R, D, C and S is convex. Note that the constraint surface is convex in the functionals; it need not be convex in the model parameters θ that parameterize e, d and c.
2.4 Equilibrium surface of optimal free-energy
We next elaborate upon the objective in Eq. 6. Consider the functionals R, D and C parameterized using model parameters θ, and the variational problem in Eq. 7, which we can solve using the calculus of variations. We assume in this paper that the labels are a deterministic function of the data, i.e., p(y|x) = δ_{y = y(x)}, where y(x) is the true label of the datum x. This yields a closed-form solution whose normalization constant is a partition function. The objective can now be rewritten as maximizing the log-partition function, also known as the free-energy in statistical physics [Mezard and Montanari, 2009]; we denote this free-energy by F(λ, γ) (Eq. 8).
Remark 4 (Why is it called the “equilibrium” surface?).
Consider a stochastic learning process, e.g., stochastic gradient descent, that minimizes the Hamiltonian with updates given by Eq. 12, where η is the step-size and the gradient is evaluated over samples from the dataset and from the encoder e(z|x). Using the same technique as that of Chaudhari and Soatto (2017), one can show that the corresponding free-energy objective decreases monotonically along these updates. Observe that our objective in Eq. 8 is recovered as a limit of this objective when the prior in Eq. 5 is uniform and non-informative. This result is analogous to the classical result that an ergodic Markov chain makes monotonic improvements in the KL-divergence as it converges to its steady-state, also known as the equilibrium distribution [Levin and Peres, 2017]. The posterior distribution of the model parameters induced by the stochastic updates in Eq. 12 is the Gibbs distribution, with density proportional to e^{−H(θ)}.
It is for the above reason that we call the surface in Fig. 1(b), parameterized by the Lagrange multipliers (λ, γ), the "equilibrium surface". Learning, in this case minimizing Eq. 8, is initialized outside this surface and converges to specific parts of the equilibrium surface depending upon (λ, γ); this is denoted by the red and blue curves in Fig. 1(b). The constraint that ties R, D and C together on this equilibrium surface is that variational inequalities such as Eq. 2 (more are given in Alemi and Fischer, 2018) are tight up to the capacity of the model. This is analogous to the concept of equilibrium in thermodynamics [Sethna, 2006].
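As an illustration of Remark 4 (under our own simplifying assumptions, not the paper's exact Eq. 12), the following Langevin-style update has the Gibbs distribution proportional to e^{−H(θ)} as its stationary distribution; the hamiltonian argument stands for any mini-batch estimate of the weighted combination of rate, distortion and classification loss.

import torch

def langevin_update(theta, hamiltonian, lr=1e-3, temperature=1.0):
    """One update theta <- theta - lr * grad H(theta) + Gaussian noise.
    With noise variance 2 * lr * temperature, repeated updates have the Gibbs
    distribution proportional to exp(-H(theta) / temperature) as their stationary
    distribution."""
    loss = hamiltonian(theta)
    grad, = torch.autograd.grad(loss, theta)
    with torch.no_grad():
        theta.add_(-lr * grad)                                     # gradient step on the Hamiltonian
        theta.add_(torch.randn_like(theta) * (2.0 * lr * temperature) ** 0.5)  # injected noise
    return theta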
3 Dynamical processes on the equilibrium surface
This section prescribes dynamical processes that explore the equilibrium surface. For any parameters θ, not necessarily on the equilibrium surface, let us define the free-energy evaluated at θ. If θ lies on the equilibrium surface, this quantity equals the optimal free-energy F(λ, γ), which implies the relation in Eq. 15.
A quasi-static process in thermodynamics happens slowly enough for a system to remain in equilibrium with its surroundings. In our case, we are interested in evolving the Lagrange multipliers (λ, γ) slowly while simultaneously keeping the model parameters on the equilibrium surface; the constraint in Eq. 15 thus holds at each time instant. The equilibrium surface is parameterized by (λ, γ), so changing these multipliers adapts the three functionals R, D and C to track their optimal values corresponding to (λ, γ).
Let us choose schedules λ(t) and γ(t) for the multipliers with prescribed rates of change dλ/dt and dγ/dt. The quasi-static constraint leads to the partial differential equation (PDE) in Eq. 16, valid for all times t. At each location θ, the PDE indicates how the parameters should evolve upon changing the Lagrange multipliers (λ, γ). We can rewrite the PDE using the Hamiltonian in Eq. 11 as shown next.
Lemma 5 (Equilibrium dynamics for the parameters θ).
Given schedules (λ(t), γ(t)), the parameters θ(t) evolve according to the ordinary differential equation in Eq. 17, where H is the Hamiltonian in Eq. 11; the remaining quantities are given by the accompanying expressions.
All the inner expectations above are taken with respect to the Gibbs measure of the Hamiltonian, i.e., the distribution with density proportional to e^{−H}. The dynamics of the parameters is therefore a function of the two directional derivatives of the Hamiltonian with respect to λ and γ. Note that the matrix appearing in Eq. 17 is the Hessian of a strictly convex functional.
This lemma allows us to implement dynamical processes for the model parameters θ on the equilibrium surface. As expected, the result is an ordinary differential equation (Eq. 17) that depends on our chosen evolution of (λ, γ) through the directional derivatives of the Hamiltonian. The utility of the above lemma therefore lies in the expressions for these directional derivatives. Appendix C gives the proof of the lemma.
Remark 6 (Implementing the equilibrium dynamics).
The equations in Lemma 5 may seem complicated to compute, but observe that they can be readily estimated using samples from the dataset and from the encoder e(z|x). The key difference between Eq. 17 and, say, the ELBO objective is that the gradient in the former depends upon the Hessian of the Hamiltonian; these equations can be implemented using Hessian-vector products [Pearlmutter, 1994]. If the dynamics involves certain constraints among the functionals, as Remark 7 shows, the implementation of these equations simplifies.
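To illustrate how the Hessian terms in Lemma 5 can be handled without ever forming the Hessian, here is a standard Hessian-vector product in the style of Pearlmutter (1994), written with PyTorch autograd; the loss would be a mini-batch estimate of the Hamiltonian, and the function names are ours.

import torch

def hessian_vector_product(loss, params, vector):
    """Compute (d^2 loss / d params^2) @ vector with two backward passes,
    without materializing the Hessian.  params is a list of tensors that
    require gradients; vector is a flat tensor with as many entries as params."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat_grad = torch.cat([g.reshape(-1) for g in grads])
    grad_dot_v = torch.dot(flat_grad, vector)          # scalar whose gradient is H @ v
    hvp = torch.autograd.grad(grad_dot_v, params)
    return torch.cat([h.reshape(-1) for h in hvp])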
3.1 Iso-classification process
An isothermal process in thermodynamics is a quasi-static process in which a system exchanges energy with its surroundings while remaining in thermal equilibrium with them. We now analogously define an iso-classification process that adapts the parameters of the model as the free-energy is subjected to slow changes in the multipliers (λ, γ). This adaptation is such that the classification loss is kept constant while the rate and the distortion change automatically.
Following the development in Lemma 5, it is easy to create an iso-classification process: we simply add the constraint that the classification loss does not change in time, dC/dt = 0. The two quantities entering this constraint, namely the sensitivities of the classification loss to λ and γ, are given in Eq. 21; the expressions involve the logarithm of the classification loss. Observe that we are no longer free to pick arbitrary rates of change for λ and γ in the iso-classification process: the constraint ties the two rates together.
Remark 7 (Implementing an iso-classification process).
The first constraint in Eq. 33 allows us to choose the rates of change of λ and γ up to a parameter that scales time. The second equalities in both rows follow because F(λ, γ) is the optimal free-energy, which implies relations such as ∂F/∂λ = D and ∂F/∂γ = C. We can now compute the two derivatives in Eq. 22 using finite differences to implement an iso-classification process. This is equivalent to running the dynamics in Eq. 33 with finite-difference approximations of the corresponding terms. While approximating all of these quantities at each update of (λ, γ) would be cumbersome, exploiting the relations in Eq. 33 is efficient even for large neural networks, as our experiments show.
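The sketch below shows one way to realize the finite-difference scheme of Remark 7; it is our reading of the procedure rather than the paper's exact Eq. 33. The helpers equilibrate (which re-trains the model at given multipliers) and eval_C (which evaluates the classification loss on held-out data) are hypothetical.

import copy

def iso_classification_step(model, lam, gam, d_lam, equilibrate, eval_C, eps=1e-2):
    """Choose a small change d_lam in lambda and pick d_gam so that the classification
    loss stays constant to first order: dC = C_lam * d_lam + C_gam * d_gam = 0."""
    C0 = eval_C(model)
    # finite-difference sensitivities of C to lambda and gamma (re-equilibrate at perturbed values)
    C_lam = (eval_C(equilibrate(copy.deepcopy(model), lam + eps, gam)) - C0) / eps
    C_gam = (eval_C(equilibrate(copy.deepcopy(model), lam, gam + eps)) - C0) / eps
    d_gam = -C_lam / C_gam * d_lam                     # iso-classification constraint
    lam, gam = lam + d_lam, gam + d_gam
    model = equilibrate(model, lam, gam)               # bring the parameters back onto the surface
    return model, lam, gam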
Remark 8 (Other dynamical processes of interest).
In this paper, we focus on iso-classification processes. However, following the same program as in this section, we can also define other processes of interest, e.g., one that keeps the rate constant while fine-tuning a model. This is similar to the alternative Information Bottleneck of Achille and Soatto (2017), wherein the rate is defined using the weights of the network as the random variable instead of the latent factors; it is also easily seen to be the right-hand side of the PAC-Bayes generalization bound [McAllester, 2013]. A dynamical process that preserves this functional would be able to control the generalization error, which is an interesting prospect for future work.
4 Transferring representations to new tasks
Section 3 demonstrated dynamical processes in which the Lagrange multipliers change with time and the process adapts the model parameters to remain on the equilibrium surface. This section demonstrates the same concept under a different kind of perturbation, namely one where the underlying task changes. The prototypical example to keep in mind in this section is transfer learning, where a classifier trained on a source dataset is further trained on a new target dataset. We will assume that the input domain of the two distributions is the same.
4.1 Changing the data distribution
If i.i.d. samples from the source task and from the target distribution are given, the empirical source and target distributions p_s and p_T can be written as averages of Dirac delta distributions centered at the respective samples. We will consider a transport problem that transports the source distribution p_s to the target distribution p_T. For any t ∈ [0, 1], we interpolate between the two distributions using the mixture p(x; t) = (1 − t) p_s(x) + t p_T(x) (Eq. 23). Observe that the interpolated data distribution equals the source and the target distribution at t = 0 and t = 1 respectively, and is a mixture of the two for intermediate times. We keep the labels of the data the same and do not interpolate them. As discussed in Appendix F, we can also use techniques from optimal transportation [Villani, 2008] to obtain a better transport; the same dynamical equations given below remain valid in that case.
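A minimal sketch of drawing mini-batches from the interpolated distribution in Eq. 23; the per-batch (rather than per-sample) mixing and the iterator names are our choices.

import random

def interpolated_batches(source_batches, target_batches, t):
    """Yield mini-batches from the mixture (1 - t) * p_source + t * p_target:
    each batch comes from the source with probability 1 - t and from the target
    otherwise.  Labels are kept unchanged, as described in Section 4.1."""
    for src_batch, tgt_batch in zip(source_batches, target_batches):
        yield src_batch if random.random() > t else tgt_batch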
4.2 Iso-classification process with a changing data distribution
The equilibrium surface in Fig. 1(b) is a function of the task and therefore evolves as the task changes. We now give a dynamical process that keeps the model parameters in equilibrium as the task evolves quasi-statically. We again impose the same conditions on the dynamics as those in Eq. 19. The following lemma is analogous to Lemma 5.
Lemma 9 (Dynamical process for a changing data distribution).
Given schedules (λ(t), γ(t)), the evolution of the model parameters under the changing data distribution of Eq. 23 takes the same form as in Lemma 5 with one additional term; this term arises because the data distribution changes with time. All other quantities are as defined in Lemma 5, with the only change that expectations over the data are taken with respect to the interpolated distribution p(x; t) instead of the source distribution.
A computation similar to that of Section 3.1 gives a quasi-static iso-classification process as the task evolves. The sensitivities entering the constraint are as given in Eq. 21, with the only change being that the outer expectation is taken with respect to the interpolated distribution p(x; t); there is also a new term that depends explicitly on time because the task itself changes. This indicates that the equilibrium surface is now parameterized by (λ, γ) and the time t, and is equipped with a corresponding basis for its tangent plane.
4.3 Geodesic transfer of representations
The dynamics of Lemma 9 is valid for any schedule of the multipliers (λ(t), γ(t)). In this section we provide a locally optimal way to change them.
Remark 10 (Rate-distortion trade-off).
The first equality is simply our iso-classification constraint. The second follows from Lemma 3, which establishes the convexity of the equilibrium surface; it also simplifies the corresponding term in Eq. 22. The third equality is a powerful observation: it indicates a trade-off between the rate and the distortion—with the classification loss held fixed, a change in one must be compensated by a change in the other. It also reveals the geometric structure of the equilibrium surface by connecting the rate and the distortion together, which we will exploit next.
Computing the functionals R and D during the iso-classification transfer traces out a curve in the rate-distortion plane. Geodesic transfer means that the functionals follow the shortest path in this space. Notice that if we assume the model capacity is infinite, this space is Euclidean and the geodesic is simply a straight line. Since we keep the classification loss constant during the transfer, dC/dt = 0, a straight line implies that the slope of the curve in the rate-distortion plane is a constant, say k. Combining the iso-classification constraint with this constant-slope condition gives us a linear system, which we solve to update the multipliers (λ, γ) during the transfer.
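As a rough sketch of how such a linear system can be assembled and solved numerically (the exact coefficients of the system in the text are not reproduced here; this only illustrates the structure under our assumptions), one can estimate by finite differences how C, R and D respond to changes in (λ, γ) and to the explicit change of the task with time, and then enforce dC = 0 together with a constant slope dD/dR = k:

import numpy as np

def geodesic_multiplier_update(dC, dR, dD, k, dt, drift):
    """Solve a 2x2 linear system for (d_lam, d_gam).
    dC, dR, dD: length-2 arrays of finite-difference sensitivities of C, R, D
    with respect to (lambda, gamma); k: target slope dD/dR; drift: length-3 array
    of the explicit time-derivatives (dC/dt, dR/dt, dD/dt) due to the changing task."""
    A = np.stack([dC, dD - k * dR])                            # rows: dC = 0 and dD - k dR = 0
    b = -dt * np.array([drift[0], drift[2] - k * drift[1]])
    d_lam, d_gam = np.linalg.solve(A, b)
    return d_lam, d_gam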
5 Experimental validation
This section presents experimental validation for the ideas in this paper. We first implement the dynamics in Section 3 that traverses the equilibrium surface and then demonstrate the dynamical process for transfer learning devised in Section 4.
Setup. We use the MNIST [LeCun et al., 1998] and CIFAR-10 [Krizhevsky, 2009] datasets for our experiments. We use a 2-layer fully-connected network (same as that of Kingma and Welling, 2013) as the encoder and decoder for MNIST; the encoder for CIFAR-10 is a ResNet-18 [He et al., 2016] architecture while the decoder is a 4-layer deconvolutional network [Noh et al., 2015]. Full details of the pre-processing, network architecture and training are provided in Appendix A.
5.1 Iso-classification process on the equilibrium surface
Details. Given a value of the Lagrange multipliers (λ, γ), we first find a model on the equilibrium surface by training from scratch for 120 epochs with the Adam optimizer [Kingma and Ba, 2014]; the learning rate drops by a factor of 10 every 50 epochs. We then run the iso-classification process of Remark 7 for these models, modifying (λ, γ) according to the corresponding equations.
Changes in (λ, γ) cause the equilibrium surface to change, so it is necessary to adapt the model parameters so as to keep them on the dynamically changing surface; let us call this process of adaptation "equilibration". We achieve this by taking gradient-based updates that minimize the Hamiltonian, with a learning-rate schedule that rises sharply from zero and then anneals slowly back to zero. The schedule is a function of the number of mini-batch updates taken since the last change in (λ, γ) and of the total number of mini-batch updates of equilibration, and it is capped at a fixed maximum learning rate. The free-energy should be unchanged if the model parameters are on the equilibrium surface after equilibration; this is shown in Fig. 3(a). Partial derivatives in Eq. 31 are computed using finite differences.
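The exact learning-rate schedule used for equilibration is not reproduced here; the following sketch has the same qualitative shape (a sharp ramp-up from zero followed by a slow annealing back to zero) and uses names of our choosing.

import math

def equilibration_lr(i, T, lr_max):
    """Learning rate for equilibration after each change in (lambda, gamma).
    i: mini-batch updates since the last change in (lambda, gamma);
    T: total number of mini-batch updates of equilibration;
    lr_max: the maximum value of the learning rate."""
    warmup = max(1, T // 20)                           # short, sharp ramp-up from zero
    if i < warmup:
        return lr_max * i / warmup
    progress = (i - warmup) / max(1, T - warmup)       # slow cosine annealing back to zero
    return lr_max * 0.5 * (1.0 + math.cos(math.pi * progress))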
Fig. 2 shows the result of the iso-classification process for MNIST and Fig. 3 shows a similar result for CIFAR-10. We see that the classification loss remains constant throughout the process. This experiment shows that we can implement an iso-classification process while keeping the model parameters on the equilibrium surface.
5.2 Transferring representations to new data
We next present experimental results of an iso-classification process for transferring the learnt representation. We pick the source dataset to be all images corresponding to digits 0–4 in MNIST and the target dataset is its complement, images of digits 5–9. Our goal is to adapt a model trained on the source task to the target task while keeping its classification loss constant. We run the geodesic transfer dynamics from Section 4.3 and the results are shown in Fig. 5.
It is evident that the classification accuracy is constant throughout the transfer and is also the same as that of training from scratch on the target. MNIST is a simple dataset, and the accuracy gap between iso-classification transfer, fine-tuning from the source and training from scratch is minor. The benefit of running the iso-classification transfer, however, is that we obtain a guarantee on the final accuracy of the model. We expect the gap between these three to be significant for more complex datasets. Results for a similar experiment, transferring from a source dataset consisting of all vehicles in CIFAR-10 to a target dataset consisting of all animals, are provided in Appendix G.
6 Related work
We are motivated by the Information Bottleneck (IB) principle of Tishby et al. (2000) and Shwartz-Ziv and Tishby (2017), which has been further explored by Achille and Soatto, Alemi et al., and Higgins et al. The key difference in our work is that while these papers seek to understand the representation for a given task, we focus on how the representation can be adapted to a new task. Further, the Lagrangian in Eq. 8 has connections to PAC-Bayes bounds [McAllester, 2013; Dziugaite and Roy, 2017] and training algorithms that use the free-energy [Chaudhari et al., 2019]. Our use of rate-distortion for transfer learning is close to the work on unsupervised learning of Brekelmans et al. (2019) and Ver Steeg and Galstyan (2015).
This paper builds upon the work of Alemi et al. ; Alemi and Fischer . We refine some results therein, viz., we provide a proof of the convexity of the equilibrium surface and identify it with the equilibrium distribution of SGD (Remark 4). We introduce new ideas such as dynamical processes on the equilibrium surface. Our use of thermodynamics is purely as an inspiration; the work presented here is mathematically rigorous and also provides an immediate algorithmic realization of the ideas.
This paper has strong connections to work that studies stochastic processes inspired by statistical physics for machine learning, e.g., approximate Bayesian inference and the implicit regularization of SGD [Mandt et al., 2017; Chaudhari and Soatto, 2017], and variational inference [Jordan et al., 1998; Kingma and Welling, 2013]. The iso-classification process instantiates an "automatic" regularization via the trade-off between rate and distortion; this point of view is an exciting prospect for future work. The technical content of the paper also draws from optimal transportation [Villani, 2008].
A large number of applications begin with pre-trained models [Sharif Razavian et al., 2014; Girshick et al., 2014] or models trained on different tasks [Doersch and Zisserman, 2017]. Current methods in transfer learning, however, do not come with guarantees on the performance on the target dataset, although there is a rich body of older work [Baxter, 2000] and ongoing work that studies this [Zamir et al., 2018]. The information-theoretic understanding of transfer and the constrained dynamical processes developed in our paper are a first step towards building such guarantees. In this context, our theory can also be used to tackle catastrophic forgetting [Kirkpatrick et al., 2017] by "detuning" the model post-training to build up redundant features.
7 Discussion
We presented dynamical processes that maintain the parameters of a model on an equilibrium surface that arises out of a certain free-energy functional for the encoder-decoder-classifier architecture. The decoder acts as a measure of the information discarded by the encoder-classifier pair while fitting on a given task. We showed how one can develop an iso-classification process that travels on the equilibrium surface while keeping the classification loss constant. We also showed an iso-classification transfer-learning process which keeps the classification loss constant while adapting the learnt representation from a source task to a target task.
The information-theoretic point of view in this paper is rather abstract, but its benefit lies in its exploitation of the equilibrium surface. Relationships between the three functionals that define this surface, namely rate, distortion and classification, as well as other functionals connected to the capacity of the hypothesis class such as the entropy, may allow us to define invariants of the learning process. For complex models such as deep neural networks, such a program may lead to an understanding of the principles that govern their working.
References
- Achille, A. and Soatto, S. (2017). On the emergence of invariance and disentangling in deep representations. arXiv:1706.01350.
- Alemi, A. A. and Fischer, I. (2018). TherML: Thermodynamics of machine learning. arXiv:1807.04162.
- Alemi, A. A., Fischer, I., Dillon, J. V., and Murphy, K. (2016). Deep variational information bottleneck. arXiv:1612.00410.
- Alemi, A. A., Poole, B., Fischer, I., Dillon, J. V., Saurous, R. A., and Murphy, K. (2017). Fixing a broken ELBO. arXiv:1711.00464.
- Baxter, J. (2000). A model of inductive bias learning. Journal of Artificial Intelligence Research, 12:149–198.
- Brekelmans, R., Moyer, D., Galstyan, A., and Ver Steeg, G. (2019). Exact rate-distortion in autoencoders via echo noise. In Advances in Neural Information Processing Systems, pages 3884–3895.
- Chaudhari, P., Choromanska, A., Soatto, S., LeCun, Y., Baldassi, C., Borgs, C., Chayes, J., Sagun, L., and Zecchina, R. (2019). Entropy-SGD: Biasing gradient descent into wide valleys. Journal of Statistical Mechanics: Theory and Experiment, 2019(12):124018.
- Chaudhari, P. and Soatto, S. (2017). Stochastic gradient descent performs variational inference, converges to limit cycles for deep networks. arXiv:1710.11029.
- Cuturi, M. (2013). Sinkhorn distances: Lightspeed computation of optimal transport. In Advances in Neural Information Processing Systems, pages 2292–2300.
- Doersch, C. and Zisserman, A. (2017). Multi-task self-supervised visual learning. In Proceedings of the IEEE International Conference on Computer Vision, pages 2051–2060.
- Dziugaite, G. K. and Roy, D. M. (2017). Computing nonvacuous generalization bounds for deep (stochastic) neural networks with many more parameters than training data. arXiv:1703.11008.
- Girshick, R., Donahue, J., Darrell, T., and Malik, J. (2014). Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 580–587.
- He, K., Zhang, X., Ren, S., and Sun, J. (2016). Identity mappings in deep residual networks. arXiv:1603.05027.
- Higgins, I., Matthey, L., Pal, A., Burgess, C., Glorot, X., Botvinick, M., Mohamed, S., and Lerchner, A. (2017). beta-VAE: Learning basic visual concepts with a constrained variational framework. In ICLR.
- Ioffe, S. and Szegedy, C. (2015). Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv:1502.03167.
- Jordan, M. I., Ghahramani, Z., Jaakkola, T. S., and Saul, L. K. (1998). An introduction to variational methods for graphical models. In Learning in Graphical Models, pages 105–161. Springer.
- Kingma, D. and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv:1412.6980.
- Kingma, D. P. and Welling, M. (2013). Auto-encoding variational Bayes. arXiv:1312.6114.
- Kirkpatrick, J., Pascanu, R., Rabinowitz, N., Veness, J., Desjardins, G., Rusu, A. A., Milan, K., Quan, J., Ramalho, T., Grabska-Barwinska, A., et al. (2017). Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 114(13):3521–3526.
- Krizhevsky, A. (2009). Learning multiple layers of features from tiny images. Master's thesis, Computer Science, University of Toronto.
- LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324.
- Levin, D. A. and Peres, Y. (2017). Markov Chains and Mixing Times, volume 107. American Mathematical Society.
- Mandt, S., Hoffman, M. D., and Blei, D. M. (2017). Stochastic gradient descent as approximate Bayesian inference. arXiv:1704.04289.
- McAllester, D. (2013). A PAC-Bayesian tutorial with a dropout bound. arXiv:1307.2118.
- Mezard, M. and Montanari, A. (2009). Information, Physics, and Computation. Oxford University Press.
- Noh, H., Hong, S., and Han, B. (2015). Learning deconvolution network for semantic segmentation. In Proceedings of the IEEE International Conference on Computer Vision, pages 1520–1528.
- Pearlmutter, B. A. (1994). Fast exact multiplication by the Hessian. Neural Computation, 6(1):147–160.
- Robbins, H. and Monro, S. (1951). A stochastic approximation method. The Annals of Mathematical Statistics, pages 400–407.
- Sethna, J. (2006). Statistical Mechanics: Entropy, Order Parameters, and Complexity, volume 14. Oxford University Press.
- Sharif Razavian, A., Azizpour, H., Sullivan, J., and Carlsson, S. (2014). CNN features off-the-shelf: An astounding baseline for recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 806–813.
- Shwartz-Ziv, R. and Tishby, N. (2017). Opening the black box of deep neural networks via information. arXiv:1703.00810.
- Tishby, N., Pereira, F. C., and Bialek, W. (2000). The information bottleneck method. arXiv:physics/0004057.
- Ver Steeg, G. and Galstyan, A. (2015). Maximally informative hierarchical representations of high-dimensional data. In Artificial Intelligence and Statistics, pages 1004–1012.
- Villani, C. (2008). Optimal Transport: Old and New, volume 338. Springer Science & Business Media.
- Zamir, A. R., Sax, A., Shen, W., Guibas, L. J., Malik, J., and Savarese, S. (2018). Taskonomy: Disentangling task transfer learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3712–3722.
Appendix A Details of the experimental setup
Datasets. We use the MNIST [LeCun et al., 1998] and CIFAR-10 [Krizhevsky, 2009] datasets for these experiments. The former consists of 28×28-sized gray-scale images of handwritten digits (60,000 training and 10,000 validation). The latter consists of 32×32-sized RGB images (50,000 training and 10,000 validation) spread across 10 classes; 4 of these classes (airplane, automobile, ship, truck) are transportation-based while the others are images of animals and birds.
Architecture and training.
All models in our experiments consist of an encoder-decoder pair along with a classifier that takes the latent representation as input. For experiments on MNIST, both the encoder and the decoder are multi-layer perceptrons with 2 fully-connected layers, the decoder uses a mean-square error loss, i.e., a Gaussian reconstruction likelihood, and the classifier consists of a single fully-connected layer. For experiments on CIFAR-10, we use a residual network [He et al., 2016] with 18 layers as the encoder, and a decoder with one fully-connected layer and 4 deconvolutional layers [Noh et al., 2015]; we use batch-normalization [Ioffe and Szegedy, 2015]. Further details of the architecture are given below. We use Adam [Kingma and Ba, 2014] to train all models with cosine learning-rate annealing.
The encoder and the decoder for MNIST have 784–256–16 neurons in their layers; the encoding z is thus 16-dimensional and is the input to the decoder. The classifier has one hidden layer with 12 neurons and 10 outputs. The encoder for CIFAR-10 is an 18-layer residual network (ResNet-18) and the decoder has 4 deconvolutional layers. We used a slightly larger network for the geodesic transfer-learning experiment on MNIST: the encoder and decoder have 784–400–64 neurons in each layer with dropout of probability 0.1 after the hidden layer, and the classifier is a single layer that takes the 64-dimensional encoding and predicts 10 classes.
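For concreteness, here is a sketch of the MNIST architecture described above in PyTorch; the Gaussian-encoder head (mean and log-variance) and the class name are our additions and are not specified in the text.

import torch
import torch.nn as nn

class MNISTModel(nn.Module):
    """Encoder (784-256-16), decoder (16-256-784) and classifier (16-12-10)."""
    def __init__(self, z_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(),
                                     nn.Linear(256, 2 * z_dim))   # mean and log-variance of e(z|x)
        self.decoder = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                     nn.Linear(256, 784))         # Gaussian reconstruction likelihood
        self.classifier = nn.Sequential(nn.Linear(z_dim, 12), nn.ReLU(),
                                        nn.Linear(12, 10))

    def forward(self, x):
        mu, logvar = self.encoder(x.view(-1, 784)).chunk(2, dim=-1)
        z = mu + (0.5 * logvar).exp() * torch.randn_like(mu)      # reparameterized sample z ~ e(z|x)
        return self.decoder(z), self.classifier(z), mu, logvar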
Appendix B Proof of Lemma 3
The second statement follows directly by observing that the free-energy F(λ, γ) is a minimum of affine functions of (λ, γ), and is therefore concave. To see the first statement, evaluate the Hessian of the constraint surface and compare the coefficients on both sides of the resulting expression; since the relevant quantities are non-negative, the constraint surface is convex.
Appendix C Proof of Lemma 5
Appendix D Computation of the iso-classification constraint
We start by computing the gradient of the classification loss; note that the gradient of the logarithm of the classification loss is the gradient of the classification loss scaled by its reciprocal.