MINE: Mutual Information Neural Estimation

01/12/2018 · by Ishmael Belghazi et al.

We argue that the estimation of the mutual information between high dimensional continuous random variables is achievable by gradient descent over neural networks. This paper presents a Mutual Information Neural Estimator (MINE) that is linearly scalable in dimensionality as well as in sample size. MINE is back-propagable and we prove that it is strongly consistent. We illustrate a handful of applications in which MINE is successfully applied to enhance the properties of generative models in both unsupervised and supervised settings. We also apply our framework to estimate the information bottleneck and use it in tasks related to supervised classification. Our results demonstrate substantial added flexibility and improvement in these settings.


1 Introduction

Mutual information is a fundamental quantity for measuring the relationship between random variables. In data science it has found applications in a wide range of domains and tasks, including biomedical sciences (Maes et al., 1997), blind source separation (BSS, e.g., independent component analysis, Hyvärinen et al., 2004), information bottleneck (IB, Tishby et al., 2000), feature selection (Kwak & Choi, 2002; Peng et al., 2005), and causality (Butte & Kohane, 2000).

Put simply, mutual information quantifies the dependence of two random variables $X$ and $Z$. It has the form

$$I(X;Z) = \int_{\mathcal{X}\times\mathcal{Z}} \log\frac{d\mathbb{P}_{XZ}}{d\mathbb{P}_X \otimes d\mathbb{P}_Z}\, d\mathbb{P}_{XZ}, \qquad (1)$$

where $\mathbb{P}_{XZ}$ is the joint probability distribution, and $\mathbb{P}_X = \int_{\mathcal{Z}} d\mathbb{P}_{XZ}$ and $\mathbb{P}_Z = \int_{\mathcal{X}} d\mathbb{P}_{XZ}$ are the marginals. In contrast to correlation, mutual information captures non-linear statistical dependencies between variables, and thus can act as a measure of true dependence (Kinney & Atwal, 2014).
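As a quick illustration of this point (our own toy example, not taken from the paper), the following Python sketch contrasts the Pearson correlation with a crude histogram-based plug-in estimate of the mutual information for a purely non-linear relationship; the sample size and binning are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-1.0, 1.0, size=100_000)
    y = x ** 2 + 0.05 * rng.normal(size=x.shape)   # deterministic non-linear link plus small noise

    # Pearson correlation is ~0 because the relationship is symmetric around x = 0.
    corr = np.corrcoef(x, y)[0, 1]

    # Crude plug-in mutual information (in nats) from a 2-D histogram.
    pxy, _, _ = np.histogram2d(x, y, bins=64)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    mi = np.sum(pxy[mask] * np.log(pxy[mask] / (px @ py)[mask]))

    print(f"correlation ~ {corr:.3f}, plug-in MI ~ {mi:.3f} nats")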

Despite being a pivotal quantity across data science, mutual information has historically been difficult to compute (Paninski, 2003). Exact computation is only tractable for discrete variables (as the sum can be computed exactly), or for a limited family of problems where the probability distributions are known. For more general problems, this is not possible. Common approaches are non-parametric (e.g., binning, likelihood-ratio estimators based on support vector machines, non-parametric kernel-density estimators; see Fraser & Swinney, 1986; Darbellay & Vajda, 1999; Suzuki et al., 2008; Kwak & Choi, 2002; Moon et al., 1995; Kraskov et al., 2004), or rely on approximate Gaussianity of the data distribution (e.g., Edgeworth expansion, Van Hulle, 2005). Unfortunately, these estimators typically do not scale well with sample size or dimension (Gao et al., 2014), and thus cannot be said to be general-purpose. Other recent works include Kandasamy et al. (2017); Singh & Póczos (2016); Moon et al. (2017).

In order to achieve a general-purpose estimator, we rely on the well-known characterization of the mutual information as the Kullback-Leibler (KL-) divergence (Kullback, 1997) between the joint distribution and the product of the marginals (i.e., $D_{KL}(\mathbb{P}_{XZ} \,\|\, \mathbb{P}_X \otimes \mathbb{P}_Z)$). Recent work uses a dual formulation to cast the estimation of $f$-divergences (including the KL-divergence; see Nguyen et al., 2010) as part of an adversarial game between competing deep neural networks (Nowozin et al., 2016). This approach is the cornerstone of generative adversarial networks (GANs, Goodfellow et al., 2014), which train a generative model without any explicit assumptions about the underlying distribution of the data.

In this paper we demonstrate that exploiting dual optimization to estimate divergences goes beyond the minimax objective as formalized in GANs. We leverage this strategy to offer a general-purpose parametric neural estimator of mutual information based on dual representations of the KL-divergence (Ruderman et al., 2012), which we show is valuable in settings that do not necessarily involve an adversarial game. Our estimator is scalable, flexible, and completely trainable via back-propagation. The contributions of this paper are as follows:

  • We introduce the Mutual Information Neural Estimator (MINE), which is scalable, flexible, and completely trainable via back-prop, as well as provide a thorough theoretical analysis.

  • We show that the utility of this estimator transcends the minimax objective as formalized in GANs, such that it can be used in mutual information estimation, maximization, and minimization.

  • We apply MINE to palliate mode-dropping in GANs and to improve reconstructions and inference in Adversarially Learned Inference (ALI, Dumoulin et al., 2016) on large scale datasets.

  • We use MINE to apply the Information Bottleneck method (Tishby et al., 2000) in a continuous setting, and show that this approach outperforms variational bottleneck methods (Alemi et al., 2016).

2 Background

2.1 Mutual Information

Mutual information is a Shannon entropy-based measure of dependence between random variables. The mutual information between $X$ and $Z$ can be understood as the decrease of the uncertainty in $X$ given $Z$:

$$I(X;Z) := H(X) - H(X \mid Z), \qquad (2)$$

where $H$ is the Shannon entropy, and $H(X \mid Z)$ is the conditional entropy of $X$ given $Z$. As stated in Eqn. 1 and the discussion above, the mutual information is equivalent to the Kullback-Leibler (KL-) divergence between the joint, $\mathbb{P}_{XZ}$, and the product of the marginals, $\mathbb{P}_X \otimes \mathbb{P}_Z$:

$$I(X;Z) = D_{KL}(\mathbb{P}_{XZ} \,\|\, \mathbb{P}_X \otimes \mathbb{P}_Z), \qquad (3)$$

where $D_{KL}$ is defined as

$$D_{KL}(\mathbb{P} \,\|\, \mathbb{Q}) := \mathbb{E}_{\mathbb{P}}\!\left[\log \frac{d\mathbb{P}}{d\mathbb{Q}}\right] \qquad (4)$$

whenever $\mathbb{P}$ is absolutely continuous with respect to $\mathbb{Q}$ (and infinity otherwise). Although the discussion is more general, we can think of $\mathbb{P}$ and $\mathbb{Q}$ as distributions on some compact domain $\Omega \subset \mathbb{R}^d$, with densities $p$ and $q$ with respect to the Lebesgue measure $\lambda$, so that $D_{KL} = \int p \log(p/q)\, d\lambda$.

The intuitive meaning of Eqn. 3 is clear: the larger the divergence between the joint and the product of the marginals, the stronger the dependence between $X$ and $Z$. This divergence, hence the mutual information, vanishes for fully independent variables.

2.2 Dual representations of the KL-divergence.

A key technical ingredient of MINE is the use of dual representations of the KL-divergence. We will primarily work with the Donsker-Varadhan representation (Donsker & Varadhan, 1983), which results in a tighter estimator; but we will also consider the dual $f$-divergence representation (Keziou, 2003; Nguyen et al., 2010; Nowozin et al., 2016).

The Donsker-Varadhan representation.

The following theorem gives a representation of the KL-divergence  (Donsker & Varadhan, 1983):

Theorem 1 (Donsker-Varadhan representation).

The KL divergence admits the following dual representation:

$$D_{KL}(\mathbb{P} \,\|\, \mathbb{Q}) = \sup_{T:\Omega\to\mathbb{R}} \mathbb{E}_{\mathbb{P}}[T] - \log\!\left(\mathbb{E}_{\mathbb{Q}}\!\left[e^{T}\right]\right), \qquad (5)$$

where the supremum is taken over all functions $T$ such that the two expectations are finite.

Proof.

See the Supplementary Material.

A straightforward consequence of Theorem 1 is as follows. Let $\mathcal{F}$ be any class of functions $T:\Omega\to\mathbb{R}$ satisfying the integrability constraints of the theorem. We then have the lower bound (known as the compression lemma in the PAC-Bayes literature; Banerjee, 2006):

$$D_{KL}(\mathbb{P} \,\|\, \mathbb{Q}) \ge \sup_{T\in\mathcal{F}} \mathbb{E}_{\mathbb{P}}[T] - \log\!\left(\mathbb{E}_{\mathbb{Q}}\!\left[e^{T}\right]\right). \qquad (6)$$

Note also that the bound is tight for optimal functions $T^*$ that relate the distributions to the Gibbs density as

$$d\mathbb{P} = \frac{1}{Z}\, e^{T^*}\, d\mathbb{Q}, \qquad Z = \mathbb{E}_{\mathbb{Q}}\!\left[e^{T^*}\right]. \qquad (7)$$
The f-divergence representation.

It is worthwhile to compare the Donsker-Varadhan representation to the $f$-divergence representation proposed in Nguyen et al. (2010); Nowozin et al. (2016), which leads to the following bound:

$$D_{KL}(\mathbb{P} \,\|\, \mathbb{Q}) \ge \sup_{T\in\mathcal{F}} \mathbb{E}_{\mathbb{P}}[T] - \mathbb{E}_{\mathbb{Q}}\!\left[e^{T-1}\right]. \qquad (8)$$

Although the bounds in Eqns. 6 and 8 are tight for sufficiently large families $\mathcal{F}$, the Donsker-Varadhan bound is stronger in the sense that, for any fixed $T$, the right hand side of Eqn. 6 is larger than the right hand side of Eqn. 8 (a one-line derivation is given below). We refer to the work by Ruderman et al. (2012) for a derivation of both representations in Eqns. 6 and 8 from the unifying perspective of Fenchel duality. In Section 3 we discuss versions of MINE based on these two representations, and numerical comparisons are performed in Section 4.
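To make the comparison explicit (our own derivation, using only the elementary inequality $\log x \le x/e$ with $x = \mathbb{E}_{\mathbb{Q}}[e^{T}]$): for any fixed $T$,

$$\mathbb{E}_{\mathbb{P}}[T] - \log \mathbb{E}_{\mathbb{Q}}\!\left[e^{T}\right] \;\ge\; \mathbb{E}_{\mathbb{P}}[T] - \frac{1}{e}\,\mathbb{E}_{\mathbb{Q}}\!\left[e^{T}\right] \;=\; \mathbb{E}_{\mathbb{P}}[T] - \mathbb{E}_{\mathbb{Q}}\!\left[e^{T-1}\right],$$

so the Donsker-Varadhan right-hand side dominates the $f$-divergence right-hand side term by term.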

3 The Mutual Information Neural Estimator

In this section we formulate the framework of the Mutual Information Neural Estimator (MINE). We define MINE and present a theoretical analysis of its consistency and convergence properties.

3.1 Method

Using both Eqn. 3 for the mutual information and the dual representation of the KL-divergence, the idea is to choose $\mathcal{F}$ to be the family of functions $T_\theta : \mathcal{X}\times\mathcal{Z}\to\mathbb{R}$ parametrized by a deep neural network with parameters $\theta \in \Theta$. We call this network the statistics network. We exploit the bound:

$$I(X;Z) \ge I_\Theta(X,Z), \qquad (9)$$

where $I_\Theta(X,Z)$ is the neural information measure defined as

$$I_\Theta(X,Z) = \sup_{\theta\in\Theta} \mathbb{E}_{\mathbb{P}_{XZ}}[T_\theta] - \log\!\left(\mathbb{E}_{\mathbb{P}_X\otimes\mathbb{P}_Z}\!\left[e^{T_\theta}\right]\right). \qquad (10)$$

The expectations in Eqn. 10 are estimated using empirical samples from $\mathbb{P}_{XZ}$ and $\mathbb{P}_X\otimes\mathbb{P}_Z$, or by shuffling the samples from the joint distribution along the batch axis (samples from the marginals are obtained by simply dropping $x$ or $z$ from samples $(x, z)$ drawn from the joint). The objective can be maximized by gradient ascent.

It should be noted that Eqn. 10 actually defines a new class of information measures. The expressive power of neural networks ensures that they can approximate the mutual information with arbitrary accuracy.

In what follows, given a distribution $\mathbb{P}$, we denote by $\hat{\mathbb{P}}^{(n)}$ the empirical distribution associated to $n$ i.i.d. samples.

Definition 3.1 (Mutual Information Neural Estimator (MINE)).

Let $\mathcal{F} = \{T_\theta\}_{\theta\in\Theta}$ be the set of functions parametrized by a neural network. MINE is defined as

$$\widehat{I(X;Z)}_n = \sup_{\theta\in\Theta} \mathbb{E}_{\mathbb{P}^{(n)}_{XZ}}[T_\theta] - \log\!\left(\mathbb{E}_{\hat{\mathbb{P}}^{(n)}_X\otimes\hat{\mathbb{P}}^{(n)}_Z}\!\left[e^{T_\theta}\right]\right). \qquad (11)$$
  
  θ ← initialize network parameters
  repeat
     Draw b minibatch samples from the joint distribution:
        (x^(1), z^(1)), …, (x^(b), z^(b)) ~ P_XZ
     Draw b samples from the Z marginal distribution:
        zbar^(1), …, zbar^(b) ~ P_Z
     Evaluate the lower bound:
        V(θ) ← (1/b) Σ_i T_θ(x^(i), z^(i)) − log( (1/b) Σ_i e^{T_θ(x^(i), zbar^(i))} )
     Evaluate bias-corrected gradients (e.g., moving average):
        Ĝ(θ) ← ∇̃_θ V(θ)
     Update the statistics network parameters:
        θ ← θ + Ĝ(θ)
  until convergence
Algorithm 1 MINE

Details on the implementation of MINE are provided in Algorithm 1. An analogous definition and algorithm also hold for the $f$-divergence formulation in Eqn. 8, which we refer to as MINE-$f$. Since Eqn. 8 lower-bounds Eqn. 6, it generally leads to a looser estimator of the mutual information; numerical comparisons of MINE with MINE-$f$ can be found in Section 4. However, in a mini-batch setting, the SGD gradients of MINE are biased. We address this in the next section.
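As a concrete companion to Algorithm 1, the following is a minimal PyTorch sketch (our own illustration, not the authors' released code); the statistics-network architecture, optimizer settings, and the toy Gaussian data are arbitrary choices, marginal samples are drawn by permuting z along the batch axis as described above, and the bias correction of Section 3.2 is omitted for clarity.

    import math
    import torch
    import torch.nn as nn

    class StatisticsNetwork(nn.Module):
        """A small MLP T_theta(x, z); the architecture is an arbitrary choice."""
        def __init__(self, x_dim, z_dim, hidden=100):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(x_dim + z_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 1),
            )

        def forward(self, x, z):
            return self.net(torch.cat([x, z], dim=1))

    def mine_lower_bound(T, x, z):
        """Donsker-Varadhan lower bound on I(X;Z) estimated on a mini-batch."""
        z_marg = z[torch.randperm(z.size(0))]          # shuffle z to simulate the product of marginals
        t_joint = T(x, z).mean()
        t_marg = torch.logsumexp(T(x, z_marg).squeeze(), dim=0) - math.log(z.size(0))
        return t_joint - t_marg

    # Toy usage: correlated 1-D Gaussians, whose true MI is -0.5 * log(1 - rho**2).
    rho, batch = 0.9, 512
    T = StatisticsNetwork(1, 1)
    opt = torch.optim.Adam(T.parameters(), lr=1e-3)
    for step in range(2000):
        x = torch.randn(batch, 1)
        z = rho * x + (1 - rho ** 2) ** 0.5 * torch.randn(batch, 1)
        loss = -mine_lower_bound(T, x, z)              # gradient ascent on the bound
        opt.zero_grad()
        loss.backward()
        opt.step()
    print("MINE estimate:", mine_lower_bound(T, x, z).item())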

3.2 Correcting the bias from the stochastic gradients

A naive application of stochastic gradient estimation leads to the gradient estimate:

$$\hat{G}_B = \mathbb{E}_B[\nabla_\theta T_\theta] - \frac{\mathbb{E}_B[\nabla_\theta T_\theta\, e^{T_\theta}]}{\mathbb{E}_B[e^{T_\theta}]}, \qquad (12)$$

where, in the second term, the expectations are over the samples of a minibatch $B$. This leads to a biased estimate of the full-batch gradient (from the optimization point of view, the $f$-divergence formulation has the advantage of making the use of SGD with unbiased gradients straightforward).

Fortunately, the bias can be reduced by replacing the estimate in the denominator by an exponential moving average. For small learning rates, this improved MINE gradient estimator can be made to have arbitrarily small bias.
We found in our experiments that this improves the all-around performance of MINE.
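One common way to realize this correction in practice (a sketch under our interpretation, not the authors' exact code) is to keep the reported value of the bound unchanged while routing the gradient of the log-partition term through an exponential moving average of E_Q[e^T]:

    import torch

    def mine_ema_objective(T, x, z, ema, decay=0.99):
        """DV bound whose gradient uses a moving average of E_Q[e^T] in the denominator."""
        z_marg = z[torch.randperm(z.size(0))]
        t_joint = T(x, z).mean()
        et_marg = torch.exp(T(x, z_marg)).mean()
        ema = decay * ema + (1 - decay) * et_marg.detach()

        mi_value = t_joint - torch.log(et_marg)        # the bound we report
        mi_surrogate = t_joint - et_marg / ema         # its gradient uses the EMA in the denominator
        # straight-through trick: numerical value of mi_value, gradient of mi_surrogate
        objective = mi_value.detach() + (mi_surrogate - mi_surrogate.detach())
        return objective, ema

    # Inside the training loop of Algorithm 1 (ema initialized, e.g., to torch.tensor(1.0)):
    # mi, ema = mine_ema_objective(T, x, z, ema)
    # loss = -mi    # gradient ascent on the bias-corrected bound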

3.3 Theoretical properties

In this section we analyze the consistency and convergence properties of MINE. All the proofs can be found in the Supplementary Material.

3.3.1 Consistency

MINE relies on the choice of a statistics network and on samples from the data distribution $\mathbb{P}_{XZ}$.

Definition 3.2 (Strong consistency).

The estimator $\widehat{I(X;Z)}_n$ is strongly consistent if for all $\epsilon > 0$, there exists a positive integer $N$ and a choice of statistics network such that

$$\forall n \ge N, \quad \left|I(X;Z) - \widehat{I(X;Z)}_n\right| \le \epsilon, \quad \text{a.e.},$$

where the probability is over a set of samples.

In a nutshell, the question of consistency is divided into two problems: an approximation problem related to the size of the family $\mathcal{F}$, and an estimation problem related to the use of empirical measures. The first problem is addressed by universal approximation theorems for neural networks (Hornik, 1989). For the second problem, classical consistency theorems for extremum estimators apply (Van de Geer, 2000) under mild conditions on the parameter space.

This leads to the two lemmas below. The first lemma states that the neural information measures $I_\Theta(X,Z)$, defined in Eqn. 10, can approximate the mutual information with arbitrary accuracy:

Lemma 1 (approximation).

Let $\epsilon > 0$. There exists a neural network parametrizing functions $T_\theta$ with parameters $\theta$ in some compact domain $\Theta \subset \mathbb{R}^k$, such that $|I(X;Z) - I_\Theta(X,Z)| \le \epsilon$.

The second lemma states the almost sure convergence of MINE to a neural information measure as the number of samples goes to infinity:

Lemma 2 (estimation).

Let $\epsilon > 0$. Given a family of neural network functions $T_\theta$ with parameters $\theta$ in some bounded domain $\Theta \subset \mathbb{R}^k$, there exists an $N \in \mathbb{N}$ such that

$$\forall n \ge N, \quad \left|\widehat{I(X;Z)}_n - I_\Theta(X,Z)\right| \le \epsilon, \quad \text{a.e.} \qquad (13)$$

Combining the two lemmas with the triangular inequality, we have,

Theorem 2.

MINE is strongly consistent.

3.3.2 Sample complexity

In this section we discuss the sample complexity of our estimator. Since the focus here is on the empirical estimation problem, we assume that the mutual information is well enough approximated by the neural information measure $I_\Theta(X,Z)$. The theorem below is a refinement of Lemma 2: it states how many samples are needed for the empirical estimate of the neural information measure to reach a given accuracy with high confidence.

We make the following assumptions: the functions $T_\theta$ are $M$-bounded (i.e., $|T_\theta| \le M$) and $L$-Lipschitz with respect to the parameters $\theta$. The domain $\Theta \subset \mathbb{R}^d$ is bounded, so that $\|\theta\| \le K$ for some constant $K$. The theorem below shows a sample complexity that scales as $d/\epsilon^2$ up to logarithmic factors, where $d$ is the dimension of the parameter space.

Theorem 3.

Given any values $\epsilon, \delta$ of the desired accuracy and confidence parameters, we have

$$\Pr\!\left(\left|\widehat{I(X;Z)}_n - I_\Theta(X,Z)\right| \le \epsilon\right) \ge 1 - \delta, \qquad (14)$$

whenever the number of samples satisfies

(15)

4 Empirical comparisons

Before diving into applications, we perform some simple empirical evaluation and comparisons of MINE. The objective is to show that MINE is effectively able to estimate mutual information and account for non-linear dependence.

4.1 Comparing MINE to non-parametric estimation

We compare MINE and MINE-$f$ to the $k$-NN-based non-parametric estimator of Kraskov et al. (2004). In our experiment, we consider multivariate Gaussian random variables, $X$ and $Z$, with componentwise correlation, $\mathrm{corr}(X_i, Z_j) = \delta_{ij}\,\rho$, where $\rho \in (-1, 1)$ and $\delta_{ij}$ is Kronecker's delta. As the mutual information is invariant to continuous bijective transformations of the considered variables, it is enough to consider standardized Gaussian marginals. We also compare MINE (using the Donsker-Varadhan representation in Eqn. 6) and MINE-$f$ (based on the $f$-divergence representation in Eqn. 8).
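For reference, the ground-truth value used in this comparison has a closed form: for standardized $d$-dimensional Gaussians with componentwise correlation $\rho$, the joint covariance has determinant $(1-\rho^2)^d$, so $I(X;Z) = -\tfrac{d}{2}\log(1-\rho^2)$. A small helper (our addition):

    import numpy as np

    def gaussian_mi(rho, d):
        """I(X;Z) in nats for standardized Gaussians with corr(X_i, Z_j) = delta_ij * rho."""
        return -0.5 * d * np.log(1.0 - rho ** 2)

    print(gaussian_mi(0.5, 2), gaussian_mi(0.9, 20))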

Our results are presented in Fig. 1. We observe that both MINE and Kraskov's estimator are virtually indistinguishable from the ground truth when estimating the mutual information between bivariate Gaussians. MINE shows marked improvement over Kraskov's when estimating the mutual information between twenty-dimensional random variables. We also remark that MINE provides a tighter estimate of the mutual information than MINE-$f$.

Figure 1: Mutual information between two multivariate Gaussians with componentwise correlation $\rho$.

4.2 Capturing non-linear dependencies

An important property of the mutual information between random variables related by $Y = f(X) + \sigma \odot \epsilon$, where $f$ is a deterministic non-linear transformation and $\epsilon$ is random noise, is that it is invariant to the choice of the deterministic transformation $f$ and should depend only on the amount of noise, $\sigma \odot \epsilon$. This property, which guarantees that the quantification of dependence is not biased by the form of the relationship, is called equitability (Kinney & Atwal, 2014). Our results (Fig. 2) show that MINE captures this important property.

Figure 2: MINE is invariant to the choice of deterministic nonlinear transformation. The heatmap depicts the mutual information estimated by MINE between 2-dimensional random variables $X$ and $Y$, where $Y = f(X) + \sigma \odot \epsilon$ for deterministic non-linear transformations $f$ and varying noise levels $\sigma$.

5 Applications

In this section, we use MINE to present applications of mutual information and compare to competing methods designed to achieve the same goals. Specifically, by using MINE to maximize the mutual information, we are able to improve mode representation and reconstruction of generative models. Finally, by minimizing mutual information, we are able to effectively implement the information bottleneck in a continuous setting.

5.1 Maximizing mutual information to improve GANs

Mode collapse (Che et al., 2016; Dumoulin et al., 2016; Donahue et al., 2016; Salimans et al., 2016; Metz et al., 2017; Saatchi & Wilson, 2017; Nguyen et al., 2017; Lin et al., 2017; Ghosh et al., 2017) is a common pathology of generative adversarial networks (GANs, Goodfellow et al., 2014), where the generator fails to produce samples with sufficient diversity (i.e., it poorly represents some modes).

GANs as formulated in Goodfellow et al. (2014) consist of two components: a discriminator, $D: \mathcal{X}\to[0,1]$, and a generator, $G: \mathcal{Z}\to\mathcal{X}$, where $\mathcal{X}$ is a domain such as a compact subspace of $\mathbb{R}^n$. Given that $z \in \mathcal{Z}$ follows some simple prior distribution (e.g., a spherical Gaussian with density $p(z)$), the goal of the generator is to match its output distribution to a target distribution, $\mathbb{P}$ (specified by the data samples). The discriminator and generator are optimized through the value function,

$$\min_G \max_D V(D,G) = \mathbb{E}_{\mathbb{P}}[\log D(x)] + \mathbb{E}_{p(z)}[\log(1 - D(G(z)))]. \qquad (16)$$

A natural approach to diminish mode collapse would be regularizing the generator’s loss with the neg-entropy of the samples. As the sample entropy is intractable, we propose to use the mutual information as a proxy.

Following Chen et al. (2016), we write the prior as the concatenation of noise and code variables, $z = [\epsilon, c]$. We propose to palliate mode collapse by maximizing the mutual information between the samples and the code, $I(G([\epsilon, c]); c)$. The generator objective then becomes

$$\arg\max_G \; \mathbb{E}\!\left[\log D(G([\epsilon, c]))\right] + \beta\, I(G([\epsilon, c]); c). \qquad (17)$$

As the samples $G([\epsilon, c])$ are differentiable w.r.t. the parameters of $G$, and the statistics network is a differentiable function, we can maximize the mutual information using back-propagation and gradient ascent by only specifying this additional loss term. Since the mutual information is theoretically unbounded, we use adaptive gradient clipping (see the Supplementary Material) to ensure that the generator receives learning signals similar in magnitude from the discriminator and the statistics network.
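Schematically, and under our reading of Eqn. 17, a generator update looks as follows (an illustrative sketch, not the authors' code; the generator, discriminator, statistics network T, and the weight beta are placeholders, and the adaptive gradient clipping is omitted here):

    import torch

    def generator_step(generator, discriminator, T, opt_g, batch, eps_dim, code_dim, beta=1.0):
        """One generator update for Eqn. 17: adversarial loss plus beta * I(G([eps, c]); c)."""
        eps = torch.randn(batch, eps_dim)
        c = torch.randn(batch, code_dim)               # code part of the prior z = [eps, c]
        x_fake = generator(torch.cat([eps, c], dim=1))

        adv_loss = -torch.log(discriminator(x_fake) + 1e-8).mean()   # non-saturating GAN loss

        # DV estimate of I(x_fake; c): shuffle c along the batch axis for the marginal term.
        c_marg = c[torch.randperm(batch)]
        mi = T(x_fake, c).mean() - torch.log(torch.exp(T(x_fake, c_marg)).mean())

        loss = adv_loss - beta * mi                    # maximize the MI term, minimize the GAN loss
        opt_g.zero_grad()
        loss.backward()
        opt_g.step()
        return loss.item(), mi.item()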

Related works on mode-dropping

Methods to address mode dropping in GANs can readily be found in the literature. Salimans et al. (2016) use mini-batch discrimination. In the same spirit, Lin et al. (2017) successfully mitigate mode dropping by modifying the discriminator to make decisions on multiple real or generated samples. Ghosh et al. (2017) use multiple generators that are encouraged to generate different parts of the target distribution. Nguyen et al. (2017) use two discriminators to minimize the KL and reverse KL divergences between the target and generated distributions. Che et al. (2016) learn a reconstruction distribution, then teach the generator to sample from it, the intuition being that the reconstruction distribution is a de-noised or smoothed version of the data distribution, and thus easier to learn. Srivastava et al. (2017) minimize the reconstruction error in the latent space of bi-directional GANs (Dumoulin et al., 2016; Donahue et al., 2016). Metz et al. (2017) include many steps of the discriminator's optimization as part of the generator's objective. While Chen et al. (2016) maximize the mutual information between the code and the samples, they do so by minimizing a variational upper bound on the conditional entropy (Barber & Agakov, 2003), therefore ignoring the entropy of the samples; Chen et al. (2016) make no claim about mode-dropping.

Experiments: Spiral, 25-Gaussians datasets

We apply MINE to improve mode coverage when training a generative adversarial network (GAN, Goodfellow et al., 2014). We demonstrate the use of Eqn. 17 on the spiral and the 25-Gaussians datasets, comparing two models: one with $\beta = 0$ (which corresponds to the orthodox GAN of Goodfellow et al. (2014)) and one with $\beta > 0$, which corresponds to mutual information maximization.

(a) GAN
(b) GAN+MINE
Figure 3: On the spiral experiment, the generator of the GAN model without mutual information maximization suffers from mode collapse (poor coverage of the target dataset) compared to GAN+MINE.

Our results on the spiral (Fig. 3) and the 25-Gaussians (Fig. 4) experiments both show improved mode coverage over the baseline with no mutual information objective. This confirms our hypothesis that maximizing mutual information helps against mode-dropping in this simple setting.

(a) Original data
(b) GAN
(c) GAN+MINE
Figure 4: Kernel density estimate (KDE) plots for GAN+MINE samples and GAN samples on 25 Gaussians dataset.
Experiment: Stacked MNIST

Following Che et al. (2016); Metz et al. (2017); Srivastava et al. (2017); Lin et al. (2017), we quantitatively assess MINE's ability to diminish mode dropping on the stacked MNIST dataset, which is constructed by stacking three randomly sampled MNIST digits; as a consequence, stacked MNIST offers 1000 modes. Using the same architecture and training protocol as in Srivastava et al. (2017); Lin et al. (2017), we train a GAN on the constructed dataset and use a pre-trained classifier on 26,000 samples to count the number of modes in the samples, as well as to compute the KL divergence between the sample and expected data distributions. Our results in Table 1 demonstrate the effectiveness of MINE in preventing mode collapse on Stacked MNIST.

Stacked MNIST
Modes (Max 1000) KL
DCGAN
ALI
Unrolled GAN
VEEGAN
PacGAN
GAN+MINE (Ours)
Table 1: Number of captured modes and Kullback-Leibler divergence between the training and sample distributions for DCGAN (Radford et al., 2015), ALI (Dumoulin et al., 2016), Unrolled GAN (Metz et al., 2017), VEEGAN (Srivastava et al., 2017), PacGAN (Lin et al., 2017), and GAN+MINE (ours).
(a) Training set
(b) DCGAN
(c) DCGAN+MINE
Figure 5: Samples from the Stacked MNIST dataset along with generated samples from DCGAN and DCGAN with MINE. While DCGAN only shows a very limited number of modes, the inclusion of MINE generates a much better representative set of samples.

5.2 Maximizing mutual information to improve inference in bi-directional adversarial models

Adversarial bi-directional models were introduced in Adversarially Learned Inference (ALI, Dumoulin et al., 2016) and BiGAN (Donahue et al., 2016) and are an extension of GANs which incorporates a reverse model, jointly trained with the generator. These models formulate the problem in terms of the value function in Eqn. 16 between two joint distributions, $q(x, z)$ and $p(x, z)$, induced by the forward (encoder) and reverse (decoder) models, respectively (we switch to density notation for convenience throughout this section).

One goal of bi-directional models is to do inference as well as to learn a good generative model. Reconstructions are one desirable property of a model that does both inference and generation, but in practice ALI can lack fidelity (i.e., it reconstructs less faithfully than desired; see Li et al., 2017; Ulyanov et al., 2017; Belghazi et al., 2018). To demonstrate the connection to mutual information, it can be shown (see the Supplementary Material for details) that the expected reconstruction error, $\mathcal{R}$, is bounded by

$$\mathcal{R} \le D_{KL}(q(x,z) \,\|\, p(x,z)) - I_q(x, z) + H_q(z). \qquad (18)$$

If the joint distributions are matched, $H_q(z)$ tends to $H_p(z)$, which is fixed as long as the prior, $p(z)$, is itself fixed. Consequently, maximizing the mutual information minimizes the expected reconstruction error.

Assuming that the generator is the same as with GANs in the previous section, the objectives for training a bi-directional adversarial model then become:

(19)
Related works

Ulyanov et al. (2017) improve reconstruction quality by forgoing the discriminator and expressing the adversarial game between the encoder and decoder. Kumar et al. (2017) augment the bi-directional objective by considering the reconstructions and the corresponding encodings as an additional fake pair. Belghazi et al. (2018) show that a Markovian hierarchical generator in a bi-directional adversarial model provides a hierarchy of reconstructions with increasing levels of fidelity. Li et al. (2017) show that the expected reconstruction error can be diminished by minimizing the conditional entropy of the observables given the latent representations. The conditional entropy being intractable for general posteriors, Li et al. (2017) propose to augment the generator's loss with an adversarial cycle-consistency loss (Zhu et al., 2017) between the observables and their reconstructions.

Experiment: ALI+MINE

In this section we compare MINE to existing bi-directional adversarial models. As the decoder's density is generally intractable, we use three different metrics to measure the fidelity of the reconstructions with respect to the samples: the euclidean reconstruction error; the reconstruction accuracy, which is the proportion of labels preserved by the reconstruction as identified by a pre-trained classifier; and the multi-scale structural similarity metric (MS-SSIM, Wang et al., 2004) between the observables and their reconstructions.

We train MINE on datasets of increasing order of complexity: a toy dataset composed of 25 Gaussians, MNIST (LeCun, 1998), and the CelebA dataset (Liu et al., 2015). Fig. 6 shows the reconstruction ability of MINE compared to ALI. Although ALICE (Li et al., 2017) does near-perfect reconstruction (which is explicit in its formulation), we observe significant mode-dropping in its sample space. MINE does a balanced job of reconstructing while capturing all the modes of the underlying data distribution.

Next, we measure the fidelity of the reconstructions for ALI, ALICE, and MINE. Tbl. 2 compares MINE to the existing baselines in terms of euclidean reconstruction error, reconstruction accuracy, and MS-SSIM. On MNIST, MINE outperforms ALI in terms of reconstruction error by a good margin and is competitive with ALICE with respect to reconstruction accuracy and MS-SSIM. Our results show that MINE's effect on reconstructions is even more dramatic when compared to ALI and ALICE on the CelebA dataset.

(a) ALI (b) ALICE (explicit loss) (c) ALICE (adversarial loss) (d) ALI+MINE
Figure 6: Reconstructions and model samples from adversarially learned inference (ALI) and variations intended to improve reconstructions. Shown left to right are the baseline (ALI), ALICE with an explicit loss to minimize the reconstruction error, ALICE with an adversarial loss, and ALI+MINE. Top to bottom are the reconstructions and samples from the priors. ALICE with the adversarial loss has the best reconstruction, though at the expense of poor sample quality, whereas ALI+MINE captures all the modes of the data in sample space.
Model             Recons. Error   Recons. Acc. (%)   MS-SSIM
MNIST
ALI                    14.24           45.95           0.97
ALICE (explicit)        3.20           99.03           0.97
ALICE (Adv.)            5.20           98.17           0.98
MINE                    9.73           96.10           0.99
CelebA
ALI                    53.75           57.49           0.81
ALICE (explicit)        8.01           32.22           0.93
ALICE (Adv.)           92.56           48.95           0.51
MINE                   36.11           76.08           0.99
Table 2: Comparison of MINE with other bi-directional adversarial models in terms of euclidean reconstruction error, reconstruction accuracy, and MS-SSIM on the MNIST and CelebA datasets. MINE does a good job compared to ALI in terms of reconstructions. Though the explicit reconstruction-based baselines (ALICE) can sometimes do better than MINE on reconstruction-related tasks, they consistently lag behind in MS-SSIM scores and reconstruction accuracy on CelebA.

5.3 Information Bottleneck

The Information Bottleneck (IB, Tishby et al., 2000) is an information-theoretic method for extracting the relevant information, or yielding a representation, that an input $X$ contains about an output $Y$. An optimal representation of $X$ would capture the relevant factors and compress $X$ by diminishing the irrelevant parts which do not contribute to the prediction of $Y$. IB was recently covered in the context of deep learning (Tishby & Zaslavsky, 2015), and as such can be seen as a process to construct an approximation of the minimally sufficient statistics of the data. IB seeks an encoder, $q(Z \mid X)$, that induces the Markovian structure $X \to Z \to Y$. This is done by minimizing the IB Lagrangian,

$$\mathcal{L}[q(Z \mid X)] = H(Y \mid Z) + \beta\, I(X; Z), \qquad (20)$$

which appears as a standard cross-entropy loss augmented with a regularizer promoting minimality of the representation (Achille & Soatto, 2017). Here we propose to estimate the regularizer $I(X;Z)$ with MINE.
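Concretely, and as a sketch only (our illustration; the encoder form, the weight beta, and the noise level are placeholder choices), the resulting training loss pairs a standard cross-entropy term with a MINE estimate of $I(X;Z)$ computed on the encoder output:

    import torch
    import torch.nn.functional as F

    def ib_mine_loss(encoder, classifier, T, x, y, beta=1e-3, noise_std=0.3):
        """IB Lagrangian: cross-entropy (≈ H(Y|Z)) + beta * I(X;Z), with I(X;Z) estimated by MINE.
        An additive-noise encoder z = enc(x) + noise is used here purely for illustration."""
        h = encoder(x)
        z = h + noise_std * torch.randn_like(h)

        ce = F.cross_entropy(classifier(z), y)

        z_marg = z[torch.randperm(z.size(0))]          # shuffle to approximate the product of marginals
        mi = T(x, z).mean() - torch.log(torch.exp(T(x, z_marg)).mean())

        # Minimized w.r.t. the encoder/classifier; the statistics network T is trained
        # on a separate step to maximize the same MI estimate.
        return ce + beta * mi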

Related works

In the discrete setting, Tishby et al. (2000) use the Blahut-Arimoto algorithm (Arimoto, 1972), which can be understood as cyclical coordinate ascent in function spaces. While IB is successful and popular in the discrete setting, its application to the continuous setting was stifled by the intractability of the continuous mutual information. Nonetheless, IB was applied to jointly Gaussian random variables in Chechik et al. (2005).

In order to overcome the intractability of the mutual information in the continuous setting, Alemi et al. (2016); Kolchinsky et al. (2017); Chalk et al. (2016) exploit the variational bound of Barber & Agakov (2003) to approximate the conditional entropy in Eqn. 20. These approaches differ only in their treatment of the marginal distribution of the bottleneck variable: Alemi et al. (2016) assume a standard multivariate normal marginal distribution, Chalk et al. (2016) use a Student-t distribution, and Kolchinsky et al. (2017) use non-parametric estimators. Due to their reliance on a variational approximation, these methods require a tractable density for the approximate posterior, while MINE does not.

Experiment: Permutation-invariant MNIST classification

Here, we demonstrate an implementation of the IB objective on permutation-invariant MNIST using MINE. We compare to the Deep Variational Bottleneck (DVB, Alemi et al., 2016) and use the same empirical setup. As the DVB relies on a variational bound on the conditional entropy, it requires a tractable density; Alemi et al. (2016) opt for a conditional Gaussian encoder. As MINE does not require a tractable density, we consider three types of encoders: a Gaussian encoder as in Alemi et al. (2016), an additive noise encoder, and a propagated noise encoder. Our results can be seen in Tbl. 3 and show MINE as being superior in these settings.

Model Misclass. rate(%)
Baseline 1.38%
Dropout 1.34%
Confidence penalty 1.36%
Label Smoothing 1.40%
DVB 1.13%
DVB + Additive noise 1.06%
MINE(Gaussian) (ours) 1.11%
MINE(Propagated) (ours) 1.10%
MINE(Additive) (ours) 1.01%
Table 3: Permutation-invariant MNIST misclassification rate using the experimental setup of Alemi et al. (2016), for regularization by confidence penalty (Pereyra et al., 2017), label smoothing (Pereyra et al., 2017), the Deep Variational Bottleneck (DVB, Alemi et al., 2016), and MINE. The misclassification rate is averaged over ten runs. In order to control for the regularizing impact of the additive Gaussian noise in the additive conditional, we also report the results for DVB with additional additive Gaussian noise at the input. All non-MINE results are taken from Alemi et al. (2016).

6 Conclusion

We proposed a mutual information estimator, which we called the mutual information neural estimator (MINE), that is scalable in dimension and sample size. We demonstrated the efficiency of this estimator by applying it in a number of settings. First, a mutual information term can be introduced to alleviate the mode-dropping issue in generative adversarial networks (GANs, Goodfellow et al., 2014). Mutual information can also be used to improve inference and reconstructions in adversarially learned inference (ALI, Dumoulin et al., 2016). Finally, we showed that our estimator allows for tractable application of the information bottleneck method (Tishby et al., 2000) in a continuous setting.

7 Acknowledgements

We would like to thank Martin Arjovsky, Caglar Gulcehre, Marcin Moczulski, Negar Rostamzadeh, Thomas Boquet, Ioannis Mitliagkas, Pedro Oliveira Pinheiro for helpful comments, as well as Samsung and IVADO for their support.

References

8 Appendix

In this Appendix, we provide additional experiment details and spell out the proofs omitted in the text.

8.1 Experimental Details

8.1.1 Adaptive Clipping

Here we assume we are in the context of GANs described in Sections 5.1 and 5.2, where the mutual information shows up as a regularizer in the generator objective.

Notice that the generator is updated by two gradients. The first gradient is that of the generator's loss with respect to the generator's parameters $\theta_G$, $g_d := \nabla_{\theta_G}\mathcal{L}_G$. The second flows from the mutual information estimate to the generator, $g_m := \nabla_{\theta_G} I_\Theta$. If left unchecked, because mutual information is unbounded, the latter can overwhelm the former, leading to a failure mode of the algorithm where the generator puts all of its attention on maximizing the mutual information and ignores the adversarial game with the discriminator. We propose to adaptively clip the gradient from the mutual information so that its Frobenius norm is at most that of the gradient from the discriminator. Defining $g_a$ to be the adapted gradient flowing from the statistics network to the generator, we have

$$g_a = \min\!\left(\|g_d\|, \|g_m\|\right) \frac{g_m}{\|g_m\|}. \qquad (21)$$

Note that adaptive clipping can be considered in any situation where MINE is to be maximized.
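One possible PyTorch rendering of this rule (our sketch; the helper and its arguments are ours): compute the two gradients separately, then rescale the mutual-information gradient so that its norm never exceeds that of the adversarial gradient, per Eqn. 21.

    import torch

    def adapted_generator_gradients(gen_params, adv_loss, mi_estimate):
        """Return per-parameter gradients g_d + g_a, with g_a = min(||g_d||, ||g_m||) * g_m / ||g_m||."""
        gen_params = list(gen_params)
        g_d = torch.autograd.grad(adv_loss, gen_params, retain_graph=True)
        g_m = torch.autograd.grad(-mi_estimate, gen_params, retain_graph=True)

        norm = lambda grads: torch.sqrt(sum((g ** 2).sum() for g in grads))
        scale = torch.clamp(norm(g_d) / (norm(g_m) + 1e-12), max=1.0)
        return [gd + scale * gm for gd, gm in zip(g_d, g_m)]

    # usage: for p, g in zip(generator.parameters(), adapted_generator_gradients(...)): p.grad = g
    # followed by optimizer.step()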

8.1.2 GAN+MINE: Spiral and 25-gaussians

In this section we state the details of the experiments supporting the mode-dropping results on the spiral and 25-Gaussians datasets. For both datasets we use 100,000 examples sampled from the target distributions, with a fixed standard deviation for the 25 Gaussians and additive noise for the spiral. The generator for the GAN consists of two fully connected layers with batch-normalization (Ioffe & Szegedy, 2015) and Leaky-ReLU activations, as in Dumoulin et al. (2016). The discriminator and statistics networks have three fully connected layers each. We use the Adam optimizer (Kingma & Ba, 2014). Both the GAN baseline and GAN+MINE were trained for the same number of iterations with the same mini-batch size.

8.1.3 GAN+MINE: Stacked-MNIST

Here we describe the experimental setup and architectural details of the stacked-MNIST task with GAN+MINE. We follow exactly the experimental setup reported in PacGAN (Lin et al., 2017) and VEEGAN (Srivastava et al., 2017). We use a pre-trained classifier to classify generated samples on each of the three stacked channels. Evaluation is done on 26,000 test samples, as in the baselines. We train GAN+MINE for 50 epochs. Details of the generator and discriminator networks are given below in Table 4 and Table 5. The statistics network has the same architecture as the discriminator in DCGAN, with ELU (Clevert et al., 2015) as the activation function for the individual layers and without batch-normalization, as highlighted in Table 6. In order to condition the statistics network on the code variable, we use linear MLPs at each layer, whose outputs are reshaped to the number of feature maps and added as a dynamic bias.
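A possible reading of this conditioning scheme in PyTorch (our sketch; layer sizes and names are placeholders): a linear map turns the code into one bias per feature map, which is broadcast over the spatial dimensions and added to the convolutional activations.

    import torch
    import torch.nn as nn

    class DynamicBiasConv(nn.Module):
        """Convolution whose per-feature-map bias is predicted from the conditioning code (sketch)."""
        def __init__(self, in_ch, out_ch, code_dim, **conv_kwargs):
            super().__init__()
            self.conv = nn.Conv2d(in_ch, out_ch, bias=False, **conv_kwargs)
            self.code_to_bias = nn.Linear(code_dim, out_ch)   # one bias per feature map

        def forward(self, x, c):
            h = self.conv(x)
            bias = self.code_to_bias(c).view(c.size(0), -1, 1, 1)   # reshape and broadcast
            return h + bias

    # usage: layer = DynamicBiasConv(3, 16, code_dim=10, kernel_size=4, stride=2, padding=1)
    #        out = layer(images, codes)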

Generator
Layer Number of outputs Kernel size Stride Activation function
Input 100
Fully-connected 2*2*512 ReLU
Transposed convolution 4*4*256 2 ReLU
Transposed convolution 7*7*128 2 ReLU
Transposed convolution 14*14*64 2 ReLU
Transposed convolution 28*28*3 2 Tanh
Table 4: Generator network for Stacked-MNIST experiment using GAN+MINE.
Discriminator
Layer Number of outputs Kernel size Stride Activation function
Input
Convolution 14*14*64 2 ReLU
Convolution 7*7*128 2 ReLU
Convolution 4*4*256 2 ReLU
Convolution 2*2*512 2 ReLU
Fully-connected 1 1 Valid Sigmoid
Table 5: Discriminator network for Stacked-MNIST experiment.
Statistics Network
Layer number of outputs kernel size stride activation function
Input
Convolution 14*14*16 2 ELU
Convolution 7*7*32 2 ELU
Convolution 4*4*64 2 ELU
Flatten - - - -
Fully-Connected 1024 1 Valid None
Fully-Connected 1 1 Valid None
Table 6: Statistics network for Stacked-MNIST experiment.

8.1.4 ALI+MINE: MNIST and CelebA

In this section we state the details of the experimental setup and the network architectures used for the task of improving reconstructions and representations in bi-directional adversarial models with MINE. The generator and discriminator network architectures, along with the hyperparameter setup used in these tasks, are similar to the ones used in DCGAN (Radford et al., 2015).

Statistics network conditioning on the latent code was done as in the Stacked-MNIST experiments. We used Adam as the optimizer with a learning rate of 0.0001 and trained with the same mini-batch size on both CelebA and MNIST.

Encoder
Layer Number of outputs Kernel size Stride Activation function
Input 28*28*129
Convolution 14*14*64 2 ReLU
Convolution 7*7*128 2 ReLU
Convolution 4*4*256 2 ReLU
Convolution 256 Valid ReLU
Fully-connected 128 - - None
Table 7: Encoder network for bi-directional models on MNIST.
Decoder
Layer Number of outputs Kernel size Stride Activation function
Input 128
Fully-connected 4*4*256 ReLU
Transposed convolution 7*7*128 2 ReLU
Transposed convolution 14*14*64 2 ReLU
Transposed convolution 28*28*1 2 Tanh
Table 8: Decoder network for bi-directional models on MNIST.
Discriminator
Layer Number of outputs Kernel size Stride Activation function
Input
Convolution 14*14*64 2 LeakyReLU
Convolution 7*7*128 2 LeakyReLU
Convolution 4*4*256 2 LeakyReLU
Flatten - - -
Concatenate - - -
Fully-connected 1024 - - LeakyReLU
Fully-connected 1 - - Sigmoid
Table 9: Discriminator network for bi-directional model experiments using MINE on MNIST.
Statistics Network
Layer number of outputs kernel size stride activation function
Input
Convolution 14*14*64 2 LeakyReLU
Convolution 7*7*128 2 LeakyReLU
Convolution 4*4*256 2 LeakyReLU
Flatten - - - -
Fully-connected 1 - - None
Table 10: Statistics network for bi-directional models using MINE on MNIST.
Encoder
Layer Number of outputs Kernel size Stride Activation function
Input 64*64*259
Convolution 32*32*64 2 ReLU
Convolution 16*16*128 2 ReLU
Convolution 8*8*256 2 ReLU
Convolution 4*4*512 2 ReLU
Convolution 512 Valid ReLU
Fully-connected 256 - - None
Table 11: Encoder network for bi-directional models on CelebA.
Decoder
Layer Number of outputs Kernel size Stride Activation function
Input 256
Fully-Connected 4*4*512 - - ReLU
Transposed convolution 8*8*256 2 ReLU
Transposed convolution 16*16*128 2 ReLU
Transposed convolution 32*32*64 2 ReLU
Transposed convolution 64*64*3 2 Tanh
Table 12: Decoder network for bi-directional model(ALI, ALICE) experiments using MINE on CelebA.
Discriminator
Layer Number of outputs Kernel size Stride Activation function
Input
Convolution 32*32*64 2 LeakyReLU
Convolution 16*16*128 2 LeakyReLU
Convolution 8*8*256 2 LeakyReLU
Convolution 4*4*512 2 LeakyReLU
Flatten - - -
Concatenate - - -
Fully-connected 1024 - - LeakyReLU
Fully-connected 1 - - Sigmoid
Table 13: Discriminator network for bi-directional models on CelebA.
Statistics Network
Layer number of outputs kernel size stride activation function
Input
Convolution 32*32*16 2 ELU
Convolution 16*16*32 2 ELU
Convolution 8*8*64 2 ELU
Convolution 4*4*128 2 ELU
Flatten - - - -
Fully-connected 1 - - None
Table 14: Statistics network for bi-directional models on CelebA.

8.1.5 Information bottleneck with MINE

In this section we outline the network details and hyper-parameters used for the information bottleneck task using MINE. To keep the comparison fair, all hyperparameters and architectures are those outlined in Alemi et al. (2016). The statistics network, a two-layer MLP with additive Gaussian noise at each layer and 512 ELU (Clevert et al., 2015) activations, is outlined in Table 15.

Statistics Network
Layer number of outputs activation function
Input
Gaussian noise(std=0.3) - -
dense layer 512 ELU
Gaussian noise(std=0.5) - -
dense layer 512 ELU
Gaussian noise(std=0.5) - -
dense layer 1 None
Table 15: Statistics network for Information-bottleneck experiments on MNIST.

8.2 Proofs

8.2.1 Donsker-Varadhan Representation

Theorem 4 (Theorem 1 restated).

The KL divergence admits the following dual representation:

$$D_{KL}(\mathbb{P} \,\|\, \mathbb{Q}) = \sup_{T:\Omega\to\mathbb{R}} \mathbb{E}_{\mathbb{P}}[T] - \log\!\left(\mathbb{E}_{\mathbb{Q}}\!\left[e^{T}\right]\right), \qquad (22)$$

where the supremum is taken over all functions $T$ such that the two expectations are finite.

Proof.

A simple proof goes as follows. For a given function $T$, consider the Gibbs distribution $\mathbb{G}$ defined by $d\mathbb{G} = \frac{1}{Z}\, e^{T}\, d\mathbb{Q}$, where $Z = \mathbb{E}_{\mathbb{Q}}[e^{T}]$. By construction,

$$\mathbb{E}_{\mathbb{P}}[T] - \log Z = \mathbb{E}_{\mathbb{P}}\!\left[\log\frac{d\mathbb{G}}{d\mathbb{Q}}\right]. \qquad (23)$$

Let $\Delta$ be the gap,

$$\Delta := D_{KL}(\mathbb{P} \,\|\, \mathbb{Q}) - \left(\mathbb{E}_{\mathbb{P}}[T] - \log Z\right). \qquad (24)$$

Using Eqn. 23, we can write $\Delta$ as a KL-divergence:

$$\Delta = \mathbb{E}_{\mathbb{P}}\!\left[\log\frac{d\mathbb{P}}{d\mathbb{Q}} - \log\frac{d\mathbb{G}}{d\mathbb{Q}}\right] = \mathbb{E}_{\mathbb{P}}\!\left[\log\frac{d\mathbb{P}}{d\mathbb{G}}\right] = D_{KL}(\mathbb{P} \,\|\, \mathbb{G}). \qquad (25)$$

The positivity of the KL-divergence gives $\Delta \ge 0$. We have thus shown that for any $T$,

$$D_{KL}(\mathbb{P} \,\|\, \mathbb{Q}) \ge \mathbb{E}_{\mathbb{P}}[T] - \log\mathbb{E}_{\mathbb{Q}}\!\left[e^{T}\right], \qquad (26)$$

and the inequality is preserved upon taking the supremum over the right-hand side. Finally, the identity (25) also shows that the bound is tight whenever $\mathbb{G} = \mathbb{P}$, namely for optimal functions $T^*$ taking the form $T^* = \log\frac{d\mathbb{P}}{d\mathbb{Q}} + C$ for some constant $C \in \mathbb{R}$. ∎

8.2.2 Consistency Proofs

This section presents the proofs of the lemmas and consistency theorem stated in Section 3.3.1.

In what follows, we assume that the input space $\Omega = \mathcal{X}\times\mathcal{Z}$ is a compact domain of $\mathbb{R}^d$, and all measures are absolutely continuous with respect to the Lebesgue measure. We will restrict to families of feedforward functions with continuous activations, with a single output neuron, so that a given architecture defines a continuous mapping $(\omega, \theta) \mapsto T_\theta(\omega)$ from $\Omega\times\Theta$ to $\mathbb{R}$.

To avoid unnecessarily heavy notation, we write $\mathbb{P}$ and $\mathbb{Q}$ for the joint distribution $\mathbb{P}_{XZ}$ and the product of marginals $\mathbb{P}_X\otimes\mathbb{P}_Z$, and $\mathbb{P}_n$, $\mathbb{Q}_n$ for their empirical versions. We will use the notation $\mathcal{I}(T)$ for the quantity

$$\mathcal{I}(T) = \mathbb{E}_{\mathbb{P}}[T] - \log\mathbb{E}_{\mathbb{Q}}\!\left[e^{T}\right], \qquad (27)$$

so that $I_\Theta(X,Z) = \sup_{\theta\in\Theta}\mathcal{I}(T_\theta)$.

Lemma 3 (Lemma 1 restated).

Let $\epsilon > 0$. There exists a family of neural network functions $T_\theta$ with parameters $\theta$ in some compact domain $\Theta \subset \mathbb{R}^k$, such that

(28)

where

(29)
Proof.

Let $T^* = \log\frac{d\mathbb{P}}{d\mathbb{Q}}$. By construction, $T^*$ satisfies:

(30)

For a function , the (positive) gap can be written as

(31)

where we used the inequality .

Fix . We first consider the case where is bounded from above by a constant . By the universal approximation theorem (see corollary 2.2 of Hornik (1989)888Specifically, the argument relies on the density of feedforward network functions in the space of integrable functions with respect the measure .), we may choose a feedforward network function such that

(32)

Since is Lipschitz continuous with constant on , we have

(33)

From Eqn. 31 and the triangular inequality, we then obtain:

(34)

In the general case, the idea is to partition in two subset and for a suitably chosen large value of . For a given subset , we will denote by

its characteristic function,

if and otherwise. is integrable with respect to 999This can be seen from the identity (Györfi & van der Meulen, 1987)
, and is integrable with respect to , so by the dominated convergence theorem, we may choose so that the expectations and are lower than . Just like above, we then use the universal approximation theorem to find a feed forward network function , which we can assume without loss of generality to be upper-bounded by , such that

(35)

We then write

(36)
(37)

where the inequality in the second line arises from the convexity and positivity of . Eqns. 35 and 36, together with the triangular inequality, lead to Eqn.  34, which proves the Lemma.

Lemma 4 (Lemma 2 restated).

Let $\epsilon > 0$. Given a family of neural network functions $T_\theta$ with parameters $\theta$ in some compact domain $\Theta \subset \mathbb{R}^k$, there exists $N \in \mathbb{N}$ such that

(38)
Proof.

We start by using the triangular inequality to write,

(39)

The continuous function $(\omega, \theta) \mapsto T_\theta(\omega)$, defined on the compact domain $\Omega\times\Theta$, is bounded. So the functions $T_\theta$ are uniformly bounded by a constant $M$, i.e., $|T_\theta| \le M$ for all $\theta \in \Theta$. Since $\log$ is Lipschitz continuous with constant $e^{M}$ in the interval $[e^{-M}, e^{M}]$, we have

(40)

Since $\Theta$ is compact and the feedforward network functions are continuous, the families of functions $T_\theta$ and $e^{T_\theta}$ satisfy the uniform law of large numbers (Van de Geer, 2000). Given $\epsilon > 0$, we can thus choose $N$ such that, for all $n \ge N$ and with probability one,

(41)

Together with Eqns. 39 and 40, this leads to

(42)

Theorem 5 (Theorem 2 restated).

MINE is strongly consistent.

Proof.

Let $\epsilon > 0$. We apply the two lemmas to find a family of neural network functions $T_\theta$ and an integer $N$ such that (28) and (38) hold with accuracy $\epsilon/2$. By the triangular inequality, for all $n \ge N$ and with probability one, we have:

(43)

which proves consistency. ∎

8.2.3 Sample complexity proof

Theorem 6 (Theorem 3 restated).

Assume that the functions $T_\theta$ in $\mathcal{F}$ are $M$-bounded (i.e., $|T_\theta| \le M$) and $L$-Lipschitz with respect to the parameters $\theta$. The domain $\Theta \subset \mathbb{R}^d$ is bounded, so that $\|\theta\| \le K$ for some constant $K$. Given any values $\epsilon, \delta$ of the desired accuracy and confidence parameters, we have

$$\Pr\!\left(\left|\widehat{I(X;Z)}_n - I_\Theta(X,Z)\right| \le \epsilon\right) \ge 1 - \delta, \qquad (44)$$

whenever the number of samples satisfies

(45)
Proof.

The assumptions of Lemma 2 apply, so let us begin with Eqns. 39 and 40. By the Hoeffding inequality, for all functions $T_\theta$,

(46)

To extend this inequality to a uniform inequality over all functions $T_\theta$, $\theta \in \Theta$, the standard technique is to choose a minimal cover of the domain $\Theta$ by a finite set of small balls of radius $\eta$, and to use the union bound. The minimal cardinality of such a covering is bounded by the covering number of $\Theta$, known to satisfy (Shalev-Shwartz & Ben-David, 2014)

(47)

Successively applying a union bound in Eqn 46 with the set of functions and gives

(48)

and