1 Introduction
One core objective of deep learning is to discover useful representations, and the simple idea explored here is to train a representation-learning function, i.e. an encoder, to maximize the mutual information (MI) between its inputs and outputs. MI is notoriously difficult to compute, particularly in continuous and high-dimensional settings. Fortunately, recent advances enable effective computation of MI between high-dimensional input/output pairs of deep neural networks (Belghazi et al., 2018). We leverage MI estimation for representation learning and show that, depending on the downstream task, maximizing MI between the complete input and the encoder output (i.e., global MI) is often insufficient for learning useful representations. Rather, structure matters: maximizing the average MI between the representation and local regions of the input (e.g., patches rather than the complete image) can greatly improve the representation's quality for, e.g., classification tasks, while global MI plays a stronger role in the ability to reconstruct the full input given the representation. Usefulness of a representation is not just a matter of information content: representational characteristics like independence also play an important role (Gretton et al., 2012; Hyvärinen & Oja, 2000; Hinton, 2002; Schmidhuber, 1992; Bengio et al., 2013; Thomas et al., 2017). We combine MI maximization with prior matching in a manner similar to adversarial autoencoders (AAE, Makhzani et al., 2015) to constrain representations according to desired statistical properties. This approach is closely related to the infomax optimization principle (Linsker, 1988; Bell & Sejnowski, 1995), so we call our method Deep InfoMax (DIM). Our main contributions are the following:
We formalize Deep InfoMax (DIM), which simultaneously estimates and maximizes the mutual information between input data and learned high-level representations.

Our mutual information maximization procedure can prioritize global or local information, which we show can be used to tune the suitability of learned representations for classification or reconstruction-style tasks.

We use adversarial learning (à la Makhzani et al., 2015) to constrain the representation to have desired statistical characteristics specific to a prior.

We introduce two new measures of representation quality, one based on Mutual Information Neural Estimation (MINE, Belghazi et al., 2018) and a neural dependency measure (NDM) based on the work by Brakel & Bengio (2017), and we use these to bolster our comparison of DIM to different unsupervised methods.
2 Related Work
There are many popular methods for learning representations. Classic methods, such as independent component analysis (ICA, Bell & Sejnowski, 1995) and self-organizing maps (Kohonen, 1998), generally lack the representational capacity of deep neural networks. More recent approaches include deep volume-preserving maps (Dinh et al., 2014; 2016), deep clustering (Xie et al., 2016; Chang et al., 2017), noise as targets (NAT, Bojanowski & Joulin, 2017), and self-supervised or co-learning (Doersch & Zisserman, 2017; Dosovitskiy et al., 2016; Sajjadi et al., 2016). Generative models are also commonly used for building representations (Vincent et al., 2010; Kingma et al., 2014; Salimans et al., 2016; Rezende et al., 2016; Donahue et al., 2016), and mutual information (MI) plays an important role in the quality of the representations they learn. In generative models that rely on reconstruction (e.g., denoising, variational, and adversarial autoencoders, Vincent et al., 2008; Rifai et al., 2012; Kingma & Welling, 2013; Makhzani et al., 2015), the reconstruction error can be related to the MI as follows:

$\mathcal{I}_e(X, Y) = \mathcal{H}_e(X) - \mathcal{H}_e(X \mid Y) \geq \mathcal{H}_e(X) - \mathcal{R}_e(X \mid Y), \quad (1)$
where $X$ and $Y$ denote the input and output of an encoder which is applied to inputs sampled from some source distribution. $\mathcal{R}_e(X \mid Y)$ denotes the expected reconstruction error of $X$ given the codes $Y$. $\mathcal{H}_e(X)$ and $\mathcal{H}_e(X \mid Y)$ denote the marginal and conditional entropy of $X$ in the distribution formed by applying the encoder to inputs sampled from the source distribution. Thus, in typical settings, models with reconstruction-type objectives provide some guarantees on the amount of information encoded in their intermediate representations. Similar guarantees exist for bi-directional adversarial models (Dumoulin et al., 2016; Donahue et al., 2016)
, which adversarially train an encoder / decoder to match their respective joint distributions or to minimize the reconstruction error (Chen et al., 2016).

Mutual information estimation
Methods based on mutual information have a long history in unsupervised feature learning. The infomax principle (Linsker, 1988; Bell & Sejnowski, 1995), as prescribed for neural networks, advocates maximizing MI between the input and output. This is the basis of numerous ICA algorithms, which can be nonlinear (Hyvärinen & Pajunen, 1999; Almeida, 2003) but are often hard to adapt for use with deep networks. Mutual Information Neural Estimation (MINE, Belghazi et al., 2018) learns an estimate of the MI of continuous variables, is strongly consistent, and can be used to learn better implicit bidirectional generative models. Deep InfoMax (DIM) follows MINE in this regard, though we find that the generator is unnecessary. We also find it unnecessary to use the exact KL-based formulation of MI. For example, a simple alternative based on the Jensen-Shannon divergence (JSD) is more stable and provides better results. We will show that DIM can work with various MI estimators. Most significantly, DIM can leverage local structure in the input to improve the suitability of representations for classification. Leveraging known structure in the input when designing objectives based on MI maximization is nothing new (Becker, 1992; 1996; Wiskott & Sejnowski, 2002), and some very recent works also follow this intuition. It has been shown in the case of discrete MI that data augmentations and other transformations can be used to avoid degenerate solutions (Hu et al., 2017). Unsupervised clustering and segmentation is attainable by maximizing the MI between images associated by transforms or spatial proximity (Ji et al., 2018). Our work investigates the suitability of representations learned across two different MI objectives that focus on local or global structure, a flexibility we believe is necessary for training representations intended for different applications.
Proposed independently of DIM, Contrastive Predictive Coding (CPC, Oord et al., 2018) is a MI-based approach that, like DIM, maximizes MI between global and local representation pairs. CPC shares some motivations and computations with DIM, but there are important ways in which CPC and DIM differ. CPC processes local features sequentially to build partial "summary features", which are used to make predictions about specific local features in the "future" of each summary feature. This equates to ordered autoregression over the local features, and requires training separate estimators for each temporal offset at which one would like to predict the future. In contrast, the basic version of DIM uses a single summary feature that is a function of all local features, and this "global" feature predicts all local features simultaneously in a single step using a single estimator. Note that, when using occlusions during training (see Section 4.3 for details), DIM performs both "self" predictions and orderless autoregression.
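The structural difference described above can be made concrete by enumerating the prediction tasks each method sets up. The following sketch is our own illustration (the function names are hypothetical, not from either paper): CPC-style ordered autoregression pairs each row-summary with each "future" row, while basic DIM pairs one global summary with every row at once.

```python
def cpc_prediction_pairs(n_rows):
    """CPC-style ordered autoregression: a summary of rows [0..r]
    predicts each later row r + k, one estimator per offset k."""
    return [(r, r + k) for r in range(n_rows) for k in range(1, n_rows - r)]

def dim_prediction_pairs(n_rows):
    """Basic DIM: a single global summary of all rows predicts every
    row simultaneously with a single estimator."""
    return [('global', r) for r in range(n_rows)]
```

For a feature map with 4 rows, CPC sets up 3 + 2 + 1 = 6 ordered tasks, whereas DIM sets up one task per row; this is the "separate estimators per offset" versus "single estimator" contrast discussed above.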
3 Deep InfoMax
Here we outline the general setting of training an encoder to maximize mutual information between its input and output. Let $\mathcal{X}$ and $\mathcal{Y}$ be the domain and range of a continuous and (almost everywhere) differentiable parametric function, $E_\psi : \mathcal{X} \to \mathcal{Y}$, with parameters $\psi$ (e.g., a neural network). These parameters define a family of encoders, $\mathcal{E}_\Psi = \{E_\psi\}_{\psi \in \Psi}$, over $\Psi$. Assume that we are given a set of training examples on an input space, $\mathcal{X}$: $X := \{x^{(i)} \in \mathcal{X}\}_{i=1}^{N}$, with empirical probability distribution $\mathbb{P}$. We define $\mathbb{U}_{\psi,\mathbb{P}}$ to be the marginal distribution induced by pushing samples from $\mathbb{P}$ through $E_\psi$. I.e., $\mathbb{U}_{\psi,\mathbb{P}}$ is the distribution over encodings produced by sampling observations $x \sim \mathbb{P}$ and computing $y = E_\psi(x)$. An example encoder for image data is given in Figure 2, which will be used in the following sections, but this approach can easily be adapted for temporal data. Similar to the infomax optimization principle (Linsker, 1988), we assert our encoder should be trained according to the following criteria:
Mutual information maximization: Find the set of parameters, $\psi$, such that the mutual information, $\mathcal{I}(X; E_\psi(X))$, is maximized. Depending on the end-goal, this maximization can be done over the complete input, $X$, or some structured or "local" subset.

Statistical constraints: Depending on the end-goal for the representation, the marginal $\mathbb{U}_{\psi,\mathbb{P}}$ should match a prior distribution, $\mathbb{V}$. Roughly speaking, this can be used to encourage the output of the encoder to have desired characteristics (e.g., independence).
We call the formulation of these two objectives, covered below, Deep InfoMax (DIM).
3.1 Mutual information estimation and maximization
Our basic mutual information maximization framework is presented in Figure 2. The approach follows Mutual Information Neural Estimation (MINE, Belghazi et al., 2018), which estimates mutual information by training a classifier to distinguish between samples coming from the joint, $\mathbb{J}$, and the product of marginals, $\mathbb{M}$, of random variables $X$ and $Y$. MINE uses a lower bound on the MI based on the Donsker-Varadhan representation (DV, Donsker & Varadhan, 1983) of the KL-divergence,

$\mathcal{I}(X; Y) := \mathcal{D}_{KL}(\mathbb{J} \,\|\, \mathbb{M}) \geq \widehat{\mathcal{I}}^{(DV)}_{\omega}(X; Y) := \mathbb{E}_{\mathbb{J}}[T_\omega(x, y)] - \log \mathbb{E}_{\mathbb{M}}[e^{T_\omega(x, y)}], \quad (2)$
where $T_\omega : \mathcal{X} \times \mathcal{Y} \to \mathbb{R}$ is a discriminator function modeled by a neural network with parameters $\omega$. At a high level, we optimize $E_\psi$ by simultaneously estimating and maximizing $\mathcal{I}(X; E_\psi(X))$,

$(\hat{\omega}, \hat{\psi})_G = \operatorname*{arg\,max}_{\omega, \psi} \widehat{\mathcal{I}}_{\omega}(X; E_\psi(X)), \quad (3)$

where the subscript $G$ denotes "global" for reasons that will be clear later. However, there are some important differences that distinguish our approach from MINE. First, because the encoder and mutual information estimator are optimizing the same objective and require similar computations, we share layers between these functions, so that $E_\psi = f_\psi \circ C_\psi$ and $T_{\psi,\omega} = D_\omega \circ g \circ (C_\psi, E_\psi)$ (here we slightly abuse the notation and use $\psi$ for the parameters of both parts of $E_\psi$), where $g$ is a function that combines the encoder output with the lower layer. Second, as we are primarily interested in maximizing MI, and not concerned with its precise value, we can rely on non-KL divergences which may offer favourable trade-offs. For example, one could define a Jensen-Shannon MI estimator (following the formulation of Nowozin et al., 2016),
$\widehat{\mathcal{I}}^{(JSD)}_{\omega,\psi}(X; E_\psi(X)) := \mathbb{E}_{\mathbb{P}}[-\mathrm{sp}(-T_{\psi,\omega}(x, E_\psi(x)))] - \mathbb{E}_{\mathbb{P} \times \tilde{\mathbb{P}}}[\mathrm{sp}(T_{\psi,\omega}(x', E_\psi(x)))], \quad (4)$
where $x$ is an input sample, $x'$ is an input sampled from $\tilde{\mathbb{P}} = \mathbb{P}$, and $\mathrm{sp}(z) = \log(1 + e^{z})$ is the softplus function. A similar estimator appeared in Brakel & Bengio (2017) in the context of minimizing the total correlation, and it amounts to the familiar binary cross-entropy. This is well-understood in terms of neural network optimization and we find it works better in practice (e.g., it is more stable) than the DV-based objective (e.g., see App. A.3). Intuitively, the Jensen-Shannon-based estimator should behave similarly to the DV-based estimator in Eq. 2, since both act like classifiers whose objectives maximize the expected ratio of the joint over the product of marginals. We show in App. A.1 the relationship between the JSD estimator and the formal definition of mutual information.

Noise-Contrastive Estimation (NCE, Gutmann & Hyvärinen, 2010; 2012) was first used as a bound on MI in Oord et al. (2018), where it is called "infoNCE", and this loss can also be used with DIM by maximizing:

$\widehat{\mathcal{I}}^{(infoNCE)}_{\omega,\psi}(X; E_\psi(X)) := \mathbb{E}_{\mathbb{P}}\!\left[ T_{\psi,\omega}(x, E_\psi(x)) - \mathbb{E}_{\tilde{\mathbb{P}}}\!\left[ \log \textstyle\sum_{x'} e^{T_{\psi,\omega}(x', E_\psi(x))} \right] \right]. \quad (5)$
For DIM, a key difference between the DV, JSD, and infoNCE formulations is whether an expectation over the negative samples appears inside or outside of a logarithm. In fact, the JSD-based objective mirrors the original NCE formulation in Gutmann & Hyvärinen (2010), which phrased unnormalized density estimation as binary classification between the data distribution and a noise distribution. DIM sets the noise distribution to the product of marginals over $X$ and $Y$, and the data distribution to the true joint. The infoNCE formulation in Eq. 5 follows a softmax-based version of NCE (Jozefowicz et al., 2016), similar to ones used in the language modeling community (Mnih & Kavukcuoglu, 2013; Mikolov et al., 2013), and which has strong connections to the binary cross-entropy in the context of noise-contrastive learning (Ma & Collins, 2018). In practice, implementations of these estimators appear quite similar and can reuse most of the same code. We investigate JSD and infoNCE in our experiments, and find that using infoNCE often outperforms JSD on downstream tasks, though this effect diminishes with more challenging data. However, as we show in the App. (A.3), infoNCE and DV require a large number of negative samples (samples from $\tilde{\mathbb{P}}$) to be competitive. We generate negative samples using all combinations of global and local features at all locations of the relevant feature map, across all images in a batch, so the number of negative samples per positive example grows with the batch size and the number of feature-map locations, which quickly becomes cumbersome. We found that DIM with the JSD loss is insensitive to the number of negative samples, and in fact outperforms infoNCE as the number of negative samples becomes smaller.
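As noted above, implementations of the three estimators are quite similar and can share most of their code. The following NumPy sketch (our own illustration, not the paper's code) computes each bound from a precomputed score matrix whose entry $(i, j)$ holds $T(x_i, E(x_j))$, so positives lie on the diagonal and the off-diagonal entries serve as negatives; it makes the inside-versus-outside-the-log difference explicit.

```python
import numpy as np

def softplus(z):
    # numerically stable softplus: log(1 + exp(z))
    return np.logaddexp(0.0, z)

def dv_bound(scores):
    """Donsker-Varadhan bound (Eq. 2): the expectation over negatives
    sits inside the log."""
    n = scores.shape[0]
    joint = np.diag(scores).mean()
    negatives = scores[~np.eye(n, dtype=bool)]
    return joint - np.log(np.exp(negatives).mean())

def jsd_bound(scores):
    """Jensen-Shannon estimator (Eq. 4): a binary cross-entropy style
    objective with no expectation inside a log."""
    n = scores.shape[0]
    pos = -softplus(-np.diag(scores)).mean()
    neg = softplus(scores[~np.eye(n, dtype=bool)]).mean()
    return pos - neg

def infonce_bound(scores):
    """infoNCE (Eq. 5): a softmax over each row, positives on the
    diagonal; bounded above by log of the number of samples."""
    n = scores.shape[0]
    log_norm = np.log(np.exp(scores).sum(axis=1))
    return np.log(n) + (np.diag(scores) - log_norm).mean()
```

With strongly diagonal scores (positives scored far above negatives), `infonce_bound` saturates near $\log N$, illustrating the batch-size dependence discussed above, while `dv_bound` and `jsd_bound` do not share that ceiling.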
3.2 Local mutual information maximization
The objective in Eq. 3 can be used to maximize MI between input and output, but ultimately this may be undesirable depending on the task. For example, trivial pixel-level noise is useless for image classification, so a representation may not benefit from encoding this information (e.g., in zero-shot learning, transfer learning, etc.). In order to obtain a representation more suitable for classification, we can instead maximize the average MI between the high-level representation and local patches of the image. Because the same representation is encouraged to have high MI with all the patches, this favours encoding aspects of the data that are shared across patches. Suppose the feature vector is of limited capacity (number of units and range) and assume the encoder does not support infinite output configurations. When maximizing the MI between the whole input and the representation, the encoder can pick and choose what type of information in the input is passed through the encoder, such as noise specific to local patches or pixels. However, if the encoder passes information specific to only some parts of the input, this does not increase the MI with any of the other patches that do not contain said noise. This encourages the encoder to prefer information that is shared across the input, and this hypothesis is supported in our experiments below. Our local DIM framework is presented in Figure 3. First we encode the input to a feature map, $C_\psi(x) := \{C_\psi^{(i)}(x)\}_{i=1}^{M^2}$, that reflects useful structure in the data (e.g., spatial locality), indexed in this case by $i$. Next, we summarize this local feature map into a global feature, $E_\psi(x) = f_\psi \circ C_\psi(x)$. We then define our MI estimator on global/local pairs, maximizing the average estimated MI:

$(\hat{\omega}, \hat{\psi})_L = \operatorname*{arg\,max}_{\omega, \psi} \frac{1}{M^2} \sum_{i=1}^{M^2} \widehat{\mathcal{I}}_{\omega,\psi}\big(C_\psi^{(i)}(X); E_\psi(X)\big). \quad (6)$
We found success optimizing this "local" objective with multiple easy-to-implement architectures, and further implementation details are provided in the App. (A.2).
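To make the local objective concrete, the sketch below (our own, with hypothetical names) pairs each global feature with every local feature at every spatial location via a simple dot-product critic, then averages an infoNCE-style loss over locations; the other images in the batch supply the negatives. This is only one of the easy-to-implement choices alluded to above, not the paper's exact architecture.

```python
import numpy as np

def local_dim_scores(local_feats, global_feats):
    """Dot-product scores between every global feature and every local
    feature at every location.
    local_feats: (N, M, M, d); global_feats: (N, d).
    Returns scores of shape (M, M, N, N): scores[u, v, i, j] pairs the
    global feature of image i with location (u, v) of image j."""
    return np.einsum('id,juvd->uvij', global_feats, local_feats)

def local_infonce(local_feats, global_feats):
    """Average infoNCE objective over the M x M locations (cf. Eq. 6).
    Positives are matched image pairs (i == j); the remaining N - 1
    images in the batch act as negatives at each location."""
    scores = local_dim_scores(local_feats, global_feats)  # (M, M, N, N)
    n = global_feats.shape[0]
    pos = np.einsum('uvii->uvi', scores)          # matched pairs per location
    log_norm = np.log(np.exp(scores).sum(axis=-1))  # over all candidates j
    return np.log(n) + (pos - log_norm).mean()
```

When each image's global and local features align (and differ across images), the bound approaches $\log N$, its maximum for a batch of size $N$.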
3.3 Matching representations to a prior distribution
Absolute magnitude of information is only one desirable property of a representation; depending on the application, good representations can be compact (Gretton et al., 2012), independent (Hyvärinen & Oja, 2000; Hinton, 2002; Dinh et al., 2014; Brakel & Bengio, 2017), disentangled (Schmidhuber, 1992; Rifai et al., 2012; Bengio et al., 2013; Chen et al., 2018; Gonzalez-Garcia et al., 2018), or independently controllable (Thomas et al., 2017). DIM imposes statistical constraints onto learned representations by implicitly training the encoder so that the push-forward distribution, $\mathbb{U}_{\psi,\mathbb{P}}$, matches a prior, $\mathbb{V}$. This is done (see Figure 7 in the App. A.2) by training a discriminator, $D_\phi : \mathcal{Y} \to \mathbb{R}$, to estimate the divergence, $\mathcal{D}(\mathbb{V} \,\|\, \mathbb{U}_{\psi,\mathbb{P}})$, then training the encoder to minimize this estimate:

$(\hat{\omega}, \hat{\psi})_P = \operatorname*{arg\,min}_{\psi} \operatorname*{arg\,max}_{\phi} \widehat{\mathcal{D}}_\phi(\mathbb{V} \,\|\, \mathbb{U}_{\psi,\mathbb{P}}) = \mathbb{E}_{\mathbb{V}}[\log D_\phi(y)] + \mathbb{E}_{\mathbb{P}}[\log (1 - D_\phi(E_\psi(x)))], \quad (7)$
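A minimal sketch of the two adversarial losses implied by Eq. 7, in NumPy with our own function names: the discriminator is trained with a standard binary cross-entropy between prior samples and encoder outputs, and the encoder is updated with a non-saturating variant of its half of the objective (a common GAN-training choice we assume here, not necessarily the paper's exact formulation).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def discriminator_loss(d_prior_logits, d_encoder_logits):
    """Discriminator assigns 1 to prior samples y ~ V and 0 to encoder
    outputs E(x): the GAN-style binary cross-entropy of Eq. 7."""
    real = -np.log(sigmoid(d_prior_logits) + 1e-12).mean()
    fake = -np.log(1.0 - sigmoid(d_encoder_logits) + 1e-12).mean()
    return real + fake

def encoder_prior_loss(d_encoder_logits):
    """Encoder is trained so its outputs are scored like prior samples
    (non-saturating variant of the minimization in Eq. 7)."""
    return -np.log(sigmoid(d_encoder_logits) + 1e-12).mean()
```

When the discriminator confidently separates prior samples from encoder outputs its loss is near zero while the encoder's loss is large, and vice versa once the push-forward distribution matches the prior.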
This approach is similar to what is done in adversarial autoencoders (AAE, Makhzani et al., 2015), but without a generator. It is also similar to noise as targets (Bojanowski & Joulin, 2017), but trains the encoder to match the noise implicitly rather than using a priori noise samples as targets. All three objectives – global and local MI maximization and prior matching – can be used together, and doing so we arrive at our complete objective for Deep InfoMax (DIM):
$\operatorname*{arg\,max}_{\omega_1, \omega_2, \psi} \left( \alpha \, \widehat{\mathcal{I}}_{\omega_1,\psi}(X; E_\psi(X)) + \frac{\beta}{M^2} \sum_{i=1}^{M^2} \widehat{\mathcal{I}}_{\omega_2,\psi}\big(C_\psi^{(i)}(X); E_\psi(X)\big) \right) + \operatorname*{arg\,min}_{\psi} \operatorname*{arg\,max}_{\phi} \gamma \, \widehat{\mathcal{D}}_\phi(\mathbb{V} \,\|\, \mathbb{U}_{\psi,\mathbb{P}}), \quad (8)$

where $\omega_1$ and $\omega_2$ are the discriminator parameters for the global and local objectives, respectively, and $\alpha$, $\beta$, and $\gamma$ are hyperparameters. We will show below that choices in these hyperparameters affect the learned representations in meaningful ways. As an interesting aside, we also show in the App. (A.8) that this prior matching can be used alone to train a generator of image data.

4 Experiments
We test Deep InfoMax (DIM) on four imaging datasets to evaluate its representational properties:

CIFAR10 and CIFAR100 (Krizhevsky & Hinton, 2009): two small-scale labeled datasets composed of 32 × 32 images with 10 and 100 classes respectively.

Tiny ImageNet: a reduced version of ImageNet composed of 64 × 64 images from 200 classes.

STL10 (Coates et al., 2011): a dataset derived from ImageNet composed of 96 × 96 images with a mixture of 100,000 unlabeled training examples and 500 labeled examples per class. We use data augmentation with this dataset, taking random crops and flipping horizontally during unsupervised learning.

CelebA: an image dataset composed of celebrity faces with attribute labels (see the App.).
For our experiments, we compare DIM against various unsupervised methods: Variational Autoencoders (VAE, Kingma & Welling, 2013), β-VAE (Higgins et al., 2016; Alemi et al., 2016), Adversarial Autoencoders (AAE, Makhzani et al., 2015), BiGAN (a.k.a. adversarially learned inference with a deterministic encoder: Donahue et al., 2016; Dumoulin et al., 2016), Noise As Targets (NAT, Bojanowski & Joulin, 2017), and Contrastive Predictive Coding (CPC, Oord et al., 2018). Note that we take CPC to mean ordered autoregression using summary features to predict "future" local features, independent of the contrastive loss used to evaluate the predictions (JSD, infoNCE, or DV). See the App. (A.2) for details of the neural net architectures used in the experiments.
4.1 How do we evaluate the quality of a representation?
Evaluation of representations is case-driven and relies on various proxies. Linear separability is commonly used as a proxy for disentanglement and mutual information (MI) between representations and class labels. Unfortunately, this will not show whether the representation has high MI with the class labels when the representation is not disentangled. Other works (Bojanowski & Joulin, 2017) have looked at transfer learning classification tasks by freezing the weights of the encoder and training a small fully-connected neural network classifier using the representation as input. Others still have more directly measured the MI between the labels and the representation (Rifai et al., 2012; Chen et al., 2018), which can also reveal the representation's degree of entanglement. Class labels have limited use in evaluating representations, as we are often interested in information encoded in the representation that is unknown to us. However, we can use mutual information neural estimation (MINE, Belghazi et al., 2018) to more directly measure the MI between the input and output of the encoder. Next, we can directly measure the independence of the representation using a discriminator. Given a batch of representations, we generate a factor-wise independent distribution with the same per-factor marginals by randomly shuffling each factor along the batch dimension. A similar trick has been used for learning maximally independent representations for sequential data (Brakel & Bengio, 2017). We can train a discriminator to estimate the KL-divergence between the original representations (joint distribution of the factors) and the shuffled representations (product of the marginals, see Figure 12). The higher the KL divergence, the more dependent the factors. We call this evaluation method Neural Dependency Measure (NDM) and show that it is sensible and empirically consistent in the App. (A.6). To summarize, we use the following metrics for evaluating representations.
For each of these, the encoder is held fixed unless noted otherwise:

Linear classification using a support vector machine (SVM). This is simultaneously a proxy for the MI of the representation with the class labels and a measure of linear separability.

Non-linear classification using a single hidden layer neural network with dropout. This is a proxy for the MI of the representation with the labels that is separate from linear separability as measured with the SVM above.

Semi-supervised learning (STL10 here), that is, fine-tuning the complete encoder by adding a small neural network on top of the last convolutional layer (matching architectures with a standard fully-supervised classifier).

MS-SSIM (Wang et al., 2003), using a decoder trained on the reconstruction loss. This is a proxy for the total MI between the input and the representation and can indicate the amount of encoded pixel-level information.

Mutual information neural estimate (MINE), $\widehat{\mathcal{I}}_\rho(X; E_\psi(X))$, between the input, $X$, and the output representation, $E_\psi(X)$, obtained by training a discriminator with parameters $\rho$ to maximize the DV estimator of the KL-divergence.

Neural dependency measure (NDM) using a second discriminator that measures the KL-divergence between $E_\psi(X)$ and a batch-wise shuffled version of $E_\psi(X)$.
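The shuffling construction behind NDM can be sketched in a few lines of NumPy (our own illustration, with hypothetical names): shuffling each factor independently along the batch axis preserves every per-factor marginal while destroying dependence between factors, yielding the "product of marginals" samples fed to the discriminator.

```python
import numpy as np

def shuffle_factors(reps, rng):
    """Shuffle each factor (column) independently along the batch axis.
    Per-factor marginals are preserved exactly; dependence between
    factors is destroyed, approximating the product of marginals."""
    out = reps.copy()
    for j in range(out.shape[1]):
        rng.shuffle(out[:, j])  # in-place shuffle of one column
    return out
```

A discriminator trained to tell `reps` from `shuffle_factors(reps, rng)` then estimates the KL-divergence between the joint and the product of marginals, which is exactly the total dependence NDM reports.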
For the neural network classification evaluation above, we performed experiments on all datasets except CelebA, while for other measures we only looked at CIFAR10. For all classification tasks, we built separate classifiers on the high-level vector representation ($Y$), the output of the previous fully-connected layer (fc), and the last convolutional layer (conv). Model selection for the classifiers was done by averaging the last 100 epochs of optimization, and the dropout rate and decaying learning rate schedule were set uniformly to alleviate overfitting on the test set across all models.
4.2 Representation learning comparison across models
In the following experiments, DIM(G) refers to DIM with a global-only objective and DIM(L) refers to DIM with a local-only objective, the latter chosen from the results of an ablation study presented in the App. (A.5). For the prior, we chose a compact uniform distribution, which worked better in practice than other priors, such as Gaussian, unit ball, or unit sphere.

Table 1:
Model               CIFAR10           CIFAR100
                    conv      fc      conv      fc
Fully supervised    75.39             42.27
VAE
AE
β-VAE
AAE
BiGAN
NAT
DIM(G)
DIM(L) (DV)
DIM(L) (JSD)
DIM(L) (infoNCE)
Table 2:
Model               Tiny ImageNet     STL10 (random crop pretraining)
                    conv      fc      conv      fc      SS
Fully supervised    36.60             68.7
VAE
AE
β-VAE
AAE
BiGAN
NAT
DIM(G)
DIM(L) (DV)
DIM(L) (JSD)
DIM(L) (infoNCE)
Table 3:
Model                      CIFAR10 (no data augmentation)    STL10 (random crop pretraining)
DIM(L) single global
CPC
DIM(L) multiple globals

These experiments used a strided-crop architecture similar to the one used in Oord et al. (2018). For CIFAR10 we used a ResNet-50 encoder, and for STL10 we used the same architecture as for Table 2. We also tested a version of DIM that computes the global representation from a 3x3 block of local features randomly selected from the full 7x7 set of local features. This is a particular instance of the occlusions described in Section 4.3. DIM(L) is competitive with CPC in these settings.

Classification comparisons
Our classification results can be found in Tables 1, 2, and 3. In general, DIM with the local objective, DIM(L), outperformed all models presented here by a significant margin on all datasets, regardless of which layer the representation was drawn from, with the exception of CPC. For the specific settings presented (architectures, no data augmentation for datasets except for STL10), DIM(L) performs as well as or outperforms a fully-supervised classifier without fine-tuning, which indicates that the representations are nearly as good as or better than the raw pixels given the model constraints in this setting. Note, however, that a fully supervised classifier can perform much better on all of these benchmarks, especially when specialized architectures and carefully-chosen data augmentations are used. Competitive or better results on CIFAR10 also exist (albeit in different settings, e.g., Coates et al., 2011; Dosovitskiy et al., 2016), but to our knowledge our STL10 results are state-of-the-art for unsupervised learning. The results in this setting support the hypothesis that our local DIM objective is suitable for extracting class information. Our results show that infoNCE tends to perform best, but differences between infoNCE and JSD diminish with larger datasets. DV can compete with JSD with smaller datasets, but DV performs much worse with larger datasets. For CPC, we were only able to achieve marginally better performance than BiGAN with the settings above. However, when we adopted the strided-crop architecture found in Oord et al. (2018), both CPC and DIM performance improved considerably. We chose a fixed crop size and stride relative to the image size, so that there were a total of 7 × 7 local features.
For both DIM(L) and CPC, we used infoNCE as well as the same "encode-and-dot-product" architecture (tantamount to a deep bilinear model), rather than the shallow bilinear model used in Oord et al. (2018). For CPC, we used a separate such network for each prediction task, where each network for CPC predicts the local feature maps in the next rows of a summary predictor feature within each column. (Note that this is slightly different from the setup used in Oord et al. (2018), though we found other configurations performed similarly.) For simplicity, we omitted the prior term from DIM. Without data augmentation on CIFAR10, CPC performs worse than DIM(L) with a ResNet-50 (He et al., 2016) type architecture. For experiments we ran on STL10 with data augmentation (using the same encoder architecture as Table 2), CPC and DIM were competitive, with CPC performing slightly better. CPC makes predictions based on multiple summary features, each of which contains different amounts of information about the full input. We can add similar behavior to DIM by computing less-global features that condition on blocks of local features sampled at random from the full sets of local features. We then maximize mutual information between these less-global features and the full sets of local features. We share a single MI estimator across all possible blocks of local features when using this version of DIM. This represents a particular instance of the occlusion technique described in Section 4.3. The resulting model gave a significant performance boost to DIM for STL10. Surprisingly, this same architecture performed worse than using the fully global representation with CIFAR10. Overall DIM only slightly outperforms CPC in this setting, which suggests that the strictly ordered autoregression of CPC may be unnecessary for some tasks.
Table 4:
          Proxies                                   Neural Estimators
Model     SVM (conv)   SVM (fc)   SVM (Y)   MS-SSIM   MINE    NDM
VAE                                          0.72      93.02   1.62
AAE                                          0.67      87.48   0.03
BiGAN                                        0.46      37.69   24.49
NAT                                          0.29      6.04    0.02
DIM(G)
DIM(L+G)
DIM(L)
Extended comparisons
Table 4 shows results on linear separability, reconstruction (MS-SSIM), mutual information, and dependence (NDM) with the CIFAR10 dataset. We did not compare to CPC due to the divergence of architectures. For linear classifier results (SVC), we trained five support vector machines with a simple hinge loss for each model, averaging the test accuracy. For MINE, we used a decaying learning rate schedule, which helped reduce variance in estimates and provided faster convergence. MS-SSIM correlated well with the MI estimate provided by MINE, indicating that these models encoded pixel-wise information well. Overall, all models showed much lower dependence than BiGAN, indicating the marginal of the encoder output is not matching the generator's spherical Gaussian input prior, though the mixed local/global version of DIM is close. For MI, reconstruction-based models like VAE and AAE have high scores, and we found that combining local and global DIM objectives (presented here as DIM(L+G)) had very high scores. For more in-depth analyses, please see the ablation studies and the nearest-neighbor analysis in the App. (A.4, A.5).

4.3 Adding coordinate information and occlusions
Table 5:
Model                    CIFAR10           CIFAR100
                         fc      conv      fc      conv
DIM
DIM (coord)
DIM (occlude)
DIM (coord + occlude)
Maximizing MI between global and local features is not the only way to leverage image structure. We consider augmenting DIM by adding input occlusion when computing global features and by adding auxiliary tasks which maximize MI between local features and absolute or relative spatial coordinates given a global feature. These additions improve classification results (see Table 5). For occlusion, we randomly occlude part of the input when computing the global features, but compute local features using the full input. Maximizing MI between occluded global features and unoccluded local features aggressively encourages the global features to encode information which is shared across the entire image. For coordinate prediction, we maximize the model's ability to predict the coordinates $(i, j)$ of a local feature $C_\psi^{(i,j)}(x)$ after computing the global features $E_\psi(x)$. To accomplish this, we maximize $\mathbb{E}[\log p(i, j \mid E_\psi(x), C_\psi^{(i,j)}(x))]$ (i.e., minimize the cross-entropy). We can extend the task to maximize conditional MI given global features between pairs of local features and their relative coordinates $(i' - i, j' - j)$. This objective can be written as $\mathbb{E}[\log p(i' - i, j' - j \mid E_\psi(x), C_\psi^{(i,j)}(x), C_\psi^{(i',j')}(x))]$. We use both these objectives in our results. Additional implementation details can be found in the App. (A.7). Roughly speaking, our input occlusions and coordinate prediction tasks can be interpreted as generalizations of inpainting (Pathak et al., 2016) and context prediction (Doersch et al., 2015) tasks which have previously been proposed for self-supervised feature learning. Augmenting DIM with these tasks helps move our method further towards learning representations which encode images (or other types of inputs) not just in terms of compressing their low-level (e.g. pixel) content, but in terms of distributions over relations among higher-level features extracted from their lower-level content.
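The two auxiliary tasks above can be sketched as follows (our own minimal NumPy illustration with hypothetical names and parameters): a random square patch is zeroed before computing global features, and coordinate prediction reduces to a softmax cross-entropy over the flattened spatial index of a local feature.

```python
import numpy as np

def occlude(x, rng, patch=8):
    """Zero out a random square patch of a batch of images (N, H, W)
    before computing global features; local features would still be
    computed from the unoccluded input."""
    x = x.copy()
    h, w = x.shape[1:3]
    i = rng.integers(0, h - patch + 1)
    j = rng.integers(0, w - patch + 1)
    x[:, i:i + patch, j:j + patch] = 0.0
    return x

def coord_xent(logits, coords):
    """Softmax cross-entropy for predicting a local feature's flattened
    spatial index from (global, local) feature pairs.
    logits: (N, M*M); coords: (N,) integer indices."""
    z = logits - logits.max(axis=1, keepdims=True)  # stable log-softmax
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(coords)), coords].mean()
```

Minimizing `coord_xent` maximizes the conditional log-likelihood of the coordinates described above; the relative-coordinate variant only changes the target index to an offset class.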
5 Conclusion
In this work, we introduced Deep InfoMax (DIM), a new method for learning unsupervised representations by maximizing mutual information, allowing for representations that contain locally-consistent information across structural "locations" (e.g., patches in an image). This provides a straightforward and flexible way to learn representations that perform well on a variety of tasks. We believe that this is an important direction in learning higher-level representations.
6 Acknowledgements
RDH received partial support from IVADO, NIH grants 2R01EB005846, P20GM103472, P30GM122734, and R01EB020407, and NSF grant 1539067. AF received partial support from NIH grants R01EB020407, R01EB006841, P20GM103472, P30GM122734. We would also like to thank Geoff Gordon (MSR), Ishmael Belghazi (MILA), Marc Bellemare (Google Brain), Mikołaj Bińkowski (Imperial College London), Simon Sebbagh, and Aaron Courville (MILA) for their useful input at various points through the course of this research.
References
 Alemi et al. (2016) Alexander A Alemi, Ian Fischer, Joshua V Dillon, and Kevin Murphy. Deep variational information bottleneck. arXiv preprint arXiv:1612.00410, 2016.

Almeida (2003) Luís B Almeida. Linear and nonlinear ICA based on mutual information. The Journal of Machine Learning Research, 4:1297–1318, 2003.

Arjovsky & Bottou (2017) Martin Arjovsky and Léon Bottou. Towards principled methods for training generative adversarial networks. In International Conference on Learning Representations, 2017.
 Becker (1992) Suzanna Becker. An information-theoretic unsupervised learning algorithm for neural networks. University of Toronto, 1992.
 Becker (1996) Suzanna Becker. Mutual information maximization: models of cortical self-organization. Network: Computation in neural systems, 7(1):7–31, 1996.
 Belghazi et al. (2018) Ishmael Belghazi, Aristide Baratin, Sai Rajeswar, Sherjil Ozair, Yoshua Bengio, Aaron Courville, and R Devon Hjelm. Mine: mutual information neural estimation. arXiv preprint arXiv:1801.04062, ICML’2018, 2018.
 Bell & Sejnowski (1995) Anthony J Bell and Terrence J Sejnowski. An information-maximization approach to blind separation and blind deconvolution. Neural computation, 7(6):1129–1159, 1995.
 Bengio et al. (2013) Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE Trans. Pattern Analysis and Machine Intelligence (PAMI), 35(8):1798–1828, 2013.
 Bojanowski & Joulin (2017) Piotr Bojanowski and Armand Joulin. Unsupervised learning by predicting noise. arXiv preprint arXiv:1704.05310, 2017.
 Brakel & Bengio (2017) Philemon Brakel and Yoshua Bengio. Learning independent features with adversarial nets for nonlinear ica. arXiv preprint arXiv:1710.05050, 2017.

 Chang et al. (2017) Jianlong Chang, Lingfeng Wang, Gaofeng Meng, Shiming Xiang, and Chunhong Pan. Deep adaptive image clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5879–5887, 2017.
 Chen et al. (2018) Tian Qi Chen, Xuechen Li, Roger Grosse, and David Duvenaud. Isolating sources of disentanglement in variational autoencoders. arXiv preprint arXiv:1802.04942, 2018.
 Chen et al. (2016) Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. Infogan: Interpretable representation learning by information maximizing generative adversarial nets. In Advances in neural information processing systems, pp. 2172–2180, 2016.

 Coates et al. (2011) Adam Coates, Andrew Ng, and Honglak Lee. An analysis of single-layer networks in unsupervised feature learning. In Proceedings of the fourteenth international conference on artificial intelligence and statistics, pp. 215–223, 2011.
 Dinh et al. (2014) Laurent Dinh, David Krueger, and Yoshua Bengio. Nice: nonlinear independent components estimation. arXiv preprint arXiv:1410.8516, 2014.
 Dinh et al. (2016) Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real nvp. arXiv preprint arXiv:1605.08803, 2016.
 Doersch & Zisserman (2017) Carl Doersch and Andrew Zisserman. Multi-task self-supervised visual learning. In The IEEE International Conference on Computer Vision (ICCV), 2017.
 Doersch et al. (2015) Carl Doersch, Abhinav Gupta, and Alexei A Efros. Unsupervised visual representation learning by context prediction. In Proceedings of the IEEE International Conference on Computer Vision, 2015.
 Donahue et al. (2016) Jeff Donahue, Philipp Krähenbühl, and Trevor Darrell. Adversarial feature learning. arXiv preprint arXiv:1605.09782, 2016.
 Donsker & Varadhan (1983) M.D Donsker and S.R.S Varadhan. Asymptotic evaluation of certain markov process expectations for large time, iv. Communications on Pure and Applied Mathematics, 36(2):183–212, 1983.

 Dosovitskiy et al. (2016) Alexey Dosovitskiy, Philipp Fischer, Jost Tobias Springenberg, Martin Riedmiller, and Thomas Brox. Discriminative unsupervised feature learning with exemplar convolutional neural networks. IEEE transactions on pattern analysis and machine intelligence, 38(9):1734–1747, 2016.
 Dumoulin et al. (2016) Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, and Aaron Courville. Adversarially learned inference. arXiv preprint arXiv:1606.00704, 2016.
 Gonzalez-Garcia et al. (2018) Abel Gonzalez-Garcia, Joost van de Weijer, and Yoshua Bengio. Image-to-image translation for cross-domain disentanglement. arXiv preprint arXiv:1805.09730, 2018.
 Gretton et al. (2012) Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. Journal of Machine Learning Research, 13(Mar):723–773, 2012.
 Gulrajani et al. (2017) Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron Courville. Improved training of wasserstein gans. arXiv preprint arXiv:1704.00028, 2017.
 Gutmann & Hyvärinen (2010) Michael Gutmann and Aapo Hyvärinen. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pp. 297–304, 2010.
 Gutmann & Hyvärinen (2012) Michael U Gutmann and Aapo Hyvärinen. Noise-contrastive estimation of unnormalized statistical models, with applications to natural image statistics. Journal of Machine Learning Research, 13(Feb):307–361, 2012.
 He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016.
 Higgins et al. (2016) Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. beta-vae: Learning basic visual concepts with a constrained variational framework. Openreview, 2016.

 Hinton (2002) Geoffrey E Hinton. Training products of experts by minimizing contrastive divergence. Neural computation, 14(8):1771–1800, 2002.
 Hjelm et al. (2018) R Devon Hjelm, Athul Paul Jacob, Tong Che, Adam Trischler, Kyunghyun Cho, and Yoshua Bengio. Boundary-seeking generative adversarial networks. In International Conference on Learning Representations, 2018.
 Hu et al. (2017) Weihua Hu, Takeru Miyato, Seiya Tokui, Eiichi Matsumoto, and Masashi Sugiyama. Learning discrete representations via information maximizing self-augmented training. arXiv preprint arXiv:1702.08720, 2017.
 Hyvärinen & Oja (2000) Aapo Hyvärinen and Erkki Oja. Independent component analysis: algorithms and applications. Neural networks, 13(4):411–430, 2000.
 Hyvärinen & Pajunen (1999) Aapo Hyvärinen and Petteri Pajunen. Nonlinear independent component analysis: Existence and uniqueness results. Neural Networks, 12(3):429–439, 1999.
 Ioffe & Szegedy (2015) Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
 Ji et al. (2018) Xu Ji, João F Henriques, and Andrea Vedaldi. Invariant information distillation for unsupervised image segmentation and clustering. arXiv preprint arXiv:1807.06653, 2018.
 Jozefowicz et al. (2016) Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410, 2016.
 Kingma & Welling (2013) Diederik Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
 Kingma et al. (2014) Diederik P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems, pp. 3581–3589, 2014.
 Kohonen (1998) Teuvo Kohonen. The self-organizing map. Neurocomputing, 21(1–3):1–6, 1998.
 Krizhevsky & Hinton (2009) Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
 Krizhevsky et al. (2012) Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097–1105, 2012.
 Linsker (1988) Ralph Linsker. Selforganization in a perceptual network. IEEE Computer, 21(3):105–117, 1988. doi: 10.1109/2.36. URL https://doi.org/10.1109/2.36.
 Liu et al. (2015) Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), December 2015.
 Ma & Collins (2018) Zhuang Ma and Michael Collins. Noise contrastive estimation and negative sampling for conditional models: Consistency and statistical efficiency. arXiv preprint arXiv:1809.01812, 2018.
 Makhzani et al. (2015) Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, Ian Goodfellow, and Brendan Frey. Adversarial autoencoders. arXiv preprint arXiv:1511.05644, 2015.
 Mescheder et al. (2018) Lars Mescheder, Andreas Geiger, and Sebastian Nowozin. Which training methods for gans do actually converge? In International Conference on Machine Learning, pp. 3478–3487, 2018.
 Mikolov et al. (2013) Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pp. 3111–3119, 2013.
 Mnih & Kavukcuoglu (2013) Andriy Mnih and Koray Kavukcuoglu. Learning word embeddings efficiently with noise-contrastive estimation. In Advances in neural information processing systems, pp. 2265–2273, 2013.
 Nowozin et al. (2016) Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-gan: Training generative neural samplers using variational divergence minimization. In Advances in Neural Information Processing Systems, pp. 271–279, 2016.
 Oord et al. (2016) Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016.
 Oord et al. (2018) Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018.
 Pathak et al. (2016) Deepak Pathak, Philipp Krähenbühl, Jeff Donahue, Trevor Darrell, and Alexei A. Efros. Context encoders: Feature learning by inpainting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.
 Radford et al. (2015) Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
 Rezende et al. (2016) Danilo Jimenez Rezende, Shakir Mohamed, Ivo Danihelka, Karol Gregor, and Daan Wierstra. One-shot generalization in deep generative models. arXiv preprint arXiv:1603.05106, 2016.
 Rifai et al. (2011) Salah Rifai, Pascal Vincent, Xavier Muller, Xavier Glorot, and Yoshua Bengio. Contractive autoencoders: Explicit invariance during feature extraction. In Proceedings of the 28th International Conference on International Conference on Machine Learning, pp. 833–840. Omnipress, 2011.
 Rifai et al. (2012) Salah Rifai, Yoshua Bengio, Aaron Courville, Pascal Vincent, and Mehdi Mirza. Disentangling factors of variation for facial expression recognition. In European Conference on Computer Vision, pp. 808–822. Springer, 2012.
 Sajjadi et al. (2016) Mehdi Sajjadi, Mehran Javanmardi, and Tolga Tasdizen. Regularization with stochastic transformations and perturbations for deep semisupervised learning. In Advances in Neural Information Processing Systems, pp. 1163–1171, 2016.
 Salimans et al. (2016) Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. In Advances in Neural Information Processing Systems, pp. 2234–2242, 2016.
 Schmidhuber (1992) Jürgen Schmidhuber. Learning factorial codes by predictability minimization. Neural Computation, 4(6):863–879, 1992.
 Thomas et al. (2017) Valentin Thomas, Jules Pondard, Emmanuel Bengio, Marc Sarfati, Philippe Beaudoin, MarieJean Meurs, Joelle Pineau, Doina Precup, and Yoshua Bengio. Independently controllable features. arXiv preprint arXiv:1708.01289, 2017.

 Vincent et al. (2008) Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th international conference on Machine learning, pp. 1096–1103. ACM, 2008.
 Vincent et al. (2010) Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of machine learning research, 11(Dec):3371–3408, 2010.
 Wang et al. (2003) Zhou Wang, Eero P Simoncelli, and Alan C Bovik. Multiscale structural similarity for image quality assessment. In Signals, Systems and Computers, 2004. Conference Record of the Thirty-Seventh Asilomar Conference on, volume 2, pp. 1398–1402. IEEE, 2003.
 Wiskott & Sejnowski (2002) Laurenz Wiskott and Terrence J Sejnowski. Slow feature analysis: Unsupervised learning of invariances. Neural computation, 14(4):715–770, 2002.

 Xie et al. (2016) Junyuan Xie, Ross Girshick, and Ali Farhadi. Unsupervised deep embedding for clustering analysis. In International conference on machine learning, pp. 478–487, 2016.
 Yang et al. (2015) Shuo Yang, Ping Luo, Chen-Change Loy, and Xiaoou Tang. From facial parts responses to face detection: A deep learning approach. In Proceedings of the IEEE International Conference on Computer Vision, pp. 3676–3684, 2015.
 Yu et al. (2015) Fisher Yu, Yinda Zhang, Shuran Song, Ari Seff, and Jianxiong Xiao. Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365, 2015.
Appendix A Appendix
a.1 On the Jensen-Shannon divergence and mutual information
Here we show the relationship between the Jensen-Shannon divergence (JSD) between the joint and the product of marginals and the pointwise mutual information (PMI). Let and be two marginal densities, and define and as the conditional and joint distributions, respectively. Construct a probability mixture density, . It follows that , , and . Note that:
(9) 
Discarding some constants:
(10) 
The quantity inside the expectation of Eqn. 10 is a concave, monotonically increasing function of the ratio , which is exactly . Note this relationship does not hold for the JSD of arbitrary distributions, as the joint and product of marginals are intimately coupled. We can verify our theoretical observation by plotting the JSD and KL divergences between the joint and the product of marginals, the latter of which is the formal definition of mutual information (MI). As computing the continuous MI is difficult, we assume a discrete input with uniform probability, (e.g., these could be one-hot variables indicating one of i.i.d. random samples), and a randomly initialized joint distribution, , such that . For this joint distribution, we sample from a uniform distribution, then apply dropout to encourage sparsity to simulate the situation when there is no bijective function between and , then apply a softmax. As the distributions are discrete, we can compute the KL and JSD between and . We ran these experiments with matched input / output dimensions of , , , , and , randomly drawing joint distributions, and computed the KL and JSD divergences directly. Our results (Figure 4) indicate that the KL (the traditional definition of mutual information) and the JSD have an approximately monotonic relationship. Overall, the distributions with the highest mutual information also have the highest JSD.
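The discrete simulation described above can be sketched in numpy as follows. The dimension, the dropout rate, and the number of draws are illustrative choices, not the paper's exact settings:

```python
import numpy as np

def kl(p, q):
    """KL(p || q) for flat discrete distributions (0 log 0 := 0)."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def jsd(p, q):
    """Jensen-Shannon divergence between two flat discrete distributions."""
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def random_joint(n, rng, drop=0.5):
    """Random joint p(x, y): uniform logits, "dropout" for sparsity, softmax."""
    logits = rng.uniform(size=(n, n))
    logits = np.where(rng.random((n, n)) < drop, -10.0, logits)
    probs = np.exp(logits)
    return probs / probs.sum()

def mi_and_jsd(joint):
    """MI = KL(joint || product of marginals); also the JSD analogue."""
    prod = np.outer(joint.sum(axis=1), joint.sum(axis=0)).ravel()
    return kl(joint.ravel(), prod), jsd(joint.ravel(), prod)

rng = np.random.default_rng(0)
pairs = [mi_and_jsd(random_joint(32, rng)) for _ in range(50)]
```

Plotting the resulting (MI, JSD) pairs gives the kind of scatter summarized in Figure 4; with these illustrative settings, joints with higher MI tend to show higher JSD.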
a.2 Experiment and architecture details
Here we provide architectural details for our experiments. Example code for running Deep Infomax (DIM) can be found at https://github.com/rdevon/DIM.
Encoder
We used an encoder similar to a deep convolutional GAN (DCGAN, Radford et al., 2015) discriminator for CIFAR10 and CIFAR100, and for all other datasets we used an Alexnet (Krizhevsky et al., 2012) architecture similar to that found in Donahue et al. (2016). ReLU activations and batch norm (Ioffe & Szegedy, 2015) were used on every hidden layer. For the DCGAN architecture, a single hidden layer with units was used after the final convolutional layer, and for the Alexnet architecture it was two hidden layers with . For all experiments, the output of all encoders was a dimensional vector.
Mutual information discriminators
For the global mutual information objective, we first encode the input into a feature map, , which in this case is the output of the last convolutional layer. We then encode this representation further using linear layers as detailed above to get . is then flattened and concatenated with . We then pass this to a fully-connected network with two unit hidden layers (see Table 6).
Operation  Size  Activation 

Input Linear layer  ReLU  
Linear layer  ReLU  
Linear layer 
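A minimal numpy sketch of this global discriminator: flatten the feature map, concatenate it with the global vector, and score the pair with a two-hidden-layer MLP. All sizes below (feature-map shape, hidden width, dimension of the global vector) are hypothetical stand-ins, since the exact values are elided above:

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda a: np.maximum(a, 0.0)

B, C, H, W, DY, HID = 4, 64, 8, 8, 64, 512  # hypothetical sizes

def init(sizes):
    """Random (weight, bias) pairs for a simple MLP."""
    return [(rng.normal(0, 0.02, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp_score(flat_c, y, params):
    """Score a (flattened feature map, global vector) pair with an MLP."""
    h = np.concatenate([flat_c, y], axis=1)   # concat flattened C with y
    for w, b in params[:-1]:
        h = relu(h @ w + b)                   # two ReLU hidden layers
    w, b = params[-1]
    return h @ w + b                          # unbounded scalar score

params = init([C * H * W + DY, HID, HID, 1])
C_map = rng.normal(size=(B, C, H, W))         # stands in for encoder feature map
y = rng.normal(size=(B, DY))                  # stands in for global feature vector
scores = mlp_score(C_map.reshape(B, -1), y, params)  # shape (B, 1)
```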
We tested two different architectures for the local objective. The first (Figure 7) concatenated the global feature vector with the feature map at every location, i.e., . A convolutional discriminator is then used to score the (feature map, feature vector) pair,
(11) 
Fake samples are generated by combining global feature vectors with local feature maps coming from different images, :
(12) 
This architecture is featured in the results of Table 4, as well as the ablation and nearestneighbor studies below. We used a convnet with two unit hidden layers as discriminator (Table 7).
Operation  Size  Activation 

Input conv  ReLU  
conv  ReLU  
conv 
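A sketch of the concat-and-convolve variant: broadcast the global vector over every spatial location, concatenate along channels, and score each location with 1×1 convolutions (implemented here as per-location matrix products). Negative pairs combine the global vector of one image with the local feature map of another, as in Eqn. 12. All sizes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda a: np.maximum(a, 0.0)
B, C, H, W, DY, HID = 4, 64, 8, 8, 64, 512  # hypothetical sizes

W1 = rng.normal(0, 0.02, (C + DY, HID))
W2 = rng.normal(0, 0.02, (HID, HID))
W3 = rng.normal(0, 0.02, (HID, 1))

def local_scores(C_map, y):
    """Score every spatial location of C_map against the global vector y."""
    y_tiled = np.broadcast_to(y[:, :, None, None], (B, DY, H, W))
    z = np.concatenate([C_map, y_tiled], axis=1)   # (B, C+DY, H, W)
    h = z.transpose(0, 2, 3, 1)                    # 1x1 convs as matmuls
    h = relu(relu(h @ W1) @ W2) @ W3
    return h[..., 0]                               # (B, H, W)

C_map = rng.normal(size=(B, C, H, W))
y = rng.normal(size=(B, DY))
pos = local_scores(C_map, y)                       # real pairs
neg = local_scores(np.roll(C_map, 1, axis=0), y)   # fake pairs: other images' maps
```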
The other architecture we tested (Figure 7) is based on non-linearly embedding the global and local features in a (much) higher-dimensional space, and then computing pairwise scores using dot products between their high-dimensional embeddings. This enables efficient evaluation of a large number of pairwise scores, thus allowing us to use large numbers of positive/negative samples. Given a sufficiently high-dimensional embedding space, this approach can represent (almost) arbitrary classes of pairwise functions that are nonlinear in the original, lower-dimensional features. For more information, refer to the literature on Reproducing Kernel Hilbert Spaces. We pass the global feature through a fully-connected neural network to get the encoded global feature, . In our experiments, we used a single hidden layer network with a linear shortcut (see Table 8).
Operation  Size  Activation  Output 

Input Linear  ReLU  
Linear  Output 1  
Input Linear  ReLU  Output 2  
Output 1 Output 2 
We embed each local feature in the local feature map using an architecture which matches the one used for global feature embedding, applying it via convolutions. Details are in Table 9.
Operation  Size  Activation  Output 

Input  ReLU  
conv  Output 1  
Input conv  ReLU  Output 2  
Output 1 Output 2  
Block Layer Norm 
Finally, the outputs of these two networks are combined by matrix multiplication, summing over the feature dimension ( in the example above). As this is computed over a batch, this allows us to efficiently compute both positive and negative examples simultaneously. This architecture is featured in our main classification results in Tables 1, 2, and 5. For the local objective, the feature map, , can be taken from any level of the encoder, . For the global objective, this is the last convolutional layer, and this objective was insensitive to which layer we used. For the local objectives, we found that using the next-to-last layer worked best for CIFAR10 and CIFAR100, while for the other larger datasets it was the previous layer. This sensitivity is likely due to the relative size of the receptive fields, and further analysis is necessary to better understand this effect. Note that all feature maps used for DIM included the final batch normalization and ReLU activation.
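The batch-wise score computation can be sketched with a single einsum: embed globals and locals (the embedding networks are replaced here by random projections), then take dot products over the feature dimension, producing all B×B×H×W pairwise scores at once. Entries with matching batch indices are positives; the rest are negatives. All sizes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
B, C, DY, H, W, DE = 4, 64, 64, 8, 8, 2048  # DE: high-dim embedding (hypothetical)

Wg = rng.normal(0, 0.02, (DY, DE))  # stands in for the global embedding network
Wl = rng.normal(0, 0.02, (C, DE))   # stands in for the 1x1-conv local embedding

y = rng.normal(size=(B, DY))                  # global features
C_map = rng.normal(size=(B, C, H, W))         # local feature maps

g = y @ Wg                                    # (B, DE) embedded globals
l = np.einsum('bchw,ce->behw', C_map, Wl)     # (B, DE, H, W) embedded locals
scores = np.einsum('ie,jehw->ijhw', g, l)     # (B, B, H, W) all pairwise scores

pos = scores[np.arange(B), np.arange(B)]      # (B, H, W) positive pairs (i == j)
mask = ~np.eye(B, dtype=bool)
neg = scores[mask]                            # (B*(B-1), H, W) negative pairs
```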
Prior matching
Figure 7 shows a high-level overview of the prior matching architecture. The discriminator used to match the prior in DIM was a fully-connected network with two hidden layers of and units (Table 10).
Operation  Size  Activation 

Input Linear layer  ReLU  
Linear layer  ReLU  
Linear layer 
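At the loss level, prior matching follows the adversarial autoencoder recipe: a discriminator is trained to separate samples from the prior from encoder outputs, and the encoder is trained to fool it. A sketch with a placeholder one-layer discriminator; the uniform prior and all sizes are illustrative choices, not the paper's exact setup:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

DY = 64
w = rng.normal(0, 0.1, DY)          # placeholder one-layer "discriminator"
d = lambda z: sigmoid(z @ w)

z_prior = rng.uniform(size=(128, DY))    # samples from the target prior
z_enc = rng.normal(0.5, 0.3, (128, DY))  # stand-in for encoder outputs E(x)

eps = 1e-8
# The discriminator ascends this GAN value; the encoder descends its second term.
disc_value = (np.mean(np.log(d(z_prior) + eps))
              + np.mean(np.log(1 - d(z_enc) + eps)))
enc_loss = -np.mean(np.log(d(z_enc) + eps))  # non-saturating encoder objective
```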
Generative models
For generative models, we used a setup similar to that found in Donahue et al. (2016) for the generators / decoders, using the generator from DCGAN in all experiments. All models were trained using Adam with a learning rate of for epochs for CIFAR10 and CIFAR100, and for epochs for all other datasets.
Contrastive Predictive Coding
a.3 Sampling strategies
We found that both the infoNCE- and DV-based estimators were sensitive to the negative sampling strategy, while the JSD-based estimator was insensitive. JSD worked better ( accuracy improvement) when positive samples were excluded from the product of marginals, so we exclude them in our implementation. This is quite likely because our batch-wise sampling strategy overestimates the frequency of positive examples as measured across the complete dataset. infoNCE was highly sensitive to the number of negative samples used for estimating the log-expectation term (see Figure 9). With a high sample size, infoNCE outperformed JSD on many tasks, but performance drops quickly as we reduce the number of images used for this estimation. This may become more problematic for larger datasets and networks where available memory is an issue. DV was outperformed by JSD even with the maximum number of negative samples used in these experiments, and worse, it was highly unstable as the number of negative samples dropped.
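The two main estimators can be sketched as functions of a batch of scores, where entry (i, j) scores pairing sample i's representation with sample j's input. Diagonal entries are positive pairs; this is a sketch using the softplus form of the JSD objective and the usual infoNCE form, not the paper's exact implementation:

```python
import numpy as np

def softplus(a):
    return np.logaddexp(0.0, a)  # numerically stable log(1 + exp(a))

def jsd_mi(scores):
    """JSD-based objective; positives (the diagonal) are excluded from negatives."""
    B = scores.shape[0]
    pos = np.diag(scores)
    neg = scores[~np.eye(B, dtype=bool)]
    return float(np.mean(-softplus(-pos)) - np.mean(softplus(neg)))

def infonce_mi(scores):
    """infoNCE bound: log B plus the mean log-softmax of each positive."""
    B = scores.shape[0]
    row_lse = np.logaddexp.reduce(scores, axis=1)
    return float(np.log(B) + np.mean(np.diag(scores) - row_lse))

rng = np.random.default_rng(0)
easy = rng.normal(size=(64, 64)) + 5.0 * np.eye(64)  # strongly informative scores
hard = rng.normal(size=(64, 64))                     # uninformative scores
```

Both estimators should score the informative `easy` matrix above the uninformative `hard` one, and the infoNCE estimate is capped by the log of the number of negatives per positive, which is the memory sensitivity discussed above.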
a.4 Nearestneighbor analysis
In order to better understand the metric structure of DIM’s representations, we did a nearestneighbor analysis, randomly choosing a sample from each class in the test set, ordering the test set in terms of distance in the representation space (to reflect the uniform prior), then selecting the four with the lowest distance. Our results in Figure 8 show that DIM with a localonly objective, DIM(L), learns a representation with a much more interpretable structure across the image. However, our result potentially highlights an issue with using only consistent information across patches, as many of the nearest neighbors share patterns (colors, shapes, texture) but not class.
a.5 Ablation studies
To better understand the effects of the hyperparameters , , and on the representational characteristics of the encoder, we performed several ablation studies. These illuminate the relative importance of the global versus local mutual information objectives, as well as the role of the prior.
Local versus global mutual information maximization
The results of our ablation study for DIM on CIFAR10 are presented in Figure 10. In general, good classification performance is highly dependent on the local term, , while good reconstruction is highly dependent on the global term, . However, a small amount of helps in classification accuracy and a small amount of improves reconstruction. For mutual information, we found that having a combination of and yielded higher MINE estimates. Finally, for CelebA (Figure 11), where the classification task is more fine-grained (i.e., it is composed of potentially locally-specified labels, such as “lipstick” or “smiling”), the global objective plays a stronger role than it does for classification on other datasets (e.g., CIFAR10).
The effect of the prior
We found including the prior term, , was absolutely necessary for ensuring low dependence between components of the highlevel representation, , as measured by NDM. In addition, a small amount of the prior term helps improve classification results when used with the local term, . This may be because the additional constraints imposed on the representation help to encourage the local term to focus on consistent, rather than trivial, information.
a.6 Empirical consistency of Neural Dependency Measure (NDM)
Here we evaluate the Neural Dependency Measure (NDM) over a range of VAE (Alemi et al., 2016; Higgins et al., 2016) models. VAE encourages disentangled representations by increasing the role of the KL-divergence term in the ELBO objective. We hypothesized that NDM would consistently measure lower dependence (lower NDM) as the values increase, and our results in Figure 13 confirm this. As we increase , there is a strong downward trend in the NDM, though and give similar numbers. In addition, the variance over estimates and models is relatively low, meaning the estimator is empirically consistent in this setting.
a.7 Additional details on occlusion and coordinate prediction experiments
Here we present experimental details on the occlusion and coordinate prediction tasks.
Occlusions.
For the occlusion experiments, the sampling distribution for patches to occlude was ad hoc. Roughly, we randomly occlude the input image under the constraint that at least one block of pixels remains fully visible and at least one block of pixels is fully occluded. We chose based on the receptive fields of the local features in our encoder, since this guarantees that occlusion leaves at least one local feature fully observed and at least one local feature fully unobserved. Figure 14 shows the distribution of occlusions used in our tests.
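A grid-based sampler satisfying the stated constraint can be sketched as follows. The block size of 16 and the 64×64 image size are illustrative stand-ins for the receptive-field-derived values, which are elided above:

```python
import numpy as np

def sample_occlusion_mask(h=64, w=64, block=16, rng=None):
    """Binary mask (True = visible) with at least one block fully visible
    and at least one block fully occluded."""
    rng = rng or np.random.default_rng()
    gh, gw = h // block, w // block
    while True:
        keep = rng.random((gh, gw)) < 0.5     # per-block keep/occlude decision
        if keep.any() and (~keep).any():      # enforce both constraints
            break
    # Expand the block-level decisions to pixel resolution.
    return np.kron(keep.astype(float), np.ones((block, block))).astype(bool)

mask = sample_occlusion_mask(rng=np.random.default_rng(0))
```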
Absolute coordinate prediction
For absolute coordinate prediction, the global features and local features are sampled by 1) feeding an image from the data distribution through the feature encoder, and 2) sampling a random spatial location from which to take the local features . Given and , we treat the coordinates and as independent categorical variables and measure the required log probability using a sum of categorical cross-entropies. In practice, we implement the prediction function as an MLP with two hidden layers, each with units, ReLU activations, and batch norm. We marginalize this objective over all local features associated with a given global feature when computing gradients.
Relative coordinate prediction
For relative coordinate prediction, the global features and local features are sampled by 1) feeding an image from the data distribution through the feature encoder, 2) sampling a random spatial location from which to take source local features , and 3) sampling another random location from which to take target local features . In practice, our predictive model for this task uses the same architecture as for the task described previously. For each global feature we select one source feature and marginalize over all possible target features when computing gradients.
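The coordinate-prediction objective reduces to two categorical cross-entropies over the row and column indices. A sketch in which the two-hidden-layer MLP is replaced by a single random linear map, with all dimensions hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
B, DY, DC, H, W = 8, 64, 64, 7, 7  # hypothetical batch and feature sizes

def log_softmax(a):
    a = a - a.max(axis=1, keepdims=True)
    return a - np.log(np.exp(a).sum(axis=1, keepdims=True))

Wp = rng.normal(0, 0.05, (DY + DC, H + W))  # stands in for the prediction MLP

def coord_loss(y, c_loc, rows, cols):
    """Sum of categorical cross-entropies for the (row, col) of a local feature."""
    logits = np.concatenate([y, c_loc], axis=1) @ Wp
    lp_row = log_softmax(logits[:, :H])   # row logits -> log-probabilities
    lp_col = log_softmax(logits[:, H:])   # column logits -> log-probabilities
    idx = np.arange(len(rows))
    return float(-(lp_row[idx, rows] + lp_col[idx, cols]).mean())

y = rng.normal(size=(B, DY))       # global features
c_loc = rng.normal(size=(B, DC))   # one sampled local feature per image
rows = rng.integers(0, H, B)       # target coordinates
cols = rng.integers(0, W, B)
loss = coord_loss(y, c_loc, rows, cols)
```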
a.8 Training a generator by matching to a prior implicitly
We show here and in our experiments below that we can use the prior objective in DIM (Equation 7) to train a high-quality generator of images, by training to implicitly map to a one-dimensional mixture of two Gaussians. One component of this mixture will be a target for the pushforward distribution of through the encoder, while the other will be a target for the pushforward distribution of the generator, , through the same encoder. Let be a generator function, where the input is drawn from a simple prior, (such as a spherical Gaussian). Let be the generated distribution and be the empirical distribution of the training set. As in GANs, we pass samples from the generator or the training data through another function, , in order to get gradients for finding the parameters, . However, unlike GANs, we do not play a minimax game between the generator and this function. Rather, is trained to generate a mixture of Gaussians conditioned on whether the input sample came from or :
(13) 
where and are normal distributions with unit variance and means and , respectively. In order to find the parameters , we introduce two discriminators, , and use the following lower bounds defined by the JSD f-GAN:
(14) 
The generator is trained to move the first-order moment of to :
(15) 
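The bodies of Eqns. 13–15 were lost in extraction; the following is one consistent reading of them, using our own placeholder notation (μ₁, μ₂ for the two means, D₁, D₂ for the discriminators, E for the encoder, and ℙ, ℚ for the data and generated distributions) rather than the paper's exact symbols:

```latex
% Eqn. 13 (one reading): E is trained so that its 1-D output follows a
% two-component mixture, with the component chosen by the sample's source:
E(x) \sim \mathcal{N}(\mu_1, 1)\ \text{for}\ x \sim \mathbb{P}, \qquad
E(x) \sim \mathcal{N}(\mu_2, 1)\ \text{for}\ x \sim \mathbb{Q}.

% Eqn. 14 (one reading): each target is enforced with a JSD f-GAN lower
% bound, one discriminator per Gaussian component:
\max_{E} \sum_{k \in \{1,2\}}
  \mathbb{E}_{z \sim \mathcal{N}(\mu_k,1)}\!\left[\log \sigma(D_k(z))\right]
  + \mathbb{E}_{x \sim \mathbb{P}_k}\!\left[\log\!\big(1-\sigma(D_k(E(x)))\big)\right],
\qquad \mathbb{P}_1 = \mathbb{P},\ \mathbb{P}_2 = \mathbb{Q}.

% Eqn. 15 (one reading): the generator moves the first moment of E's
% pushforward of Q toward the mean of the "real" component:
\min_{G}\ \Big( \mathbb{E}_{z \sim p(z)}\!\left[E(G(z))\right] - \mu_1 \Big)^{2}.
```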
Some intuition might help explain why this works. As discussed in Arjovsky & Bottou (2017), if and have support on low-dimensional manifolds in , then unless they are perfectly aligned, there exists a discriminator that will be able to perfectly distinguish between samples coming from and , which means that and must also be disjoint. However, to train the generator, and need to share support in in order to ensure stable and non-zero gradients for the generator. Our own experiments with overtraining the discriminator (Figure 15) confirm that lack of overlap between the two modes of the discriminator is symptomatic of poor training. Suppose we start with the assumption that the encoder targets, and , should overlap. Unless and are perfectly aligned (which, according to Arjovsky & Bottou (2017), is almost guaranteed not to happen with natural images), the discriminator can always accomplish this task by discarding information about or . This means that, by choosing the overlap, we fix the strength of the encoder.
Model  Inception score  FID 

Real data  
IE (ours)  
NSGAN-CP  
WGAN-GP 
a.9 Generation experiments and results
For the generator and encoder, we use a ResNet architecture (He et al., 2016) identical to the one found in Gulrajani et al. (2017). We used the contractive penalty (found in Mescheder et al. (2018), but first introduced in contractive autoencoders (Rifai et al., 2011)) on the encoder, gradient clipping on the discriminators, and no regularization on the generator. Batch norm (Ioffe & Szegedy, 2015) was used on the generator, but not on the discriminator. We trained on the dimensional LSUN (Yu et al., 2015), CelebA (Liu et al., 2015), and Tiny Imagenet datasets.
a.10 Image generation
Here, we train a generator implicitly mapping to two Gaussians as described in Section A.8. Our results (Figure 16) show highly realistic images, qualitatively competitive with other methods (Gulrajani et al., 2017; Hjelm et al., 2018). In order to quantitatively compare our method to GANs, we trained a non-saturating GAN with contractive penalty (NSGAN-CP) and WGAN-GP (Gulrajani et al., 2017) with identical architectures and training procedures. Our results (Table 11) show that, while our method did not surpass NSGAN-CP or WGAN-GP in our experiments, it came reasonably close.