Adversarial Feature Learning
The ability of the Generative Adversarial Networks (GANs) framework to learn generative models mapping from simple latent distributions to arbitrarily complex data distributions has been demonstrated empirically, with compelling results showing that the latent space of such generators captures semantic variation in the data distribution. Intuitively, models trained to predict these semantic latent representations given data may serve as useful feature representations for auxiliary problems where semantics are relevant. However, in their existing form, GANs have no means of learning the inverse mapping -- projecting data back into the latent space. We propose Bidirectional Generative Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and demonstrate that the resulting learned feature representation is useful for auxiliary supervised discrimination tasks, competitive with contemporary approaches to unsupervised and self-supervised feature learning.
Deep convolutional networks (convnets) have become a staple of the modern computer vision pipeline. After training these models on a massive database of image-label pairs like ImageNet
(Russakovsky et al., 2015), the network easily adapts to a variety of similar visual tasks, achieving impressive results on image classification (Donahue et al., 2014; Zeiler & Fergus, 2014; Razavian et al., 2014) or localization (Girshick et al., 2014; Long et al., 2015) tasks. In other perceptual domains such as natural language processing or speech recognition, deep networks have proven highly effective as well
(Bahdanau et al., 2015; Sutskever et al., 2014; Vinyals et al., 2015; Graves et al., 2013). However, all of these recent results rely on a supervisory signal from large-scale databases of hand-labeled data, ignoring much of the useful information present in the structure of the data itself.

Meanwhile, Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) have emerged as a powerful framework for learning generative models of arbitrarily complex data distributions. The GAN framework learns a generator mapping samples from an arbitrary latent distribution to data, as well as an adversarial discriminator which tries to distinguish between real and generated samples as accurately as possible. The generator’s goal is to “fool” the discriminator by producing samples which are as close to real data as possible. When trained on databases of natural images, GANs produce impressive results (Radford et al., 2016; Denton et al., 2015).
Interpolations in the latent space of the generator produce smooth and plausible semantic variations, and certain directions in this space correspond to particular semantic attributes along which the data distribution varies. For example, Radford et al. (2016) showed that a GAN trained on a database of human faces learns to associate particular latent directions with gender and the presence of eyeglasses.
A natural question arises from this ostensible “semantic juice” flowing through the weights of generators learned using the GAN framework: can GANs be used for unsupervised learning of rich feature representations for arbitrary data distributions? An obvious issue with doing so is that the generator maps latent samples to generated data, but the framework does not include an inverse mapping from data to latent representation.

Hence, we propose a novel unsupervised feature learning framework, Bidirectional Generative Adversarial Networks (BiGAN). The overall model is depicted in Figure 1. In short, in addition to the generator $G$ from the standard GAN framework (Goodfellow et al., 2014), BiGAN includes an encoder $E$ which maps data $x$ to latent representations $z = E(x)$. The BiGAN discriminator $D$ discriminates not only in data space ($x$ versus $G(z)$), but jointly in data and latent space (tuples $(x, E(x))$ versus $(G(z), z)$), where the latent component is either an encoder output $E(x)$ or a generator input $z$.
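For concreteness, the following PyTorch-style sketch (ours, for illustration only; not the architectures used in our experiments) lays out the three BiGAN modules and highlights the key structural point: the discriminator scores a joint data-latent pair rather than a data sample alone. The layer sizes, `latent_dim`, and the module names are illustrative assumptions.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 50, 784  # illustrative sizes (e.g. flattened MNIST-like vectors)

class Generator(nn.Module):          # G: z -> x
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, data_dim), nn.Sigmoid())
    def forward(self, z):
        return self.net(z)

class Encoder(nn.Module):            # E: x -> z
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(data_dim, 256), nn.ReLU(),
                                 nn.Linear(256, latent_dim))
    def forward(self, x):
        return self.net(x)

class JointDiscriminator(nn.Module):
    """D: (x, z) -> probability that the pair came from the encoder rather than the generator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(data_dim + latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 1), nn.Sigmoid())
    def forward(self, x, z):
        return self.net(torch.cat([x, z], dim=1))
```

A BiGAN thus has three trainable components, but only the encoder is retained for the downstream feature-learning evaluations.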
It may not be obvious from this description that the BiGAN encoder $E$ should learn to invert the generator $G$. The two modules cannot directly “communicate” with one another: the encoder never “sees” generator outputs ($E(G(z))$ is not computed), and vice versa. Yet, in Section 3, we will both argue intuitively and formally prove that the encoder and generator must learn to invert one another in order to fool the BiGAN discriminator.
Because the BiGAN encoder learns to predict features $z$ given data $x$, and prior work on GANs has demonstrated that these features capture semantic attributes of the data, we hypothesize that a trained BiGAN encoder may serve as a useful feature representation for related semantic tasks, in the same way that fully supervised visual models trained to predict semantic “labels” given images serve as powerful feature representations for related visual tasks. In this context, a latent representation $z$ may be thought of as a “label” for $x$, but one which came for “free,” without the need for supervision.
An alternative approach to learning the inverse mapping from data to latent representation is to directly model $p(z \mid G(z))$, predicting generator input $z$ given generated data $G(z)$. We’ll refer to this alternative as a latent regressor, later arguing (Section 4.1) that the BiGAN encoder may be preferable in a feature learning context, as well as comparing the approaches empirically.
BiGANs are a robust and highly generic approach to unsupervised feature learning, making no assumptions about the structure or type of data to which they are applied, as our theoretical results will demonstrate. Our empirical studies will show that despite their generality, BiGANs are competitive with contemporary approaches to self-supervised and weakly supervised feature learning designed specifically for a notoriously complex data distribution – natural images.
Dumoulin et al. (2016) independently proposed an identical model in their concurrent work, exploring the case of a stochastic encoder and the ability of such models to learn in a semi-supervised setting.
Let $p_X(x)$ be the distribution of our data for $x \in \Omega_X$ (e.g. natural images). The goal of generative modeling is to capture this data distribution using a probabilistic model. Unfortunately, exact modeling of this probability density function is computationally intractable (Hinton et al., 2006; Salakhutdinov & Hinton, 2009) for all but the most trivial models. Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) instead model the data distribution as a transformation of a fixed latent distribution $p_Z(z)$ for $z \in \Omega_Z$. This transformation, called a generator, is expressed as a deterministic feed-forward network $G : \Omega_Z \to \Omega_X$ with $x = G(z)$ and $z \sim p_Z$. The goal is to train a generator such that $G(z) \sim p_X(x)$.

The GAN framework trains a generator such that no discriminative model can distinguish samples of the data distribution from samples of the generative distribution. Both generator and discriminator are learned using the adversarial (minimax) objective $\min_G \max_D V(D, G)$, where
$$V(D, G) := \mathbb{E}_{x \sim p_X}\!\left[\log D(x)\right] + \mathbb{E}_{z \sim p_Z}\!\left[\log\left(1 - D(G(z))\right)\right] \qquad (1)$$
Goodfellow et al. (2014) showed that for an ideal discriminator the objective is equivalent to the Jensen-Shannon divergence between the two distributions $p_X$ and $p_G$.
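For reference, the standard argument proceeds by pointwise maximization of (1) over $D$ and substitution of the resulting ideal discriminator:

$$D^*_G(x) = \frac{p_X(x)}{p_X(x) + p_G(x)}, \qquad \max_D V(D, G) = 2\, D_{\mathrm{JS}}\!\left(p_X \,\|\, p_G\right) - \log 4 .$$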
The adversarial objective (1) does not directly lend itself to an efficient optimization, as each step in the generator $G$ requires a full discriminator $D$ to be learned. Furthermore, a perfect discriminator no longer provides any gradient information to the generator, as the gradient of $V(D, G)$ at any global or local maximum is $0$. To provide a strong gradient signal nonetheless, Goodfellow et al. (2014) slightly alter the objective between generator and discriminator updates, while keeping the same fixed point characteristics. They also propose to optimize (1) using an alternating optimization switching between updates to the generator and discriminator. While this optimization is not guaranteed to converge, empirically it works well if the discriminator and generator are well balanced.
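The sketch below illustrates this alternating scheme with the altered (label-swapped, “non-saturating”) generator objective for a plain GAN; the network architectures, optimizer settings, and names are illustrative assumptions rather than a configuration used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

latent_dim, data_dim = 50, 784
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 256), nn.ReLU(), nn.Linear(256, 1))  # outputs a logit

opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)

def gan_step(x_real):
    n = x_real.size(0)
    z = torch.rand(n, latent_dim) * 2 - 1                # z ~ Uniform(-1, 1)
    x_fake = G(z)
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)

    # Discriminator update: maximize log D(x) + log(1 - D(G(z))).
    d_loss = F.binary_cross_entropy_with_logits(D(x_real), ones) + \
             F.binary_cross_entropy_with_logits(D(x_fake.detach()), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update with swapped labels: maximize log D(G(z)) instead of minimizing log(1 - D(G(z))).
    g_loss = F.binary_cross_entropy_with_logits(D(x_fake), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```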
Despite the empirical strength of GANs as generative models of arbitrary data distributions, it is not clear how they can be applied as an unsupervised feature representation. One possibility for learning such representations is to learn an inverse mapping regressing from generated data $G(z)$ back to the latent input $z$. However, unless the generator perfectly models the data distribution $p_X$, a nearly impossible objective for a complex data distribution such as that of high-resolution natural images, this idea may prove insufficient.
In Bidirectional Generative Adversarial Networks (BiGANs) we not only train a generator, but additionally train an encoder $E : \Omega_X \to \Omega_Z$. The encoder induces a distribution $p_E(z \mid x)$ mapping data points $x$ into the latent feature space of the generative model. The discriminator is also modified to take input from the latent space, predicting $P_D(Y \mid x, z)$, where $Y = 1$ if $x$ is real (sampled from the real data distribution $p_X$), and $Y = 0$ if $x$ is generated (the output of $G(z)$, $z \sim p_Z$).
The BiGAN training objective is defined as a minimax objective
$$\min_{G, E} \max_{D} V(D, E, G) \qquad (2)$$
where
$$V(D, E, G) := \mathbb{E}_{x \sim p_X}\!\big[\,\mathbb{E}_{z \sim p_E(\cdot \mid x)}[\log D(x, z)]\,\big] + \mathbb{E}_{z \sim p_Z}\!\big[\,\mathbb{E}_{x \sim p_G(\cdot \mid z)}[\log(1 - D(x, z))]\,\big] \qquad (3)$$
We optimize this minimax objective using the same alternating gradient based optimization as Goodfellow et al. (2014). See Section 3.4 for details.
BiGANs share many of the theoretical properties of GANs (Goodfellow et al., 2014), while additionally guaranteeing that at the global optimum, $G$ and $E$ are each other’s inverse. BiGANs are also closely related to autoencoders with an $\ell_0$ loss function. In the following sections we highlight some of the appealing theoretical properties of BiGANs.

Let $p_{GZ}(x, z) := p_G(x \mid z)\, p_Z(z)$ and $p_{EX}(x, z) := p_E(z \mid x)\, p_X(x)$ be the joint distributions modeled by the generator and encoder respectively. $\Omega := \Omega_X \times \Omega_Z$ is the joint latent and data space. For a region $R \subseteq \Omega$,

$$P_{EX}(R) := \int_{\Omega} p_{EX}(x, z)\, \mathbf{1}_{[(x, z) \in R]} \,\mathrm{d}(x, z), \qquad P_{GZ}(R) := \int_{\Omega} p_{GZ}(x, z)\, \mathbf{1}_{[(x, z) \in R]} \,\mathrm{d}(x, z)$$

are probability measures over that region. We also define $P_X$ and $P_Z$ as measures over regions $R_X \subseteq \Omega_X$ and $R_Z \subseteq \Omega_Z$. We refer to the sets of features and data samples in the support of $P_Z$ and $P_X$ as $\hat\Omega_Z := \mathrm{supp}(P_Z)$ and $\hat\Omega_X := \mathrm{supp}(P_X)$ respectively. $D_{\mathrm{KL}}(P \,\|\, Q)$ and $D_{\mathrm{JS}}(P \,\|\, Q)$ respectively denote the Kullback-Leibler (KL) and Jensen-Shannon divergences between probability measures $P$ and $Q$. By definition,

$$D_{\mathrm{KL}}(P \,\|\, Q) := \mathbb{E}_{x \sim P}\!\left[\log \tfrac{\mathrm{d}P}{\mathrm{d}Q}(x)\right], \qquad D_{\mathrm{JS}}(P \,\|\, Q) := \tfrac{1}{2}\, D_{\mathrm{KL}}\!\left(P \,\big\|\, \tfrac{P+Q}{2}\right) + \tfrac{1}{2}\, D_{\mathrm{KL}}\!\left(Q \,\big\|\, \tfrac{P+Q}{2}\right),$$

where $\tfrac{\mathrm{d}P}{\mathrm{d}Q}$ is the Radon-Nikodym (RN) derivative of measure $P$ with respect to measure $Q$, with the defining property that $P(R) = \int_R \tfrac{\mathrm{d}P}{\mathrm{d}Q} \,\mathrm{d}Q$. The RN derivative $\tfrac{\mathrm{d}P}{\mathrm{d}Q} : \Omega \to \mathbb{R}_{\geq 0}$ is defined for any measures $P$ and $Q$ on space $\Omega$ such that $P$ is absolutely continuous with respect to $Q$: i.e., for any $R \subseteq \Omega$, $P(R) > 0 \implies Q(R) > 0$.
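As a concrete illustration of these definitions in the simplest (discrete) setting, the snippet below computes the KL and Jensen-Shannon divergences between two small categorical distributions, where the Radon-Nikodym derivative is just the elementwise ratio of probability masses. This is only a toy illustration of the quantities used in the analysis; the function names are ours.

```python
import numpy as np

def kl(p, q):
    """D_KL(P || Q) = sum_i p_i * log(p_i / q_i), defined when q_i > 0 wherever p_i > 0."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def js(p, q):
    """D_JS(P || Q) = 0.5 * D_KL(P || M) + 0.5 * D_KL(Q || M), with M the average measure."""
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.3, 0.5])
print(kl(p, q), js(p, q))   # JS is symmetric and bounded above by log 2
print(js(p, p))             # 0: JS vanishes iff the two distributions coincide
```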
We start by characterizing the optimal discriminator for any generator and encoder, following Goodfellow et al. (2014). This optimal discriminator then allows us to reformulate objective (3), and show that it reduces to the Jensen-Shannon divergence between the joint distributions $P_{EX}$ and $P_{GZ}$.
Proposition 1. For any $E$ and $G$, the optimal discriminator $D^*_{EG} := \arg\max_D V(D, E, G)$ is the Radon-Nikodym derivative $f_{EG} := \tfrac{\mathrm{d}P_{EX}}{\mathrm{d}(P_{EX} + P_{GZ})} : \Omega \to [0, 1]$ of measure $P_{EX}$ with respect to measure $P_{EX} + P_{GZ}$.
Proof. Given in Appendix A.1.
This optimal discriminator now allows us to characterize the optimal generator and encoder.
Proposition 2. The encoder and generator’s objective for an optimal discriminator, $C(E, G) := \max_D V(D, E, G) = V(D^*_{EG}, E, G)$, can be rewritten in terms of the Jensen-Shannon divergence between measures $P_{EX}$ and $P_{GZ}$ as $C(E, G) = 2\, D_{\mathrm{JS}}(P_{EX} \,\|\, P_{GZ}) - \log 4$.
Proof. Given in Appendix A.2.
Theorem 1. The global minimum of $C(E, G)$ is achieved if and only if $P_{EX} = P_{GZ}$. At that point, $C(E, G) = -\log 4$ and $D^*_{EG} = \tfrac{1}{2}$.
Proof. From Proposition 2, we have that $C(E, G) = 2\, D_{\mathrm{JS}}(P_{EX} \,\|\, P_{GZ}) - \log 4$. The Jensen-Shannon divergence satisfies $D_{\mathrm{JS}}(P \,\|\, Q) \geq 0$ for any $P$ and $Q$, and $D_{\mathrm{JS}}(P \,\|\, Q) = 0$ if and only if $P = Q$. Therefore, the global minimum of $C(E, G)$ occurs if and only if $P_{EX} = P_{GZ}$, and at this point the value is $C(E, G) = -\log 4$. Finally, $P_{EX} = P_{GZ}$ implies that the optimal discriminator is chance: $D^*_{EG} = \tfrac{1}{2}$.
The optimal discriminator, encoder, and generator of BiGAN are similar to the optimal discriminator and generator of the GAN framework (Goodfellow et al., 2014). However, an important difference is that BiGAN optimizes a Jensen-Shannon divergence between joint distributions over both data $x$ and latent features $z$. This joint divergence allows us to further characterize properties of $G$ and $E$, as shown below.
We first present an intuitive argument that, in order to “fool” a perfect discriminator, a deterministic BiGAN encoder and generator must invert each other. (Later we will formally state and prove this property.) Consider a BiGAN discriminator input pair $(x, z)$. Due to the sampling procedure, $(x, z)$ must satisfy at least one of the following two properties:

(a) $x \in \hat\Omega_X \ \wedge\ E(x) = z$

(b) $z \in \hat\Omega_Z \ \wedge\ G(z) = x$
If only one of these properties is satisfied, a perfect discriminator can infer the source of $(x, z)$ with certainty: if only (a) is satisfied, $(x, z)$ must be an encoder pair and $D^*_{EG}(x, z) = 1$; if only (b) is satisfied, $(x, z)$ must be a generator pair and $D^*_{EG}(x, z) = 0$.
Therefore, in order to fool a perfect discriminator at $(x, z)$ (so that $0 < D^*_{EG}(x, z) < 1$), $E$ and $G$ must satisfy both (a) and (b). In this case, we can substitute the equality $E(x) = z$ required by (a) into the equality $G(z) = x$ required by (b), and vice versa, giving the inversion properties $x = G(E(x))$ and $z = E(G(z))$.
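As a one-dimensional toy illustration (ours, not part of the original analysis), take $p_Z = \mathrm{U}(-1, 1)$, $G(z) = 2z$, and $E(x) = x/2$. Then

$$E(G(z)) = \tfrac{2z}{2} = z \quad \text{and} \quad G(E(x)) = 2 \cdot \tfrac{x}{2} = x,$$

so every encoder pair $(x, E(x))$ is also a generator pair $(G(z), z)$ with $z = x/2$: properties (a) and (b) both hold for every sampled pair, and even a perfect discriminator is reduced to chance.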
Formally, we show in Theorem 2 that the optimal generator and encoder invert one another almost everywhere on the supports $\hat\Omega_X$ and $\hat\Omega_Z$ of $P_X$ and $P_Z$.
Theorem 2. If $E$ and $G$ are an optimal encoder and generator, then $E = G^{-1}$ almost everywhere; that is, $G(E(x)) = x$ for $P_X$-almost every $x \in \Omega_X$, and $E(G(z)) = z$ for $P_Z$-almost every $z \in \Omega_Z$.
Proof. Given in Appendix A.4.
While Theorem 2 characterizes the encoder and generator at their optimum, due to the non-convex nature of the optimization, this optimum might never be reached. Experimentally, Section 4 shows that on standard datasets, the two are approximate inverses; however, they are rarely exact inverses. It is thus also interesting to show what objective BiGAN optimizes in terms of $E$ and $G$. Next we show that BiGANs are closely related to autoencoders with an $\ell_0$ loss function.
As argued in Section 1, a model trained to predict features $z$ given data $x$ should learn useful semantic representations. Here we show that the BiGAN objective forces the encoder to do exactly this: in order to fool the discriminator at a particular $z$, the encoder must invert the generator at that $z$, such that $E(G(z)) = z$.
Theorem 3. The encoder and generator objective given an optimal discriminator, $C(E, G) := \max_D V(D, E, G)$, can be rewritten as an $\ell_0$ autoencoder loss function

$$C(E, G) = \mathbb{E}_{x \sim p_X}\!\left[\mathbf{1}_{[E(x) \in \hat\Omega_Z \,\wedge\, G(E(x)) = x]} \log f_{EG}(x, E(x))\right] + \mathbb{E}_{z \sim p_Z}\!\left[\mathbf{1}_{[G(z) \in \hat\Omega_X \,\wedge\, E(G(z)) = z]} \log\!\left(1 - f_{EG}(G(z), z)\right)\right]$$

with $\log f_{EG} \in (-\infty, 0)$ and $\log(1 - f_{EG}) \in (-\infty, 0)$ $P_{EX}$-almost and $P_{GZ}$-almost everywhere.
Proof. Given in Appendix A.5.
Here the indicator function $\mathbf{1}_{[G(E(x)) = x]}$ in the first term is equivalent to an autoencoder with $\ell_0$ loss, while the indicator $\mathbf{1}_{[E(G(z)) = z]}$ in the second term shows that the BiGAN encoder must invert the generator, the desired property for feature learning. The objective further encourages the functions $E(x)$ and $G(z)$ to produce valid outputs in the support of $P_Z$ and $P_X$ respectively. Unlike regular autoencoders, the $\ell_0$ loss function does not make any assumptions about the structure or distribution of the data itself; in fact, all the structural properties of BiGAN are learned as part of the discriminator.
In practice, as in the GAN framework (Goodfellow et al., 2014), each BiGAN module $D$, $G$, and $E$ is a parametric function (with parameters $\theta_D$, $\theta_G$, and $\theta_E$, respectively). As a whole, BiGAN can be optimized using alternating stochastic gradient steps. In one iteration, the discriminator parameters $\theta_D$ are updated by taking one or more steps in the positive gradient direction $\nabla_{\theta_D} V(D, E, G)$, then the encoder parameters $\theta_E$ and generator parameters $\theta_G$ are together updated by taking a step in the negative gradient direction $-\nabla_{\theta_E, \theta_G} V(D, E, G)$. In both cases, the expectation terms of
$V(D, E, G)$ are estimated using mini-batches of $n$ samples $\{x^{(i)} \sim p_X\}_{i=1}^{n}$ and $\{z^{(i)} \sim p_Z\}_{i=1}^{n}$ drawn independently for each update step.

Goodfellow et al. (2014) found that an objective in which the real and generated labels are swapped provides stronger gradient signal to $G$. We similarly observed in BiGAN training that an “inverse” objective (with the real and generated labels swapped) provides stronger gradient signal to $G$ and $E$. For efficiency, we also update all modules $D$, $G$, and $E$ simultaneously at each iteration, rather than alternating between $D$ updates and $G$, $E$ updates. See Appendix B for details.
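The following is a minimal sketch of one such iteration in PyTorch, using the label-swapped (“inverse”) objective for $E$ and $G$; the module definitions, optimizer settings, and names are illustrative assumptions rather than the exact configuration used in our experiments (see Appendices B and C for those).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

latent_dim, data_dim = 50, 784
E = nn.Sequential(nn.Linear(data_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, data_dim))

class JointD(nn.Module):                  # joint discriminator, outputs a logit for (x, z)
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(data_dim + latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 1))
    def forward(self, x, z):
        return self.net(torch.cat([x, z], dim=1))

D = JointD()
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
opt_eg = torch.optim.Adam(list(E.parameters()) + list(G.parameters()), lr=2e-4)
bce = F.binary_cross_entropy_with_logits

def bigan_step(x_real):
    n = x_real.size(0)
    z = torch.rand(n, latent_dim) * 2 - 1            # z ~ p_Z (uniform)
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)

    # Discriminator: encoder pairs (x, E(x)) labeled 1, generator pairs (G(z), z) labeled 0.
    d_loss = bce(D(x_real, E(x_real).detach()), ones) + bce(D(G(z).detach(), z), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Encoder/generator: "inverse" objective with the labels swapped.
    eg_loss = bce(D(x_real, E(x_real)), zeros) + bce(D(G(z), z), ones)
    opt_eg.zero_grad(); eg_loss.backward(); opt_eg.step()
```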
It is often useful to parametrize the output of the generator and encoder in a different, usually smaller, space $\Omega'_X$ and $\Omega'_Z$ rather than the original $\Omega_X$ and $\Omega_Z$. For example, for visual feature learning, the images input to the encoder should be of similar resolution to images used in the evaluation. On the other hand, generating high resolution images remains difficult for current generative models. In this situation, the encoder may take higher resolution input while the generator output and discriminator input remain low resolution.
We generalize the BiGAN objective (3) with functions $g_X : \Omega_X \to \Omega'_X$ and $g_Z : \Omega_Z \to \Omega'_Z$, and encoder $E : \Omega_X \to \Omega'_Z$, generator $G : \Omega_Z \to \Omega'_X$, and discriminator $D : \Omega'_X \times \Omega'_Z \to [0, 1]$:

$$V(D, E, G) := \mathbb{E}_{x \sim p_X}\!\big[\,\mathbb{E}_{z' \sim p_E(\cdot \mid x)}[\log D(g_X(x), z')]\,\big] + \mathbb{E}_{z \sim p_Z}\!\big[\,\mathbb{E}_{x' \sim p_G(\cdot \mid z)}[\log(1 - D(x', g_Z(z)))]\,\big].$$

An identity $g_X(x) = x$ and $g_Z(z) = z$ (and $\Omega'_X = \Omega_X$, $\Omega'_Z = \Omega_Z$) yields the original objective. For visual feature learning with higher resolution encoder inputs, $g_X$ is an image resizing function that downsamples a high resolution image $x$ to a lower resolution image $x'$, as output by the generator. ($g_Z$ is the identity.)
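A minimal sketch of this asymmetric-resolution setup, assuming the encoder takes 112x112 images while the generator outputs and the discriminator sees 64x64 images (the resolutions used in Section 4); the resize helper `g_x` shown here, implemented with `torch.nn.functional.interpolate`, is an illustrative choice.

```python
import torch
import torch.nn.functional as F

def g_x(x_high):
    """g_X: downsample a high-resolution encoder input to the generator's output resolution."""
    return F.interpolate(x_high, size=(64, 64), mode='bilinear', align_corners=False)

g_z = lambda z: z  # g_Z is the identity

# The discriminator then always sees pairs at generator resolution:
#   encoder pairs:   (g_x(x), E(x))  with x a 112x112 real image
#   generator pairs: (G(z), g_z(z))  with G(z) a 64x64 generated image
x = torch.randn(8, 3, 112, 112)
print(g_x(x).shape)  # torch.Size([8, 3, 64, 64])
```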
In this case, the encoder and generator respectively induce probability measures $P_{EX'}$ and $P_{GZ'}$ over regions $R$ of the joint space $\Omega' := \Omega'_X \times \Omega'_Z$, with $P_{EX'}(R) := \int_{\Omega_X} \int_{\Omega'_Z} p_X(x)\, p_E(z' \mid x)\, \mathbf{1}_{[(g_X(x), z') \in R]} \,\mathrm{d}z' \,\mathrm{d}x$, and $P_{GZ'}$ defined analogously. For optimal $E$ and $G$, we can show $P_{EX'} = P_{GZ'}$: a generalization of Theorem 1. When $E$ and $G$ are deterministic and optimal, Theorem 2 – that $E$ and $G$ invert one another – can also be generalized: for $P_X$-almost every $x \in \Omega_X$ there exists $z \in \hat\Omega_Z$ with $E(x) = g_Z(z)$ and $G(z) = g_X(x)$, and for $P_Z$-almost every $z \in \Omega_Z$ there exists $x \in \hat\Omega_X$ with $G(z) = g_X(x)$ and $E(x) = g_Z(z)$.
We evaluate the feature learning capabilities of BiGANs by first training them unsupervised as described in Section 3.4, then transferring the encoder’s learned feature representations for use in auxiliary supervised learning tasks. To demonstrate that BiGANs are able to learn meaningful feature representations both on arbitrary data vectors, where the model is agnostic to any underlying structure, as well as on very high-dimensional and complex distributions, we evaluate on both permutation-invariant MNIST (LeCun et al., 1998) and on the high-resolution natural images of ImageNet (Russakovsky et al., 2015).

In all experiments, each module $D$, $G$, and $E$ is a parametric deep (multi-layer) network. The BiGAN discriminator $D$ takes data $x$ as its initial input, and at each linear layer thereafter, the latent representation $z$ is transformed using a learned linear transformation to the hidden layer dimension and added to the non-linearity input.
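The sketch below illustrates this conditioning scheme for a single hidden layer: the latent code is passed through a learned linear map to the layer’s width and added to the pre-activation before the non-linearity. The layer sizes and names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LatentConditionedLayer(nn.Module):
    """One discriminator hidden layer: h = relu(W_x h_prev + b + W_z z), as described above."""
    def __init__(self, in_dim, latent_dim, hidden_dim):
        super().__init__()
        self.fc_x = nn.Linear(in_dim, hidden_dim)
        self.fc_z = nn.Linear(latent_dim, hidden_dim, bias=False)  # learned linear map of z
    def forward(self, h, z):
        return torch.relu(self.fc_x(h) + self.fc_z(z))

layer = LatentConditionedLayer(in_dim=784, latent_dim=50, hidden_dim=256)
h = layer(torch.randn(8, 784), torch.rand(8, 50) * 2 - 1)
print(h.shape)  # torch.Size([8, 256])
```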
Besides the BiGAN framework presented above, we considered alternative approaches to learning feature representations using different GAN variants.
The discriminator $D$ in a standard GAN takes data samples as input, making its learned intermediate representations natural candidates as feature representations for related tasks. This alternative is appealing as it requires no additional machinery, and is the approach used for unsupervised feature learning in Radford et al. (2016). On the other hand, it is not clear that the task of distinguishing between real and generated data requires or benefits from intermediate representations that are useful as semantic feature representations. In fact, if $G$ successfully generates the true data distribution $p_X(x)$, $D$ may ignore the input data entirely and predict $P(Y = 1) = \tfrac{1}{2}$ unconditionally, not learning any meaningful intermediate representations.
We consider an alternative encoder training by minimizing a reconstruction loss $\mathcal{L}(z, E(G(z)))$, after or jointly during regular GAN training, called a latent regressor or joint latent regressor respectively. We use a sigmoid cross entropy loss $\mathcal{L}$ as it naturally maps to a uniformly distributed output space. Intuitively, a drawback of this approach is that, unlike the encoder in a BiGAN, the latent regressor encoder $E$ is trained only on generated samples $G(z)$, and never “sees” real data $x \sim p_X$. While this may not be an issue in the theoretical optimum where $p_G(x) = p_X(x)$ exactly – i.e., $G$ perfectly generates the data distribution $p_X$ – in practice, for highly complex data distributions $p_X$, such as the distribution of natural images, the generator will almost never achieve this perfect result. The fact that the real data $x$ are never input to this type of encoder limits its utility as a feature representation for related tasks, as shown later in this section.

We first present results on permutation-invariant MNIST (LeCun et al., 1998). In the permutation-invariant setting, each digit image must be treated as an unstructured 784D vector (Goodfellow et al., 2013). In our case, this condition is met by designing each module as a multi-layer perceptron (MLP), agnostic to the underlying spatial structure in the data (as opposed to a convnet, for example). See Appendix C.1 for more architectural and training details. We set the latent distribution $p_Z = [\mathrm{U}(-1, 1)]^{50}$ – a 50D continuous uniform distribution.

Table 1 compares the encoding learned by a BiGAN-trained encoder $E$ with the baselines described in Section 4.1, as well as autoencoders (Hinton & Salakhutdinov, 2006) trained directly to minimize either $\ell_2$ or $\ell_1$ reconstruction error. The same architecture and optimization algorithm is used across all methods. All methods, including BiGAN, perform at roughly the same level. This result is not overly surprising given the relative simplicity of MNIST digits. For example, digits generated by $G$ in a GAN nearly perfectly match the data distribution (qualitatively), making the latent regressor (LR) baseline method a reasonable choice, as argued in Section 4.1. Qualitative results are presented in Figure 2.
| BiGAN | LR | JLR | AE ($\ell_2$) | AE ($\ell_1$) | $D$ |
|---|---|---|---|---|---|
| 97.39 | 97.30 | 97.44 | 97.13 | 97.58 | 97.63 |
Next, we present results from training BiGANs on ImageNet LSVRC (Russakovsky et al., 2015), a large-scale database of natural images. GANs trained on ImageNet cannot perfectly reconstruct the data, but often capture some interesting aspects. Here, each of $D$, $G$, and $E$ is a convnet. In all experiments, the encoder $E$ architecture follows AlexNet (Krizhevsky et al., 2012) through the fifth and last convolution layer (conv5). We also experiment with an AlexNet-based discriminator $D$ as a baseline feature learning approach. We set the latent distribution $p_Z = [\mathrm{U}(-1, 1)]^{200}$ – a 200D continuous uniform distribution. Additionally, we experiment with higher resolution encoder input images – $112 \times 112$ rather than the $64 \times 64$ used elsewhere – using the generalization described in Section 3.5. See Appendix C.2 for more architectural and training details.
The convolutional filters learned by each of the three modules are shown in Figure 3. We see that the filters learned by the encoder have clear Gabor-like structure, similar to those originally reported for the fully supervised AlexNet model (Krizhevsky et al., 2012). The filters also have similar “grouping” structure where one half (the bottom half, in this case) is more color sensitive, and the other half is more edge sensitive. (This separation of the filters occurs due to the AlexNet architecture maintaining two separate filter paths for computational efficiency.)
In Figure 4 we present sample generations $G(z)$, as well as real data samples $x$ and their BiGAN reconstructions $G(E(x))$. The reconstructions, while certainly imperfect, demonstrate empirically that the BiGAN encoder and generator learn approximate inverse mappings, as shown theoretically in Theorem 2. In Appendix C.2, we present nearest neighbors in the BiGAN learned feature space.
Following Noroozi & Favaro (2016), we evaluate by freezing the first few convolution layers of our pretrained network and randomly reinitializing and training the remainder fully supervised for ImageNet classification. Results are reported in Table 2.
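A minimal sketch of this evaluation protocol in PyTorch, assuming a conv trunk standing in for the pretrained encoder’s conv1-conv5: the first few modules keep their BiGAN-pretrained weights and are frozen, and everything after them would be randomly reinitialized and trained with supervision. The module layout, channel counts, and function name are illustrative assumptions.

```python
import torch.nn as nn

def freeze_prefix(modules, num_frozen):
    """Freeze the first `num_frozen` modules (keep pretrained weights); leave the rest trainable."""
    for i, m in enumerate(modules):
        for p in m.parameters():
            p.requires_grad = i >= num_frozen

# Toy conv trunk standing in for conv1-conv5 of the pretrained encoder.
trunk = nn.ModuleList([nn.Conv2d(3, 96, 11, 4), nn.Conv2d(96, 256, 5),
                       nn.Conv2d(256, 384, 3), nn.Conv2d(384, 384, 3),
                       nn.Conv2d(384, 256, 3)])
freeze_prefix(trunk, num_frozen=3)  # e.g. the "conv3" column: conv1-conv3 frozen
# The unfrozen modules (and the classifier head) would then be reinitialized and trained.
trainable = [p for p in trunk.parameters() if p.requires_grad]
```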
| | conv1 | conv2 | conv3 | conv4 | conv5 |
|---|---|---|---|---|---|
| Random (Noroozi & Favaro, 2016) | 48.5 | 41.0 | 34.8 | 27.1 | 12.0 |
| Wang & Gupta (2015) | 51.8 | 46.9 | 42.8 | 38.8 | 29.8 |
| Doersch et al. (2015) | 53.1 | 47.6 | 48.7 | 45.6 | 30.4 |
| Noroozi & Favaro (2016)* | 57.1 | 56.0 | 52.4 | 48.3 | 38.1 |
| BiGAN (ours) | 56.2 | 54.4 | 49.4 | 43.9 | 33.3 |
| BiGAN, $112 \times 112$ $E$ (ours) | 55.3 | 53.2 | 49.3 | 44.4 | 34.8 |
We evaluate the transferability of BiGAN representations to the PASCAL VOC (Everingham et al., 2014) computer vision benchmark tasks, including classification, object detection, and semantic segmentation. The classification task involves simple binary prediction of presence or absence in a given image for each of 20 object categories. The object detection and semantic segmentation tasks go a step further by requiring the objects to be localized, with semantic segmentation requiring this at the finest scale: pixelwise prediction of object identity. For detection, the pretrained model is used as the initialization for Fast R-CNN (Girshick, 2015) (FRCN) training; and for semantic segmentation, the model is used as the initialization for Fully Convolutional Network (Long et al., 2015) (FCN) training, in each case replacing the AlexNet (Krizhevsky et al., 2012) model trained fully supervised for ImageNet classification. We report results on each of these tasks in Table 3, comparing BiGANs with contemporary approaches to unsupervised (Krähenbühl et al., 2016) and self-supervised (Doersch et al., 2015; Agrawal et al., 2015; Wang & Gupta, 2015; Pathak et al., 2016) feature learning in the visual domain, as well as the baselines discussed in Section 4.1.
| | | Cls. fc8 (% mAP) | Cls. fc6-8 (% mAP) | Cls. all (% mAP) | Det. all (% mAP, FRCN) | Seg. all (% mIU, FCN) |
|---|---|---|---|---|---|---|
| sup. | ImageNet (Krizhevsky et al., 2012) | 77.0 | 78.8 | 78.3 | 56.8 | 48.0 |
| self-sup. | Agrawal et al. (2015) | 31.2 | 31.0 | 54.2 | 43.9 | - |
| | Pathak et al. (2016) | 30.5 | 34.6 | 56.5 | 44.5 | 30.0 |
| | Wang & Gupta (2015) | 28.4 | 55.6 | 63.1 | 47.4 | - |
| | Doersch et al. (2015) | 44.7 | 55.1 | 65.3 | 51.1 | - |
| unsup. | $k$-means (Krähenbühl et al., 2016) | 32.0 | 39.2 | 56.6 | 45.6 | 32.6 |
| | Discriminator ($D$) | 30.7 | 40.5 | 56.4 | - | - |
| | Latent Regressor (LR) | 36.9 | 47.9 | 57.1 | - | - |
| | Joint LR | 37.1 | 47.9 | 56.5 | - | - |
| | Autoencoder ($\ell_2$) | 24.8 | 16.0 | 53.8 | 41.9 | - |
| | BiGAN (ours) | 37.5 | 48.7 | 58.9 | 46.2 | 34.9 |
| | BiGAN, $112 \times 112$ $E$ (ours) | 41.7 | 52.5 | 60.3 | 46.9 | 35.2 |
In the fc8 column, only the linear classifier (a multinomial logistic regression) is learned – in the case of BiGAN, on top of randomly initialized fully connected (FC) layers fc6 and fc7. In the fc6-8 column, all three FC layers are trained fully supervised with all convolution layers frozen. Finally, in the all column, the entire network is “fine-tuned”. BiGAN outperforms other unsupervised (unsup.) feature learning approaches, including the GAN-based baselines described in Section 4.1, and despite its generality, is competitive with contemporary self-supervised (self-sup.) feature learning approaches specific to the visual domain.

Despite making no assumptions about the underlying structure of the data, the BiGAN unsupervised feature learning framework offers a representation competitive with existing self-supervised and even weakly supervised feature learning approaches for visual feature learning, while still being a purely generative model with the ability to sample data $x$ and predict latent representations $z$. Furthermore, BiGANs outperform the discriminator ($D$) and latent regressor (LR) baselines discussed in Section 4.1, confirming our intuition that these approaches may not perform well in the regime of highly complex data distributions such as that of natural images. The version in which the encoder takes a higher resolution image than that output by the generator (BiGAN, $112 \times 112$ $E$) performs better still; this strategy is not possible under the LR and $D$ baselines, as each of those modules takes generator outputs as its input.
Although existing self-supervised approaches have shown impressive performance and thus far tended to outshine purely unsupervised approaches in the complex domain of high-resolution images, purely unsupervised approaches to feature learning or pre-training have several potential benefits.
BiGAN and other unsupervised learning approaches are agnostic to the domain of the data. The self-supervised approaches are specific to the visual domain, in some cases requiring weak supervision from video unavailable in images alone. For example, the methods are not applicable in the permutation-invariant MNIST setting explored in Section 4.2, as the data are treated as flat vectors rather than 2D images.
Furthermore, BiGAN and other unsupervised approaches needn’t suffer from domain shift between the pre-training task and the transfer task, unlike self-supervised methods in which some aspect of the data is normally removed or corrupted in order to create a non-trivial prediction task. In the context prediction task (Doersch et al., 2015), the network sees only small image patches – the global image structure is unobserved. In the context encoder or inpainting task (Pathak et al., 2016), each image is corrupted by removing large areas to be filled in by the prediction network, creating inputs with dramatically different appearance from the uncorrupted natural images seen in the transfer tasks.
Other approaches (Agrawal et al., 2015; Wang & Gupta, 2015) rely on auxiliary information unavailable in the static image domain, such as video, egomotion, or tracking. Unlike BiGAN, such approaches cannot learn feature representations from unlabeled static images.
We finally note that the results presented here constitute only a preliminary exploration of the space of model architectures possible under the BiGAN framework, and we expect results to improve significantly with advancements in generative image models and discriminative convolutional networks alike.
The authors thank Evan Shelhamer, Jonathan Long, and other Berkeley Vision labmates for helpful discussions throughout this work. This work was supported by DARPA, AFRL, DoD MURI award N000141110688, NSF awards IIS-1427425 and IIS-1212798, and the Berkeley Artificial Intelligence Research laboratory. The GPUs used for this work were donated by NVIDIA.
Proof of Proposition 1. For measures $P$ and $Q$ on space $\Omega$, with $P$ absolutely continuous with respect to $Q$, the RN derivative $f_{PQ} := \tfrac{\mathrm{d}P}{\mathrm{d}Q}$ exists, and for any measurable function $g$ we have
$$\mathbb{E}_{x \sim P}\!\left[g(x)\right] = \int_{\Omega} g \,\mathrm{d}P = \int_{\Omega} g\, \frac{\mathrm{d}P}{\mathrm{d}Q} \,\mathrm{d}Q = \mathbb{E}_{x \sim Q}\!\left[f_{PQ}(x)\, g(x)\right] \qquad (4)$$
Let the probability measure $P_{EG} := \tfrac{P_{EX} + P_{GZ}}{2}$ denote the average of measures $P_{EX}$ and $P_{GZ}$. Both $P_{EX}$ and $P_{GZ}$ are each absolutely continuous with respect to $P_{EG}$. Hence the RN derivatives $f_{EG} := \tfrac{\mathrm{d}P_{EX}}{\mathrm{d}(P_{EX} + P_{GZ})} = \tfrac{1}{2}\tfrac{\mathrm{d}P_{EX}}{\mathrm{d}P_{EG}}$ and $f_{GE} := \tfrac{\mathrm{d}P_{GZ}}{\mathrm{d}(P_{EX} + P_{GZ})} = \tfrac{1}{2}\tfrac{\mathrm{d}P_{GZ}}{\mathrm{d}P_{EG}}$ exist and sum to $1$:
$$f_{EG} + f_{GE} = \tfrac{1}{2}\left(\tfrac{\mathrm{d}P_{EX}}{\mathrm{d}P_{EG}} + \tfrac{\mathrm{d}P_{GZ}}{\mathrm{d}P_{EG}}\right) = \tfrac{1}{2}\,\tfrac{\mathrm{d}(P_{EX} + P_{GZ})}{\mathrm{d}P_{EG}} = \tfrac{1}{2} \cdot 2 = 1 \qquad (5)$$
We use (4) and (5) to rewrite the objective (3) as a single expectation under measure $P_{EG}$:

$$V(D, E, G) = \mathbb{E}_{(x,z) \sim P_{EX}}[\log D(x, z)] + \mathbb{E}_{(x,z) \sim P_{GZ}}[\log(1 - D(x, z))] = 2\, \mathbb{E}_{(x,z) \sim P_{EG}}\!\left[f_{EG}(x,z) \log D(x, z) + f_{GE}(x,z) \log(1 - D(x, z))\right].$$
Note that, by (5), $f_{GE} = 1 - f_{EG}$, and that $\arg\max_{y \in [0,1]} \left\{ a \log y + (1 - a) \log(1 - y) \right\} = a$ for any $a \in [0, 1]$. Thus, $D^*_{EG} = f_{EG}$.
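The pointwise maximization used in this last step is elementary: for fixed $a \in (0, 1)$, setting the derivative with respect to $y$ to zero gives

$$\frac{\mathrm{d}}{\mathrm{d}y}\Big(a \log y + (1 - a)\log(1 - y)\Big) = \frac{a}{y} - \frac{1 - a}{1 - y} = 0 \iff y = a,$$

and the boundary cases $a \in \{0, 1\}$ follow by monotonicity.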
While Theorem 1 and Propositions 1 and 2 hold for any encoder and generator, stochastic or deterministic, Theorems 2 and 3 assume the encoder and generator are deterministic functions; i.e., with conditionals $p_E(z \mid x) = \delta(z - E(x))$ and $p_G(x \mid z) = \delta(x - G(z))$ defined as $\delta$ functions.
For use in the proofs of those theorems, we simplify the definitions of measures $P_{EX}$ and $P_{GZ}$ given in Section 3 for the case of deterministic functions $E$ and $G$ below:

$$P_{EX}(R) = \int_{\Omega_X} p_X(x)\, \mathbf{1}_{[(x, E(x)) \in R]} \,\mathrm{d}x, \qquad P_{GZ}(R) = \int_{\Omega_Z} p_Z(z)\, \mathbf{1}_{[(G(z), z) \in R]} \,\mathrm{d}z.$$
Proof of Theorem 2. Let $R^0_X := \{x \in \hat\Omega_X : x \neq G(E(x))\}$ be the region of $\hat\Omega_X$ in which the inversion property $x = G(E(x))$ does not hold. We will show that, for optimal $E$ and $G$, $R^0_X$ has measure zero under $P_X$ (i.e., $P_X(R^0_X) = 0$) and therefore $x = G(E(x))$ holds $P_X$-almost everywhere.