Generative Models as a Data Source for Multiview Representation Learning

06/09/2021 ∙ by Ali Jahanian, et al. ∙ MIT

Generative models are now capable of producing highly realistic images that look nearly indistinguishable from the data on which they are trained. This raises the question: if we have good enough generative models, do we still need datasets? We investigate this question in the setting of learning general-purpose visual representations from a black-box generative model rather than directly from data. Given an off-the-shelf image generator without any access to its training data, we train representations from the samples output by this generator. We compare several representation learning methods that can be applied to this setting, using the latent space of the generator to generate multiple "views" of the same semantic content. We show that for contrastive methods, this multiview data can naturally be used to identify positive pairs (nearby in latent space) and negative pairs (far apart in latent space). We find that the resulting representations rival those learned directly from real data, but that good performance requires care in the sampling strategy applied and the training method. Generative models can be viewed as a compressed and organized copy of a dataset, and we envision a future where more and more "model zoos" proliferate while datasets become increasingly unwieldy, missing, or private. This paper suggests several techniques for dealing with visual representation learning in such a future. Code is released on our project page:




1 Introduction

The last few years have seen great progress in the diversity and quality of generative models. For almost every popular data domain there is now a generative model that produces realistic samples from that domain, be it images [44, 4, 47], music [14], or text [5]. This raises an intriguing possibility: what we used to do with real data, can we now do instead with synthetic data, sampled from a generative model?

Figure 1: Visual representation learning typically consists of training an image embedding function f, given a dataset of real images {x_i}, as shown in the top row. In our work (bottom row), we study how to learn representations given instead a black-box generative model G. Generative models allow us to sample continuous streams of synthetic data to learn from. By applying transformations T_z to the latent vectors z that are input to the model, we can create multiple data “views” that can serve as effective training data for representation learners.

If so, there would be immediate advantages. Models are highly compressed compared to the datasets they represent and therefore easier to share and store. Synthetic data also circumvents some of the concerns around privacy and usage rights that limit the distribution of real datasets [61], and models can be edited to censor sensitive attributes [36], remove biases that exist in real datasets [57, 46], or steer toward other task-specific needs [30, 21, 55]. Perhaps because of these advantages, it is becoming increasingly common for pre-trained generative models to be shared online without their original training data being made easily accessible. This approach has been taken by hobbyists, who may not have the resources or intellectual property rights to release the original data (e.g., community sites hosting many pretrained models without their datasets), and in the case of large-scale models such as GPT-3 [5], where the training data has been kept private but model samples are available through an API.

Figure 2: Different ways of creating multiple views of the same “content”. (a) SimCLR [8] creates views by transforming an input image with standard pixel-space (T_x) data augmentations (example images taken from [8]). (b) With a generative model, we can instead create views by sampling nearby points in latent space (T_z), exploiting the fact that nearby points in latent space tend to generate imagery of the same semantic object. Note that these examples are illustrative; the actual transformations that achieve the best results are shown in Fig. 5.

Our work therefore targets a problem setting that has received little prior attention: given access only to a trained generative model, and no access to the dataset that trained it, can we learn effective visual representations?

To this end, we provide an exploratory study of representation learning in the setting of synthetic data sampled from pre-trained generative models: we analyze which representation learning algorithms are applicable, how well they work, and how they can be modified to make use of the special structure provided by deep generative networks.

Figure 1 lays out the framework we study: we compare learning visual embedding functions from real data vs. from generated data controlled via latent transformations. We study generation and representation learning both with and without class labels, and test representation learners based on several objectives. We evaluate representations via transfer performance on held-out datasets and tasks.

For representation learners, we focus primarily on contrastive methods that learn to associate multiple “views” of the same scene. These views may be co-occurring sensory signals such as imagery and sounds (e.g., [11]), or may be different augmented or transformed versions of the same image (e.g., [2, 8]). Interestingly, generative models can also create multiple views of an image: by steering in their latent spaces they can achieve camera and color transformations [30] and more [24, 66]. Figure 2 diagrams the currently popular setting, where views are generated as data transformations, versus the setting we focus on, where views are generated via latent transformations.

We study the properties of these latent views in contrastive learning and ask under what conditions they can lead to enhanced learners. We then also compare these learners with non-contrastive encoders, also trained on samples from generative models. Further, we study the long-standing promise of generative models: that they are capable of generating more samples than the number of datapoints that trained them. Knowing that current models still have finite diversity, we ask: how many samples are necessary to get good performance on a task? To answer this question, we run our learning experiments with different numbers of images drawn from latent space. Our main findings are:

  1. Contrastive learning algorithms can be naturally extended to learning from generative samples, where different “views” of the data are created via transformations in the model’s latent space.

  2. These latent-space transformations can be combined with standard data transformations (“augmentations”) in pixel-space to achieve better performance than either method alone.

  3. To generate positive training pairs in latent space, simple Gaussian transformations work as well as more complicated latent space transformations, with the optimal standard deviation being not too large and not too small, obeying an inverse-U-shaped curve like that observed for contrastive learning from real data [59].


  4. Generative models can potentially produce an unbounded number of samples to train on; we find that performance improves by training on more samples, but sub-logarithmically.
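Finding 1 can be sketched in a few lines: a positive pair is two decodings of nearby latent vectors, each optionally followed by a pixel-space augmentation. Here `G`, `sigma`, and `t_x` are placeholders, not the paper's actual generator or tuned settings.

```python
import random

def latent_positive_pair(G, dim, sigma=0.5, t_x=lambda img: img):
    """Draw an anchor latent z and a nearby latent z_pos = z + Gaussian
    offset; both are decoded by G and then pixel-augmented by t_x."""
    z = [random.gauss(0.0, 1.0) for _ in range(dim)]
    z_pos = [zi + random.gauss(0.0, sigma) for zi in z]
    return t_x(G(z)), t_x(G(z_pos))
```

Any contrastive learner that consumes augmented image pairs can then be fed these pairs in place of dataset samples.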

2 Related work

Learning from Synthetic Data. Using synthetic data has been a prominent method for learning in different domains of science and engineering, with different goals including privacy preservation and alternative sample generation [53, 61, 42, 32, 10]. In computer vision, synthetic data has been extensively used as a source for training models, for example in semantic segmentation [9, 52], human pose estimation [64, 29, 54], optical flow [38, 19], and self-supervised multi-task learning [50]. In most prior work, the synthetic data comes from a traditional simulation pipeline, e.g., via rendering a 3D world with a graphics engine. We instead study the setting where the synthetic data is sampled from a deep generative model.

Recent works have used generative networks, such as GANs [22], to improve images generated by graphic engines, closing the domain gap with real images [56, 28]. Ravuri et al. [48] show that, even though there is a gap between GAN-generated images and real ones, mixing the two sources can lead to improvements over models trained purely on real data.

Highly related to our paper is the work of Besnier et al. [3], which uses GAN-generated images to train a classifier. While they focus on using a class-conditional generator to train a classifier of those same classes, our work targets the more general setting of visual representation learning, and we also show how to apply our methods in the “unsupervised” setting where an unconditional generative model produces unlabeled samples to learn from.

Recent work has also explored using StyleGAN [31] to generate training data for semantic segmentation [69, 60]. These approaches differ from ours in that they use intermediate layers of the generator network as their representation of images, whereas our method does not require internal access to the generator; instead we simply treat the generator as a black-box source of training data for a downstream representation learner.

GAN-based data augmentation has also been used, in conjunction with real data, to improve robustness at training time [37] and test-time [6]. In contrast to these methods, we explore representation learning without access to the real data at all, relying only on samples from a black-box generative model to train our systems.

Contrastive Representation Learning. Contrastive learning methods [63, 65, 27, 58, 26, 8] have greatly advanced the state of the art of self-supervised representation learning. The idea of contrastive learning is to contrast positive pairs with negative pairs [23]. Such pairs can be easily constructed with various data formats, and examples include different augmentations or transformations of images [8], cross-modality alignment [58, 40, 43], and graph structured data [25]. One key reason for the success of contrastive learning has been shown to be well-chosen data transformations [8, 59, 67]. Specifically in [59], it was shown that different downstream tasks benefit from different data transformations. While all these approaches conduct data transformations, or “augmentations”, in the raw pixel space, in this paper we explore the possibility of transforming training points in the latent space of a GAN [22, 4, 16].

Generative Representation Learning. Generative models learn representations by modeling the data distribution p(x). In VAEs [34], each data point is encoded into a latent distribution, from which codes are sampled to reconstruct the input by maximizing the data likelihood. GANs [22] model data generation through a minimax adversarial game, after which the discriminator can serve as a good representation extractor [44]. In ALI [17], BiGAN [15], and BigBiGAN [16], both image encoding and latent decoding are modeled simultaneously, and the encoder turns out to be a representation learner. Similar to the trend in natural language processing [13, 45], autoregressive models [62, 7] have been adopted to learn representations from raw pixels. These prior works show ways to jointly train a representation alongside a generative model, using a training set of real data to guide the process. Our exploration qualitatively differs in that we assume we are given a black-box generative model, and no real training data, and the goal is to learn an effective representation by sampling from the model.

Figure 3: Three different methods for learning representations. The first row illustrates a standard contrastive learning framework (e.g., SimCLR [8]) in which positive pairs are sampled as transformations of training images x. The second and third rows show the new setting we consider: we are given a generator, rather than a dataset, and can use the latent space (input) of the generator to control the production of effective training data. T_x refers to transformations applied in pixel space and T_z denotes transformations in latent space. The second row illustrates a contrastive learning approach in this setting and the third row shows an approach that simply inverts the generative model. Note that for contrastive learning, negatives are omitted for clarity.

3 Method

Standard representation learning algorithms take a dataset {x_i} as input, and produce an embedding function f as output, where f(x) is a vector representation of x. Our method, in contrast, takes a generative model G as input, in order to produce the embedding function f as output. We restrict our attention to implicit generative models (IGMs) [39], which map from latent variables z to sampled images x, i.e. x = G(z). Many currently popular generative models have this form, including GANs [22], VAEs [34], and normalizing flows [51]. We also consider class-conditional variants, which we denote as G(z, y), where y is a discrete class label. In our experiments, we only investigate GANs, but note that the method is general to any IGM with latent variables.

To learn f, from either real data or model samples, we can pick any of a large variety of representation learners: autoencoders [1], rotation prediction [20], colorization [68], etc. We focus on contrastive methods due to their strong performance and natural extension to using latent transformations to define positive pairs. These methods are illustrated in the first and second rows of Fig. 3. We also examine the effectiveness of non-contrastive methods, namely an inverter (similar to the encoder in autoencoders), as illustrated in the third row of Fig. 3. When labels are available, instead of the inverter, we learn a classifier.

In the following sections, we first define the contrastive framework with different sampling strategies for creating views via pixel transformations and latent transformations. We then define the non-contrastive frameworks.

3.1 Contrastive learning framework

Modern contrastive methods learn embeddings that are invariant to certain nuisance transformations of the data, or different “viewing” conditions. Two “views” of the same scene are pulled together in embedding space while views of different scenes are pushed apart. A common objective is the InfoNCE objective [63]. We use the following variant of this objective:

  L = E[ −log( exp(f(x)·f(x⁺)) / (exp(f(x)·f(x⁺)) + Σᵢ exp(f(x)·f(xᵢ⁻))) ) ]

Here x is an anchor image, (x, x⁺) is a positive pair of images, i.e. a pair we want to bring together in embedding space, and (x, xᵢ⁻) is a negative pair of images, which we want to push apart.
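As a concrete sketch, an InfoNCE-style loss for a single anchor can be computed as below; the cosine-similarity scoring and the temperature `tau` are common choices in this family of objectives, not necessarily the exact variant used in the paper.

```python
import math

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE-style loss for one anchor: pull the positive's score up
    relative to the negatives' scores (cosine similarity / temperature)."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    def cos(u, v):
        return dot(u, v) / math.sqrt(dot(u, u) * dot(v, v))
    pos = math.exp(cos(anchor, positive) / tau)
    neg = sum(math.exp(cos(anchor, n) / tau) for n in negatives)
    return -math.log(pos / (pos + neg))
```

The loss is small when the anchor is aligned with its positive and far from its negatives, and large otherwise.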

Different contrastive learning algorithms differ in how the positive and negative pairs are defined. Commonly, x and x⁺ are two different transformations (i.e. data transformations in pixel space) of the same underlying image, and x⁻ is a transformation of a different, randomly selected image [8, 26, 20].

Our analysis focuses both on how the datapoints are sampled in the first place, and on how they are transformed to create negative and positive pairs:

  1. What happens when x is fake data sampled from an IGM rather than real data sampled from a dataset?

  2. How can we use transformations in pixel space and in latent space to define positive and negative pairs?

Different answers to these two questions yield the specific methods we compare, as described in the next sections.

3.2 Sampling positive and negative pairs

We first describe several schemes for sampling from both real and generated datapoints, with transformations applied in either latent space or pixel space.

3.2.1 Contrastive pixel transformation (i.e. SimCLR)

If we are given a dataset D, and a set of pixel-space transformations, T_x, that we wish to be invariant to, a standard approach is to use the following sampling strategy:

  x̃, x̃′ ∼ D;  x = T_x(x̃), x⁺ = T_x′(x̃), x⁻ = T_x″(x̃′)

where x̃ ∼ D refers to drawing a random image uniformly from the dataset D, and T_x, T_x′, T_x″ are independent draws from the set of pixel transformations. This setting is depicted in the top row of Fig. 3. In our experiments, we use SimCLR [8] as an instantiation of this approach.
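The sampling strategy above can be sketched as follows, where `dataset` and `t_x` are stand-ins for the image collection and a stochastic pixel transformation:

```python
import random

def simclr_triplet(dataset, t_x):
    """Sec. 3.2.1-style sampling: the anchor and positive are two independent
    pixel transforms of one image; the negative transforms a different,
    randomly drawn image."""
    x_tilde = random.choice(dataset)
    x_tilde_neg = random.choice(dataset)
    return t_x(x_tilde), t_x(x_tilde), t_x(x_tilde_neg)
```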

3.2.2 Supervised contrastive pixel transformation (i.e. SupCon)

Given a labeled dataset {x_i, y_i} with K classes, we can leverage the labels to define positives as images that share the same label and negatives as randomly sampled images from other classes:

  y ∼ Cat(K);  x, x⁺ ∼ D_y;  x⁻ ∼ D_{y′}, y′ ≠ y

where Cat is the categorical distribution over the K classes and D_y denotes the images with label y. Following SupCon [33], in each batch one of the positives is specially set to be a different pixel transformation of the same underlying image as the anchor.
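A minimal sketch of this label-driven sampling, assuming (as a hypothetical layout) that the dataset is arranged as a mapping from label to images:

```python
import random

def supcon_triplet(dataset_by_class):
    """Sec. 3.2.2-style sampling: positives share the anchor's label y,
    the negative is drawn from a different class y' != y."""
    labels = list(dataset_by_class)
    y = random.choice(labels)
    y_neg = random.choice([l for l in labels if l != y])
    anchor = random.choice(dataset_by_class[y])
    positive = random.choice(dataset_by_class[y])
    negative = random.choice(dataset_by_class[y_neg])
    return (anchor, positive, negative), (y, y_neg)
```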

3.2.3 Contrastive latent views + pixel transformation

If we are given an unconditional IGM G(z), we may also define a set of latent transformations, T_z, that we wish our representation to be invariant to. We can use this method with or without pixel-space transformations applied as well. With both T_z and T_x applied, we have the following fake data generating process:

  z, z′ ∼ p(z);  x = T_x(G(z)), x⁺ = T_x′(G(T_z(z))), x⁻ = T_x″(G(z′))

In practice we set p(z) to be the truncated normal distribution, with truncation as in [4]. This scenario is depicted in the middle row of Fig. 3.

Given a class-conditional IGM, we can do the same except conditioned on class labels, i.e. we simply replace G(z) with G(z, y), using the same y for the anchor and positive sample, and an independent draw y′ for the negative sample. Additionally, taking inspiration from SupCon, in each batch one positive is set with the same z as the anchor (but a different pixel-space transformation) while the other positives in the batch are generated from independently drawn z.

3.3 Creating views with T_x and T_z

We refer to transformations as “views” of the data (either real or fake). For pixel-space transformations, T_x, many options have been previously proposed, and in our experiments we choose the transforms from SimCLR [8] as they are currently standard and effective. Our framework could of course be updated with better transforms as they are developed in future work.

Our work is the first, to our knowledge, to explore latent-space transformations, T_z, for contrastive representation learning. Therefore, we focus our analysis on studying the properties of T_z and explore the following options:

3.3.1 Independent latent views

The simplest method is:

  T_z(z) = z′,  z′ ∼ p(z)

That is, the transformation simply produces a new random draw from p(z). In the unconditional setting, this can be considered a naive baseline where the two views share no information about the image. However, in the class-conditional case, this strategy is actually quite sensible, and in a sense optimal if the goal is to extract class semantics [59]: the two positive views are two different images that are independent except that they share the same class y.

3.3.2 Gaussian latent views

Many IGMs have the remarkable property that nearby points in latent space map to semantically related generated images [30]. This suggests that we can define positive views as nearby latent vectors (refer to Sec. 4.1.2 for empirical support). A simple way to do so is to define the latent transformation to be just a small offset applied to the sampled vector. We use truncated Gaussian offsets as a simple instantiation of this idea:

  T_z(z) = z + ε,  ε ∼ N_trunc(0, σ²I)

where N_trunc is the truncated Normal distribution, with truncation as in [4].


Prior work [59] argues that there exists a sweet spot in view creation, where the two views share just the right amount of information, not too much and not too little. In the present setting, this theory suggests that there should be a corresponding sweet spot in the variance of the Gaussian, σ². Indeed, we observe this in Fig. 6.
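A minimal sketch of the Gaussian latent view, using per-coordinate rejection sampling as one common way to realize truncation; the `sigma` and `trunc` defaults are illustrative, not the paper's tuned values:

```python
import random

def gaussian_latent_view(z, sigma=0.5, trunc=2.0):
    """T_z(z) = z + eps, with eps drawn per coordinate from a Gaussian
    truncated to [-trunc, trunc] by resampling."""
    def truncated_gauss():
        while True:
            e = random.gauss(0.0, sigma)
            if abs(e) <= trunc:
                return e
    return [zi + truncated_gauss() for zi in z]
```

The single knob `sigma` controls how much information the two views share, which is exactly the quantity swept in Fig. 6.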

3.3.3 Steered latent views

Can we find latent transformations that are more directly related to semantics? We leverage the recent method of Jahanian et al. [30], which finds latent vectors that achieve target effects in image space, such as shifting an image up and down or changing its brightness:

  T_z(z) = z + αw


where w is a latent vector trained to match a target pixel-space transformation according to the following objective:

  w* = argmin_w E_{z,α} ‖ G(z + αw) − T_x(G(z), α) ‖²

where T_x(·, α) is a pixel-space transformation, such as “shift left”, parameterized by the strength of the transformation α, e.g. the number of pixels to shift.
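The steered view can be sketched as below, where `walks` stands in for the learned direction vectors w_i and the strengths α_i are drawn at random; the sampling range is an assumed placeholder:

```python
import random

def steered_latent_view(z, walks, alpha_range=(-1.0, 1.0)):
    """Compose pre-trained steering directions: add each walk vector w_i
    scaled by a randomly sampled strength alpha_i."""
    view = list(z)
    for w in walks:
        alpha = random.uniform(*alpha_range)
        view = [vi + alpha * wi for vi, wi in zip(view, w)]
    return view
```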

Figures 4 and 5 show different views generated according to all the preceding methods.

3.4 Non-contrastive alternatives

We can also learn image representations without a contrastive loss. Here we describe two possible approaches:


Inverter. The latent vector z that generates an image can itself be considered a vector representation of that image. This suggests that simply inverting the generator may result in a useful embedding function. For a given IGM G, we can learn an encoder E that inverts an image, finding the latent vector ẑ = E(x) that would lead G to generate the original image. We can do that by minimizing the distance between the original image x and the regenerated image G(E(x)).

If the images are generated from the same IGM, x = G(z), as shown in the bottom row of Fig. 3, we can also minimize the distance between the original latent z and the inverted ẑ = E(G(z)). We can also make sure that the encoder function is invariant or equivariant to different latent transforms of a given image, such that, in the invariant case, E(G(T_z(z))) ≈ E(G(z)).
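The two inverter objectives described above can be sketched as follows; `G` and `E` are stand-ins for the generator and encoder, and in practice these squared distances would be losses backpropagated into E:

```python
def inverter_losses(G, E, z):
    """Latent-space loss ||E(G(z)) - z||^2 and image-space loss
    ||G(E(x)) - x||^2 for one latent sample z."""
    def sq_dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    x = G(z)
    z_hat = E(x)
    return sq_dist(z_hat, z), sq_dist(G(z_hat), x)
```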


Classifier. Given a labeled dataset and a set of latent and image transformations, we may want to learn a representation that is invariant to such transformations, while retaining enough information to predict the label of an image. We can do that by training an encoder that classifies different views of an image to its given label, via a cross-entropy loss.

Figure 4: Examples of different transformation methods for unconditional data.
Figure 5: Examples of different transformation methods for class-conditional data.

4 Experiments

To study the methods in Sec. 3, we analyze the behaviour and effectiveness of the representations learned from both unconditional and class-conditional IGMs. We use the generator from BigBiGAN [16] as our unconditional IGM. Note, we do not use the encoder or training scheme from BigBiGAN; we simply use the IGM as an off-the-shelf unconditional ImageNet generator. For the class-conditional IGM, we use BigGAN [4] with the “deep-256” configuration and truncation. Both IGMs are trained on ImageNet1000 [12]. We use the same image size for all the settings (i.e. scaling down BigGAN images).

For representation learning, we investigate two settings in our experiments: 1) training and evaluating representations on data at the scale of ImageNet1000 [12], and 2) a lighter-weight protocol where we train and evaluate representations only on data at the scale of ImageNet100 [58] (a 100-class subset of ImageNet1000). In the ImageNet1000 setting, the real data encoders are trained on the ImageNet1000 dataset, the class-conditional IGM encoders are trained on anchor images conditioned on each of the ImageNet1000 classes (roughly matching the size of ImageNet1000), and the unconditional IGM encoders are trained on anchor images sampled unconditionally. In the ImageNet100 setting, the real data encoders are trained on ImageNet100, the class-conditional IGM encoders are trained on anchor images conditioned on each of the classes in ImageNet100, and the unconditional IGM encoders are trained on anchor images sampled unconditionally (note that the unconditional model is implicitly still sampling from all 1000 classes, since the generative model itself was fit to ImageNet1000). In both settings we train with SGD.

We evaluate the learned representations by training a linear classifier on top of the learned embeddings, for either ImageNet1000 or ImageNet100, matching the setting in which the encoder was trained, and report Top-1 class accuracy. For both settings, we train the linear classifier with SGD using a cosine decay schedule. We also evaluate on the Pascal VOC2007 dataset [18] as a held-out data setting, selecting hyper-parameters via cross-validation, following [35], and report mean average precision (mAP). See Appendix C for evaluation on the object detection task.

4.1 Contrastive learning methods

IGMs can create multiple views of images via their latent transformations [30], making them useful for contrastive multi-view learning. In this section, we study the effectiveness of contrastive methods for learning representations from IGMs.

To generate datasets containing multiple views produced by latent transformations, we first sample anchor view images in the ImageNet1000 setting (described above). All the anchors are generated with truncation. We then generate one neighbor view for each anchor view by following the different strategies for view creation, Gaussian and steer, described in Sec. 3.3. For the Gaussian view, we tune the standard deviation of T_z separately for our unconditional and class-conditional IGMs, and we use this setting in all of our experiments. For steering, we learn the latent walk w as the summation of individual walks with randomly sampled steps for different camera transformations, i.e. horizontal and vertical shifts, zoom, 2D and 3D rotations, as well as color transformations:

  w = Σᵢ αᵢ wᵢ

for randomly sampled strengths αᵢ. The training details follow [30], and we report them in Appendix A.

The resulting images of our latent view creation strategies are shown in Fig. 4(b,c) and Fig. 5(b,c) for the unconditional and conditional IGMs, respectively. In the class-conditional case, we also test independent views, i.e. T_z(z) = z′, and qualitatively show examples in Fig. 5(d). When needed, we further combine the latent views with pixel transformations, as shown in Fig. 4(e,f) and Fig. 5(f).

In the following sections, we learn visual representations of the generated images by training a ResNet-50 via SimCLR [8] for the datasets generated from unconditional IGMs and SupCon [33] for the class-conditional IGMs.

Training Method Transfer Task
Data distribution Objective ImageNet1000 VOC07 Classification
Top-1 Accuracy AP
Real SimCLR Augs. Contrastive 43.90 0.67
Generated SimCLR Augs. Contrastive 35.69 0.57
Generated Contrastive 28.88 0.52
Generated SimCLR Augs. Contrastive 42.58 0.64
Generated Contrastive 26.52 0.49
Generated SimCLR Augs. Contrastive 41.78 0.63
Generated Inverter 26.43 0.49
Table 1: Results on unconditional IGMs. Real data is distributed as x ∼ D; generated data is distributed as G(z) with z ∼ p(z). An empty transformation entry indicates that no transformation is applied. For the Contrastive objective, positive and negative views are defined as described in Sec. 3.2.1 (without using class labels).
Training Method Transfer Task
Data distribution Objective ImageNet1000 VOC07 Classification
Top-1 Accuracy AP
Real SimCLR Augs. Sup. Contrastive 50.84 0.76
Generated SimCLR Augs. Sup. Contrastive 48.19 0.75
Generated Sup. Contrastive 35.74 0.66
Generated SimCLR Augs. Sup. Contrastive 46.43 0.74
Generated Sup. Contrastive 36.74 0.68
Generated SimCLR Augs. Sup. Contrastive 49.25 0.75
Generated Sup. Contrastive 36.21 0.65
Generated SimCLR Augs. Sup. Contrastive 48.97 0.76
Generated Inverter 31.56 0.60
Table 2: Results on class-conditional IGMs. Real data is distributed as x ∼ D, and generated data as G(z, y) with z ∼ p(z). An empty transformation entry indicates that no transformation is applied. T_z(z) = z′ indicates that the transformation draws a new sample z′ ∼ p(z), independent of the original z. For the Sup. Contrastive objective, positives are defined following Sec. 3.2.2, where two views are treated as positive if and only if they share the same label y.

4.1.1 Effect of the latent and image transformations

First, we experiment with the effect of using pixel transformations T_x on images sampled by the IGMs, without latent transformations. Next, we enable both latent and pixel transformations together, where the latent transformation is either Gaussian or steer. There is a third latent transformation, the independent view, i.e. T_z(z) = z′, applicable only to the class-conditional IGM because of its conditioning on the class semantics. We report in Tables 1 and 2 the transfer results on unconditional and class-conditional IGMs, respectively, for the described transformations. From both tables, we observe similar trends across both unconditional and class-conditional IGMs. The view generation works well along with the contrastive methods: while pixel transformations lead to strong representations, these are significantly improved by adding latent transformations T_z in the unconditional case. In the class-conditional case, on the other hand, there is only marginal improvement from using T_z, indicating that the class-label supervision may already be sufficient to lead to strong semantic representations.

When comparing different types of latent transformations, we find that Gaussian transformations work best in the unconditional case, and close to steer on the VOC results in the class-conditional case. This is a somewhat surprising result: despite views carefully designed through steering methods, transforming randomly in all latent directions provides comparable or better performance.

Figure 6: Performance of the learned representations when trained on Gaussian views with different standard deviations for T_z. For each standard deviation, we evaluate the performance of a linear classifier on ImageNet100, on top of the learned features.

4.1.2 Limits of the latent transformations

In contrastive learning, there is an optimal point in how much information two views share [59]. Similarly, in our Gaussian latent view creation, we expect there to be a sweet spot for how far we can go when sampling a neighbor view relative to an anchor view. To study that sweet spot for the unconditional IGM, we run an experiment where we vary the distance of the neighbor point from the anchor point in the latent space, using the Gaussian transformation. For this experiment we use the ImageNet100 setting. Fig. 6 illustrates the linear test performance versus the standard deviation (σ) of the Gaussian T_z. We observe there is a sweet spot at an intermediate σ, after which the performance deteriorates.

4.2 Non-Contrastive methods

We further study representations learned from non-contrastive methods.


We first experiment with an inverter, as illustrated in Fig. 3. We learn an image encoder E by minimizing the mean squared error between the original latent code z and the prediction ẑ = E(G(z)). For the class-conditional IGMs, we include an auxiliary cross-entropy loss to predict the category of the encoded image. For this experiment, we use the ImageNet1000 setting.

Similar to the contrastive setting, we train a ResNet-50 encoder and replace the last layer with a fully connected layer to predict the latent vector of an image. We train the encoder with a dataset of images and their latent vectors, i.e. pairs (G(z), z). Note that we don’t apply image transformations here, given that directions in z-space encode basic transformations like shifting and color change [30], so z is not invariant to these transformations.

The results of using the unconditional IGM representations in Table 1 and the class-conditional results in Table 2 suggest this non-contrastive approach performs poorly in comparison with the contrastive methods.


When learning representations from class-conditional IGMs, we can leverage the available labels and learn representations through a classification objective (softmax cross-entropy). We study the performance of this learning objective in the ImageNet100 setting. We train a classifier with a ResNet-50 backbone to classify images into one of the classes of ImageNet100, applying T_x as data augmentation but not T_z. After an embedding has been learned in this way, we evaluate the Top-1 accuracy on ImageNet100 of a linear classifier on top of the learned representations. This approach achieves 65.2%, performing similarly to the Supervised Contrastive objective, which achieves 66.8% accuracy in the same setting (only T_x, evaluation on ImageNet100). Overall, the classifier is a subset of the inverter (which reconstructs both z and the class label y), but it allows us to use pixel-space augmentations, which also helps reach higher performance. Further results on the classifier experiments, and comparisons to other methods in the ImageNet100 setting, are given in Appendix D.

Comparison to BigBiGAN

We finally compare the pretrained BigBiGAN encoder [16] (a non-contrastive approach) with our proposed methods. Note an important difference: BigBiGAN's representations are learned jointly with the generator, so this approach requires access to the original real dataset. To match our experimental setup, we take the BigBiGAN ResNet-50 encoder after average pooling (without the head) and use it in our ImageNet100 linear classification test. Further, note that this encoder is trained on ImageNet1000 and accepts input images at a different resolution than our encoders, which are trained on the ImageNet100 training sets. The BigBiGAN encoder's Top-1 accuracy is better than the inverter's but still worse than that of all the contrastive methods trained and tested on ImageNet100 (see Table 3 in Appendix C).

4.3 Effect of the number of samples

In the synthetic data setting we can potentially create infinite data, although generative models still have finite diversity. An interesting question, then, is: how many samples do we really need to get good coverage and good visual representations?

To answer this question, we run experiments in the ImageNet100 setting and evaluate the linear classification performance for Gaussian views combined with pixel transformations, as we vary the number of samples. We show the results in Fig. 7. Note that we train all the encoders for the same fixed number of iterations (matching the experiments shown in Tables 3 and 4 in Appendix C). This means that as the number of images (x-axis) increases, the number of times the model revisits each image decreases.
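The fixed-compute schedule can be made concrete with a small sketch (illustrative numbers):

```python
def revisits_per_image(num_images, total_iterations, batch_size):
    """Average number of times each image is seen when total compute is fixed."""
    return total_iterations * batch_size / num_images

# With the iteration budget held constant, a larger synthetic dataset
# means each image is revisited less often.
for n in (10_000, 100_000, 1_000_000):
    print(n, revisits_per_image(n, total_iterations=100_000, batch_size=256))
```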

Figure 7: Effect of the number of samples used for training the representation learner. We evaluate the Top-1 accuracy of a linear classifier on ImageNet100, using the learned features. Here "unconditional" and "conditional" refer to the IGMs, and "Gaussian" refers to using the Gaussian latent view creation. We also apply pixel transformations in both cases. When we turn off the latent views and keep only the pixel transformations, we refer to these runs as "pixel only".

As Fig. 7 illustrates, performance increases with more samples for both the unconditional and class-conditional IGMs, but sub-logarithmically. We therefore suggest using a finite but large number of samples. These findings are consistent with recent work that found a small generalization gap between online learning (infinite data) and a sufficiently large offline regime [41].

5 Conclusion

In this paper we investigated how to learn visual representations from implicit generative models (IGMs) given as black boxes, without any access to their training data. IGMs make it possible to generate multiple views of similar image content by sampling nearby points in the latent space of the IGM. These views can be used for contrastive learning or as input to other representation learning algorithms that associate multiple views of the same scene. For unconditional IGMs, learning from latent views improves performance beyond learning from pixel-space transformations alone. For class-conditional IGMs, on the other hand, we find that class-label supervision is sufficient to yield strong representations, and latent view generation provides only marginal improvements beyond this. Representations learned from large quantities of generative data rival the transfer performance of representations learned from real datasets, though a performance gap remains. As generative models improve and new latent steering methods are developed, we expect that training vision systems on generative data may become an increasingly important tool in our toolkit.

6 Acknowledgements

Author A.J. thanks Kamal Youcef-Toumi, Boris Katz, and Antonio Torralba for their support. We thank Antonio Torralba and Tongzhou Wang for helpful discussions.

This research was supported in part by IBM through the MIT-IBM Watson AI Lab. The research was also partly sponsored by the United States Air Force Research Laboratory and the United States Air Force Artificial Intelligence Accelerator and was accomplished under Cooperative Agreement Number FA8750-19-2-1000. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the United States Air Force or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.


  • [1] D. H. Ballard (1987) Modular learning in neural networks. In AAAI, Cited by: §3.
  • [2] S. Becker and G. E. Hinton (1992) Self-organizing neural network that discovers surfaces in random-dot stereograms. Nature. Cited by: §1.
  • [3] V. Besnier, H. Jain, A. Bursuc, M. Cord, and P. Pérez (2020) This dataset does not exist: training models from generated images. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1–5. Cited by: §2.
  • [4] A. Brock, J. Donahue, and K. Simonyan (2019) Large scale GAN training for high fidelity natural image synthesis. In ICLR, Cited by: §1, §2, §3.2.3, §3.3.2, §4.
  • [5] T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al. (2020) Language models are few-shot learners. arXiv preprint arXiv:2005.14165. Cited by: §1, §1.
  • [6] L. Chai, J. Zhu, E. Shechtman, P. Isola, and R. Zhang (2021) Ensembling with deep generative views. arXiv preprint arXiv:2104.14551. Cited by: §2.
  • [7] M. Chen, A. Radford, R. Child, J. Wu, H. Jun, P. Dhariwal, D. Luan, and I. Sutskever (2020) Generative pretraining from pixels. In ICML, Cited by: §2.
  • [8] T. Chen, S. Kornblith, M. Norouzi, and G. Hinton (2020) A simple framework for contrastive learning of visual representations. External Links: 2002.05709 Cited by: Appendix E, Figure 2, §1, Figure 3, §2, §3.1, §3.2.1, §3.3, §4.1.
  • [9] Y. Chen, W. Li, X. Chen, and L. V. Gool (2019) Learning semantic segmentation from synthetic data: a geometrically guided input-output adaptation approach. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1841–1850. Cited by: §2.
  • [10] Y. Dan, Y. Zhao, X. Li, S. Li, M. Hu, and J. Hu (2020) Generative adversarial networks (gan) based efficient sampling of chemical composition space for inverse design of inorganic materials. npj Computational Materials 6 (1), pp. 1–7. Cited by: §2.
  • [11] V. R. de Sa (1994) Learning classification with unlabeled data. In Advances in neural information processing systems, pp. 112–119. Cited by: §1.
  • [12] J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei (2009) Imagenet: a large-scale hierarchical image database. In CVPR, Cited by: §4, §4.
  • [13] J. Devlin, M. Chang, K. Lee, and K. Toutanova (2018) Bert: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Cited by: §2.
  • [14] P. Dhariwal, H. Jun, C. Payne, J. W. Kim, A. Radford, and I. Sutskever (2020) Jukebox: a generative model for music. arXiv preprint arXiv:2005.00341. Cited by: §1.
  • [15] J. Donahue, P. Krähenbühl, and T. Darrell (2017) Adversarial feature learning. In ICLR, Cited by: §2.
  • [16] J. Donahue and K. Simonyan (2019) Large scale adversarial representation learning. In NeurIPS, Cited by: Appendix E, §2, §2, §4.2.
  • [17] V. Dumoulin, I. Belghazi, B. Poole, O. Mastropietro, A. Lamb, M. Arjovsky, and A. Courville (2016) Adversarially learned inference. arXiv preprint arXiv:1606.00704. Cited by: §2.
  • [18] M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman (2010) The pascal visual object classes (voc) challenge. International journal of computer vision 88 (2), pp. 303–338. Cited by: §4.
  • [19] P. Fischer, A. Dosovitskiy, E. Ilg, P. Häusser, C. Hazırbaş, V. Golkov, P. Van der Smagt, D. Cremers, and T. Brox (2015) Flownet: learning optical flow with convolutional networks. arXiv preprint arXiv:1504.06852. Cited by: §2.
  • [20] S. Gidaris, P. Singh, and N. Komodakis (2018) Unsupervised representation learning by predicting image rotations. arXiv preprint arXiv:1803.07728. Cited by: §3.1, §3.
  • [21] L. Goetschalckx, A. Andonian, A. Oliva, and P. Isola (2019) Ganalyze: toward visual definitions of cognitive image properties. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 5744–5753. Cited by: §1.
  • [22] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial networks. External Links: 1406.2661 Cited by: §2, §2, §2, §3.
  • [23] R. Hadsell, S. Chopra, and Y. LeCun (2006) Dimensionality reduction by learning an invariant mapping. In CVPR, Cited by: §2.
  • [24] E. Härkönen, A. Hertzmann, J. Lehtinen, and S. Paris (2020) GANSpace: discovering interpretable GAN controls. NeurIPS. Cited by: §1.
  • [25] K. Hassani and A. H. Khasahmadi (2020) Contrastive multi-view representation learning on graphs. arXiv preprint arXiv:2006.05582. Cited by: §2.
  • [26] K. He, H. Fan, Y. Wu, S. Xie, and R. Girshick (2020) Momentum contrast for unsupervised visual representation learning. In CVPR, Cited by: Appendix C, §2, §3.1.
  • [27] O. J. Hénaff, A. Razavi, C. Doersch, S. Eslami, and A. v. d. Oord (2019) Data-efficient image recognition with contrastive predictive coding. arXiv:1905.09272. Cited by: §2.
  • [28] J. Hoffman, E. Tzeng, T. Park, J. Zhu, P. Isola, K. Saenko, A. A. Efros, and T. Darrell (2018) CyCADA: cycle-consistent adversarial domain adaptation. In International conference on machine learning, Cited by: §2.
  • [29] C. Ionescu, D. Papava, V. Olaru, and C. Sminchisescu (2013) Human3. 6m: large scale datasets and predictive methods for 3d human sensing in natural environments. IEEE transactions on pattern analysis and machine intelligence 36 (7), pp. 1325–1339. Cited by: §2.
  • [30] A. Jahanian, L. Chai, and P. Isola (2020) On the ”steerability” of generative adversarial networks. External Links: 1907.07171 Cited by: Appendix A, §1, §1, §3.3.2, §3.3.3, §4.1, §4.1, §4.2.
  • [31] T. Karras, S. Laine, and T. Aila (2019) A style-based generator architecture for generative adversarial networks. Cited by: §2.
  • [32] S. Khan, E. Gunpinar, M. Moriguchi, and H. Suzuki (2019) Evolving a psycho-physical distance metric for generative design exploration of diverse shapes. Journal of Mechanical Design 141 (11). Cited by: §2.
  • [33] P. Khosla, P. Teterwak, C. Wang, A. Sarna, Y. Tian, P. Isola, A. Maschinot, C. Liu, and D. Krishnan (2020) Supervised contrastive learning. External Links: 2004.11362 Cited by: §3.2.2, §4.1.
  • [34] D. P. Kingma and M. Welling (2013) Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114. Cited by: §2, §3.
  • [35] S. Kornblith, J. Shlens, and Q. V. Le (2019) Do better imagenet models transfer better?. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2661–2671. Cited by: §4.
  • [36] J. Liao, C. Huang, P. Kairouz, and L. Sankar (2019) Learning generative adversarial representations (gap) under fairness and censoring constraints. arXiv preprint arXiv:1910.00411. Cited by: §1.
  • [37] C. Mao, A. Gupta, A. Cha, H. Wang, J. Yang, and C. Vondrick (2020) Generative interventions for causal learning. arXiv e-prints, pp. arXiv–2012. Cited by: §2.
  • [38] N. Mayer, E. Ilg, P. Hausser, P. Fischer, D. Cremers, A. Dosovitskiy, and T. Brox (2016) A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4040–4048. Cited by: §2.
  • [39] S. Mohamed and B. Lakshminarayanan (2016) Learning in implicit generative models. arXiv preprint arXiv:1610.03483. Cited by: §3.
  • [40] P. Morgado, N. Vasconcelos, and I. Misra (2020) Audio-visual instance discrimination with cross-modal agreement. arXiv:2004.12943. Cited by: §2.
  • [41] P. Nakkiran, B. Neyshabur, and H. Sedghi (2020) The deep bootstrap: good online learners are good offline generalizers. arXiv preprint arXiv:2010.08127. Cited by: §4.3.
  • [42] J. Nußberger, F. Boesel, S. M. Lenz, H. Binder, and M. Hess (2020) Synthetic observations from deep generative models and binary omics data with limited sample size. bioRxiv. Cited by: §2.
  • [43] M. Patrick, Y. M. Asano, R. Fong, J. F. Henriques, G. Zweig, and A. Vedaldi (2020) Multi-modal self-supervision from generalized data transformations. arXiv:2003.04298. Cited by: §2.
  • [44] A. Radford, L. Metz, and S. Chintala (2015) Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434. Cited by: §1, §2.
  • [45] A. Radford, K. Narasimhan, T. Salimans, and I. Sutskever Improving language understanding by generative pre-training. Cited by: §2.
  • [46] V. V. Ramaswamy, S. S. Kim, and O. Russakovsky (2020) Fair attribute classification through latent space de-biasing. arXiv preprint arXiv:2012.01469. Cited by: §1.
  • [47] A. Ramesh, M. Pavlov, G. Goh, S. Gray, C. Voss, A. Radford, M. Chen, and I. Sutskever (2021) Zero-shot text-to-image generation. arXiv preprint arXiv:2102.12092. Cited by: §1.
  • [48] S. Ravuri and O. Vinyals (2019) Classification accuracy score for conditional generative models. In NeurIPS, Cited by: Appendix E, §2.
  • [49] S. Ren, K. He, R. Girshick, and J. Sun (2015) Faster r-cnn: towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pp. 91–99. Cited by: Appendix C.
  • [50] Z. Ren and Y. J. Lee (2018) Cross-domain self-supervised multi-task feature learning using synthetic imagery. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 762–771. Cited by: §2.
  • [51] D. J. Rezende and S. Mohamed (2015) Variational inference with normalizing flows. arXiv preprint arXiv:1505.05770. Cited by: §3.
  • [52] G. Ros, L. Sellart, J. Materzynska, D. Vazquez, and A. M. Lopez (2016) The synthia dataset: a large collection of synthetic images for semantic segmentation of urban scenes. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3234–3243. Cited by: §2.
  • [53] J. W. Sakshaug and T. E. Raghunathan (2010) Synthetic data for small area estimation. In International Conference on Privacy in Statistical Databases, pp. 162–173. Cited by: §2.
  • [54] G. Shakhnarovich, P. Viola, and T. Darrell (2003) Fast pose estimation with parameter-sensitive hashing. In ICCV, pp. 750. Cited by: §2.
  • [55] Y. Shen, J. Gu, X. Tang, and B. Zhou (2020) Interpreting the latent space of gans for semantic face editing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9243–9252. Cited by: §1.
  • [56] A. Shrivastava, T. Pfister, O. Tuzel, J. Susskind, W. Wang, and R. Webb (2017) Learning from simulated and unsupervised images through adversarial training. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2107–2116. Cited by: §2.
  • [57] S. Tan, Y. Shen, and B. Zhou (2020) Improving the fairness of deep generative models without retraining. External Links: 2012.04842 Cited by: §1.
  • [58] Y. Tian, D. Krishnan, and P. Isola (2019) Contrastive multiview coding. arXiv preprint arXiv:1906.05849. Cited by: Appendix C, §2, §4.
  • [59] Y. Tian, C. Sun, B. Poole, D. Krishnan, C. Schmid, and P. Isola (2020) What makes for good views for contrastive learning. arXiv preprint arXiv:2005.10243. Cited by: item 3, §2, §3.3.1, §3.3.2, §4.1.2.
  • [60] N. Tritrong, P. Rewatbowornwong, and S. Suwajanakorn (2021) Repurposing gans for one-shot semantic part segmentation. arXiv preprint arXiv:2103.04379. Cited by: §2.
  • [61] A. Tucker, Z. Wang, Y. Rotalinti, and P. Myles (2020) Generating high-fidelity synthetic patient data for assessing machine learning healthcare software. NPJ digital medicine 3 (1), pp. 1–13. Cited by: §1, §2.
  • [62] A. Van den Oord, N. Kalchbrenner, L. Espeholt, O. Vinyals, A. Graves, et al. (2016) Conditional image generation with pixelcnn decoders. In Advances in neural information processing systems, pp. 4790–4798. Cited by: §2.
  • [63] A. van den Oord, Y. Li, and O. Vinyals (2019) Representation learning with contrastive predictive coding. External Links: 1807.03748 Cited by: §2, §3.1.
  • [64] G. Varol, J. Romero, X. Martin, N. Mahmood, M. J. Black, I. Laptev, and C. Schmid (2017) Learning from synthetic humans. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 109–117. Cited by: §2.
  • [65] Z. Wu, Y. Xiong, S. X. Yu, and D. Lin (2018) Unsupervised feature learning via non-parametric instance discrimination. In CVPR, Cited by: §2.
  • [66] Z. Wu, D. Lischinski, and E. Shechtman (2020) StyleSpace analysis: disentangled controls for stylegan image generation. CoRR abs/2011.12799. External Links: Link, 2011.12799 Cited by: §1.
  • [67] T. Xiao, X. Wang, A. A. Efros, and T. Darrell (2020) What should not be contrastive in contrastive learning. arXiv preprint arXiv:2008.05659. Cited by: §2.
  • [68] R. Zhang, P. Isola, and A. A. Efros (2016) Colorful image colorization. In European conference on computer vision, pp. 649–666. Cited by: §3.
  • [69] Y. Zhang, H. Ling, J. Gao, K. Yin, J. Lafleche, A. Barriuso, A. Torralba, and S. Fidler (2021) Datasetgan: efficient labeled data factory with minimal human effort. arXiv preprint arXiv:2104.06490. Cited by: §2.


Appendix A Creating Steering Views

For steering, we learn the latent walk w as the summation of individual walks w_t, with randomly sampled steps α_t, for different camera transformations, i.e., horizontal and vertical shifts, zoom, 2D and 3D rotations, as well as color transformations:

w = Σ_t α_t w_t,

where t ranges over the transformations. Following [30], the coefficient α_t determines what magnitude we want for each transformation. For example, for color, we add a step to each RGB channel of an image (and have three corresponding vectors in the latent space). Similarly, for each shift and rotation, the step determines how many pixels we shift or rotate the image. For zoom, the step determines how much scale we apply to the image; for that reason, the step size used in the latent space differs from the scale factor used in the pixel space (to define the edits). In order to train w, we create a dataset of target images and find the w that best mimics the targets (as described in Eq. (17) in the main paper). Here, each target is an edited version of the generated image, with all of the transformations applied at random steps; thus, one target is a composition of all of the transformations.

After learning the walk, we can randomly draw a vector z, add the walk to it with random transformation steps, and obtain a transformed image. This transformed image serves as a positive view of the image generated from the given vector.
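The two latent view creation methods can be sketched in a few lines of NumPy (the walk vectors here are random stand-ins for the learned w_t; names and step ranges are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim = 128
z = rng.standard_normal(latent_dim)  # anchor latent code

# Gaussian view: perturb the anchor, z' = z + sigma * n with n ~ N(0, I).
sigma = 0.5
z_gauss = z + sigma * rng.standard_normal(latent_dim)

# Steering view: add the learned walk with random per-transformation steps,
# z' = z + sum_t alpha_t * w_t.
transforms = ["shift_x", "shift_y", "zoom", "rot2d", "rot3d",
              "color_r", "color_g", "color_b"]
walks = {t: rng.standard_normal(latent_dim) for t in transforms}  # stand-ins
alphas = {t: rng.uniform(-1.0, 1.0) for t in transforms}          # random steps
z_steer = z + sum(alphas[t] * walks[t] for t in transforms)
```

Decoding z_gauss or z_steer through the generator then yields a positive view of the image generated from z.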

Appendix B Visualization of Latent Transformations

Here we present qualitative examples of our latent view creation methods for both unconditional and class-conditional IGMs, illustrated in Fig. 8 and Fig. 9, respectively.

Figure 8: Examples of different latent transformation methods for unconditional data. Columns from left: anchor, Gaussian neighbors, and steering neighbors.
Figure 9: Examples of different latent transformation methods for conditional data. Columns from left: anchor, Gaussian neighbors, and steering neighbors.
| Data | Pixel views | Latent views | Objective | ImageNet100 Top-1 | VOC07 Cls. mAP | VOC07 Det. AP | AP50 | AP75 |
| Real | SimCLR augs. | — | Contrastive | 67.36 | 59.45 | 45.93 | 73.49 | 48.72 |
| Generated | SimCLR augs. | none | Contrastive | 57.10 | 51.04 | 44.12 | 72.23 | 46.11 |
| Generated | SimCLR augs. | Gaussian | Contrastive | 61.52 | 55.07 | 46.84 | 74.88 | 49.68 |
| Generated | SimCLR augs. | steering | Contrastive | 60.74 | 55.03 | 46.55 | 74.51 | 49.37 |
| Generated | — | — | Inverter | 28.36 | 22.95 | 39.56 | 66.38 | 40.56 |
Table 3: Results on unconditional IGMs. Real data is drawn from the real data distribution; generated data is drawn from the IGM's output distribution. "none" indicates that no latent transformation is applied. For the Contrastive objective, positive and negative views are defined as described in Sec. 3.2.1 (without using class labels).
| Data | Pixel views | Latent views | Objective | ImageNet100 Top-1 | VOC07 Cls. mAP | VOC07 Det. AP | AP50 | AP75 |
| Real | SimCLR augs. | — | Sup. Contrastive | 81.18 | 65.79 | 45.76 | 74.72 | 48.58 |
| Generated | SimCLR augs. | none | Sup. Contrastive | 66.82 | 56.41 | 43.87 | 72.88 | 45.50 |
| Generated | SimCLR augs. | Gaussian | Sup. Contrastive | 70.16 | 58.26 | 44.23 | 73.51 | 45.73 |
| Generated | SimCLR augs. | steering | Sup. Contrastive | 67.86 | 57.47 | 44.49 | 73.31 | 45.85 |
| Generated | SimCLR augs. | independent | Sup. Contrastive | 68.08 | 57.94 | 44.08 | 73.06 | 45.49 |
| Generated | — | — | Inverter | 38.84 | 24.93 | 42.02 | 69.29 | 43.62 |
Table 4: Results on class-conditional IGMs. Real data is drawn from the real data distribution, and generated data from the IGM. "none" indicates that no latent transformation is applied; "independent" indicates that the latent transformation draws a new latent sample, independent of the original. For the Sup. Contrastive objective, positives are defined following Sec. 3.2.2, where two views are treated as positive if and only if they share the same class label.
| Data | Pixel views | Objective | ImageNet100 Top-1 | VOC07 Cls. mAP | VOC07 Det. AP | AP50 | AP75 |
| Real | SimCLR augs. | Classifier | 80.64 | 67.99 | 45.90 | 75.08 | 48.24 |
| Generated | SimCLR augs. | Classifier | 65.18 | 62.41 | 43.45 | 72.88 | 45.46 |
Table 5: Results on class-conditional IGMs for the Classifier objective. Real data is drawn from the real data distribution, and generated data from the IGM; no latent transforms are applied. These rows can be compared with the Supervised Contrastive rows of Table 4.
Figure 10: Mixing real data and synthetic IGM data as a way of augmenting real data. The plot shows the percentage of real data replaced by synthetic data versus Top-1 accuracy.

Appendix C Further Transfer Learning Tasks

In this section, we further investigate the lighter-weight protocol: train and evaluate representations only on data at the scale of ImageNet100 [58]. This setting is described in the main text in Sec. 4.

Further, we test on an object detection task. For object detection, following [26], we use a Faster R-CNN [49] with the R50-C4 architecture and fine-tune all layers on PASCAL VOC trainval07+12. We report the standard COCO metrics, including AP (averaged across multiple IoU thresholds), AP50, and AP75.

We report the results in Table 3 for the unconditional IGM and in Table 4 for the class-conditional IGM.

Appendix D Learning Representations with a Classifier

We provide additional results on the classifier experiments from Section 4.2 in Table 5. Note that these experiments are in the ImageNet100 setting and can be compared with Table 4 in this appendix.

Appendix E Mixing Real and Synthetic IGM Data

Following [48], we test whether unsupervised models trained on real data can benefit from data generated by an IGM. For these experiments, we use the ImageNet100 setting. Starting from ImageNet100, we replace a given percentage of the images with samples from BigBiGAN [16] and train a ResNet-50 using the SimCLR [8] framework. For each model, we train a linear classifier on ImageNet100 using the learned features and report Top-1 accuracy. Similar to [48], we see performance decrease as the number of real images decreases, but find a sweet spot when a small percentage (5%) of the images are synthetic.
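A sketch of the mixing protocol (illustrative; the real pipeline operates on image files rather than the placeholder strings used here):

```python
import random

def mix_datasets(real, synthetic, replace_frac, seed=0):
    """Replace `replace_frac` of the real samples with synthetic ones."""
    rng = random.Random(seed)
    mixed = list(real)
    k = int(len(real) * replace_frac)
    for i in rng.sample(range(len(real)), k):
        mixed[i] = synthetic[i % len(synthetic)]
    return mixed

real = [f"real_{i}" for i in range(100)]
fake = [f"igm_{i}" for i in range(100)]
mixed = mix_datasets(real, fake, replace_frac=0.05)  # 5% synthetic
```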