
HoloGAN: Unsupervised learning of 3D representations from natural images

We propose a novel generative adversarial network (GAN) for the task of unsupervised learning of 3D representations from natural images. Most generative models rely on 2D kernels to generate images and make few assumptions about the 3D world. These models therefore tend to create blurry images or artefacts in tasks that require a strong 3D understanding, such as novel-view synthesis. HoloGAN instead learns a 3D representation of the world and how to render this representation realistically. Unlike other GANs, HoloGAN provides explicit control over the pose of generated objects through rigid-body transformations of the learnt 3D features. Our experiments show that using explicit 3D features enables HoloGAN to disentangle 3D pose and identity, which is further decomposed into shape and appearance, while still being able to generate images with comparable or higher visual quality than other generative models. HoloGAN can be trained end-to-end from unlabelled 2D images only. In particular, we do not require pose labels, 3D shapes, or multiple views of the same objects. This shows that HoloGAN is the first generative model that learns 3D representations from natural images in an entirely unsupervised manner.




1 Introduction

Learning to understand the relationship between 3D objects and 2D images is an important topic in computer vision and computer graphics. In computer vision, it has applications in fields such as robotics, autonomous vehicles or security. In computer graphics, it benefits applications in both content generation and manipulation. This ranges from photorealistic rendering of 3D scenes or sketch-based 3D modelling, to novel-view synthesis or relighting.

Recent generative image models, in particular generative adversarial networks (GANs), have achieved impressive results in generating images of high resolution and visual quality [26, 27, 5, 1, 57], while their conditional versions have achieved great progress in image-to-image translation [22, 47], image editing [12, 11, 56] and motion transfer [6, 29]. However, GANs are still fairly limited in their applications, since they do not allow explicit control over attributes of the generated images, while conditional GANs need labels during training (Figure 1, left), which are not always available.

Even when endowed with labels like pose information, current generative image models still struggle in tasks that require a fundamental understanding of 3D structures, such as novel-view synthesis from a single image. For example, using 2D kernels to perform 3D operations, such as out-of-plane rotation to generate novel views, is very difficult. Current methods either require a lot of labelled training data or produce blurry results [13, 31, 14, 53, 50]. Although recent work has made efforts to address this problem by using 3D data [38, 60], 3D ground-truth data are very expensive to capture and reconstruct. Therefore, there is also a practical motivation to directly learn 3D representations from unlabelled 2D images.

Motivated by these observations, we focus on designing a novel architecture that allows unsupervised learning of 3D representations from images, enabling direct manipulation of view, shape, and appearance in generative image models (see the teaser figure). The key insight of our network design is the combination of a strong inductive bias about the 3D world with deep generative models to learn better representations for downstream tasks. Conventional representations in computer graphics, such as voxels and meshes, are explicit in 3D and easy to manipulate via, for example, rigid-body transformations. However, they come at the cost of memory inefficiency or ambiguity in how to discretise complex objects. As a result, it is non-trivial to build generative models with such representations directly [46, 40, 43]. Implicit representations, such as high-dimensional latent vectors or deep features, are favoured by generative models for being spatially compact and semantically expressive. However, these features are not designed to work with explicit 3D transformations [9, 19, 14, 24, 42], which leads to visual artefacts and blurriness in tasks such as view manipulation.

We propose HoloGAN, an unsupervised generative image model that learns representations of 3D objects that are not only explicit in 3D but also semantically expressive. Such representations can be learnt directly from unlabelled natural images. Unlike other GAN models, HoloGAN employs both 3D and 2D features for generating images. HoloGAN first learns a 3D representation, which is then transformed to a target pose, projected to 2D features, and rendered to generate the final images (Figure 1 right). Different from recent work that employs hand-crafted differentiable renderers [34, 28, 21, 60, 48, 32, 17], HoloGAN learns perspective projection and rendering of 3D features from scratch using a projection unit [38]. This novel architecture enables HoloGAN to learn 3D representations directly from natural images for which there are no good hand-crafted differentiable renderers. To generate new views of the same scene, we directly apply 3D rigid-body transformations to the learnt 3D features, and visualise the results using the neural renderer that is jointly trained. This has been shown to produce sharper results than performing 3D transformations in high-dimensional latent vector space [38].

HoloGAN can be trained end-to-end in an unsupervised manner using only unlabelled 2D images, without any supervision of poses, 3D shapes, multiple views of objects, or geometry priors such as symmetry and smoothness over the 3D representation that are common in this line of work [3, 42, 25]. To the best of our knowledge, HoloGAN is the first generative model that can learn 3D representations directly from natural images in a purely unsupervised manner. In summary, our main technical contributions are:

  • A novel architecture that combines a strong inductive bias about the 3D world with deep generative models to learn disentangled representations (pose, shape, and appearance) of 3D objects from images. The representation is explicit in 3D and expressive in semantics.

  • An unconditional GAN that, for the first time, allows native support for view manipulation without sacrificing visual image fidelity.

  • An unsupervised training approach that enables disentangled representation learning without using labels.

Figure 1: Comparison of generative image models. Data given to the discriminator are coloured purple. Left: In conditional GANs, the pose is observed and the discriminator is given access to this information. Right: HoloGAN does not require pose labels and the discriminator is not given access to pose information.

2 Related work

Figure 2: HoloGAN’s generator network: we employ 3D convolutions, 3D rigid-body transformations, the projection unit and 2D convolutions. We also remove the traditional input layer, and instead start from a learnt constant 4D tensor. The latent vector z is fed through MLPs that map it to the affine transformation parameters for adaptive instance normalisation (AdaIN). Inputs are coloured gray.

HoloGAN is at the intersection of GANs, structure-aware image synthesis and disentangled representation learning. In this section, we review related work in these areas.

2.1 Generative adversarial networks

GANs learn to map samples from an arbitrary latent distribution to data that fool a discriminator network into categorising them as real data [15]. Most recent work on GAN architectures has focused on improving training stability or visual fidelity of the generated images, such as multi-resolution GANs [26, 58], or self-attention generators [57, 1]. However, there is far less work on designing GAN architectures that enable unsupervised disentangled representation learning which allows control over attributes of the generated images. By injecting random noise and adjusting the “style” of the image at each convolution, StyleGAN [27] can separate fine-grained variation (e.g., hair, freckles) from high-level features (e.g., pose, identity), but does not provide explicit control over these elements. A similar approach proposed by Chen et al. [8] shows that this network design also achieves more training stability. The success of these approaches indicates that network architecture can be more important for training stability and image fidelity than the specific choice of GAN loss. Therefore, we also focus on architecture design for HoloGAN, but with the goal of learning to separate pose, shape and appearance, and to enable direct manipulation of these elements.

2.2 3D-aware neural image synthesis

Recent work in neural image synthesis and novel-view synthesis has found success in improving the fidelity of generated images with 3D-aware networks. RenderNet [38] introduces a differentiable renderer using a convolutional neural network (CNN) that learns to render 2D images directly from 3D shapes. However, RenderNet requires 3D shapes and their corresponding rendered images during training. Other approaches learn 3D embeddings that can be used to generate new views of the same scene without any 3D supervision [45, 48]. However, while Sitzmann et al. [48] require multiple views and pose information as input, Rhodin et al. [45] require supervision from paired images, background segmentation and pose information. Aiming to separate geometry from texture, visual object networks (VON) [60] first sample 3D objects from a 3D generative model, render these objects using a hand-crafted differentiable layer to normal, depth and silhouette maps, and finally apply a trained image-to-image translation network. However, VON needs explicit 3D data for training, and only works for single-object images with a simple white background. HoloGAN also learns a 3D representation and renders it to produce a 2D image, but without seeing any 3D shapes, and works with real images containing complex backgrounds and multi-object scenes.

The work closest to ours is Pix2Scene [42], which learns an implicit 3D scene representation from images, also in an unsupervised manner. However, Pix2Scene maps its implicit representation to a surfel representation for rendering, while HoloGAN uses an explicit 3D representation with deep voxels. Additionally, because it uses a hand-crafted differentiable renderer, Pix2Scene can only deal with simple synthetic images (uniform material and lighting conditions). HoloGAN, on the other hand, learns to render from scratch, and thus works for more complex natural images.

2.3 Disentangled representation learning

The aim of disentangled representation learning is to learn a factorised representation, in which changes in one factor only affect the corresponding elements in the generated images, while being invariant to other factors. Most work in disentangled learning leverages labels provided by the dataset [51, 2, 44] or benefits from set supervision (e.g., videos or multiple images of the same scene; more than two domains with the same attributes) [31, 10, 14].

Recent efforts in unsupervised disentangled representation learning, such as β-VAE [19] or InfoGAN [9, 23], have focused mostly on designing loss functions. However, these models are sensitive to the choice of priors, provide no control over which factors are learnt, and do not guarantee that the learnt disentangled factors are semantically meaningful. Moreover, β-VAE comes with a trade-off between the quality of the generated images and the level of disentanglement. Finally, these two methods struggle with more complicated datasets (natural images with complex backgrounds and lighting). In contrast, by redesigning the architecture of the generator network, HoloGAN learns to successfully separate pose, shape and appearance, provides explicit pose control, and enables shape and appearance editing even for more complex natural image datasets.

3 Method

To learn 3D representations from 2D images without labels, HoloGAN extends traditional unconditional GANs by introducing a strong inductive bias about the 3D world into the generator network. Specifically, HoloGAN generates images by learning a 3D representation of the world and learning to render it realistically enough to fool the discriminator. View manipulation can therefore be achieved by directly applying 3D rigid-body transformations to the learnt 3D features. In other words, the images created by the generator are a view-dependent mapping from a learnt 3D representation to the 2D image space. This is different from other GANs, which learn to map a noise vector directly to 2D features to generate images.

Figure 2 illustrates the generator architecture of HoloGAN: HoloGAN first learns a 3D representation (assumed to be in a canonical pose) using 3D convolutions (Section 3.1), transforms this representation to a certain pose, projects and computes visibility using the projection unit (Section 3.2), and computes shaded colour values for each pixel in the final images with 2D convolutions. HoloGAN shares many rendering insights with RenderNet [38], but works with natural images, and needs neither pre-training of the neural renderer nor paired 3D shape–2D image training data.

During training, we sample random poses from a uniform distribution and transform the 3D features using these poses before rendering them to images. This random pose perturbation pushes the generator network to learn a disentangled representation that is suitable both for 3D transformation and for generating images that can fool the discriminator. While pose transformation could be learnt from data, we instead provide this operation, which is differentiable and straightforward to implement, explicitly to HoloGAN. Using explicit rigid-body transformations for novel-view synthesis has been shown to produce sharper images with fewer artefacts [38]. More importantly, this provides an inductive bias towards representations that are compatible with explicit 3D rigid-body transformations, enabling easy view manipulation. As a result, the learnt representations are explicit in 3D and disentangled between pose and identity.

Kulkarni et al. [31] categorise learnt disentangled representations into intrinsic and extrinsic elements. While intrinsic elements describe shape, appearance, etc., extrinsic elements describe pose (elevation, azimuth) and lighting (location, intensity). The design of HoloGAN naturally lends itself to this separation through additional inductive biases about the 3D world: a native 3D rigid-body transformation directly controls the pose, while the learnt 3D features it is applied to control the identity (see Figure 2).

3.1 Learning 3D representations

HoloGAN generates 3D representations from a learnt constant tensor (see Figure 2). The random noise vector z is instead treated as a “style” controller, and mapped to affine parameters for adaptive instance normalisation (AdaIN) [20] after each convolution using a multilayer perceptron (MLP).
Given the features Φ_l(x) at layer l of an image x and the noise “style” vector z, AdaIN is defined as:

    AdaIN(Φ_l(x), z) = σ(z) · (Φ_l(x) − μ(Φ_l(x))) / σ(Φ_l(x)) + μ(z),

where μ(·) and σ(·) denote the channel-wise mean and standard deviation, and μ(z), σ(z) are the affine parameters predicted from z by the MLP. This can be viewed as generating images by transforming a template (the learnt constant tensor) using AdaIN to match the mean and standard deviation of features at different levels (which are believed to describe the image “style”) of the training images. Empirically, we find that this network architecture disentangles pose and identity much better than architectures that feed the noise vector directly into the first layer of the generator.
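The AdaIN operation above can be sketched in a few lines of NumPy (a minimal illustration; the function and argument names are ours, and the per-channel affine parameters are passed in directly, standing in for the output of the MLP):

```python
import numpy as np

def adain(features, z_mean, z_std, eps=1e-5):
    """Adaptive instance normalisation: normalise each channel of
    `features` to zero mean / unit std, then rescale to the mean and
    std predicted from the latent z (here given as z_mean, z_std).
    `features` has shape (C, ...), so this works for both 3D feature
    volumes (C, D, H, W) and 2D feature maps (C, H, W)."""
    c = features.shape[0]
    flat = features.reshape(c, -1)
    mu = flat.mean(axis=1, keepdims=True)
    sigma = flat.std(axis=1, keepdims=True)
    normed = (flat - mu) / (sigma + eps)
    out = z_std.reshape(c, 1) * normed + z_mean.reshape(c, 1)
    return out.reshape(features.shape)
```

After this operation each channel’s statistics match those predicted from z, which is how the latent vector steers the “style” at every layer of the generator.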

HoloGAN inherits this style-based strategy from StyleGAN [27], but differs in two important aspects. Firstly, HoloGAN learns 3D features from a 4D constant tensor (of size 4×4×4×512, where the last dimension is the feature channel) before projecting them to 2D features to generate images, while StyleGAN only learns 2D features. Secondly, HoloGAN learns a disentangled representation by combining 3D features with rigid-body transformations during training, while StyleGAN injects independent random noise into each convolution. StyleGAN, as a result, learns to separate 2D features into different levels of detail, depending on the feature resolution, from coarse (e.g., pose, identity) to more fine-grained details (e.g., hair, freckles). We observe a similar separation in HoloGAN. However, HoloGAN further separates pose (controlled by the 3D transformation), shape (controlled by 3D features), and appearance (controlled by 2D features).

It is worth highlighting that, to generate images at 128×128 (the same resolution as VON), we use a deep 3D representation of size up to 16×16×16×64. Even at this limited resolution, HoloGAN can still generate images with competitive quality and more complex backgrounds than methods that use full 3D geometry, such as VON’s voxel grid of resolution 128×128×128×1 [60].
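The difference in footprint is easy to quantify. Counting raw elements (ignoring bit-depth: VON’s voxels are binary whereas deep features are floats), the deep voxel tensor has far fewer entries than the full-resolution grid:

```python
# Raw element counts for the two representations compared above.
von_voxel_grid = 128 * 128 * 128 * 1      # VON: full binary voxel grid
hologan_deep_voxels = 16 * 16 * 16 * 64   # HoloGAN: deep 3D feature tensor

print(von_voxel_grid)                          # 2097152
print(hologan_deep_voxels)                     # 262144
print(von_voxel_grid // hologan_deep_voxels)   # 8 (times fewer entries)
```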

3.2 Learning with view-dependent mappings

In addition to adopting 3D convolutions to learn 3D features, during training, we introduce more bias about the 3D world by transforming these learnt features to random poses before projecting them to 2D images. This random pose transformation is crucial to guarantee that HoloGAN learns a 3D representation that is disentangled and can be rendered from all possible views, as also observed by Tran et al. [51] in DR-GAN. However, HoloGAN performs explicit 3D rigid-body transformation, while DR-GAN performs this using an implicit vector representation.

3.2.1 Rigid-body transformation

We assume a virtual pin-hole camera that is in the canonical pose (axis-aligned and placed along the negative z-axis) relative to the 3D features being rendered. We parameterise the rigid-body transformation by 3D rotation followed by trilinear resampling. We do not consider translation in this work. Assuming the up-vector of the object coordinate system is the global y-axis, rotation comprises rotation around the y-axis (azimuth) and the x-axis (elevation). Details on ranges for pose sampling are included in the supplemental document.
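The transformation above can be sketched as follows (a minimal NumPy illustration, not the paper’s implementation: the rotation composition order Rx·Ry is our assumption, and we use nearest-neighbour resampling for brevity where the paper uses trilinear):

```python
import numpy as np

def rotation_matrix(azimuth, elevation):
    """R = Rx(elevation) @ Ry(azimuth), angles in radians. Azimuth is a
    rotation about the global y (up) axis, elevation about the x axis."""
    ca, sa = np.cos(azimuth), np.sin(azimuth)
    ce, se = np.cos(elevation), np.sin(elevation)
    Ry = np.array([[ ca, 0.0,  sa],
                   [0.0, 1.0, 0.0],
                   [-sa, 0.0,  ca]])
    Rx = np.array([[1.0, 0.0, 0.0],
                   [0.0,  ce, -se],
                   [0.0,  se,  ce]])
    return Rx @ Ry

def rotate_volume(vol, azimuth, elevation):
    """Rigid-body transform of a (C, D, H, W) feature volume about its
    centre, with nearest-neighbour resampling; out-of-bounds voxels are
    filled with zeros."""
    C, D, H, W = vol.shape
    R_inv = rotation_matrix(azimuth, elevation).T  # inverse map: output -> source
    centre = np.array([(W - 1) / 2.0, (H - 1) / 2.0, (D - 1) / 2.0])
    zz, yy, xx = np.meshgrid(np.arange(D), np.arange(H), np.arange(W), indexing="ij")
    out_xyz = np.stack([xx, yy, zz], axis=-1).reshape(-1, 3).astype(float)
    src = np.rint((out_xyz - centre) @ R_inv.T + centre).astype(int)
    out = np.zeros_like(vol)
    inside = np.all((src >= 0) & (src < np.array([W, H, D])), axis=1)
    oi = out_xyz.astype(int)[inside]
    si = src[inside]
    out[:, oi[:, 2], oi[:, 1], oi[:, 0]] = vol[:, si[:, 2], si[:, 1], si[:, 0]]
    return out
```

Because the transformation is just index arithmetic, it is differentiable with respect to the feature values, which is what allows it to sit inside the generator during training.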

3.2.2 Projection unit

In order to learn meaningful 3D representations from just 2D images, HoloGAN learns a differentiable projection unit [38] that reasons over occlusion. In particular, the projection unit receives a 4D tensor (3D features), and returns a 3D tensor (2D features).

Since the training images are captured with different perspectives, HoloGAN needs to learn perspective projection. However, as we have no knowledge of the camera intrinsics, we employ two layers of 3D convolutions (without AdaIN) to morph the 3D representation into a perspective frustum (see Figure 2) before their projection to 2D features.

The projection unit is composed of a reshaping layer that concatenates the channel dimension with the depth dimension, reducing the tensor from 4D (W × H × D × C) to 3D (W × H × (D·C)), and an MLP with a non-linear activation function (leaky ReLU [35] in our experiments) to learn occlusion. Please see the supplemental document for more details on the architecture of HoloGAN.
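The reshape-then-MLP structure can be sketched as below (a minimal NumPy illustration with our own names; `w` and `b` stand in for the learnt per-pixel MLP weights):

```python
import numpy as np

def projection_unit(features_3d, w, b, alpha=0.2):
    """Projection unit sketch: reshape the 4D tensor (C, D, H, W) by
    concatenating depth into the channel dimension, giving a 3D tensor
    (C*D, H, W), then apply a learnt per-pixel MLP with leaky ReLU to
    reason over occlusion along the collapsed depth axis.
    w: (C_out, C*D), b: (C_out,). Returns (C_out, H, W)."""
    C, D, H, W = features_3d.shape
    flat = features_3d.reshape(C * D, H, W)  # concatenate depth into channels
    pre = np.tensordot(w, flat, axes=([1], [0])) + b[:, None, None]
    return np.where(pre > 0.0, pre, alpha * pre)  # leaky ReLU
```

Note that nothing here hard-codes a camera model; which depth slices dominate at each pixel is left entirely to the learnt weights, which is what lets HoloGAN learn projection and occlusion from data.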

3.3 Loss functions

Identity regulariser

To generate images at higher resolution (128×128 pixels), we find it beneficial to add an identity regulariser that ensures the vector reconstructed from a generated image matches the latent vector z used in the generator G. We find that this encourages HoloGAN to use z alone to encode identity, so that the object’s identity is maintained when poses are varied. We introduce an encoder network F that shares the majority of the convolution layers of the discriminator, but uses an additional fully-connected layer to predict the reconstructed latent vector. The identity loss is:

    L_identity = E_z ‖z − F(G(z))‖²

Style discriminator

Our generator is designed to match the “style” of the training images at different levels, which effectively controls image attributes at different scales. Therefore, in addition to the image discriminator that classifies entire images as real or fake, we propose multi-scale style discriminators that perform the same task at the feature level. In particular, each style discriminator classifies the channel-wise mean μ(Φ_l) and standard deviation σ(Φ_l), which describe the image “style” [20]. Given a style discriminator D_l for layer l, the style loss is defined as:

    L_style^l = E_x [log D_l(μ(Φ_l(x)), σ(Φ_l(x)))] + E_z [log(1 − D_l(μ(Φ_l(G(z))), σ(Φ_l(G(z)))))]

The total loss can be written as:

    L = L_GAN + λ_i · L_identity + λ_s · Σ_l L_style^l

We use the same loss weights λ_i and λ_s for all experiments, and the modified (non-saturating) GAN loss [15] for L_GAN.
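The two auxiliary terms are cheap to compute once the feature statistics are in hand; a minimal sketch (function names are ours, and the discriminator/generator networks themselves are omitted):

```python
import numpy as np

def identity_loss(z, z_reconstructed):
    """L_identity: squared error between the sampled latent z and the
    vector the encoder F recovers from the generated image G(z)."""
    return float(np.mean((z - z_reconstructed) ** 2))

def style_stats(features):
    """Channel-wise mean and standard deviation of a feature map
    (C, H, W) -- the statistics that each style discriminator
    classifies as coming from a real or a generated image."""
    flat = features.reshape(features.shape[0], -1)
    return flat.mean(axis=1), flat.std(axis=1)
```

In training, `style_stats` would be evaluated on intermediate generator/discriminator features at several resolutions, with one small style discriminator per chosen layer.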

4 Experiment settings


We train HoloGAN using a variety of datasets: Basel Face [39], CelebA [33], Cats [59], Chairs [7], Cars [54], and LSUN bedroom [55]. More details on the datasets can be found in the supplemental document. We train HoloGAN at a resolution of 64×64 pixels for Cats and Chairs, and 128×128 pixels for Basel Face, CelebA, Cars and LSUN bedroom.

Note that only the Chairs dataset contains multiple views of the same object; all other datasets only contain unique single views. For this dataset, because of the limited number of ShapeNet [7] 3D chair models (6778 shapes), we render images from 60 randomly sampled views for each chair. During training, we ensure that each batch contains completely different types of chairs to prevent the network from using set supervision, i.e., looking at the same chair from different viewpoints in the same batch, to cheat.

Implementation details

We use adaptive instance normalization [20] for the generator, and a combination of instance normalization [52] and spectral normalization [37] for the discriminator. See our supplemental document for details.

We train HoloGAN from scratch using the Adam solver [30]. To generate images during training, we sample the latent vector z, and also sample random poses from a uniform distribution (more details on pose sampling can be found in the supplemental document). We use the same hyper-parameter settings for all datasets, except for Cars at 128×128. We will make our code available publicly.

5 Results

We first show qualitative results of HoloGAN on datasets with increasing complexity (Section 5.1). Secondly, we provide quantitative evidence that shows HoloGAN can generate images with comparable or higher visual fidelity than other 2D-based GAN models (Section 5.2). We also show the effectiveness of using our learnt 3D representation compared to explicit 3D geometries (binary voxel grids) for image generation (Section 5.3). We then show how HoloGAN also learns to disentangle shape and appearance (Section 5.4). Finally, we perform an ablation study to demonstrate the effectiveness of our network design and training approach (Section 5.5).

5.1 Qualitative evaluation

The teaser figure and Figures 3, 5 and 6b show that HoloGAN can smoothly vary the pose along azimuth and elevation while keeping the same identity across multiple different datasets. Note that the LSUN dataset contains a variety of complex layouts of multiple objects, which makes it a very challenging dataset for learning to disentangle pose from object identity.

In the supplemental document, we show results for the Basel Face dataset. We also perform linear interpolation with the noise vectors while keeping the poses the same, and show that HoloGAN can smoothly interpolate the identities between two samples. This demonstrates that HoloGAN correctly learns an explicit deep 3D representation that disentangles pose from identity, despite not seeing any pose labels or 3D shapes during training.

Figure 3: For the Chairs dataset with high intra-class variation, HoloGAN can still disentangle pose (360° azimuth, 160° elevation) and identity.
Figure 4: We compare HoloGAN to InfoGAN (images adapted from Chen et al. [9]) on CelebA (64×64) in the task of separating identity and azimuth. Note that we cannot control what is learnt by InfoGAN.
Comparison to InfoGAN [9]

We compare our approach to InfoGAN on the task of learning to disentangle identity from pose on the CelebA dataset [33] at a resolution of 64×64 pixels. Due to the lack of publicly available code and hyper-parameters for this dataset (the official code repository only works with the MNIST dataset), we use the CelebA figure from the published paper. We also tried the official InfoGAN implementation with the Cars dataset, but were unable to train the model successfully, as InfoGAN appears to be highly sensitive to the choice of prior distributions and the number of latent variables to recover.

Figure 4 shows that HoloGAN successfully recovers and provides much better control over the azimuth while still maintaining the identity of objects in the generated images. HoloGAN can also recover elevation (Figure 5b, right) despite the limited variation in elevation in the CelebA dataset, while InfoGAN cannot. Most importantly, there is no guarantee that InfoGAN always recovers factors that control object pose, while HoloGAN explicitly controls this via rigid-body transformation.

Figure 5: HoloGAN supports changes in both azimuth (range: 100°) and elevation (range: 35°). However, the available range depends on the dataset. For CelebA, for example, few photos in the dataset were taken from above or below.
Figure 6: a) Car images generated by VON for an azimuth range of 360°. Despite using 3D shapes and silhouette masks during training, VON can only generate images with a simple white background, and struggles for certain frontal views (highlighted in red) and rear views (highlighted in blue). b) HoloGAN generates car images with complex backgrounds and varying azimuth (range: 360°) and elevation (range: 35°) from unlabelled images.

5.2 Quantitative results

To evaluate the visual fidelity of generated images, we use the Kernel Inception Distance (KID) by Bińkowski et al. [4]. KID computes the squared maximum mean discrepancy (MMD) between feature representations (computed from the Inception model [49]) of the real and generated images. In contrast to FID [18], KID has an unbiased estimator. The lower the KID score, the better the visual quality of the generated images. We compare HoloGAN with other recent GAN models, DCGAN [41], LSGAN [36] and WGAN-GP [16], on three datasets in Table 1. Note that KID does not take feature disentanglement into account, which is one of the main contributions of HoloGAN.
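KID’s unbiased estimator is simple to write down. Below is a sketch operating on pre-computed feature vectors, using the cubic polynomial kernel from Bińkowski et al.; extraction of Inception features (and the block-averaging used in practice) is omitted:

```python
import numpy as np

def polynomial_kernel(X, Y):
    """Cubic polynomial kernel k(x, y) = (x.y / d + 1)^3 used by KID."""
    d = X.shape[1]
    return (X @ Y.T / d + 1.0) ** 3

def kid(real, fake):
    """Unbiased estimate of the squared MMD between two sets of feature
    vectors (one row per image). Lower means the generated feature
    distribution is closer to the real one."""
    m, n = len(real), len(fake)
    k_rr = polynomial_kernel(real, real)
    k_ff = polynomial_kernel(fake, fake)
    k_rf = polynomial_kernel(real, fake)
    # the unbiased estimator drops the diagonal self-similarity terms
    term_rr = (k_rr.sum() - np.trace(k_rr)) / (m * (m - 1))
    term_ff = (k_ff.sum() - np.trace(k_ff)) / (n * (n - 1))
    return term_rr + term_ff - 2.0 * k_rf.mean()
```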

We use a publicly available implementation of these baselines, with the same hyper-parameters (which were tuned for CelebA) for all three datasets. Similarly, for HoloGAN, we use the same network architecture and hyper-parameters for all three datasets, except for the azimuth sampling range: 100° for CelebA, since face images are only taken from frontal views, and 360° for Chairs and Cars. We sample 20,000 images from each model to calculate the KID scores reported in Table 1.

Method          CelebA 64×64   Chairs 64×64   Cars 64×64
DCGAN [41]      1.81 ± 0.09    6.36 ± 0.16     4.78 ± 0.11
LSGAN [36]      1.77 ± 0.06    6.72 ± 0.19     4.99 ± 0.13
WGAN-GP [16]    1.63 ± 0.09    9.43 ± 0.24    15.57 ± 0.29
HoloGAN (ours)  2.87 ± 0.09    1.54 ± 0.07     2.16 ± 0.09
Table 1: KID [4] between real images and images generated by HoloGAN and other 2D-based GANs (lower is better). We report KID mean ×100 ± std ×100. The table shows that HoloGAN achieves competitive or better KID scores than other methods, while providing explicit control over objects in the generated images (which KID does not measure).

Table 1 shows that HoloGAN can generate images with competitive (for CelebA) or even better KID scores on more challenging datasets: Chairs, which has high intra-class variability, and Cars, which has complex backgrounds and lighting conditions. This also shows that the HoloGAN architecture is more robust and can consistently produce images with high visual fidelity across different datasets with the same set of hyper-parameters (except for azimuth ranges). We include visual samples from these models in the supplemental document. More importantly, HoloGAN learns a disentangled representation that allows manipulation of the generated images. This is a great advantage compared to methods such as β-VAE [19], which has to compromise between image quality and the level of disentanglement of the learnt features.

5.3 Deep 3D representation vs. 3D geometry

Here we compare our method to the state-of-the-art visual object networks (VON) [60] on the task of generating car images, using the trained model and code provided by the authors. Although VON also takes a disentangling approach to generating images, it relies on 3D shapes and silhouette masks during training, while HoloGAN does not. Figure 6b shows that our approach can generate images of cars with complex backgrounds, realistic shadows and competitive visual fidelity. Note that to generate images at 128×128, VON uses a full binary voxel geometry at 128×128×128×1 resolution, while HoloGAN uses a deep voxel representation of resolution up to 16×16×16×64, which is more spatially compact and expressive, since HoloGAN also generates complex backgrounds and shadows. As highlighted in Figure 6a, VON also tends to change the identity of the car at certain views, such as changing colours or shapes, while HoloGAN maintains the identity of the car in all views. Moreover, HoloGAN can generate car images across full 360° views (Figure 6), while VON struggles to generate images from rear views.

Traditional voxel grids can be very memory-intensive. HoloGAN hints at the great potential of using explicit deep voxel representations for image generation, as opposed to the full 3D geometry used in the traditional rendering pipeline. For example, in Figure 5c, we generate images of entire bedroom scenes using a 3D representation of only 16×16×16×64 resolution.

5.4 Disentangling shape and appearance

Here we show that, in addition to pose, HoloGAN also learns to further divide identity into shape and appearance. We sample two latent codes, z1 and z2, and feed them through HoloGAN: z1 controls the 3D features (before perspective morphing and projection), while z2 controls the 2D features (after projection). Figure 7 shows generated images with the same pose and the same z1, but with a different z2 in each row. As can be seen, while the 3D features control objects’ shapes, the 2D features control appearance (texture and lighting). This shows that, by using 3D convolutions to learn 3D representations and 2D convolutions to learn shading, HoloGAN learns to separate shape from appearance directly from unlabelled images, allowing separate manipulation of these factors. In the supplemental document, we provide further results in which we use different latent codes for 3D features at different resolutions, and show the separation between features that control overall shape and those that control more fine-grained details such as gender or makeup.

Figure 7: Combinations of different latent vectors z1 (for 3D features) and z2 (for 2D features). While z1 influences objects’ shapes, z2 determines appearance (texture and lighting). Best viewed in colour.

5.5 Ablation studies

We now conduct a series of studies to demonstrate the effectiveness of our network design and training approach.

Training without random 3D transformations

Randomly rotating the 3D features during training is crucial for HoloGAN, as it encourages the generator to disentangle pose from identity. In Figure 8, we show results with and without 3D transformations during training. For the model trained without 3D transformations, we generate images of rotated objects by manually rotating the learnt 3D features after training. As can be seen, this model can still generate images with good visual fidelity, but completely fails to generate meaningful images once the pose is changed, while HoloGAN easily generates images of the same objects in different poses. We believe that the random transformations during training force the generator to learn features that transform meaningfully under 3D geometric transformations, while still producing images that fool the discriminator. As a result, our training strategy encourages HoloGAN to learn a disentangled representation of identity and pose.
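To make the training strategy concrete, the numpy sketch below rotates a channels-first feature volume about the vertical axis by a randomly sampled azimuth. It uses inverse nearest-neighbour resampling for brevity; this is an illustration of the idea, not the exact transformation module used in HoloGAN.

```python
import numpy as np

def rotate_features(volume, azimuth):
    """Rigidly rotate a (C, D, H, W) feature volume about the vertical
    axis by `azimuth` radians, via inverse nearest-neighbour resampling.
    Voxels that map outside the grid are filled with zeros."""
    c, d, h, w = volume.shape
    cos, sin = np.cos(azimuth), np.sin(azimuth)
    cz, cx = (d - 1) / 2.0, (w - 1) / 2.0      # rotate about the centre
    out = np.zeros_like(volume)
    for z in range(d):
        for x in range(w):
            # Inverse mapping: where does this output voxel sample from?
            src_z = cos * (z - cz) + sin * (x - cx) + cz
            src_x = -sin * (z - cz) + cos * (x - cx) + cx
            zi, xi = int(round(src_z)), int(round(src_x))
            if 0 <= zi < d and 0 <= xi < w:
                out[:, z, :, x] = volume[:, zi, :, xi]
    return out

rng = np.random.default_rng(1)
feats = rng.standard_normal((8, 16, 16, 16))
theta = rng.uniform(0.0, 2.0 * np.pi)  # pose drawn uniformly, as in training
rotated = rotate_features(feats, theta)
print(rotated.shape)  # (8, 16, 16, 16)
```

During training, a fresh azimuth would be drawn for every generated sample, so the generator cannot bind identity to any single orientation of the feature volume.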

Training with traditional input

By starting from a learnt constant tensor and using the noise vector as a “style” controller at different levels, HoloGAN can better disentangle pose from identity. Here we perform another experiment in which we instead feed the noise vector to the first layer of the generator network, like other GAN models. Figure 8 shows that the model trained with this traditional input conflates pose and identity: it also changes the object’s identity when the object is rotated, while HoloGAN smoothly varies the pose along the azimuth while keeping the identity unchanged.

Figure 8: Ablation study showing images with changing azimuths (from left to right). Top: Our approach. Middle: Our approach without random 3D transformations during training fails to rotate objects. Bottom: Our approach with a traditional input layer mapped from the noise vector, instead of a learnt constant tensor, fails to disentangle object pose and identity.

6 Discussion and conclusion

While HoloGAN can successfully learn to separate pose from identity, its performance depends on the variety and distribution of poses in the training dataset. For example, for the CelebA and Cats datasets, the model cannot recover elevation as well as azimuth (see Figure 5a,b). This is because most face images are taken at eye level and thus contain limited variation in elevation. Currently, during training, we sample random poses from a uniform distribution. Future work can therefore explore learning the distribution of poses from the training data in an unsupervised manner, to account for uneven pose distributions. Other directions include further disentangling objects’ appearance into factors such as texture and illumination. Finally, it would be interesting to combine HoloGAN with training techniques such as progressive GANs [26] to generate higher-resolution images.

Although the world is 3D, most current generative models only use 2D features and make limited assumptions about the physical world. In this work, we presented HoloGAN, a generative image model that learns 3D representations from natural images in an unsupervised manner by adopting strong inductive biases about the 3D world. HoloGAN can be trained end-to-end with only unlabelled 2D images, and learns to disentangle challenging factors such as 3D pose, shape and appearance. This disentanglement provides control over these factors, while generating images of similar or higher visual quality than 2D-based GANs. Our experiments show that HoloGAN successfully learns meaningful 3D representations across multiple datasets of varying complexity. We are therefore convinced that, compared to existing explicit (meshes, voxels) or implicit [14, 42] 3D representations, explicit deep 3D representations are a crucial step towards both interpretable and controllable GAN models.


Acknowledgements

We thank Jan Malte Lichtenberg for discussions and feedback on the manuscript. This work was supported by the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 665992, CDE, the UK’s EPSRC Centre for Doctoral Training in Digital Entertainment, EP/L016540/1, CAMERA, the RCUK Centre for the Analysis of Motion, Entertainment Research and Applications, EP/M023281/1, a UKRI Innovation Fellowship, EP/S001050/1, and an NVIDIA Corporation GPU Grant. We also received GPU support from Lambda Labs.


References

  • [1] Y. Alami Mejjati, C. Richardt, J. Tompkin, D. Cosker, and K. I. Kim (2018) Unsupervised attention-guided image-to-image translation. In NeurIPS, S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (Eds.), pp. 3693–3703. External Links: Link Cited by: §1, §2.1.
  • [2] J. Bao, D. Chen, F. Wen, H. Li, and G. Hua (2018-06) Towards open-set identity preserving face synthesis. In CVPR, Cited by: §2.3.
  • [3] J. T. Barron and J. Malik (2015-08) Shape, illumination, and reflectance from shading. IEEE Transactions on Pattern Analysis and Machine Intelligence 37 (8), pp. 1670–1687. External Links: ISSN 0162-8828, Document Cited by: §1.
  • [4] M. Bińkowski, D. J. Sutherland, M. Arbel, and A. Gretton (2018) Demystifying MMD GANs. In ICLR, External Links: Link Cited by: §5.2.
  • [5] A. Brock, J. Donahue, and K. Simonyan (2019) Large scale GAN training for high fidelity natural image synthesis. In ICLR, External Links: Link Cited by: §1.
  • [6] C. Chan, S. Ginosar, T. Zhou, and A. A. Efros (2018) Everybody dance now. Note: arXiv:1808.07371 Cited by: §1.
  • [7] A. X. Chang, T. A. Funkhouser, L. J. Guibas, P. Hanrahan, Q. Huang, Z. Li, S. Savarese, M. Savva, S. Song, H. Su, J. Xiao, L. Yi, and F. Yu (2015) ShapeNet: an information-rich 3D model repository. Note: arXiv:1512.03012 Cited by: §4, §4.
  • [8] T. Chen, M. Lucic, N. Houlsby, and S. Gelly (2019) On self modulation for generative adversarial networks. In ICLR, External Links: Link Cited by: §2.1.
  • [9] X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel (2016) InfoGAN: interpretable representation learning by information maximizing generative adversarial nets. In NIPS, D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett (Eds.), pp. 2172–2180. Cited by: §1, §2.3, Figure 4, §5.1.
  • [10] E. L. Denton and v. Birodkar (2017) Unsupervised learning of disentangled representations from video. In NIPS, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Eds.), pp. 4414–4423. External Links: Link Cited by: §2.3.
  • [11] H. Ding, K. Sricharan, and R. Chellappa (2018) ExprGAN: facial expression editing with controllable expression intensity. In AAAI, Cited by: §1.
  • [12] G. Dorta, S. Vicente, N. D. F. Campbell, and I. Simpson (2018) The GAN that warped: semantic attribute editing with unpaired data. Note: arXiv:1811.12784 Cited by: §1.
  • [13] A. Dosovitskiy, J. Springenberg, M. Tatarchenko, and T. Brox (2017-04) Learning to generate chairs, tables and cars with convolutional networks. IEEE Transactions on Pattern Analysis and Machine Intelligence 39 (4), pp. 692–705. Cited by: §1.
  • [14] S. M. A. Eslami, D. Jimenez Rezende, F. Besse, F. Viola, A. S. Morcos, M. Garnelo, A. Ruderman, A. A. Rusu, I. Danihelka, K. Gregor, D. P. Reichert, L. Buesing, T. Weber, O. Vinyals, D. Rosenbaum, N. Rabinowitz, H. King, C. Hillier, M. Botvinick, D. Wierstra, K. Kavukcuoglu, and D. Hassabis (2018) Neural scene representation and rendering. Science 360 (6394), pp. 1204–1210. External Links: ISSN 0036-8075, Document Cited by: §1, §1, §2.3, §6.
  • [15] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. In NIPS, Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger (Eds.), pp. 2672–2680. External Links: Link Cited by: §2.1, §3.3.
  • [16] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. C. Courville (2017) Improved training of Wasserstein GANs. In NIPS, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Eds.), pp. 5767–5777. External Links: Link Cited by: §5.2, Table 1.
  • [17] P. Henderson and V. Ferrari (2018) Learning to generate and reconstruct 3D meshes with only 2D supervision. In BMVC, Cited by: §1.
  • [18] M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter (2017) GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In NIPS, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Eds.), pp. 6626–6637. External Links: Link Cited by: §5.2, Table 1.
  • [19] I. Higgins, L. Matthey, A. Pal, C. Burgess, X. Glorot, M. Botvinick, S. Mohamed, and A. Lerchner (2017) β-VAE: learning basic visual concepts with a constrained variational framework. In ICLR, Cited by: §1, §2.3, §5.2.
  • [20] X. Huang and S. Belongie (2017) Arbitrary style transfer in real-time with adaptive instance normalization. In ICCV, Cited by: §3.1, §3.3, §4.
  • [21] E. Insafutdinov and A. Dosovitskiy (2018) Unsupervised learning of shape and pose with differentiable point clouds. In NeurIPS, Cited by: §1.
  • [22] P. Isola, J. Zhu, T. Zhou, and A. A. Efros (2017) Image-to-image translation with conditional adversarial networks. In CVPR, pp. 5967–5976. Cited by: §1.
  • [23] I. Jeon, W. Lee, and G. Kim (2019) IB-GAN: disentangled representation learning with information bottleneck GAN. External Links: Link Cited by: §2.3.
  • [24] D. Jimenez Rezende, S. M. A. Eslami, S. Mohamed, P. Battaglia, M. Jaderberg, and N. Heess (2016) Unsupervised learning of 3D structure from images. In NIPS, pp. 4996–5004. Cited by: §1.
  • [25] A. Kanazawa, S. Tulsiani, A. A. Efros, and J. Malik (2018) Learning category-specific mesh reconstruction from image collections. In ECCV, Cited by: §1.
  • [26] T. Karras, T. Aila, S. Laine, and J. Lehtinen (2018) Progressive growing of GANs for improved quality, stability, and variation. In ICLR, Cited by: §1, §2.1, §6.
  • [27] T. Karras, S. Laine, and T. Aila (2018) A style-based generator architecture for generative adversarial networks. Note: arXiv:1812.04948 Cited by: §1, §2.1, §3.1.
  • [28] H. Kato, Y. Ushiku, and T. Harada (2018) Neural 3D mesh renderer. In CVPR, Cited by: §1.
  • [29] H. Kim, P. Garrido, A. Tewari, W. Xu, J. Thies, M. Nießner, P. Pérez, C. Richardt, M. Zollhöfer, and C. Theobalt (2018-08) Deep video portraits. ACM Transactions on Graphics 37 (4), pp. 163:1–14. External Links: ISSN 0730-0301, Document Cited by: §1.
  • [30] D. P. Kingma and J. Ba (2015) Adam: a method for stochastic optimization. In ICLR, Cited by: §4.
  • [31] T. D. Kulkarni, W. F. Whitney, P. Kohli, and J. Tenenbaum (2015) Deep convolutional inverse graphics network. In NIPS, pp. 2539–2547. Cited by: §1, §2.3, §3.
  • [32] T. Li, M. Aittala, F. Durand, and J. Lehtinen (2018-11) Differentiable Monte Carlo ray tracing through edge sampling. ACM Transactions on Graphics 37 (6), pp. 162:1–11. External Links: ISSN 0730-0301, Document Cited by: §1.
  • [33] Z. Liu, P. Luo, X. Wang, and X. Tang (2015-12) Deep learning face attributes in the wild. In ICCV, Cited by: §4, §5.1.
  • [34] M. M. Loper and M. J. Black (2014) OpenDR: an approximate differentiable renderer. In ECCV, pp. 154–169. Cited by: §1.
  • [35] A. L. Maas, A. Y. Hannun, and A. Y. Ng (2013) Rectifier nonlinearities improve neural network acoustic models. In ICML Workshops, Cited by: §3.2.2.
  • [36] X. Mao, Q. Li, H. Xie, R. Y.K. Lau, Z. Wang, and S. P. Smolley (2017) Least squares generative adversarial networks. In ICCV, Cited by: §5.2, Table 1.
  • [37] T. Miyato, T. Kataoka, M. Koyama, and Y. Yoshida (2018) Spectral normalization for generative adversarial networks. In ICLR, Cited by: §4.
  • [38] T. H. Nguyen-Phuoc, C. Li, S. Balaban, and Y. Yang (2018) RenderNet: a deep convolutional network for differentiable rendering from 3D shapes. In NeurIPS, S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (Eds.), pp. 7902–7912. External Links: Link Cited by: §1, §1, §2.2, §3.2.2, §3, §3.
  • [39] P. Paysan, R. Knothe, B. Amberg, S. Romdhani, and T. Vetter (2009) A 3D face model for pose and illumination invariant face recognition. In International Conference on Advanced Video and Signal Based Surveillance, pp. 296–301. Cited by: §4.
  • [40] C. R. Qi, L. Yi, H. Su, and L. J. Guibas (2017) PointNet++: deep hierarchical feature learning on point sets in a metric space. In NIPS, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Eds.), pp. 5099–5108. External Links: Link Cited by: §1.
  • [41] A. Radford, L. Metz, and S. Chintala (2016) Unsupervised representation learning with deep convolutional generative adversarial networks. In ICLR, Cited by: §5.2, Table 1.
  • [42] S. Rajeswar, F. Mannan, F. Golemo, D. Vazquez, D. Nowrouzezahrai, and A. Courville (2019) Pix2Scene: learning implicit 3D representations from images. External Links: Link Cited by: §1, §1, §2.2, §6.
  • [43] A. Ranjan, T. Bolkart, S. Sanyal, and M. J. Black (2018-09) Generating 3D faces using convolutional mesh autoencoders. In ECCV, Lecture Notes in Computer Science, vol. 11207, pp. 725–741. External Links: Link Cited by: §1.
  • [44] S. Reed, K. Sohn, Y. Zhang, and H. Lee (2014) Learning to disentangle factors of variation with manifold interaction. In ICML, ICML’14. External Links: Link Cited by: §2.3.
  • [45] H. Rhodin, M. Salzmann, and P. Fua (2018) Unsupervised geometry-aware representation for 3D human pose estimation. In ECCV, Cited by: §2.2.
  • [46] G. Riegler, A. O. Ulusoy, and A. Geiger (2017) OctNet: learning deep 3D representations at high resolutions. In CVPR, Cited by: §1.
  • [47] P. Sangkloy, J. Lu, C. Fang, F. Yu, and J. Hays (2017) Scribbler: controlling deep image synthesis with sketch and color. In CVPR, Cited by: §1.
  • [48] V. Sitzmann, J. Thies, F. Heide, M. Nießner, G. Wetzstein, and M. Zollhöfer (2018) DeepVoxels: learning persistent 3D feature embeddings. Note: arXiv:1812.01024 Cited by: §1, §2.2.
  • [49] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich (2015) Going deeper with convolutions. In CVPR, External Links: Link Cited by: §5.2.
  • [50] M. Tatarchenko, A. Dosovitskiy, and T. Brox (2016) Multi-view 3D models from single images with a convolutional network. In ECCV, Cited by: §1.
  • [51] L. Tran, X. Yin, and X. Liu (2018) Representation learning by rotating your faces. IEEE Transactions on Pattern Analysis and Machine Intelligence. Note: preprint External Links: Document Cited by: §2.3, §3.2.
  • [52] D. Ulyanov, A. Vedaldi, and V. S. Lempitsky (2017) Improved texture networks: maximizing quality and diversity in feed-forward stylization and texture synthesis. In CVPR, External Links: Document, Link Cited by: §4.
  • [53] X. Yan, J. Yang, E. Yumer, Y. Guo, and H. Lee (2016) Perspective transformer nets: learning single-view 3D object reconstruction without 3D supervision. In NIPS, pp. 1696–1704. Cited by: §1.
  • [54] L. Yang, P. Luo, C. C. Loy, and X. Tang (2015-06) A large-scale car dataset for fine-grained categorization and verification. In CVPR, pp. 3973–3981. External Links: Document, ISSN 1063-6919 Cited by: §4.
  • [55] F. Yu, Y. Zhang, S. Song, A. Seff, and J. Xiao (2015) LSUN: construction of a large-scale image dataset using deep learning with humans in the loop. Note: arXiv:1506.03365 Cited by: §4.
  • [56] G. Zhang, M. Kan, S. Shan, and X. Chen (2018-09) Generative adversarial network with spatial attention for face attribute editing. In ECCV, Cited by: §1.
  • [57] H. Zhang, I. J. Goodfellow, D. N. Metaxas, and A. Odena (2018) Self-attention generative adversarial networks. Note: arXiv:1805.08318 Cited by: §1, §2.1.
  • [58] H. Zhang, T. Xu, H. Li, S. Zhang, X. Wang, X. Huang, and D. Metaxas (2017) StackGAN: text to photo-realistic image synthesis with stacked generative adversarial networks. In ICCV, Cited by: §2.1.
  • [59] W. Zhang, J. Sun, and X. Tang (2008-10) Cat head detection – how to effectively exploit shape and texture features. In ECCV, External Links: Link Cited by: §4.
  • [60] J. Zhu, Z. Zhang, C. Zhang, J. Wu, A. Torralba, J. Tenenbaum, and B. Freeman (2018) Visual object networks: image generation with disentangled 3D representations. In NeurIPS, S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (Eds.), pp. 118–129. External Links: Link Cited by: §1, §1, §2.2, §3.1, §5.3.