In computer vision a fundamental task is to interpret the content of images as part of a 3D world. However, this task proves to be extremely challenging, because images are only 2D projections of the 3D world, and thus incomplete observations. To compensate for this deficiency, one can use multiple views of the same scene gathered simultaneously, as in multiview stereo, or over time, as in structure from motion. These approaches are able to combine the information in multiple images through 2D landmark correspondences, because they share the same instance of the world. It is interesting to notice that no additional information other than the images themselves and a model of image formation is needed to extract 3D information and the pose of the camera.
In this paper we explore a direct extension of this method to the case where we are given images that never depict the same object twice and aim to reconstruct the unknown 3D surface of the objects as well as their texture. We consider the case where we are given only a collection of images and no additional annotation or prior knowledge, for example, in the form of a 3D template. Unlike multiview stereo and structure from motion, in this case pixel-based correspondence between different images cannot be established directly.
At the core of our method is the design of a generative model that learns to map images to a 3D surface, a texture and a background image. Then, we use a renderer to generate views given the 3D surface, texture, background image and a randomly sampled viewpoint. We postulate that if the three components are realistic, then the rendered images from arbitrary viewpoints (within a suitable distribution) will also be realistic. To assess realism, we use a discriminator trained in an adversarial fashion. Therefore, by assuming that the samples cover a sufficient distribution of viewpoints, the generative model should automatically learn 3D models from the data.
Finally, we also train an encoder, by pairing it with the generative model and the renderer as an autoencoder. In this training the autoencoder learns to estimate the 3D surface, the texture, and the viewpoint of an object given an input image. We also formally prove that under suitable assumptions the proposed framework can successfully build such a generative model just from images without additional manual annotation. We then demonstrate our fully unsupervised method on both real and synthetic data.
2 Related work
Traditional 3D reconstruction techniques require multiple views of the objects [26, 11]. They use hand-crafted features to match key-points, and they exploit the 3D geometry to estimate their locations. In contrast, 3D reconstruction from a single image is a much more ambiguous problem. Recent methods address this problem by learning the underlying 3D from video sequences, or by using additional image formation assumptions, such as shape from shading models. The direct mapping from an image to a depth map can be learned from real data or from synthetic images, as by Sela et al. . As in structure from motion, Vedaldi et al.  and Zhou et al.  show that it is not necessary to use human annotations to learn the 3D reconstruction of a scene as long as we are given video sequences of the same scene.
In this work we focus on learning a generative model. 3D morphable models (3DMM) [2, 7] are trained with high-quality face scans and provide a high-quality template for face reconstruction and recognition. Tran et al.  and Genova et al.  train neural networks for regressing the parameters of 3DMMs. Model-based Face Autoencoders (MoFA) and Genova et al. [25, 6] only use unlabelled training data, but they rely on existing models that were built with supervision. Therefore, with different object categories, these methods require a new pre-training of the 3DMM and knowledge of what 3D objects they need to reconstruct, while our method applies directly to any category without prior knowledge on what 3D shape the objects have.
Unlike 3DMMs, Generative Adversarial Nets (GAN)  and Variational Autoencoders (VAE)  do not provide interpretable parameters, but they are very powerful and can be trained in a fully unsupervised manner. In recent years they have improved significantly [1, 10, 14]. 3DGANs  are used to generate 3D objects with 3D supervision. However, it is possible to use GANs to train 3D generators by only using 2D images and differentiable renderers similar to the Neural Mesh Renderer  or OpenDR . PrGAN  learns a voxel-based representation with a GAN, and Henderson et al.  train surface meshes using a VAE. Both are limited to synthetic data, as they do not model the background; this can be interpreted as using the silhouettes as a supervision signal. In contrast we only use 2D image collections, learn a 3D mesh with texture, and model the background as well. Both PrGAN  and our method are special cases of AmbientGAN . We extend their theory to the case of 3D reconstruction and describe failure modes, including the hollow-mask illusion  and the reference ambiguity .
Our approach can also be interpreted as disentangling the 3D and the viewpoint factors. Reed et al.  solved that task with full supervision using image triplets. They utilised an autoencoder to reconstruct an image from the mixed latent encodings of two other images. Mathieu et al.  and Szabó et al.  only use image pairs that share an attribute, thus reducing the supervision with the help of GANs. By using only a standard image formation model (projective geometry) and by setting a prior on the viewpoint distribution in the dataset, we demonstrate the disentangling of the 3D from the viewpoint and the background for the case of independently sampled input images.
3 Unsupervised Learning of 3D Shapes
We are interested in building a mapping from an image to its 3D model, texture, background image, and viewpoint. We call all these outputs the representation, and, when needed, we distinguish the first three outputs, the scene representation, from the viewpoint, the camera representation. These outputs are then used by a differentiable renderer R to reconstruct an image. Thus, the mapping of images to the representation, followed by the renderer, can be seen as an autoencoder. We break the mapping from images to the representation into two steps: First, we encode images to a vector z (with Gaussian distribution) and the camera representation v, and second, we decode the vector z to the scene representation (see Figure 1(b)). The first part is implemented by an encoder E, while the second is implemented via a generator G. In our approach we first train the generator in an adversarial fashion against a discriminator D by feeding Gaussian samples as input (see Figure 1(a)). The objective of the generator is to learn to map Gaussian samples to scene representations that result in realistic renderings for random viewpoints. Therefore, during training we also use random samples for the viewpoints fed to the renderer. In a second step we then freeze the generator and train the encoder to map images to vectors and viewpoints that allow the generator and the renderer to reconstruct the input. The encoder can be seen as the general inverse mapping of the generator. However, it is also possible to construct a per-sample inverse of the generator-renderer pair by solving a direct optimization problem. We implement all the mappings E, G, and D with neural networks, and we assume they are continuous throughout the paper. We optimize the GAN objective using WGAN with gradient penalty . The chosen representation and all these steps will be described in detail in the next sections.
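The two training stages can be sketched as follows. The numpy stand-ins for G, R, D, and the encoder E are illustrative placeholders only (the real components are deep networks and the differentiable mesh renderer described later), and all dimensions and viewpoint bounds are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_latent(n, dim=8):
    # Gaussian latent vectors z ~ N(0, I); the dimension is illustrative.
    return rng.standard_normal((n, dim))

def sample_viewpoint(n):
    # Viewpoints drawn from the assumed known prior; here two Euler
    # angles, uniform in [-30, 30] degrees (bounds are placeholders).
    return rng.uniform(-30.0, 30.0, size=(n, 2))

# Hypothetical stand-ins: real G and D are deep networks, R is the
# differentiable mesh renderer.
G = lambda z: z                        # generator: latent -> scene repr.
R = lambda scene, v: scene[:, :2] + v  # "renderer": scene + view -> image
D = lambda x: x.mean(axis=1)           # critic score per image

def generator_step(real_images):
    # Stage 1: adversarial training of G; the critic D compares real
    # images with renderings of generated scenes at random viewpoints.
    n = len(real_images)
    fake = R(G(sample_latent(n)), sample_viewpoint(n))
    return D(real_images).mean() - D(fake).mean()  # WGAN critic gap

def encoder_step(real_images, E):
    # Stage 2: G and R are frozen; E outputs (z_hat, v_hat) and is
    # trained so that rendering G(z_hat) from v_hat reconstructs input.
    z_hat, v_hat = E(real_images)
    recon = R(G(z_hat), v_hat)
    return ((recon - real_images) ** 2).mean()
```

The point of the sketch is the control flow: the critic gap drives stage one, and the frozen generator-renderer pair turns stage two into a plain reconstruction loss.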
At the core of our method is the principle that the learned representations will be correct if the rendered images are realistic. This constraint is captured by the adversarial training. We also show, through our analysis, that under suitable assumptions this constraint is sufficient to recover the correct representation.
3.1 Enforcing Image Realism through GAN
In our approach, we train a generator G to map Gaussian samples z ~ N(0, I) to 3D models, where I is the identity matrix (and we assume that the dimensionality of the samples is sufficient to represent the complexity of the given dataset). We also assume that the viewpoints v are sampled according to a known viewpoint distribution p_v. Then, the GAN loss is

L_GAN(G, D) = E_x[D(x)] - E_{z,v}[D(R(G(z), v))],

where R(G(z), v) are the generated fake images and x are the real data samples. Following Arjovsky et al. , we optimize the objective above subject to additional constraints: the Lipschitz norm of D is bounded, and so is the scale of the 3D model (returned by G). The scale constraint stabilizes the training of the generator, especially at the beginning of the training.
3.2 Inverting the Generator-Renderer Pair
In this section we describe a per-sample inversion of the generator and renderer. That is, given a data sample x we would like to find the input vector z and viewpoint v that, once fed to the trained generator G and the renderer R, return the image x. For simplicity, we restrict our search of the viewpoint to the support of the viewpoint distribution. We denote the support of the distribution p_u of a random variable u as supp(u), defined as the closure of the set where p_u is positive. We formulate this inverse mapping for a particular data sample x by minimizing the L2 norm

(z*, v*) = argmin_{z, v in supp(v)} |R(G(z), v) - x|^2.

Finally, the reconstructed 3D model of x is G(z*).
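A toy illustration of this inversion, where the generator-renderer pair is replaced by a fixed linear map A (an assumption made purely for this sketch) and z is recovered by gradient descent on the L2 objective; the viewpoint search is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((16, 4))  # stands in for z -> R(G(z), v), v fixed

def render(z):
    # Toy differentiable "generator-renderer": a linear map.
    return A @ z

def invert(x, steps=8000, lr=0.005):
    # Minimize |render(z) - x|^2 over z by gradient descent; the
    # analytic gradient of the L2 loss is 2 A^T (A z - x).
    z = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = 2.0 * A.T @ (render(z) - x)
        z -= lr * grad
    return z

z_true = rng.standard_normal(4)
z_hat = invert(render(z_true))
```

Because the toy map is linear with full column rank, the minimizer is unique; for the real nonlinear generator-renderer pair, the optimization is only a local search.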
3.3 Extracting Representations from Images
A more general and efficient way to extract the representation from an image is to train an encoder together with the generator-renderer pair, where the generator has been pre-trained and is fixed. The training enforces an autoencoding constraint so that the input image is mapped to a representation and then, through the renderer, back to the same image. In this step, we train only an encoder E to learn the mapping from an image x to two outputs: a vector and a viewpoint that the generator and the renderer can map back to x. The autoencoder loss is therefore

L_AE(E) = E_x[|R(G(z^), v^) - x|^2],

where the estimated latent vector and viewpoint are (z^, v^) = E(x). We denote the estimated 3D model and image as G(z^) and x^ = R(G(z^), v^). Finally, to train the encoder E we minimize the objective L_AE.
3.4 3D Representation and Camera Models
In this section we describe the chosen representation and image formation model. This representation affects the output format of the generator G and also the design of the differentiable renderer R. We now detail the contents of the scene representation: the object surface S, the object texture T and the background image B.
3D Surface and Texture. Currently, our representation considers a single 3D object and its texture. The object surface consists of a mesh with a fixed number of triangles. The mesh is given as a list of vertices and a list of triangles; the latter is fixed and consists of triplets of vertex indices. Each vertex has an associated RGB color and 3D coordinates indicating its position in a global reference frame. Therefore, S is a vector of 3D coordinates and T is a vector (of the same size) of RGB values. We illustrate this representation graphically in Figure 2.
Background. We explicitly model the background behind the object to avoid the need for supervision with silhouettes/masks. We also move the background when we change the viewpoint. This ensures that the generator avoids learning trivial representations. For example, if we used a static background, the generator G could learn to place the object outside the field of view of the camera and simply map the whole input image to the background texture. In our representation we fix the 3D coordinates of the background on a sphere centered around the origin and only learn its texture B. To image the background we also approximate the projection on the image plane with a projection on a spherical image plane (a small solid angle), so that the resolution of the background does not change across the image. With this model, viewpoint changes, i.e., rotations about the origin (we only consider 2 angles), correspond to 2D shifts of the background texture. Also, to avoid issues with the resolution of the background, we simulate a different scaling of the background rotation compared to the object rotation. This results in a much larger field of view for the background than for the object. Finally, under the assumption that the objects are around the center of the image, the generator cannot learn to place the objects on the background: a change in the viewpoint would move the background and, with it, the object away from the center of the image.
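A minimal numpy sketch of this background model, where a small camera rotation becomes a 2D shift of the background texture. The pixels-per-radian scaling is an assumed placeholder for the background field-of-view factor described above:

```python
import numpy as np

def shift_background(texture, d_azimuth, d_elevation, pix_per_rad=40.0):
    # Spherical background: rotating the camera about the origin shifts
    # the background texture in 2D. Angles are in radians; pix_per_rad
    # is an illustrative scaling, not the paper's value.
    dx = int(round(d_azimuth * pix_per_rad))
    dy = int(round(d_elevation * pix_per_rad))
    return np.roll(texture, shift=(dy, dx), axis=(0, 1))
```

The wrap-around of `np.roll` plays the role of the (much larger) background field of view: the object never uncovers an unmodeled region.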
Camera Model and Viewpoint. Our camera model uses perspective projection. However, to minimize the distortion we use a large focal length (i.e., the distance between the image plane and the camera center). The image plane is placed at the origin. When we change the viewpoint, we rotate the camera around the origin and thus around its image plane. In this way the image resolution and distortions are quite stable for a wide range of viewpoints. We only consider 2 rotation angles in Euler coordinates, about the horizontal and vertical axes.
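The camera model can be sketched as follows. The focal length, the axis convention, and the function names are illustrative assumptions consistent with the description above (image plane at the origin, camera center at depth -focal, two rotation angles):

```python
import numpy as np

def rotation(azimuth, elevation):
    # Rotations about the vertical (azimuth) and horizontal (elevation)
    # axes, in radians; no rotation about the camera axis.
    ca, sa = np.cos(azimuth), np.sin(azimuth)
    ce, se = np.cos(elevation), np.sin(elevation)
    Ry = np.array([[ca, 0, sa], [0, 1, 0], [-sa, 0, ca]])
    Rx = np.array([[1, 0, 0], [0, ce, -se], [0, se, ce]])
    return Rx @ Ry

def project(points, azimuth=0.0, elevation=0.0, focal=10.0):
    # Perspective projection onto the plane through the origin, camera
    # center at depth -focal: a point at z = 0 projects at unit
    # magnification, so a large focal length keeps distortion low.
    p = points @ rotation(azimuth, elevation).T
    depth = p[:, 2] + focal
    return focal * p[:, :2] / depth[:, None]
```

With a large `focal`, the depth term varies little over the object, which is what makes the projection nearly orthographic and stable under viewpoint changes.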
3.5 Network Architectures
Generator. Based on the chosen representation, our generator consists of three sub-generators: one that learns the background image B, a second that learns the object texture T and a third that learns the object surface S. The latent input vector z is split into two parts: z_b for the background image and z_o for the object, since we generate the background and the object independently, but expect correlation between the object texture and 3D. The sub-generators for the object texture and the object surface receive z_o as input, while the sub-generator for the background image receives z_b. The outputs of all the sub-generators have the same size, since all have 3 channels and the same resolution. For the object and background texture sub-generators we use two separate models based on the architecture of Karras et al. .
The object surface is initially represented in spherical coordinates, where the azimuth and polar angles are defined on a uniform tessellation and we only recover the radial distance. Learning spherical coordinates helps to keep the resolution of the mesh consistent during training, in contrast to learning the 3D coordinates directly. The radial distance is then obtained as a linear combination of basis radial distances and an average radial distance. The object surface sub-generator recovers both the coefficients of the linear combination and the basis radial distances. While the basis is represented directly as network parameters, the coefficients are obtained through the application of a fully connected layer to the input z_o. To ensure a smooth convergence, we use a redundant coarse-to-fine basis. The radial distance produced by the object surface sub-generator can be written as

d = d_avg + sum_l a_l U_l(b_l),

where a_l is the l-th output of the fully connected layer of the sub-generator, b_l is the basis at coarseness level l (the levels correspond to increasingly fine resolutions), and U_l is a bilinear interpolation function that maps the l-th scale to the final resolution. Although the representation of the radial distance is redundant (the finest level alone could fully describe d), we find that this hierarchical parametrization helps to stabilize the training. Finally, the estimated radial distance d, together with the azimuthal angle and polar angle, is mapped to 3D coordinates via the spherical coordinate transform and then given as input to the renderer.
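A minimal numpy sketch of this coarse-to-fine radial parametrization and the spherical-to-Cartesian mapping. Function names, grid shapes and the simple bilinear scheme are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def bilinear_upsample(grid, out_h, out_w):
    # Bilinear interpolation of a coarse grid to the final tessellation
    # resolution (a simple stand-in for the interpolation U_l).
    h, w = grid.shape
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    wy, wx = ys - y0, xs - x0
    top = grid[y0][:, x0] * (1 - wx) + grid[y0][:, x1] * wx
    bot = grid[y1][:, x0] * (1 - wx) + grid[y1][:, x1] * wx
    return top * (1 - wy)[:, None] + bot * wy[:, None]

def radial_distance(avg, bases, coeffs, res):
    # d = avg + sum_l coeffs[l] * upsample(bases[l]): the redundant
    # coarse-to-fine combination of basis radial distances.
    d = np.full((res, res), avg)
    for c, b in zip(coeffs, bases):
        d = d + c * bilinear_upsample(b, res, res)
    return d

def to_cartesian(d, azimuth, polar):
    # Spherical-to-Cartesian transform on the fixed tessellation.
    x = d * np.sin(polar) * np.cos(azimuth)
    y = d * np.sin(polar) * np.sin(azimuth)
    z = d * np.cos(polar)
    return np.stack([x, y, z], axis=-1)
```

Each coarse basis grid only has to explain low-frequency shape, which is what makes the redundant hierarchy easier to optimize than a single fine grid.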
Discriminator and Encoder. The discriminator architecture is the same as in . We always rendered the images at full resolution and then downsampled them to match the expected input size of the discriminator during the training with growing resolutions. The encoder architecture is the same as the discriminator's, with small modifications: the output vector size was increased, and two small neural networks were attached to produce the two encoder outputs. The latent vectors were estimated with a fully connected layer. The Euler angles were discretized and their probabilities estimated by a fully connected layer and a softmax; a continuous estimate was then computed by taking the expectation.
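The angle head can be sketched as follows, assuming a vector of angle-bin centers; the bin values below are placeholders:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the discrete angle bins.
    e = np.exp(logits - logits.max())
    return e / e.sum()

def expected_angle(logits, bins):
    # Discretized Euler angle estimate: softmax probabilities over the
    # bins, then a continuous value as the expectation.
    return float(softmax(logits) @ bins)
```

The expectation makes the otherwise discrete classification head differentiable and continuous in its output, so it can be trained with the reconstruction loss.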
3.6 Differentiable Renderer
The renderer takes as input the representation from G, i.e., the surface S, the RGB colors T, and the background image B, and the viewpoint of the camera. For simplicity, we do not model surface and light interaction with shading, reflection, shadow or interreflection models; we simply use a Lambertian model. Since each vertex in S is associated to a color in T, we interpolate colors inside the triangles using barycentric coordinates. This color model allows the back-propagation of gradients from the renderer output to the vertex coordinates in S and colors in T. While the differentiation works for pixels inside triangles, at object boundaries and self-occlusions the gradients cannot be computed. While others approximate gradients at boundaries [15, 18], we instead modify our rendering engine to draw blurred triangles, so the gradients can be computed exactly. The blurring process is illustrated in Figure 3. At the boundaries and self-occlusions the triangles are extended and matted linearly against the background or the occluded part of the object. The effect of this blurred rendering is visually negligible, and it only affects a few pixels at the boundaries. However, we found that it contributes substantially to the stability of the training. In contrast, the non-blurry renderer often generates spikes and sails, which made the training unstable. Examples of this behavior are shown in Figure 3.
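A minimal sketch of the per-vertex color interpolation with barycentric coordinates (a single 2D triangle, no boundary blurring); the function names are illustrative:

```python
import numpy as np

def barycentric(p, a, b, c):
    # Barycentric coordinates of 2D point p in triangle (a, b, c):
    # solve p - a = u (b - a) + v (c - a), weights (1 - u - v, u, v).
    m = np.column_stack([b - a, c - a])
    u, v = np.linalg.solve(m, p - a)
    return np.array([1 - u - v, u, v])

def shade(p, verts2d, colors):
    # Pixel color as the barycentric blend of the three vertex RGB
    # colors; smooth in both the vertex positions and the colors, which
    # is what lets gradients flow back to S and T inside triangles.
    w = barycentric(p, *verts2d)
    return w @ colors
```

Because `shade` is a smooth function of the vertex data for interior pixels, only the boundary and occlusion pixels need the special blurred treatment described above.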
4 A Theory of 3D Generative Learning
In this section we give a theoretical analysis of our method. We prove that under reasonable conditions the generator G can output realistic 3D models. In Theorem 1 we adapt the theory of AmbientGAN  to our approach, as AmbientGAN uses the vanilla GAN and we use the Wasserstein GAN [1, 10]. We can provably obtain the 3D model of the object by inverting the generator G or by estimating it via the autoencoder. Our theory requires five main assumptions:
The real images in the dataset are formed by the differentiable rendering engine R given independent factors: a scene representation s and a viewpoint v. The images are formed deterministically as x = R(s, v). This assumption is needed to guarantee that the generator with the renderer can perfectly model the data. Note that the distribution of s is assumed unknown;
We know the viewpoint distribution p_v, so we can sample from it, but we do not know the viewpoint v for any particular data sample x;
The rendering engine R is bijective on the restricted domain of valid scene representations and viewpoints, and we denote the inverse with R^{-1}, so that R^{-1}(R(s, v)) = (s, v). This property has to be true for any deterministic viewpoint or 3D estimator, otherwise the (single image) 3D reconstruction task cannot be solved. Note that this assumption is also needed in fully supervised methods;
There is a unique probability distribution of scene representations s that induces the observed image distribution when v is sampled from p_v. This assumption is not true for 3D data in general, as the 3D objects can have many symmetries. We will discuss and show ambiguities in the experimental section;
Finally, we assume that the encoder, generator and discriminator have infinite capacity, and that the training reaches the global optimum of both the adversarial and the autoencoder objectives. Note that our first and second assumptions are a necessary condition for perfect training.
Now we show that the generator learns the 3D geometry of the scene faithfully.
Theorem 1. When the adversarial training of the generator is perfect, i.e., the GAN loss reaches its global minimum, the generated scene representation distribution is identical to the real one.
Since the training is perfect, the discriminator is also perfect. Then, the GAN loss is equal to the Wasserstein distance between the real data distribution and the distribution of the generated fake data R(G(z), v). As the distance is zero, the two distributions are identical. This implies that the distribution of G(z) matches the real scene representation distribution, as by assumption only that distribution can induce the real image distribution. ∎
Next, the per-sample inversion of Section 3.2 recovers the correct representation. Let (z*, v*) be a minimizer of the inversion objective. G is continuous and, by Theorem 1, the data sample lies in the range of the generator-renderer pair, therefore the minimum of the objective is zero (otherwise the two distributions could not be identical). Because the reconstruction equals the input and R is invertible on the restricted domain, G(z*) and v* recover the true scene representation and viewpoint. ∎
Lastly, we show that the encoder E combined with the ideal generator G reconstructs the correct surface of the object in the input image.
Theorem 2. When the training is perfect (for both the generator and the encoder), the estimated scene representation and viewpoint are correct, i.e., they match the ones that generated the input image.
Because E, G and R and their compositions are continuous, the autoencoder loss attains zero, otherwise the training would not be perfect. Hence the reconstruction R(G(z^), v^) equals the input x. Finally, the invertibility of R implies that the estimated scene representation and viewpoint are the true ones. ∎
In general, assumption four does not hold for the 3D reconstruction problem. Depending on the dataset and the symmetries of the objects, ambiguities can arise. One notable failure case is the hollow-mask illusion . An inverted mask (a concave face) can look realistic, even though it is far from the true geometry. This failure mode can be overcome when the range of viewpoints is large enough so the self-occlusions give away the depth information. In our experiments we observe cases where the system learns inverted faces, but this ambiguity never appears on ShapeNet objects, as they are rendered with a large range of viewpoints. Another example is the reference ambiguity . Two different 3D models can both be realistic, but their reference frames are not necessarily the same, i.e., when they are rendered with the same numerical values of viewpoint angles, they are not aligned. We show examples of both of these ambiguities in Figure 6.
It is important to note that the above mentioned problems are different from the ill-posedness of the single image 3D reconstruction problem, which means that many different 3D objects can produce the same 2D projection. When the viewpoint distribution is known, we can render a candidate 3D shape from a different viewpoint, which immediately reveals whether it is realistic or not. Ill-posedness is only a problem if the viewpoint distribution is not known. If one had to estimate the viewpoint distribution as well, a trivial failure mode could emerge: the encoder and generator would learn to map images to a flat surface with a fixed viewpoint, and the textures would match exactly the 2D inputs.
5 Experiments
We trained the generator G on the CelebA  and ShapeNet  datasets and our autoencoder on CelebA. We studied the effect of a regularization term on the surface normals, showed examples of the hollow-mask and reference ambiguities, and compared our method to other face reconstruction methods.
CelebA. CelebA has k colored photos of faces, of which are validation and test images. We used the images at a pixel resolution. We randomly sampled the Euler angles uniformly in the range of degrees for rotating around the horizontal axis and degrees around the vertical axis. We did not rotate along the camera axis. For the autoencoder training we increased the bounds to and degrees respectively and we also allowed rotations along the camera axis by degrees.
Figure 4 shows samples from our generator trained on CelebA. For better viewing we rendered them at a pixel resolution. We can see that we achieve plausible textures and 3D shapes. We can clearly see the reconstruction of the nose, brow ridge and the lips. Some smaller details of the 3D are not precise: we can observe some high-frequency artifacts, and the side of the face has errors as well. However, our results are promising, given that this is the first attempt at generating colored 3D meshes on a real dataset without using any annotations.
Smoothing. We added a smoothing term L_s to the objective that is meant to make the generated meshes smoother. It is defined as a penalty on the disagreement between the normals of neighboring triangles,

L_s = sum_{(i,j)} |n_i - n_j|^2,

where i and j are indices of neighboring triangles and n_i and n_j are the normal vectors of those triangles. During training we optimize the GAN objective (3) plus a weighted smoothing term, with weight λ. We show results with different amounts of smoothing in Figure 5. We can see that without smoothing the 3D has high-frequency artifacts. When λ is too high, the system cannot learn the correct details of the 3D surface, but only an average ellipsoid. With a moderate amount of smoothing the system reduces the high-frequency artifacts and keeps the larger 3D features.
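A numpy sketch of such a normal-smoothness penalty. The exact functional form (here a squared difference of unit normals) and the precomputed neighbor-pair input are assumptions for illustration:

```python
import numpy as np

def triangle_normals(vertices, triangles):
    # Unit normal per triangle from the cross product of two edges.
    a, b, c = (vertices[triangles[:, i]] for i in range(3))
    n = np.cross(b - a, c - a)
    return n / np.linalg.norm(n, axis=1, keepdims=True)

def smoothness(vertices, triangles, neighbor_pairs):
    # Mean squared difference between unit normals of neighboring
    # triangles; zero for a locally flat mesh, large for spikes.
    n = triangle_normals(vertices, triangles)
    i, j = neighbor_pairs[:, 0], neighbor_pairs[:, 1]
    return float(((n[i] - n[j]) ** 2).sum(axis=1).mean())
```

For unit normals this squared difference equals 2(1 - n_i . n_j), so it penalizes the angle between neighboring faces, which is the effect described above.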
Ambiguities. Figure 6 shows the ambiguities discussed in Section 4. The hollow-mask ambiguity could be observed on a large proportion of generated samples. Because most faces in the CelebA dataset are close to the frontal view, there are only a few examples that provide self-occlusion cues. However, we noticed that the system tried to increase the size of the object to create better-looking hollow-masks. Thus we limited the object size by resizing it when its radius (the maximal radial distance of its vertices) was too large. This eliminated most cases of hollow-masks. We can also see a sample exhibiting the reference ambiguity in Figure 6. The 3D and the texture are plausible, but the reference frame of the object differs from the canonical one.
Autoencoder. Figure 7 shows results with two settings of our autoencoder. The first one is trained by minimizing the autoencoder loss alone, while the other also includes the smoothing term. The smoothing removes most of the high-frequency artifacts of the 3D shape, but tends to generate only an average 3D shape.
Comparisons. In Figure 9 we compare our autoencoder to other methods that reconstruct faces from single images. Although the quality of our 3D shapes does not reach the state of the art, we do not use supervision, unlike all the other methods. Tran et al.  and Genova et al.  regress the parameters of the Basel face model , while Sela et al.  use synthetic data and MoFA  utilises face scans.
ShapeNet. ShapeNet consists of 3D models of object categories, with on average k models per category. We used renderings of the car category from distinct viewpoints, uniformly spaced around the objects at a fixed elevation. We rendered them at a pixel resolution. We made several changes to the system so it could learn from the ShapeNet data. We changed the order of rotations along the horizontal and vertical axes, so we could render the mesh in a full circle around the object. We set the background to a constant white, the same colour as the background of the rendered cars. We did not use the resizing technique to constrain the objects in a volume, as the hollow-mask ambiguity did not occur. Otherwise we used the same parameters as for the CelebA training. Samples from our generator are shown in Figure 8.
6 Conclusions
We have presented a method to build a generative model capable of learning the 3D surface of objects directly from a collection of images. Our method does not use annotations or prior knowledge about the 3D shapes in the image collection. The key principle that we use is that the generated 3D surface is correct if it can be used to generate realistic images from other viewpoints. To create new views from the generated 3D and texture we use a differentiable renderer and train our generator in an adversarial manner against a discriminator. Our experiments on reconstructing 3D and texture from real and synthetic images showed encouraging results.
-  M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein GAN. arXiv:1701.07875, 2017.
-  V. Blanz and T. Vetter. A morphable model for the synthesis of 3d faces. In Proceedings of the 26th annual conference on Computer graphics and interactive techniques, 1999.
-  A. Bora, E. Price, and A. G. Dimakis. AmbientGAN: Generative models from lossy measurements. In ICLR, 2018.
-  A. X. Chang, T. Funkhouser, L. Guibas, P. Hanrahan, Q. Huang, Z. Li, S. Savarese, M. Savva, S. Song, H. Su, J. Xiao, L. Yi, and F. Yu. ShapeNet: An Information-Rich 3D Model Repository. Technical Report arXiv:1512.03012 [cs.GR], 2015.
-  M. Gadelha, S. Maji, and R. Wang. 3d shape induction from 2d views of multiple objects. In International Conference on 3D Vision, 2017.
-  K. Genova, F. Cole, A. Maschinot, A. Sarna, D. Vlasic, and W. T. Freeman. Unsupervised training for 3d morphable model regression. In CVPR, 2018.
-  T. Gerig, A. Morel-Forster, C. Blumer, B. Egger, M. Luthi, S. Schönborn, and T. Vetter. Morphable face models-an open framework. In Automatic Face & Gesture Recognition. IEEE, 2018.
-  I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, 2014.
-  R. L. Gregory. The intelligent eye. Weidenfeld & Nicolson, 1970.
-  I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. C. Courville. Improved training of wasserstein gans. In NIPS, 2017.
-  R. Hartley and A. Zisserman. Multiple view geometry in computer vision. Cambridge university press, 2003.
-  P. Henderson and V. Ferrari. Learning to generate and reconstruct 3d meshes with only 2d supervision. In BMVC, 2018.
-  B. K. Horn. Obtaining shape from shading information. MIT press, 1989.
-  T. Karras, T. Aila, S. Laine, and J. Lehtinen. Progressive growing of gans for improved quality, stability, and variation. In ICLR, 2018.
-  H. Kato, Y. Ushiku, and T. Harada. Neural 3d mesh renderer. In CVPR, 2018.
-  D. P. Kingma and M. Welling. Auto-encoding variational bayes. In ICLR, 2014.
-  Z. Liu, P. Luo, X. Wang, and X. Tang. Deep learning face attributes in the wild. In ICCV, 2015.
-  M. M. Loper and M. J. Black. Opendr: An approximate differentiable renderer. In ECCV, 2014.
-  D. G. Lowe. Distinctive image features from scale-invariant keypoints. International journal of computer vision, 60(2):91–110, 2004.
-  M. F. Mathieu, J. J. Zhao, J. Zhao, A. Ramesh, P. Sprechmann, and Y. LeCun. Disentangling factors of variation in deep representation using adversarial training. In NIPS, 2016.
-  D. Novotny, D. Larlus, and A. Vedaldi. Learning 3d object categories by looking around them. In ICCV, 2017.
-  S. E. Reed, Y. Zhang, Y. Zhang, and H. Lee. Deep visual analogy-making. In Advances in neural information processing systems, 2015.
-  M. Sela, E. Richardson, and R. Kimmel. Unrestricted facial geometry reconstruction using image-to-image translation. In ICCV, 2017.
-  A. Szabó, Q. Hu, T. Portenier, M. Zwicker, and P. Favaro. Understanding degeneracies and ambiguities in attribute transfer. In ECCV, 2018.
-  A. Tewari, M. Zollhöfer, H. Kim, P. Garrido, F. Bernard, P. Pérez, and C. Theobalt. Mofa: Model-based deep convolutional face autoencoder for unsupervised monocular reconstruction. In ICCV, 2017.
-  C. Tomasi and T. Kanade. Shape and motion from image streams under orthography: a factorization method. International Journal of Computer Vision, 9(2):137–154, 1992.
-  A. T. Tran, T. Hassner, I. Masi, and G. Medioni. Regressing robust and discriminative 3d morphable models with a very deep neural network. In CVPR, 2017.
-  J. Wu, C. Zhang, T. Xue, B. Freeman, and J. Tenenbaum. Learning a probabilistic latent space of object shapes via 3d generative-adversarial modeling. In NIPS, 2016.
-  T. Zhou, M. Brown, N. Snavely, and D. G. Lowe. Unsupervised learning of depth and ego-motion from video. In CVPR, 2017.