Unsupervised 3D Shape Learning from Image Collections in the Wild

11/26/2018 ∙ by Attila Szabó, et al. ∙ Universität Bern

We present a method to learn the 3D surface of objects directly from a collection of images. Previous work achieved this capability by exploiting additional manual annotation, such as object pose, 3D surface templates, temporal continuity of videos, manually selected landmarks, and foreground/background masks. In contrast, our method does not make use of any such annotation. Rather, it builds a generative model, a convolutional neural network, which, given a noise vector sample, outputs the 3D surface and texture of an object and a background image. These three components, combined with an additional random viewpoint vector, are then fed to a differentiable renderer to produce a view of the sampled object and background. Our general principle is that if the output of the renderer, the generated image, is realistic, then its input, the generated 3D and texture, should also be realistic. To achieve realism, the generative model is trained adversarially against a discriminator that tries to distinguish between the output of the renderer and real images from the given data set. Moreover, our generative model can be paired with an encoder and trained as an autoencoder, to automatically extract the 3D shape, texture and pose of the object in an image. Our trained generative model and encoder show promising results on both real and synthetic data, which demonstrate for the first time that fully unsupervised 3D learning from image collections is possible.

1 Introduction

In computer vision a fundamental task is to interpret the content of images as part of a 3D world. However, this task proves to be extremely challenging, because images are only 2D projections of the 3D world, and thus incomplete observations. To compensate for this deficiency, one can use multiple views of the same scene gathered simultaneously, as in multiview stereo [11], or over time, as in structure from motion [26]. These approaches are able to combine the information in multiple images through 2D landmark correspondences, because they share the same instance of the world. It is interesting to notice that no additional information other than the images themselves and a model of image formation is needed to extract 3D information and the pose of the camera.

In this paper we explore a direct extension of this approach to the case where we are given images that never depict the same object twice and aim to reconstruct the unknown 3D surface of the objects as well as their texture. We consider the case where only a collection of images is available, with no additional annotation or prior knowledge, for example in the form of a 3D template. Unlike multiview stereo and structure from motion, in this case pixel-based correspondence between different images cannot be established directly.

At the core of our method is the design of a generative model that learns to map images to a 3D surface, a texture and a background image. Then, we use a renderer to generate views given the 3D surface, texture, background image and a randomly sampled viewpoint. We postulate that if the three components are realistic, then the rendered images from arbitrary viewpoints (within a suitable distribution) will also be realistic. To assess realism, we use a discriminator trained in an adversarial fashion [8]. Therefore, by assuming that the samples cover a sufficient distribution of viewpoints, the generative model should automatically learn 3D models from the data.

We also train an encoder by pairing it with the generative model and the renderer as an autoencoder. In this training the encoder, combined with the fixed generator, learns to estimate the 3D surface, the texture, and the viewpoint of the object in an input image. We formally prove that, under suitable assumptions, the proposed framework can successfully build such a generative model from images alone, without additional manual annotation. Finally, we demonstrate our fully unsupervised method on both real and synthetic data.

2 Related work

Traditional 3D reconstruction techniques require multiple views of the objects [26, 11]. They use hand-crafted features [19] to match key-points, and they exploit the 3D geometry to estimate their locations. In contrast, 3D reconstruction from a single image is a much more ambiguous problem. Recent methods address this problem by learning the underlying 3D from video sequences [29], or by using additional image formation assumptions, such as shape-from-shading models [13]. The direct mapping from an image to a depth map can be learned from real data or from synthetic images, as done by Sela et al. [23]. As in structure from motion, Novotny et al. [21] and Zhou et al. [29] show that it is not necessary to use human annotations to learn the 3D reconstruction of a scene, as long as video sequences of the same scene are given.

In this work we focus on learning a generative model. 3D morphable models (3DMM) [2, 7] are trained with high-quality face scans and provide a strong template for face reconstruction and recognition. Tran et al. [27] and Genova et al. [6] train neural networks for regressing the parameters of 3DMMs. Model-based Face Autoencoders (MoFA) [25] and Genova et al. [6] only use unlabelled training data, but they rely on existing models that were built with supervision. Therefore, for a different object category, these methods require a new pre-training of the 3DMM and knowledge of what 3D objects they need to reconstruct, while our method applies directly to any category without prior knowledge of the 3D shape of the objects.

Unlike 3DMMs, Generative Adversarial Nets (GAN) [8] and Variational Autoencoders (VAE) [16] do not provide interpretable parameters, but they are very powerful and can be trained in a fully unsupervised manner. In recent years they have improved significantly [1, 10, 14]. 3DGANs [28] are used to generate 3D objects with 3D supervision. It is possible, however, to train 3D generators with GANs by using only 2D images and differentiable renderers similar to the Neural Mesh Renderer [15] or OpenDR [18]. PrGAN [5] learns a voxel-based representation with a GAN, and Henderson and Ferrari [12] train surface meshes using a VAE. Both are limited to synthetic data, as they do not model the background; this can be interpreted as using silhouettes as a supervision signal. In contrast, we use only 2D image collections, learn a 3D mesh with texture, and model the background as well. Both PrGAN [5] and our method are special cases of AmbientGAN [3]. We extend its theory to the case of 3D reconstruction and describe failure modes, including the hollow-mask illusion [9] and the reference ambiguity [24].

Our approach can also be interpreted as disentangling the 3D and the viewpoint factors. Reed et al. [22] solved that task with full supervision using image triplets. They utilised an autoencoder to reconstruct an image from the mixed latent encodings of two other images. Mathieu et al. [20] and Szabó et al. [24] use only image pairs that share an attribute, thus reducing the supervision with the help of GANs. By using only a standard image formation model (projective geometry) and a prior on the viewpoint distribution in the dataset, we demonstrate the disentangling of the 3D from the viewpoint and the background for the case of independently sampled input images.

3 Unsupervised Learning of 3D Shapes

Figure 1: The GAN (a) and the autoencoder (b) training schemes. First the generator G and the discriminator D are trained. Then, the encoder E is trained with a fixed generator G. The renderer R has no trainable parameters.

We are interested in building a mapping from an image to its 3D model, texture, background image, and viewpoint. We call all these outputs the representation and, when needed, we distinguish the first three outputs, the scene representation, from the viewpoint, the camera representation. These outputs are then used by a differentiable renderer R to reconstruct an image. Thus, the mapping of images to the representation, followed by the renderer, can be seen as an autoencoder. We break the mapping from images to the representation into two steps: first, we encode images to a latent vector $z$ (with Gaussian distribution) and the camera representation $v$, and second, we decode the vector $z$ to the scene representation (see Figure 1(b)). The first part is implemented by an encoder E, while the second is implemented via a generator G. In our approach we first train the generator in an adversarial fashion against a discriminator D by feeding Gaussian samples as input (see Figure 1(a)). The objective of the generator is to learn to map Gaussian samples to scene representations that result in realistic renderings for random viewpoints. Therefore, during training we also use random samples for the viewpoints fed to the renderer. In a second step we freeze the generator and train the encoder to map images to latent vectors and viewpoints that allow the generator and the renderer to reconstruct the input image. The encoder can be seen as the general inverse mapping of the generator. However, it is also possible to construct a per-sample inverse of the generator-renderer pair by solving a direct optimization problem. We implement all the mappings E, G, and D with neural networks, and we assume they are continuous throughout the paper. We optimize the GAN objective using WGAN with gradient penalty [10]. The chosen representation and all these steps are described in detail in the next sections.

At the core of our method is the principle that the learned representations will be correct if the rendered images are realistic. This constraint is captured by the adversarial training. We also show analytically that, under suitable assumptions, this constraint is sufficient to recover the correct representation.

3.1 Enforcing Image Realism through GAN

In our approach, we train a generator G to map Gaussian samples $z \sim \mathcal{N}(0, I)$, where $I$ is the identity matrix, to 3D models (and we assume that the dimensionality of the samples is sufficient to represent the complexity of the given dataset). We also assume that the viewpoints $v$ are sampled according to a known viewpoint distribution $p_v$. Then, the GAN loss is

$$\mathcal{L}_{\text{GAN}}(G, D) = \mathbb{E}_{x \sim p_x}\big[D(x)\big] - \mathbb{E}_{z \sim \mathcal{N}(0, I),\, v \sim p_v}\big[D(R(G(z), v))\big], \qquad (1)$$

where $\hat{x} = R(G(z), v)$ are the generated fake images and $x \sim p_x$ are the real data samples. Following Arjovsky et al. [1], we optimize the objective above subject to the additional constraints

$$\|D\|_L \le 1, \qquad (2)$$
$$\mathrm{scale}(G(z)) \le s_{\max}, \qquad (3)$$

where $\|\cdot\|_L$ is the Lipschitz norm and $\mathrm{scale}(G(z))$ denotes the scale of the 3D model returned by G. The scale constraint stabilizes the training of the generator, especially at the beginning of the training.
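To make the adversarial objective concrete, the following is a minimal PyTorch-style sketch of the critic and generator losses with the renderer in the loop. The callables `G`, `D`, `render`, and `sample_viewpoint` are placeholders, not the exact implementation, and the gradient-penalty weight follows the common WGAN-GP default.

```python
import torch

def critic_loss(D, G, render, sample_viewpoint, real_images, z_dim, gp_weight=10.0):
    """WGAN-GP critic loss with the renderer in the loop (a sketch)."""
    n = real_images.shape[0]
    z = torch.randn(n, z_dim, device=real_images.device)        # Gaussian latent samples
    v = sample_viewpoint(n)                                      # viewpoints from the known prior p_v
    fake_images = render(G(z), v)                                # scene representation -> rendered image
    wasserstein = D(fake_images).mean() - D(real_images).mean()  # critic maximizes real - fake scores
    # Gradient penalty on random interpolates enforces the Lipschitz constraint (2).
    eps = torch.rand(n, 1, 1, 1, device=real_images.device)
    x_hat = (eps * real_images + (1 - eps) * fake_images.detach()).requires_grad_(True)
    grad = torch.autograd.grad(D(x_hat).sum(), x_hat, create_graph=True)[0]
    penalty = ((grad.flatten(1).norm(dim=1) - 1.0) ** 2).mean()
    return wasserstein + gp_weight * penalty

def generator_loss(D, G, render, sample_viewpoint, n, z_dim, device):
    """The generator tries to make rendered samples score high under the critic."""
    z = torch.randn(n, z_dim, device=device)
    v = sample_viewpoint(n)
    return -D(render(G(z), v)).mean()
```

The key point is that only the generator is trainable in this stage; the renderer is a fixed, differentiable function that the gradients pass through.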

3.2 Inverting the Generator-Renderer Pair

In this section we describe a per-sample inversion of the generator and renderer. That is, given a data sample $x$ we would like to find the input vector $z$ and viewpoint $v$ that, once fed to the trained generator G and the renderer R, return the image $x$. For simplicity, we restrict our search of the viewpoint to the support of the viewpoint distribution. We denote the support of the distribution $p_u$ of a random variable $u$ as $\mathrm{supp}(p_u) = \overline{\{u : p_u(u) > 0\}}$, where $\overline{A}$ denotes the closure of a set $A$. We formulate this inverse mapping for a particular data sample $x$ by minimizing the L2 norm

$$(z^*, v^*) = \arg\min_{z,\; v \in \mathrm{supp}(p_v)} \;\|x - R(G(z), v)\|_2^2. \qquad (4)$$

Finally, the reconstructed 3D model of $x$ is $G(z^*)$.
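A minimal sketch of this per-sample inversion using gradient descent is shown below. The callables `G` and `render` and the bound `v_max` on the two Euler angles are assumptions standing in for the trained generator, the renderer and the support of the viewpoint distribution.

```python
import torch

def invert_sample(x, G, render, z_dim, v_max, steps=500, lr=0.05):
    """Estimate (z*, v*) for a single image x by minimizing ||x - R(G(z), v)||^2 (Eq. 4)."""
    z = torch.zeros(1, z_dim, requires_grad=True)
    v = torch.zeros(1, 2, requires_grad=True)            # two Euler angles
    opt = torch.optim.Adam([z, v], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((x - render(G(z), v)) ** 2).sum()
        loss.backward()
        opt.step()
        with torch.no_grad():
            v.clamp_(-v_max, v_max)                      # keep the viewpoint inside supp(p_v)
    with torch.no_grad():
        s_star = G(z)                                    # reconstructed scene representation of x
    return z.detach(), v.detach(), s_star
```

In practice such a non-convex optimization benefits from multiple restarts, which is another reason to prefer the learned encoder described next.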

3.3 Extracting Representations from Images

A more general and efficient way to extract the representation from an image is to train an encoder together with the generator-renderer pair, where the generator has been pre-trained and is fixed. The training enforces an autoencoding constraint, so that the input image is mapped to a representation and then, through the renderer, back to the same image. In this step, we train only an encoder E to learn the mapping from an image $x$ to two outputs: a latent vector $\hat{z}$ and a viewpoint $\hat{v}$ that the generator and the renderer can map back to $x$. The autoencoder loss is therefore

$$\mathcal{L}_{\text{AE}}(E) = \mathbb{E}_{x}\big[\|x - \hat{x}\|_2^2\big], \qquad (5)$$

where the estimated latent vector and viewpoint are $(\hat{z}, \hat{v}) = E(x)$, the estimated 3D model is $G(\hat{z})$ and the estimated image is $\hat{x} = R(G(\hat{z}), \hat{v})$. Finally, to train the encoder E we minimize the objective

$$E^* = \arg\min_E \; \mathcal{L}_{\text{AE}}(E). \qquad (6)$$
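The encoder training stage can be sketched as follows; `E`, `G` and `render` are placeholder modules/callables under the assumption that E returns a latent vector and a viewpoint per image.

```python
import torch

def train_encoder(E, G, render, data_loader, epochs=10, lr=1e-4):
    """Train the encoder E against the frozen generator G and the renderer (Eqs. 5-6)."""
    for p in G.parameters():
        p.requires_grad_(False)                 # the generator stays fixed in this stage
    opt = torch.optim.Adam(E.parameters(), lr=lr)
    for _ in range(epochs):
        for x in data_loader:
            z_hat, v_hat = E(x)                 # latent scene code and camera representation
            x_hat = render(G(z_hat), v_hat)     # reconstruct the input image
            loss = ((x - x_hat) ** 2).mean()    # autoencoder loss, Eq. (5)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return E
```

Freezing G means the reconstruction error only shapes the encoder, so the generator's learned 3D prior is preserved.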

3.4 3D Representation and Camera Models

In this section we describe the chosen representation and image formation model. This representation affects the output format of the generator G and also the design of the differentiable renderer R. We now detail the contents of the scene representation: the object surface $S$, the object texture $T$ and the background image $B$.

3D Surface and Texture. Currently, our representation considers a single 3D object and its texture. The object surface consists of a mesh with a fixed number of triangles. The mesh is given as a list of vertices and a list of triangles; the latter is fixed and consists of triplets of vertex indices. The vertices have associated RGB colors and 3D coordinates indicating their positions in a global reference frame. Therefore, $S$ is a vector of 3D coordinates and $T$ is a vector (of the same size) of RGB values. We illustrate this representation graphically in Figure 2.

Figure 2: Illustration of the 3D and texture representation.

Background. We explicitly model the background behind the object to avoid the need for supervision with silhouettes/masks. Also, we move the background when we change the viewpoint. This ensures that the generator avoids learning trivial representations. For example, if we used a static background, the generator G could learn to place the object outside the field of view of the camera and simply map the whole input image to the background texture. In our representation we fix the 3D coordinates of the background on a sphere centered around the origin and only learn its texture. To image the background we also approximate the projection on the image plane with a projection on a spherical image plane (a small solid angle), so that the resolution of the background does not change across the image. With this model, viewpoint changes, i.e., rotations about the origin (we only consider two angles), correspond to 2D shifts of the background texture $B$. Also, to avoid issues with the resolution of the background, we simulate a different scaling of the background rotation compared to the object rotation. This results in a much larger field of view for the background than for the object. Finally, by using the assumption that the objects are around the center of the image, the generator cannot learn to place the object on the background, because a change in the viewpoint would move the background and with it the object, away from the center of the image.
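Because the background lives on a sphere centered at the origin and we only consider two rotation angles, a viewpoint change reduces to a 2D shift of the background texture. A toy numpy sketch of this mapping is given below; the background field of view and the rotation scaling factor are illustrative assumptions, not the trained settings.

```python
import numpy as np

def shift_background(bg_texture, yaw_deg, pitch_deg, bg_fov_deg=180.0, scale=0.3):
    """Approximate a camera rotation by a 2D shift of the spherical background texture."""
    h, w, _ = bg_texture.shape
    dx = int(round(scale * yaw_deg / bg_fov_deg * w))    # horizontal shift in pixels
    dy = int(round(scale * pitch_deg / bg_fov_deg * h))  # vertical shift in pixels
    return np.roll(bg_texture, shift=(dy, dx), axis=(0, 1))
```

The smaller `scale` mimics the slower background rotation, i.e., the much larger background field of view mentioned above.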

Camera Model and Viewpoint. Our camera model uses perspective projection. However, to minimize the distortion we use a large focal length (i.e., the distance between the image plane and the camera center). The image plane is placed at the origin. When we change the viewpoint, we rotate the camera around the origin and thus around its image plane. In this way the image resolution and distortions are quite stable for a wide range of viewpoints. We only consider two rotation angles in Euler coordinates, around the horizontal and vertical axes.
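A small numpy sketch of this camera model under the stated assumptions (two Euler angles, image plane at the origin, camera center at a distance equal to the focal length) is given below; the focal length value and the rotation conventions are illustrative.

```python
import numpy as np

def project_vertices(verts, yaw, pitch, focal=10.0):
    """Project (N, 3) vertices after rotating the camera about the origin by two
    Euler angles (in radians). Image plane at z = 0, camera center at z = -focal."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    R_yaw = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    R_pitch = np.array([[1.0, 0.0, 0.0], [0.0, cp, -sp], [0.0, sp, cp]])
    v_cam = verts @ (R_pitch @ R_yaw).T       # rotating the camera = inverse-rotating the scene
    t = focal / (focal + v_cam[:, 2])         # ray from the camera center through each vertex
    return np.stack([t * v_cam[:, 0], t * v_cam[:, 1]], axis=1)
```

With a large `focal`, the factor `t` stays close to one, which is exactly the low-distortion regime described above.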

3.5 Network Architectures

Generator. Based on the chosen representation, our generator consists of three sub-generators: one that learns the background texture $B$, a second that learns the object texture $T$ and a third that learns the object geometry $S$. The latent input vector $z$ is split into two parts, one for the background image and one for the object, since we generate the background and the object independently, but expect correlation between the object texture and its 3D shape. The sub-generators for the object texture and the object surface receive the object part of $z$ as input, while the sub-generator for the background image receives the background part. The outputs of all the sub-generators have the same size, since all have 3 channels and the same resolution. For the object and background texture sub-generators we use two separate models based on the architecture of Karras et al. [14].

The object surface is initially represented in spherical coordinates, where the azimuth and polar angles are defined on a uniform tessellation and we only recover the radial distance. Learning spherical coordinates helps to keep the resolution of the mesh consistent during training, in contrast to learning the 3D coordinates directly. The radial distance is obtained as a linear combination of basis radial distances and an average radial distance. The object surface sub-generator recovers both the coefficients of the linear combination and the basis radial distances. While the basis is represented directly as network parameters, the coefficients are obtained through a fully connected layer applied to the input. To ensure a smooth convergence, we use a redundant coarse-to-fine basis. The radial distance produced by the object surface sub-generator can be written as

$$r = \bar{r} + \sum_{l} U_l\Big(\sum_i \alpha_{l,i}\, B_{l,i}\Big), \qquad (7)$$

where $\alpha_{l,i}$ is the $i$-th output of the fully connected layer of the sub-generator, $l$ is the level of coarseness of the basis elements $B_{l,i}$, which corresponds to increasingly fine resolutions, $U_l$ is a bilinear interpolation function that maps the $l$-th scale to the final resolution, and $\bar{r}$ is the average radial distance. Although this representation of the radial distance is redundant (the finest resolution alone can fully describe $r$), we find that the hierarchical parametrization helps to stabilize the training. Finally, the estimated radial distance $r$, together with the azimuth and polar angles of the tessellation, is mapped to 3D coordinates via the spherical coordinate transform and then given as input to the renderer.
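A numpy sketch of this coarse-to-fine radial parametrization and of the final spherical-to-Cartesian mapping is given below. The use of a learned mean radius, the square per-level resolutions and `scipy` upsampling are assumptions consistent with the description above, not the exact implementation.

```python
import numpy as np
from scipy.ndimage import zoom          # stands in for the bilinear upsampling U_l

def radial_distance(coeffs, bases, mean_radius, out_res):
    """Eq. (7), sketched: a mean radius plus upsampled, coefficient-weighted basis maps.
    coeffs[l] is a weight vector, bases[l] a (k_l, r_l, r_l) array of basis radial maps."""
    r = np.full((out_res, out_res), mean_radius, dtype=np.float64)
    for alpha_l, B_l in zip(coeffs, bases):
        level = np.tensordot(alpha_l, B_l, axes=1)       # weighted basis combination at scale l
        r += zoom(level, out_res / level.shape[0], order=1)
    return r

def spherical_to_cartesian(r):
    """Map a radial map indexed by the azimuth/polar angles of a fixed tessellation to 3D."""
    res_p, res_a = r.shape
    azimuth = np.linspace(0.0, 2.0 * np.pi, res_a, endpoint=False)
    polar = np.linspace(1e-3, np.pi - 1e-3, res_p)
    az, po = np.meshgrid(azimuth, polar)                 # (res_p, res_a) angle grids
    x = r * np.sin(po) * np.cos(az)
    y = r * np.sin(po) * np.sin(az)
    z = r * np.cos(po)
    return np.stack([x, y, z], axis=-1)                  # (res_p, res_a, 3) vertex grid
```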

Discriminator and Encoder. The discriminator architecture is the same as in [14]. We always render the images at full resolution and then downsample them to match the expected input size of the discriminator during the training with growing resolutions. The encoder architecture is also the same as the discriminator's, with small modifications: the output vector size is increased, and two small neural networks are attached to produce the two encoder outputs. The latent vectors are estimated with a fully connected layer. The Euler angles are discretized and their probabilities are estimated by a fully connected layer and a softmax; a continuous estimate is then computed by taking the expectation.
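The viewpoint head of the encoder can be sketched as follows, with one such head per Euler angle; the number of bins and the angle range here are illustrative, not the trained settings.

```python
import torch
import torch.nn as nn

class ViewpointHead(nn.Module):
    """Continuous angle estimated as the expectation over discretized bins (a sketch)."""
    def __init__(self, feat_dim, n_bins=36, angle_range=(-90.0, 90.0)):
        super().__init__()
        self.fc = nn.Linear(feat_dim, n_bins)
        # Fixed bin centers covering the assumed angle range (degrees).
        self.register_buffer("centers", torch.linspace(angle_range[0], angle_range[1], n_bins))

    def forward(self, features):
        probs = torch.softmax(self.fc(features), dim=-1)   # probability of each angle bin
        return (probs * self.centers).sum(dim=-1)          # expected angle, fully differentiable
```

Taking the expectation over bins keeps the angle estimate continuous and differentiable, which is needed for the reconstruction loss to back-propagate through the renderer.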

Figure 3: Blurred renderer. The leftmost image illustrates how the triangles are blurred at the boundaries. The other three images show failure cases of the non-blurred renderer. In this case, spikes and sails tend to appear on the 3D shape.

3.6 Differentiable Renderer

The renderer takes as input the representation from G, i.e., the surface $S$, the RGB colors $T$ and the background image $B$, together with the viewpoint $v$ of the camera. For simplicity, we do not model surface and light interaction with shading, reflection, shadow or interreflection models; we simply use a Lambertian model. Since each vertex in $S$ is associated with a color in $T$, we interpolate colors inside the triangles using barycentric coordinates. This color model allows the back-propagation of gradients from the renderer output to the vertex coordinates in $S$ and the colors in $T$. While the differentiation works for pixels inside triangles, at object boundaries and self-occlusions the gradients cannot be computed. While others approximate the gradients at boundaries [15, 18], we instead modify our rendering engine to draw blurred triangles, so that the gradients can be computed exactly. The blurring process is illustrated in Figure 3. At the boundaries and self-occlusions the triangles are extended and matted linearly against the background or the occluded part of the object. The effect of this blurred rendering is visually negligible, as it only affects a few pixels at the boundaries. However, we found that it contributes substantially to the stability of the training. In contrast, the non-blurred renderer often generates spikes and sails, which make the training unstable. Examples of this behavior are shown in Figure 3.
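To illustrate the color model, here is a small numpy sketch of barycentric color interpolation for a single pixel and triangle, with a linear alpha ramp at the triangle boundary that stands in for the blurred matting; the blur width is an illustrative parameter and the real renderer of course rasterizes whole images with depth ordering.

```python
import numpy as np

def shade_pixel(p, tri_xy, tri_rgb, blur=0.02):
    """Color and matte for pixel p (2,) near a triangle with projected vertices
    tri_xy (3, 2) and per-vertex colors tri_rgb (3, 3)."""
    a, b, c = tri_xy
    denom = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
    w0 = ((b[1] - c[1]) * (p[0] - c[0]) + (c[0] - b[0]) * (p[1] - c[1])) / denom
    w1 = ((c[1] - a[1]) * (p[0] - c[0]) + (a[0] - c[0]) * (p[1] - c[1])) / denom
    w = np.array([w0, w1, 1.0 - w0 - w1])                # barycentric coordinates of p
    w_pos = np.clip(w, 0.0, 1.0)
    rgb = (w_pos / (w_pos.sum() + 1e-8)) @ tri_rgb       # barycentric color interpolation
    # Linear alpha ramp just outside the triangle instead of a hard inside test,
    # so the rendered color varies smoothly with the projected vertex positions.
    alpha = np.clip(w.min() / blur + 1.0, 0.0, 1.0)
    return rgb, alpha
```

The smooth `alpha` is what makes boundary pixels differentiable with respect to the vertex positions, which is the purpose of the blurred triangles.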

4 A Theory of 3D Generative Learning

In this section we give a theoretical analysis of our method. We prove that under reasonable conditions the generator G can output realistic 3D models. In Theorem 1 we adapt the theory of AmbientGAN [3] to our approach, as AmbientGAN uses the vanilla GAN formulation [8] while we use the Wasserstein GAN [1, 10]. We can provably obtain the 3D model of the object by inverting the generator G or by estimating it via the autoencoder. Our theory requires five main assumptions:

  1. The real images in the dataset are formed by the differentiable rendering engine R given the independent factors $s \sim p_s$ (the scene representation) and $v \sim p_v$ (the viewpoint). The images are formed deterministically as $x = R(s, v)$. This assumption is needed to guarantee that the generator with the renderer can perfectly model the data. Note that $p_s$ is assumed unknown;

  2. We know the viewpoint distribution $p_v$, so we can sample from it, but we do not know the viewpoint $v$ of any particular data sample $x$;

  3. The rendering engine R is bijective on the restricted domain $\mathrm{supp}(p_s) \times \mathrm{supp}(p_v)$, and we denote the inverse with $R^{-1}$, so that $R^{-1}(x) = (s, v)$ whenever $x = R(s, v)$. This property has to hold for any deterministic viewpoint or 3D estimator, otherwise the (single-image) 3D reconstruction task cannot be solved. Note that this assumption is also needed in fully supervised methods;

  4. There is a unique probability distribution $p_s$ of scene representations that induces the data distribution $p_x$ when $v \sim p_v$. This assumption is not true for 3D data in general, as 3D objects can have many symmetries. We will discuss and show ambiguities in the experimental section;

  5. Finally, we assume that the encoder, generator and discriminator have infinite capacity, and that the training reaches the global optimum of both the GAN objective (1) and the autoencoder objective (6). Note that our first and second assumptions are a necessary condition for perfect training.

Now we show that the generator learns the 3D geometry of the scene faithfully.

Theorem 1.

When the generator adversarial training is perfect, i.e., the GAN loss in (1) reaches its global optimum, the generated scene representation distribution is identical to the real one, i.e., $G(z) \sim p_s$ with $z \sim \mathcal{N}(0, I)$.

Proof.

Since the training is perfect, the discriminator is also perfect. Then, the GAN loss is equal to the Wasserstein distance between $p_x$ and $p_{\hat{x}}$, where $p_{\hat{x}}$ is the distribution of the generated fake data $\hat{x} = R(G(z), v)$. As the distance is zero, $p_x$ and $p_{\hat{x}}$ are identical. This implies that $G(z) \sim p_s$, with $z \sim \mathcal{N}(0, I)$ and $v \sim p_v$, as by assumption 4 only $p_s$ can induce the real distribution $p_x$. ∎

Next, we show that the ideal generator G obtained in Theorem 1 above can be inverted for a particular data sample by solving (4).

Theorem 2.

Suppose that the ideal generator G in Theorem 1 is given. Then, when the objective in Problem (4) achieves the global minimum, i.e., $\|x - R(G(z^*), v^*)\|_2^2 = 0$, the estimated scene representation and viewpoint are correct, i.e., $G(z^*) = s$ and $v^* = v$.

Proof.

Let us denote $s^* = G(z^*)$. G is continuous and, by Theorem 1, $G(z) \sim p_s$; therefore $s^* \in \mathrm{supp}(p_s)$ (otherwise a neighborhood of $z^*$ with nonzero probability would be mapped outside $\mathrm{supp}(p_s)$). Because $x = R(s^*, v^*)$ with $v^* \in \mathrm{supp}(p_v)$, and R is invertible on $\mathrm{supp}(p_s) \times \mathrm{supp}(p_v)$, the inverses are $s^* = s$ and $v^* = v$. ∎

Lastly, we show that the encoder E combined with the ideal generator G reconstructs the correct surface of the object in the input image.

Figure 4: Samples from our generator on CelebA. The leftmost image shows the frontal view with background. The second shows the normal map, coloured according to the reference sphere in the bottom right. The remaining five images show views across the sampled range of rotation angles.
Figure 5: Smoothing. From top to bottom the smoothing coefficient $\lambda_S$ is increased, with the top row using no smoothing. The images are rendered as in Figure 4.
Figure 6: Ambiguities. From top to bottom we show a correct sample, one with the hollow-mask ambiguity, and one with the reference ambiguity.
Theorem 3.

When the training is perfect (both the GAN loss (1) and the autoencoder loss (5) reach their global optima), the estimated scene representation and viewpoint are correct, i.e., $G(\hat{z}) = s$ and $\hat{v} = v$.

Proof.

Because E, G and R and their compositions are continuous, the reconstruction $\hat{x} = R(G(\hat{z}), \hat{v})$ equals $x$ for every $x \in \mathrm{supp}(p_x)$, otherwise the autoencoder loss (5) would be positive. The estimated scene representation satisfies $G(\hat{z}) \in \mathrm{supp}(p_s)$, because G is the ideal generator of Theorem 1. Finally, the invertibility of R on $\mathrm{supp}(p_s) \times \mathrm{supp}(p_v)$ implies that $G(\hat{z}) = s$ and $\hat{v} = v$. ∎

In general, assumption 4 does not hold for the 3D reconstruction problem. Depending on the dataset and the symmetries of the objects, ambiguities can arise. One notable failure case is the hollow-mask illusion [9]: an inverted mask (a concave face) can look realistic, even though it is far from the true geometry. This failure mode can be overcome when the range of viewpoints is large enough, so that self-occlusions give away the depth information. In our experiments we observe cases where the system learns inverted faces, but this ambiguity never appears on ShapeNet objects, as they are rendered from a large range of viewpoints. Another example is the reference ambiguity [24]. Two different 3D models can both be realistic, but their reference frames are not necessarily the same, i.e., when they are rendered with the same numerical values of the viewpoint angles, they are not aligned. We show examples of both of these ambiguities in Figure 6.

It is important to note that the problems mentioned above are different from the ill-posedness of the single-image 3D reconstruction problem, i.e., that many different 3D objects can produce the same 2D projection. When the viewpoint distribution is known, we can render a candidate 3D shape from a different viewpoint, which immediately reveals whether it is realistic or not. Ill-posedness is only a problem if the viewpoint distribution is not known. If one had to estimate the viewpoint distribution as well, a trivial failure mode could emerge: the encoder and generator would learn to map images to a flat surface with a fixed viewpoint, and the textures would match the 2D inputs exactly.

Figure 7: Reconstructions with two different autoencoders: (a) only the autoencoder loss is optimized, (b) the autoencoder and smoothing losses are optimized. The images show, from left to right: input, estimated image, background texture, object texture, object 3D, and renderings from additional viewpoints. The grey portions of the texture are not used in our generator model.
Figure 8: Samples from our generator on ShapeNet. Each row shows a rendered car from different viewpoints. The 3D normals are shown on the right side.
Figure 9: Comparisons with other methods on face reconstruction: (a) input images, (b) Genova et al. [6], (c) Tran et al. [27], (d) MoFA [25], (e) MoFA + expressions [25], (f) Sela et al. [23], and (g), (h) two variants of ours. The other methods use much stronger supervision or pre-trained models; in contrast, ours is fully unsupervised.

5 Experiments

We trained the generator G on the CelebA [17] and ShapeNet [4] datasets, and our autoencoder on CelebA. We studied the effect of a regularization term on the surface normals, showed observations of the hollow-mask and reference ambiguities, and compared our method to other face reconstruction methods.

CelebA. CelebA contains roughly 200k colored photos of faces, a subset of which are validation and test images. We used the images at a fixed pixel resolution. We randomly sampled the Euler angles uniformly within fixed ranges for rotations around the horizontal and vertical axes; we did not rotate around the camera axis. For the autoencoder training we increased these bounds and also allowed rotations around the camera axis.

Figure 4 shows samples from our generator trained on CelebA. For better viewing we rendered them at a higher pixel resolution. We achieve plausible textures and 3D shapes: the reconstruction of the nose, brow ridge and lips is clearly visible. Some smaller details of the 3D are not precise; we can observe some high-frequency artifacts, and the sides of the face have errors as well. However, our results are promising, given that this is the first attempt at generating colored 3D meshes on a real dataset without using any annotations.

Smoothing. We added a smoothing term to the objective that is meant to make the generated meshes smoother. It is defined as

$$\mathcal{L}_{S} = \sum_{(i,j)} \|n_i - n_j\|_2^2, \qquad (8)$$

where $i$ and $j$ are indices of neighboring triangles and $n_i$ and $n_j$ are the normal vectors of those triangles. During training we optimize $\mathcal{L}_{\text{GAN}} + \lambda_S \mathcal{L}_S$ instead of the plain GAN objective in (1). We show results with different amounts of smoothing in Figure 5. Without smoothing, the 3D has high-frequency artifacts. When $\lambda_S$ is too high, the system cannot learn the correct details of the 3D surface, but only an average ellipsoid. With a moderate amount of smoothing the system reduces the high-frequency artifacts and keeps the larger 3D features.
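A sketch of this smoothing term on a triangle mesh is given below; reading (8) as the squared difference of unit normals of edge-adjacent triangles is one natural interpretation of the description above, and the precomputed `neighbor_pairs` tensor is an assumption of this sketch.

```python
import torch

def smoothing_loss(verts, faces, neighbor_pairs):
    """Smoothing term of Eq. (8), sketched. verts: (V, 3) float tensor, faces: (F, 3)
    long tensor of vertex indices, neighbor_pairs: (P, 2) long tensor of indices of
    edge-adjacent triangles."""
    v0, v1, v2 = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]
    normals = torch.cross(v1 - v0, v2 - v0, dim=1)
    normals = normals / (normals.norm(dim=1, keepdim=True) + 1e-8)   # unit triangle normals
    n_i, n_j = normals[neighbor_pairs[:, 0]], normals[neighbor_pairs[:, 1]]
    return ((n_i - n_j) ** 2).sum(dim=1).mean()
```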

Ambiguities. Figure 6 shows the ambiguities discussed in Section 4. The hollow-mask ambiguity could be observed on a large proportion of the generated samples. Because most faces in the CelebA dataset are close to the frontal view, only a few examples provide self-occlusion cues. We also noticed that the system tried to increase the size of the object to create better-looking hollow masks. We therefore limited the object size by resizing it when its radius (the maximal radial distance of its vertices) was too large, which eliminated most cases of hollow masks. We can also see a sample exhibiting the reference ambiguity in Figure 6: the 3D and the texture are plausible, but the reference frame of the object differs from the canonical one.
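The resizing safeguard can be sketched as a simple rescaling of the object vertices whenever their maximal radial distance exceeds a threshold; the threshold value here is an illustrative assumption.

```python
import torch

def limit_object_size(verts, max_radius=1.0):
    """Shrink the mesh whenever its maximal radial distance exceeds max_radius,
    to discourage oversized hollow-mask solutions (a sketch)."""
    radius = verts.norm(dim=1).max()
    scale = torch.clamp(max_radius / (radius + 1e-8), max=1.0)   # only shrink, never enlarge
    return verts * scale
```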

Autoencoder. Figure 7 shows results for two settings of our autoencoder. The first is trained by minimizing the autoencoder loss (5) alone; the other also includes the smoothing term (8). The smoothing removes most of the high-frequency artifacts of the 3D shape, but tends to produce only an average 3D shape.

Comparisons. In Figure 9 we compare our autoencoder to other methods that reconstruct faces from single images. Although the quality of our 3D shapes does not reach the state of the art, we do not use supervision, unlike all the other methods. Tran et al. [27] and Genova et al. [6] regress the parameters of the Basel face model [7], while Sela et al. [23] use synthetic data and MoFA [25] utilises face scans.

ShapeNet. ShapeNet is a large repository of 3D models organized into object categories. We used renderings of the car category from a set of distinct viewpoints, uniformly spaced around the objects at a fixed elevation, and rendered them at a fixed pixel resolution. We made several changes to the system so that it could learn from the ShapeNet data. We changed the order of rotations around the horizontal and vertical axes, so that we could render the mesh in a full circle around the object. We set the background to a constant white, the same colour as the background of the rendered cars. We did not use the resizing technique to constrain the objects in a volume, as the hollow-mask ambiguity did not occur. Otherwise we used the same parameters as for the CelebA training. Samples from our generator are shown in Figure 8.

6 Conclusions

We have presented a method to build a generative model capable of learning the 3D surface of objects directly from a collection of images. Our method does not use annotation or prior knowledge about the 3D shapes in the image collection. The key principle that we use is that the generated 3D surface is correct if it can be used to generate realistic images from other viewpoints. To create new views from the generated 3D and texture we use a differentiable renderer and train our generator in an adversarial manner against a discriminator. Our experimental results on 3D and texture reconstruction from real and synthetic images are encouraging.

References

  • [1] M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein gan. arXiv:1701.07875, 2017.
  • [2] V. Blanz and T. Vetter. A morphable model for the synthesis of 3d faces. In Proceedings of the 26th annual conference on Computer graphics and interactive techniques, 1999.
  • [3] A. Bora, E. Price, and A. G. Dimakis. AmbientGAN: Generative models from lossy measurements. In ICLR, 2018.
  • [4] A. X. Chang, T. Funkhouser, L. Guibas, P. Hanrahan, Q. Huang, Z. Li, S. Savarese, M. Savva, S. Song, H. Su, J. Xiao, L. Yi, and F. Yu. ShapeNet: An Information-Rich 3D Model Repository. Technical Report arXiv:1512.03012 [cs.GR], 2015.
  • [5] M. Gadelha, S. Maji, and R. Wang. 3d shape induction from 2d views of multiple objects. In International Conference on 3D Vision, 2017.
  • [6] K. Genova, F. Cole, A. Maschinot, A. Sarna, D. Vlasic, and W. T. Freeman. Unsupervised training for 3d morphable model regression. In CVPR, 2018.
  • [7] T. Gerig, A. Morel-Forster, C. Blumer, B. Egger, M. Luthi, S. Schönborn, and T. Vetter. Morphable face models-an open framework. In Automatic Face & Gesture Recognition. IEEE, 2018.
  • [8] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, 2014.
  • [9] R. L. Gregory. The intelligent eye. Weidenfeld & Nicolson, 1970.
  • [10] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. C. Courville. Improved training of wasserstein gans. In NIPS, 2017.
  • [11] R. Hartley and A. Zisserman. Multiple view geometry in computer vision. Cambridge university press, 2003.
  • [12] P. Henderson and V. Ferrari. Learning to generate and reconstruct 3d meshes with only 2d supervision. 2018.
  • [13] B. K. Horn. Obtaining shape from shading information. MIT press, 1989.
  • [14] T. Karras, T. Aila, S. Laine, and J. Lehtinen. Progressive growing of gans for improved quality, stability, and variation. 2018.
  • [15] H. Kato, Y. Ushiku, and T. Harada. Neural 3d mesh renderer. In CVPR, 2018.
  • [16] D. P. Kingma and M. Welling. Auto-encoding variational bayes. In ICLR, 2014.
  • [17] Z. Liu, P. Luo, X. Wang, and X. Tang. Deep learning face attributes in the wild. In ICCV, 2015.
  • [18] M. M. Loper and M. J. Black. Opendr: An approximate differentiable renderer. In ECCV, 2014.
  • [19] D. G. Lowe. Distinctive image features from scale-invariant keypoints. International journal of computer vision, 60(2):91–110, 2004.
  • [20] M. F. Mathieu, J. J. Zhao, J. Zhao, A. Ramesh, P. Sprechmann, and Y. LeCun. Disentangling factors of variation in deep representation using adversarial training. In NIPS, 2016.
  • [21] D. Novotny, D. Larlus, and A. Vedaldi. Learning 3d object categories by looking around them. In ICCV, 2017.
  • [22] S. E. Reed, Y. Zhang, Y. Zhang, and H. Lee. Deep visual analogy-making. In Advances in neural information processing systems, 2015.
  • [23] M. Sela, E. Richardson, and R. Kimmel. Unrestricted facial geometry reconstruction using image-to-image translation. In ICCV, 2017.
  • [24] A. Szabó, Q. Hu, T. Portenier, M. Zwicker, and P. Favaro. Understanding degeneracies and ambiguities in attribute transfer. In ECCV, 2018.
  • [25] A. Tewari, M. Zollhöfer, H. Kim, P. Garrido, F. Bernard, P. Pérez, and C. Theobalt. Mofa: Model-based deep convolutional face autoencoder for unsupervised monocular reconstruction. In ICCV, 2017.
  • [26] C. Tomasi and T. Kanade. Shape and motion from image streams under orthography: a factorization method. International Journal of Computer Vision, 9(2):137–154, 1992.
  • [27] A. T. Tran, T. Hassner, I. Masi, and G. Medioni. Regressing robust and discriminative 3d morphable models with a very deep neural network. In CVPR, 2017.
  • [28] J. Wu, C. Zhang, T. Xue, B. Freeman, and J. Tenenbaum. Learning a probabilistic latent space of object shapes via 3d generative-adversarial modeling. In NIPS, 2016.
  • [29] T. Zhou, M. Brown, N. Snavely, and D. G. Lowe. Unsupervised learning of depth and ego-motion from video. In CVPR, 2017.

Supplementary Material