
GAN2X: Non-Lambertian Inverse Rendering of Image GANs

2D images are observations of the 3D physical world, formed from its geometry, material, and illumination components. Recovering these underlying intrinsic components from 2D images, also known as inverse rendering, usually requires a supervised setting with paired images collected from multiple viewpoints and lighting conditions, which is resource-demanding. In this work, we present GAN2X, a new method for unsupervised inverse rendering that only uses unpaired images for training. Unlike previous Shape-from-GAN approaches that mainly focus on 3D shapes, we make the first attempt to also recover non-Lambertian material properties by exploiting the pseudo paired data generated by a GAN. To achieve precise inverse rendering, we devise a specularity-aware neural surface representation that continuously models the geometry and material properties. A shading-based refinement technique is adopted to further distill information in the target image and recover more fine details. Experiments demonstrate that GAN2X can accurately decompose 2D images into 3D shape, albedo, and specular properties for different object categories, and achieves state-of-the-art performance for unsupervised single-view 3D face reconstruction. We also show its applications in downstream tasks including real image editing and lifting 2D GANs to decomposed 3D GANs.




1 Introduction

Natural images are formed as a function of the objects’ underlying physical attributes, such as geometry, material, and illumination. Estimating these object intrinsics from images is one of the core problems in computer vision and is often referred to as inverse rendering, i.e., the inverse process of rendering in computer graphics. It has wide applications in AR/VR and visual effects, such as relighting, material editing, and object pose editing.

A classic approach for inverse rendering is photometric stereo [1, 12], which requires multi-view and multi-lighting images of a scene to be captured with a light stage. These paired images provide sufficient information for estimating object intrinsics. However, the need for a sophisticated light-stage setup makes it difficult to apply to diverse object categories, especially in-the-wild objects like cars. To remove this constraint, another line of work aims at performing inverse rendering in an unsupervised or weakly-supervised manner [17, 16, 11, 52], where only unpaired 2D image collections or weak annotations are given. Despite great progress, these approaches often miss fine-grained geometry details and cannot recover specular properties due to the inherent ambiguity.

Figure 2: Images generated by a GAN [21] with different viewpoints and lighting conditions. The various specular highlight effects reveal the non-Lambertian material properties of the face.

While the lack of paired data is the main challenge of unsupervised inverse rendering, a continuous image distribution modelled by a GAN [13, 21] provides a possibility to create pseudo light-stage data. GANs implicitly mimic the image formation process by learning from unpaired image collections. Recent Shape-from-GAN works [31, 63] show that the pseudo paired data generated by a GAN can be used to reconstruct 3D shapes. However, these methods do not account for specular material properties, and their mesh-based representation limits the precision of the reconstructed 3D shapes. We believe the potential of GANs for inverse rendering has not been fully explored. For instance, Fig. 2 shows GAN-generated images with variations in not only viewpoint but also specular highlights, providing evidence that GANs implicitly capture the spatially-varying specular properties of the face.

Motivated by the above observation, in this work we go one step further and explore the problem of non-Lambertian inverse rendering of image GANs with finer details. In particular, we present a method that, for each image generated by a pretrained GAN, learns to predict its underlying 3D shape, material properties including albedo and specular components (i.e., specular intensity and shininess), and lighting condition. While the training is based on GAN-generated images, our method can also be applied to real images, as we show in experiments.

Concretely, to model high-quality 3D shape and non-Lambertian appearance, we devise an implicit neural surface representation that models the 3D surface, albedo, and specular properties. Rendering images from these intrinsics is conducted via differentiable volume rendering [15] and Phong shading [35]. Inspired by [31], we adopt an exploration-and-exploitation algorithm to generate pseudo multi-view and multi-lighting images from a pretrained GAN. The neural surface representation can then be optimized with a reconstruction loss on the pseudo paired data, along with regularization losses that facilitate training. However, the multi-view images generated by 2D GANs are not strictly 3D-consistent, leading to imprecise results with noticeable mismatch to the target image. To address this issue, we devise a shading-based refinement process that adapts our model to fully exploit information from the target image, effectively resolving the mismatch and recovering more fine details.

With the above designs, our approach, named GAN2X, can reconstruct high-quality 3D shape, albedo, and specular properties for different object categories, as shown in Fig. 1. The results show that image GANs do implicitly capture non-Lambertian object intrinsics, and thus provide a new way towards unsupervised inverse rendering. With such capability, GAN2X is also a powerful approach for unsupervised 3D shape learning from unpaired image collections. On the tasks of albedo and surface normal estimation and single-view 3D face reconstruction, GAN2X significantly outperforms existing unsupervised baselines. We also show promising results in downstream applications including image relighting and lifting 2D GANs to 3D with decomposed intrinsics.

Our contributions are summarized as follows: (1) We propose GAN2X, which achieves high-quality non-Lambertian inverse rendering using GANs pretrained on unpaired images only. Our work reveals the large potential of GANs for learning spatially-varying non-Lambertian material properties. (2) GAN2X naturally serves as a strong unsupervised 3D shape learning approach. We show state-of-the-art performance on unsupervised single-view 3D reconstruction of faces. (3) GAN2X enables a wide range of downstream applications, including lifting 2D GANs to 3D with decomposed intrinsics and photorealistic visual effects like relighting and material editing.

2 Related Work

2.1 Generative Adversarial Networks

We have witnessed great progress of Generative Adversarial Networks (GANs) in image synthesis since the first GAN model [13]. In particular, the StyleGAN family [20, 21, 19] has successfully produced photo-realistic and high-resolution imagery. However, the image synthesis process of GANs is usually treated as a black box, which lacks physical interpretability (e.g., 3D shape, material properties). There have been some works that extend GANs to enable 3D control [45, 10, 65, 46, 24, 38, 39], but such control is driven by the guidance of an external 3D morphable model [4] or a 3D mesh input. Different from these methods, our method does not require additional 3D geometry as input and can explicitly recover physical attributes including 3D shape and material properties.

Another line of work is 3D-aware GANs, which adopt 3D representations (e.g., voxel grids and neural fields) or their integration with 2D generative models to enable 3D-controllable image synthesis [40, 7, 30, 14, 8, 66, 53, 32, 43]. However, most of these methods do not explicitly model the reflectance and the shading process, resulting in suboptimal 3D geometry and a lack of control over lighting. While [32] models illumination explicitly, it assumes Lambertian reflectance, which does not account for specular highlights. Besides, the learned 3D shapes in [32] still lack fine-grained details due to the limited training resolution. In contrast, we make the first attempt to recover non-Lambertian material properties from pretrained GANs. Unlike previous works that adopt memory-consuming 3D representations to learn a 3D GAN, our work shows that 3D geometry and material are also readily obtainable from off-the-shelf 2D GANs, which are more efficient to train.

2.2 Supervised Inverse Rendering

Inverse rendering has been well studied given paired images of an object from multiple viewpoints and lighting conditions. A typical approach is to estimate object intrinsics by fitting photometric or geometric models to reconstruct the paired images. Conventional photometric stereo methods [1, 12] perform this with meshes, and the idea has recently been extended to implicit neural representations [5, 42, 62, 60]. These methods are object-specific, i.e., they do not generalize to unseen object instances. Some learning-based methods generalize by learning from multi-view data, enabling test-time reconstruction with sparse views or a single view [59, 3, 6, 58, 2, 23]. However, collecting multi-view and multi-lighting images is resource-demanding and difficult to apply to in-the-wild objects like cars. Some methods use synthetic data for training [26, 28, 41, 25, 37], but training on such data requires solving the domain-gap problem to generalize to real in-the-wild images. To avoid these limitations, in this work we focus on the unsupervised setting, where only unpaired image collections are available for training.

2.3 Unsupervised Inverse Rendering

In contrast to the supervised approaches, there has been a surge of interest in developing inverse rendering methods trained only on unpaired image collections. These methods predict the 3D geometry and appearance from the input image and use image-space losses computed between the input and the rendered reconstruction. Since this is an ill-posed problem, early approaches focused on human faces and bodies using 3D priors [48, 16], with some methods also learning components of the prior from videos [44, 47]. Several methods learn to reconstruct object shapes of general categories [11, 17, 55] using weak supervision like template shapes, masks, or hand-crafted priors (e.g., object symmetry and smoothness). There are a few attempts at learning shape as well as material and illumination from unpaired image collections. Wu et al. [52] use object symmetry and assume Lambertian materials. The follow-up work [51] does not restrict objects to be Lambertian, but requires them to have rotational symmetry. Similar to us, [31] and [63] also exploit GANs to reconstruct 3D shapes. GAN2Shape [31] assumes Lambertian reflectance and does not recover high-quality albedo, while [63] does not disentangle albedo and illumination. Concurrent to us, [50] relies on coarse 3D shape initialization to estimate object intrinsics and assumes globally shared specular intensity and shininess. In contrast, our approach does not rely on a coarse shape and recovers spatially-varying non-Lambertian material properties. In this work, we show that GANs implicitly capture object shape and material properties, and thus provide a new way for unsupervised inverse rendering with precise recovery of geometry and material.

3 Method

Figure 3: Method Overview. (a) In the exploration step, we use a convex shape prior to guide the GAN generator to produce projected images with various viewpoints and lighting conditions. (b) In the exploitation step, we leverage the projected images as pseudo paired data to optimize the underlying intrinsic components. The intrinsic components, including shape, albedo, and specular properties, are represented via implicit neural fields. Images can be rendered from this representation via volume rendering and Phong shading, which are naturally amenable to gradient-based optimization.

Our approach aims to recover the intrinsic components (3D shape, albedo, specular properties) at high quality for any image generated by an image GAN. In the following, we first provide some preliminaries on GANs. We then describe how we represent the intrinsic components and render images from them. Finally, we introduce our inverse rendering algorithm that distills the knowledge of the GAN to recover the intrinsic components.

Preliminaries on Generative Adversarial Networks (GANs). A GAN learns the data distribution via a min-max game between a generator G and a discriminator D [13]. After training on an image dataset, the generator can map a latent code z to an image. In this work, our study is based on StyleGAN2 [21], a classic GAN consisting of two parts. First, the latent code z is mapped to an intermediate latent code w via a mapping network. Then w is used to produce the output image with a synthesis network. We refer to the synthesis network as G in the following sections.

3.1 Shape and Material Model

Scene representation. In order to model object shape and material with high fidelity, we adopt implicit neural fields to represent these factors. Specifically, for any image I generated by the GAN, its 3D shape is represented by an MLP S: x ↦ s, where x ∈ ℝ³ is the coordinate of any point and s is its signed distance to the object surface. Thus, the surface of the object is the zero-level set {x | S(x) = 0}. The material properties are represented by another MLP M: x ↦ (a, k_s, α), where a is the diffuse albedo, k_s is the specular intensity, and α is the shininess. Unlike NeRF [29], which models view-dependent color, in our formulation the material properties are view-independent, while view-dependent effects are explicitly modeled with shading as introduced later. The conditioning of S and M on I is implemented by concatenating an embedding e of I with x as the input. The embedding can be obtained from 1) an additional encoder or 2) the latent code corresponding to I from the GAN. Their differences will be discussed later.
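To make the scene representation concrete, below is a minimal numpy sketch of the two implicit fields. The layer sizes, ReLU activations, and random initialization are illustrative placeholders, and the conditioning on the image embedding e is omitted; this is a sketch of the idea, not the paper's actual networks:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mlp(dims):
    """Randomly initialized MLP weights; stands in for the trained fields."""
    return [(rng.normal(0.0, 0.5, (m, n)), np.zeros(n))
            for m, n in zip(dims[:-1], dims[1:])]

def mlp(params, x):
    h = x
    for i, (W, b) in enumerate(params):
        h = h @ W + b
        if i < len(params) - 1:
            h = np.maximum(h, 0.0)  # ReLU on hidden layers
    return h

def sdf_normal(params, x, eps=1e-4):
    """Surface normal as the normalized finite-difference gradient of the SDF."""
    g = np.stack([(mlp(params, x + eps * e) - mlp(params, x - eps * e))[:, 0]
                  for e in np.eye(3)], axis=1) / (2 * eps)
    return g / (np.linalg.norm(g, axis=1, keepdims=True) + 1e-9)

# Shape field S: 3D point -> signed distance; the surface is its zero level set.
S = make_mlp([3, 64, 64, 1])
# Material field M: 3D point -> (RGB albedo a, specular intensity k_s, shininess).
M = make_mlp([3, 64, 64, 5])

x = rng.normal(size=(4, 3))  # a batch of query points
sdf = mlp(S, x)
albedo, spec_int, shininess = np.split(mlp(M, x), [3, 4], axis=1)
```

In the actual method these fields are trained jointly with the encoders; here they only illustrate the input/output structure.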

Apart from S and M, we also have a viewpoint encoder E_v that takes the image I as input and predicts the camera view v, and a lighting encoder E_l that predicts the lighting condition l = (d, k_a, k_d). Here d is the light direction, k_a is the ambient coefficient, and k_d is the diffuse coefficient. This scene representation allows us to render an image from any viewpoint or lighting condition via a rendering process Î = R(S, M, v, l). Next, we introduce R in detail.

Rendering. Solving the inverse rendering problem requires our model to render images from the above intrinsics in a differentiable and easy-to-optimize manner. We use volume rendering [15] and Phong shading [35] to achieve this. To render the color of a camera ray r(t) = o + t·u with near and far bounds t_n and t_f, we first render the albedo â, specular intensity k̂_s, shininess α̂, and surface normal n̂ via:

  â(r) = ∫ w(t) a(r(t)) dt,   k̂_s(r) = ∫ w(t) k_s(r(t)) dt,   α̂(r) = ∫ w(t) α(r(t)) dt,   (1)

  n̂(r) = ∫ w(t) n(r(t)) dt,   with n(x) = ∇S(x) / ‖∇S(x)‖,   (2)

where all integrals are taken over [t_n, t_f]. Here w(t) is the weight function for volume rendering, for which we use the normalized S-density following NeuS [49]. We then perform Phong shading to get the final color c:

  c = T( â (k_a + k_d max(⟨n̂, d⟩, 0)) + k̂_s max(⟨n̂, h⟩, 0)^α̂ ),   (3)

where h is the bisector of the angle between the viewing direction and the light direction d, and T is a tone-mapping function that ensures a more even brightness distribution. A limitation of this vanilla Phong shading is that it produces similar darkness at all surface areas facing away from the light source. We observe that such surfaces often tend to be darker for smaller ⟨n̂, d⟩ due to complex reflection in the environment. Thus, we optionally add a negative shading term proportional to min(⟨n̂, d⟩, 0) to the diffuse component to compensate for the darkness variation at opposite-light areas. We show that this revision leads to more realistic relighting in experiments.
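The shading step can be sketched as follows on already volume-rendered maps; the Reinhard-style tone mapping and the weight of the optional negative term are assumptions, not the paper's exact choices:

```python
import numpy as np

def phong_shade(albedo, k_spec, shininess, normal, view_dir, light, neg_weight=0.0):
    """Blinn-Phong style shading of volume-rendered maps.

    albedo (..., 3); k_spec, shininess (..., 1); normal (..., 3), unit length.
    light = (d, k_a, k_d): light direction, ambient and diffuse coefficients.
    neg_weight > 0 enables the optional negative shading term
    (its weight here is an assumption, not the paper's value).
    """
    d, k_a, k_d = light
    n_dot_l = np.sum(normal * d, axis=-1, keepdims=True)
    diffuse = np.maximum(n_dot_l, 0.0) + neg_weight * np.minimum(n_dot_l, 0.0)
    h = (d + view_dir) / np.linalg.norm(d + view_dir)  # halfway vector (bisector)
    spec = k_spec * np.maximum(np.sum(normal * h, axis=-1, keepdims=True), 0.0) ** shininess
    c = albedo * (k_a + k_d * diffuse) + spec
    return c / (1.0 + c)  # simple tone mapping (assumed form of T)
```

With neg_weight > 0, surfaces facing away from the light retain a signal proportional to ⟨n, d⟩ instead of clamping to constant darkness.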

Note that in Eq. 1 and Eq. 2, the weight function w has a learnable inverse standard deviation parameter (s in [49]) that controls the concentration of density on the surface. It is directly optimized during training in [49]. However, in our case this would produce a sub-optimal solution, as the 3D inconsistency of GAN-generated images would impede the convergence of s. Thus, we manually increase s from its initial value to its final value with an exponential schedule over the number of training iterations, with a hyperparameter controlling the decay rate.
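The schedule can be sketched as below; the endpoint values s0, s1 and the decay length t_decay are illustrative placeholders, since the paper's hyperparameters are not recoverable from the text:

```python
def inv_std_schedule(t, s0=10.0, s1=300.0, t_decay=20000):
    """Exponentially interpolate the NeuS inverse standard deviation s
    from s0 at iteration 0 to s1 at iteration t_decay, then hold constant.
    All three hyperparameters are placeholders, not the paper's values."""
    frac = min(t / t_decay, 1.0)
    return s0 * (s1 / s0) ** frac
```

Larger s concentrates the volume-rendering weights more tightly around the surface, so this schedule gradually sharpens the geometry as training stabilizes.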

3.2 Inverse Rendering

We perform inverse rendering based on the shape and material model introduced above. To generate a number of approximately paired images under various viewpoints and lighting conditions using the GAN, we adopt an exploration-and-exploitation algorithm following [31]. Different from [31], in this work the inverse rendering is based on the new non-Lambertian neural representation and rendering equation introduced above. Besides, we devise a chromaticity-based smoothness loss to regularize the material and a viewpoint and lighting loss to stabilize training. A shading-based refinement technique will be introduced in the next subsection.

Exploration. We first initialize S to produce an ellipsoid as a convex shape prior following [31]. The camera viewpoint and lighting are initialized to a canonical setting v_0 and l_0. We then optimize the material network M using the reconstruction loss L(Î, I), where L is the L1 loss. As shown in Fig. 3 (a), with this initial guess of the scene, we can re-render a number of new images I′_i = R(S, M, v_i, l_i) from different viewpoints and lighting conditions, where v_i and l_i are randomly sampled from their prior distributions. These re-rendered images roughly reveal the change of viewpoint and lighting, and thus can serve as guidance for the exploration of the GAN image manifold. Specifically, we reconstruct the I′_i using the GAN generator G by training an encoder that predicts their latent codes. After training, this produces the GAN-reconstructed images I_i, namely projected images, whose viewpoints and lighting conditions resemble those of the re-rendered images I′_i.

Exploitation. The projected images I_i obtained in the exploration process can be viewed as pseudo paired images of the target. Thus, we exploit their information to recover the intrinsic components. As shown in Fig. 3 (b), the viewpoint and lighting for each image are predicted by the viewpoint encoder E_v and the lighting encoder E_l respectively. They are then used to render images with the shared shape field S and material field M. The scene-related networks S, M, E_v, and E_l can be jointly optimized using the reconstruction loss:

  L_recon = Σ_i L( R(S, M, E_v(I_i), E_l(I_i)), I_i ),   (4)

where L is a combination of L1 loss and perceptual loss.

In addition, we also regularize the material using a chromaticity-based smoothness loss. For the material maps (â, k̂_s, α̂) rendered from (S, M), we denote their concatenation as m. The chromaticity-based smoothness loss is defined as:

  L_smooth = Σ_p ( ‖∂_x m(p)‖_1 φ(∂_x ch(p)) + ‖∂_y m(p)‖_1 φ(∂_y ch(p)) ),   (5)

where the chromaticity ch is calculated from the image by taking its chromatic channels in the CIELAB color space, ∂_x and ∂_y signify the computation of image gradients, and φ is a non-linear function. This regularization encourages pixels with close chromaticity to have similar materials.
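A minimal sketch of such an edge-aware smoothness loss is given below. Two stand-ins are assumed, since the exact choices are not given here: rg-chromaticity replaces the CIELAB channels, and exp(−γ·|∇ch|) replaces the unspecified non-linear weighting φ:

```python
import numpy as np

def grad_xy(img):
    """Forward-difference image gradients along x (columns) and y (rows)."""
    gx = img[:, 1:] - img[:, :-1]
    gy = img[1:, :] - img[:-1, :]
    return gx, gy

def chroma_smoothness(material, image, gamma=10.0):
    """Edge-aware smoothness on a stacked material map (H, W, C_m),
    weighted by chromaticity gradients of the image (H, W, 3).

    Assumptions: rg-chromaticity instead of CIELAB channels, and
    exp(-gamma * |grad|) as the non-linear weighting function phi.
    """
    ch = image[..., :2] / (image.sum(-1, keepdims=True) + 1e-6)
    mx, my = grad_xy(material)
    cx, cy = grad_xy(ch)
    wx = np.exp(-gamma * np.abs(cx).sum(-1, keepdims=True))
    wy = np.exp(-gamma * np.abs(cy).sum(-1, keepdims=True))
    return (np.abs(mx) * wx).mean() + (np.abs(my) * wy).mean()
```

Material variation is penalized heavily where the image chromaticity is flat, and barely at all across strong chromaticity edges, which is the behavior the paper's loss is after.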

As the projected images share similar viewpoints and lighting conditions with the re-rendered images, we can use v_i and l_i to guide the learning of E_v and E_l at the beginning of training:

  L_vl = Σ_i ( ‖E_v(I_i) − v_i‖² + ‖E_l(I_i) − l_i‖² ).   (6)
In summary, the final training objective is:

  min_{S, M, E_v, E_l}  L_recon + λ_1 L_smooth + λ_2 L_vl,   (7)

where the minimization is over the parameters of the networks S, M, E_v, and E_l, λ_1 and λ_2 are loss weights, and L_vl is only used in the first 1k training iterations.

With this training objective, the object shape, material, and lighting will be inferred more accurately. We then use these updated intrinsic components as the initialization and repeat the exploration and exploitation steps a few times. This allows us to further distill the knowledge of the GAN and refine the results.

Joint training. We have introduced how our method can be applied to an individual instance generated by the GAN. In this case, the dependence of S and M on the image embedding e is not strictly necessary. Having this dependence, however, allows us to extend our method to joint training on multiple instances as done in [31], which improves generalization. For example, if the conditioning of S and M on e is achieved by training an additional image encoder, then we can apply our model to real images after joint training. And if the conditioning is based on the latent code w in the W space of the GAN, then the StyleGAN mapping network together with our S and M forms a 3D GAN with decomposed intrinsic components. We show the applications of these variants in experiments.

Figure 4: Qualitative comparison. Our approach achieves more accurate inverse rendering than baselines (zoom in to see details). Our relighting result successfully models the shift of specular highlight on the lip while baselines cannot.

3.3 Shading-based Refinement

While the above approach already produces pleasing results, the reconstructed shape and appearance can still exhibit noticeable mismatch with the target image. This is because the pseudo paired data generated by a 2D GAN is not strictly 3D-consistent. To address this, we draw inspiration from the shape-from-shading literature [67] and devise a shading-based refinement (SBR) step that fully exploits the information in the target image I_t. Let Î, D̂, and m̂ denote the image, depth, and material map rendered from (S, M). The shading-based refinement objective is to reconstruct the target image subject to several constraints:

  L_SBR = L(Î, I_t) + λ_d ‖D̂ − D̂_0‖_1 + λ_m ‖m̂ − m̂_0‖_1,   (8)

where D̂_0 and m̂_0 are the initial values of D̂ and m̂ and are used to prevent the shape and material from deviating too much. The depth term is defined only for pixels where the rendered surface exists. This optimization resolves the mismatch and recovers more details from the target image while preserving a valid 3D shape and material, as we will show in experiments.
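The refinement objective can be sketched as follows; the loss weights and the use of a boolean mask to restrict the depth term to valid surface pixels are assumptions:

```python
import numpy as np

def sbr_loss(render, target, depth, depth0, material, material0,
             mask=None, lam_d=1.0, lam_m=1.0):
    """Shading-based refinement objective: reconstruct the target image
    while keeping depth and material close to their initial values.
    lam_d / lam_m are placeholder weights; mask (assumption) restricts
    the depth term to pixels where the surface is defined."""
    if mask is None:
        mask = np.ones(depth.shape, dtype=bool)
    recon = np.abs(render - target).mean()        # L1 photometric term
    d_reg = np.abs(depth - depth0)[mask].mean()   # depth deviation penalty
    m_reg = np.abs(material - material0).mean()   # material deviation penalty
    return recon + lam_d * d_reg + lam_m * m_reg
```

In practice this scalar would be minimized by gradient descent over the field parameters, with depth0 and material0 frozen copies taken before refinement starts.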

4 Experiments

We conduct extensive experiments to evaluate our approach GAN2X on unsupervised inverse rendering across different object categories including human faces, cat faces, and cars. We use datasets of unpaired image collections including CelebA [27], CelebAHQ [18], AFHQ Cat [9], and LSUN Car [57]. The GAN models we use are StyleGAN2 [21] models pretrained on these datasets. We use off-the-shelf scene parsing models [56, 64] to remove the background. We refer readers to the supplementary material for more implementation details, qualitative results, and videos.

4.1 Inverse Rendering Results

Qualitative evaluation. Fig. 4 shows the qualitative comparison between our approach and two unsupervised inverse rendering baselines, Unsup3d [52] and GAN2Shape [31], on the CelebA dataset. We visualize the rendering, 3D shape, surface normal, albedo, diffuse light map, and specular light map. Our approach reconstructs much more precise 3D shapes than the baselines, e.g., we successfully recover the double-fold eyelids, teeth, and most wrinkles, while the baselines miss fine details and produce blurry shapes. Besides, the baselines cannot decompose the specular light map; consequently, the specular highlights are entangled into their albedo maps. In contrast, our approach successfully disentangles the specular highlights from the diffuse albedo, e.g., the highlights on the lips are correctly captured in the specular light map. This, in turn, results in a more accurate albedo map and produces more realistic relighting effects.

Figure 5: Qualitative results. We show inverse rendering and image editing results on the CelebaHQ, AFHQ Cat, and LSUN Car datasets.
Figure 6: Visualization of specular components (i.e., specular intensity and shininess) for objects in Fig. 4 and Fig. 5.
Figure 7: Qualitative comparison of single-view 3D reconstruction on H3DS dataset.

Our results on more datasets are shown in Fig. 5. GAN2X performs high-quality inverse rendering for human heads, cat faces, and cars, where the 3D shape, albedo, diffuse light map, and specular light map are accurately recovered. Besides, these reconstructed intrinsic components allow photorealistic re-rendering of the object under a novel viewpoint or lighting condition, showing large potential for image editing. We also visualize the learned specular components of different objects in Fig. 6. We observe that our model successfully learns meaningful object-varying and spatially-varying specular properties. For instance, the human lips have higher shininess than the rest of the face; the eyes of the cat have higher shininess than its fur; the car has higher specular intensity than the other objects. These results verify our assumption that GANs implicitly capture object material properties. We note that learning spatially-varying specular intensity and shininess is very challenging, and our method does not resolve all the ambiguities. For example, the tires of the car have high shininess, which is counter-intuitive. This is because their specular intensity is already very low, and thus the shininess cannot receive supervision. Addressing these ambiguities is left as future work.

Quantitative evaluation. For quantitative evaluation, we first perform single-view 3D reconstruction on the H3DS dataset [36] to evaluate the 3D shape. The H3DS test set has ground-truth 3D face scans for several identities. We take a single-view image as input and report the Chamfer distance between our reconstructed shape and the ground truth. Apart from Unsup3d and GAN2Shape, we also compare with pi-GAN [7] and ShadeGAN [32], which can perform unsupervised 3D reconstruction via GAN inversion. All methods are trained on the CelebA dataset. Applying our model to these real images is achieved with the jointly trained image encoder, as discussed in Sec. 3.2. As shown in Fig. 7 and Tab. 1, our method reconstructs more accurate and more natural-looking 3D shapes than the baselines. Thus, our approach is also well suited for unsupervised 3D shape learning.


Method   Unsup3d   GAN2Shape   pi-GAN   ShadeGAN   Ours (w/o SBR)   Ours
CD       3.60      2.62        3.29     2.49       2.21             2.08

Table 1: Single-view 3D reconstruction on the H3DS dataset. We report the Chamfer distance (CD, lower is better) between the predicted mesh and the ground-truth mesh.

Evaluating albedo in our GAN-based inverse rendering setting is challenging, as there is no suitable dataset with ground-truth albedo. We therefore leverage the state-of-the-art supervised approach Total Relighting [33] to produce pseudo ground truth and report results against it as a reference. Total Relighting is trained on high-quality light-stage data, and thus produces stable and reliable albedo and surface normals for human faces. We test on 500 images generated by the GAN trained on CelebA and report the scale-invariant error (SIE) for albedo and mean angle deviation (MAD) for surface normals following [50]. As shown in Tab. 2, our approach significantly outperforms the baselines, showing better capability to recover face albedo and shape.
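For reference, one common formulation of these two metrics is sketched below; the paper follows [50], whose exact variants (e.g., the scale-fitting scheme for SIE) may differ:

```python
import numpy as np

def scale_invariant_error(pred, gt):
    """Scale-invariant albedo error: fit a single scalar s minimizing
    ||s * pred - gt||^2 in closed form, then report mean absolute error.
    One common formulation; the exact variant in [50] may differ."""
    s = (pred * gt).sum() / ((pred * pred).sum() + 1e-12)
    return np.abs(s * pred - gt).mean()

def mean_angle_deviation(n_pred, n_gt):
    """Mean angular deviation in degrees between unit normal maps (..., 3)."""
    cos = np.clip(np.sum(n_pred * n_gt, axis=-1), -1.0, 1.0)
    return np.degrees(np.arccos(cos)).mean()
```

The scalar fit makes SIE invariant to a global brightness mismatch between predicted and reference albedo, which is exactly the ambiguity that an intrinsic decomposition cannot resolve from images alone.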


Method   Unsup3d   GAN2Shape   Ours
SIE      3.21      3.05        2.16
MAD      18.66     21.75       12.67


Table 2: Quantitative comparison of albedo and surface normal on CelebA. We report scale-invariant error (SIE) for albedo and mean-angle deviation (MAD) for surface normal.

Ablation study. We further study the effects of several components in our pipeline, including the specular term in Eq. 3, the chromaticity-based smoothness loss in Eq. 5, the shading-based refinement (SBR), and the negative shading term introduced in Sec. 3.1. The results are shown in Tab. 3 and Fig. 8. It can be observed that including the specular term helps disentangle specular highlights from albedo. The reduced ambiguity in turn facilitates the learning of surface normals. Besides, the smoothness prior also leads to better disentanglement between albedo and shading, producing a more natural albedo map. The example in Fig. 8 (a) is a challenging case where the rendering of our approach without SBR exhibits noticeable mismatch with the input image. The proposed SBR effectively reduces the mismatch and recovers more details from the target image. Quantitative results also confirm that SBR further improves the shape and albedo in our approach. Finally, the negative shading term has little effect on the quantitative results, but improves qualitative relighting effects, as it compensates for darkness variation at surfaces facing away from the light source. As shown in Fig. 8 (b), the rendering without this negative shading term tends to produce constant darkness at the opposite-light areas, while including the term reveals some geometry details and is more realistic.

Figure 8: Ablation study of (a) specular component, smoothness prior, shading-based refinement (SBR) and (b) negative shading component.


Method   Ours    w/o specular   w/o smooth   w/o SBR   w/o neg
SIE      2.16    2.25           2.28         2.19      2.15
MAD      12.67   12.88          12.63        12.74     12.66


Table 3: Ablation study of several design choices. The values have the same meaning as in Tab. 2.

4.2 Other Applications

Real image editing. We have shown that GAN2X can be used for single-view 3D reconstruction of real images; here we also demonstrate its applicability to other editing effects on real images. Fig. 9 provides an example of a real image, where our method achieves satisfactory inverse rendering and image editing results. We also show material editing effects enabled by our method: by reducing the specular intensity and shininess, we can weaken the specular highlight on the face.

Figure 9: Application on real image inverse rendering and editing.
Figure 10: Application on lifting 2D GANs to decomposed 3D GANs.

Lifting 2D GANs to decomposed 3D GANs. Finally, we observe that GAN2X jointly trained on multiple GAN samples is capable of inheriting the generative property of the 2D GAN. As shown in Fig. 10, our method not only works for samples used in training, but also generalizes to new samples of the 2D GAN. Different samples can also be naturally interpolated by interpolating the corresponding latent codes, which is an important property of GANs. These results demonstrate the possibility of lifting a 2D GAN to a decomposed 3D GAN with our approach.

5 Conclusion

We have presented a new approach for unsupervised inverse rendering. To overcome the shortage of paired data, we leverage the generative modeling capability of GANs to create pseudo paired data that reflect the underlying intrinsic components. With an implicit neural representation based on volume rendering and Phong shading, our approach successfully recovers high-quality 3D shapes, albedo, and specular properties for different object categories, which opens up a wide range of downstream applications. Notably, GAN2X is the first method that can learn spatially-varying specular properties from merely unpaired image collections. For future work, our approach could be combined with recent advances in 3D GANs to further refine their geometry and decompose their material properties.


Acknowledgements. Christian Theobalt was supported by ERC Consolidator Grant 4DReply (770784). Lingjie Liu was supported by Lise Meitner Postdoctoral Fellowship.


  • [1] N. Alldrin, T. Zickler, and D. Kriegman (2008) Photometric stereo with non-parametric and spatially-varying reflectance. In 2008 IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §1, §2.2.
  • [2] J. T. Barron and J. Malik (2014) Shape, illumination, and reflectance from shading. IEEE transactions on pattern analysis and machine intelligence 37 (8), pp. 1670–1687. Cited by: §2.2.
  • [3] S. Bi, Z. Xu, K. Sunkavalli, D. Kriegman, and R. Ramamoorthi (2020) Deep 3d capture: geometry and reflectance from sparse multi-view images. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.2.
  • [4] V. Blanz and T. Vetter (1999) A morphable model for the synthesis of 3d faces. In Proceedings of the 26th annual conference on Computer graphics and interactive techniques, Cited by: §2.1.
  • [5] M. Boss, R. Braun, V. Jampani, J. T. Barron, C. Liu, and H. Lensch (2021) Nerd: neural reflectance decomposition from image collections. In IEEE/CVF International Conference on Computer Vision (ICCV), Cited by: §2.2.
  • [6] M. Boss, V. Jampani, K. Kim, H. Lensch, and J. Kautz (2020) Two-shot spatially-varying brdf and shape estimation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.2.
  • [7] E. R. Chan, M. Monteiro, P. Kellnhofer, J. Wu, and G. Wetzstein (2021) Pi-gan: periodic implicit generative adversarial networks for 3d-aware image synthesis. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.1, §4.1.
  • [8] E. R. Chan, C. Z. Lin, M. A. Chan, K. Nagano, B. Pan, S. D. Mello, O. Gallo, L. Guibas, J. Tremblay, S. Khamis, T. Karras, and G. Wetzstein (2022) Efficient geometry-aware 3D generative adversarial networks. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.1.
  • [9] Y. Choi, Y. Uh, J. Yoo, and J. Ha (2020) StarGAN v2: diverse image synthesis for multiple domains. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §4.
  • [10] Y. Deng, J. Yang, D. Chen, F. Wen, and X. Tong (2020) Disentangled and controllable face image generation via 3d imitative-contrastive learning. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.1.
  • [11] S. Goel, A. Kanazawa, and J. Malik (2020) Shape and viewpoint without keypoints. In European Conference on Computer Vision, pp. 88–104. Cited by: §1, §2.3.
  • [12] D. B. Goldman, B. Curless, A. Hertzmann, and S. M. Seitz (2009) Shape and spatially-varying brdfs from photometric stereo. IEEE Transactions on Pattern Analysis and Machine Intelligence 32 (6), pp. 1060–1071. Cited by: §1, §2.2.
  • [13] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. In NIPS, Cited by: §1, §2.1, §3.
  • [14] J. Gu, L. Liu, P. Wang, and C. Theobalt (2021) StyleNeRF: a style-based 3d-aware generator for high-resolution image synthesis. arXiv preprint arXiv:2110.08985. Cited by: §2.1.
  • [15] J. T. Kajiya and B. P. Von Herzen (1984) Ray tracing volume densities. ACM SIGGRAPH computer graphics 18 (3), pp. 165–174. Cited by: §1, §3.1.
  • [16] A. Kanazawa, M. J. Black, D. W. Jacobs, and J. Malik (2018) End-to-end recovery of human shape and pose. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 7122–7131. Cited by: §1, §2.3.
  • [17] A. Kanazawa, S. Tulsiani, A. A. Efros, and J. Malik (2018) Learning category-specific mesh reconstruction from image collections. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 371–386. Cited by: §1, §2.3.
  • [18] T. Karras, T. Aila, S. Laine, and J. Lehtinen (2018) Progressive growing of GANs for improved quality, stability, and variation. In International Conference on Learning Representations (ICLR), External Links: Link Cited by: §4.
  • [19] T. Karras, M. Aittala, S. Laine, E. Härkönen, J. Hellsten, J. Lehtinen, and T. Aila (2021) Alias-free generative adversarial networks. In Proc. NeurIPS, Cited by: §2.1.
  • [20] T. Karras, S. Laine, and T. Aila (2019) A style-based generator architecture for generative adversarial networks. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4401–4410. Cited by: §2.1.
  • [21] T. Karras, S. Laine, M. Aittala, J. Hellsten, J. Lehtinen, and T. Aila (2020) Analyzing and improving the image quality of stylegan. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8110–8119. Cited by: Figure 2, §1, §2.1, §3, §4.
  • [22] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §6.1.
  • [23] T. D. Kulkarni, W. F. Whitney, P. Kohli, and J. Tenenbaum (2015) Deep convolutional inverse graphics network. Neural Information Processing Systems (NeurIPS) 28. Cited by: §2.2.
  • [24] T. Leimkühler and G. Drettakis (2021) FreeStyleGAN: free-view editable portrait rendering with the camera manifold. ACM Transactions on Graphics (TOG) 40 (6). Cited by: §2.1.
  • [25] Z. Li, M. Shafiei, R. Ramamoorthi, K. Sunkavalli, and M. Chandraker (2020) Inverse rendering for complex indoor scenes: shape, spatially-varying lighting and svbrdf from a single image. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.2.
  • [26] Z. Li, Z. Xu, R. Ramamoorthi, K. Sunkavalli, and M. Chandraker (2018) Learning to reconstruct shape and spatially-varying reflectance from a single image. ACM Transactions on Graphics (TOG) 37 (6), pp. 1–11. Cited by: §2.2.
  • [27] Z. Liu, P. Luo, X. Wang, and X. Tang (2015) Deep learning face attributes in the wild. In IEEE/CVF International Conference on Computer Vision (ICCV), Cited by: §4, §6.1.
  • [28] A. Meka, M. Maximov, M. Zollhoefer, A. Chatterjee, H. Seidel, C. Richardt, and C. Theobalt (2018) Lime: live intrinsic material estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §2.2.
  • [29] B. Mildenhall, P. P. Srinivasan, M. Tancik, J. T. Barron, R. Ramamoorthi, and R. Ng (2020) Nerf: representing scenes as neural radiance fields for view synthesis. In European Conference on Computer Vision (ECCV), Cited by: §3.1, §6.1.
  • [30] M. Niemeyer and A. Geiger (2021) CAMPARI: camera-aware decomposed generative neural radiance fields. In International Conference on 3D Vision (3DV), Cited by: §2.1.
  • [31] X. Pan, B. Dai, Z. Liu, C. C. Loy, and P. Luo (2021) Do 2d gans know 3d shape? unsupervised 3d shape reconstruction from 2d image gans. In International Conference on Learning Representations (ICLR), Cited by: §1, §1, §2.3, §3.2, §3.2, §3.2, §4.1, §6.1, §6.3.
  • [32] X. Pan, X. Xu, C. C. Loy, C. Theobalt, and B. Dai (2021) A shading-guided generative implicit model for shape-accurate 3d-aware image synthesis. In Advances in Neural Information Processing Systems (NeurIPS), Cited by: §2.1, §4.1.
  • [33] R. Pandey, S. O. Escolano, C. Legendre, C. Haene, S. Bouaziz, C. Rhemann, P. Debevec, and S. Fanello (2021) Total relighting: learning to relight portraits for background replacement. ACM Transactions on Graphics (TOG) 40 (4), pp. 1–21. Cited by: §4.1, §6.3.
  • [34] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer (2017) Automatic differentiation in PyTorch. Cited by: §6.1.
  • [35] B. T. Phong (1975) Illumination for computer generated pictures. Communications of the ACM 18 (6), pp. 311–317. Cited by: §1, §3.1.
  • [36] E. Ramon, G. Triginer, J. Escur, A. Pumarola, J. Garcia, X. Giro-i-Nieto, and F. Moreno-Noguer (2021) H3D-net: few-shot high-fidelity 3d head reconstruction. In IEEE/CVF International Conference on Computer Vision (ICCV), Cited by: §4.1.
  • [37] S. Sang and M. Chandraker (2020) Single-shot neural relighting and svbrdf estimation. In European Conference on Computer Vision (ECCV), Cited by: §2.2.
  • [38] K. Sarkar, V. Golyanik, L. Liu, and C. Theobalt (2021) Style and pose control for image synthesis of humans from a single monocular view. arXiv preprint arXiv:2102.11263. Cited by: §2.1.
  • [39] K. Sarkar, L. Liu, V. Golyanik, and C. Theobalt (2021) HumanGAN: a generative model of humans images. In International Conference on 3D Vision (3DV), Cited by: §2.1.
  • [40] K. Schwarz, Y. Liao, M. Niemeyer, and A. Geiger (2020) Graf: generative radiance fields for 3d-aware image synthesis. arXiv preprint arXiv:2007.02442. Cited by: §2.1.
  • [41] S. Sengupta, A. Kanazawa, C. D. Castillo, and D. W. Jacobs (2018) SfSNet: learning shape, reflectance and illuminance of faces 'in the wild'. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.2.
  • [42] P. P. Srinivasan, B. Deng, X. Zhang, M. Tancik, B. Mildenhall, and J. T. Barron (2021) Nerv: neural reflectance and visibility fields for relighting and view synthesis. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.2.
  • [43] A. Tewari, M. B R, X. Pan, O. Fried, M. Agrawala, and C. Theobalt (2022) Disentangled3D: learning a 3d generative model with disentangled geometry and appearance from monocular images. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.1.
  • [44] A. Tewari, F. Bernard, P. Garrido, G. Bharaj, M. Elgharib, H. Seidel, P. Pérez, M. Zollhofer, and C. Theobalt (2019) Fml: face model learning from videos. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10812–10822. Cited by: §2.3.
  • [45] A. Tewari, M. Elgharib, G. Bharaj, F. Bernard, H. Seidel, P. Pérez, M. Zollhofer, and C. Theobalt (2020) StyleRig: rigging stylegan for 3d control over portrait images. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.1.
  • [46] A. Tewari, M. Elgharib, M. BR, F. Bernard, H. Seidel, P. Pérez, M. Zöllhofer, and C. Theobalt (2020) PIE: portrait image embedding for semantic control. ACM Transactions on Graphics (TOG) 39 (6). Cited by: §2.1.
  • [47] A. Tewari, H. Seidel, M. Elgharib, C. Theobalt, et al. (2021) Learning complete 3d morphable face models from images and videos. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3361–3371. Cited by: §2.3.
  • [48] A. Tewari, M. Zollhofer, H. Kim, P. Garrido, F. Bernard, P. Perez, and C. Theobalt (2017) MoFA: model-based deep convolutional face autoencoder for unsupervised monocular reconstruction. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pp. 1274–1283. Cited by: §2.3.
  • [49] P. Wang, L. Liu, Y. Liu, C. Theobalt, T. Komura, and W. Wang (2021) NeuS: learning neural implicit surfaces by volume rendering for multi-view reconstruction. Neural Information Processing Systems (NeurIPS). Cited by: §3.1, §3.1, §6.1, §6.1.
  • [50] F. Wimbauer, S. Wu, and C. Rupprecht (2022) De-rendering 3d objects in the wild. arXiv preprint arXiv:2201.02279. Cited by: §2.3, §4.1.
  • [51] S. Wu, A. Makadia, J. Wu, N. Snavely, R. Tucker, and A. Kanazawa (2021) De-rendering the world’s revolutionary artefacts. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.3.
  • [52] S. Wu, C. Rupprecht, and A. Vedaldi (2020) Unsupervised learning of probably symmetric deformable 3d objects from images in the wild. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §1, §2.3, §4.1, §6.1, §6.3.
  • [53] X. Xu, X. Pan, D. Lin, and B. Dai (2021) Generative occupancy fields for 3d surface-aware image synthesis. In Advances in Neural Information Processing Systems(NeurIPS), Cited by: §2.1.
  • [54] L. Yariv, Y. Kasten, D. Moran, M. Galun, M. Atzmon, B. Ronen, and Y. Lipman (2020) Multiview neural surface reconstruction by disentangling geometry and appearance. Neural Information Processing Systems (NeurIPS) 33. Cited by: §6.1.
  • [55] Y. Ye, S. Tulsiani, and A. Gupta (2021) Shelf-supervised mesh prediction in the wild. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8843–8852. Cited by: §2.3.
  • [56] C. Yu, J. Wang, C. Peng, C. Gao, G. Yu, and N. Sang (2018) Bisenet: bilateral segmentation network for real-time semantic segmentation. In Proceedings of the European conference on computer vision (ECCV), Cited by: §4.
  • [57] F. Yu, A. Seff, Y. Zhang, S. Song, T. Funkhouser, and J. Xiao (2015) Lsun: construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365. Cited by: §4.
  • [58] Y. Yu, A. Meka, M. Elgharib, H. Seidel, C. Theobalt, and W. A. Smith (2020) Self-supervised outdoor scene relighting. In European Conference on Computer Vision (ECCV), Cited by: §2.2.
  • [59] Y. Yu and W. A. Smith (2019) Inverserendernet: learning single image inverse rendering. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.2.
  • [60] K. Zhang, F. Luan, Q. Wang, K. Bala, and N. Snavely (2021) Physg: inverse rendering with spherical gaussians for physics-based material editing and relighting. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.2.
  • [61] R. Zhang, P. Isola, A. A. Efros, E. Shechtman, and O. Wang (2018) The unreasonable effectiveness of deep features as a perceptual metric. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §6.1.
  • [62] X. Zhang, P. P. Srinivasan, B. Deng, P. Debevec, W. T. Freeman, and J. T. Barron (2021) Nerfactor: neural factorization of shape and reflectance under an unknown illumination. ACM Transactions on Graphics (TOG) 40 (6), pp. 1–18. Cited by: §2.2.
  • [63] Y. Zhang, W. Chen, H. Ling, J. Gao, Y. Zhang, A. Torralba, and S. Fidler (2021) Image gans meet differentiable rendering for inverse graphics and interpretable 3d neural rendering. In International Conference on Learning Representations (ICLR), Cited by: §1, §2.3.
  • [64] H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia (2017) Pyramid scene parsing network. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §4.
  • [65] H. Zhou, J. Liu, Z. Liu, Y. Liu, and X. Wang (2020) Rotate-and-render: unsupervised photorealistic face rotation from single-view images. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.1.
  • [66] P. Zhou, L. Xie, B. Ni, and Q. Tian (2021) CIPS-3D: a 3D-aware generator of GANs based on conditionally-independent pixel synthesis. arXiv preprint arXiv:2110.09788. Cited by: §2.1.
  • [67] M. Zollhöfer, A. Dai, M. Innmann, C. Wu, M. Stamminger, C. Theobalt, and M. Nießner (2015) Shading-based refinement on volumetric signed distance functions. ACM Transactions on Graphics (TOG) 34 (4), pp. 1–14. Cited by: §3.3.

6 Supplementary Material

In this supplementary material, we provide implementation details, discuss limitations, and show additional qualitative results. We also encourage readers to watch the video demos on the project page.

6.1 Implementation Details

Model Architectures. Similar to the design of NeuS [49], our shape MLP has 8 hidden layers and the material MLP has 4 hidden layers; each hidden layer has 256 channels. Apart from the 3D coordinate, the material MLP also takes the surface normal and the last feature vector of the shape MLP as input. Positional encoding [29] is applied to the coordinate, with 4 frequencies for the material MLP and 6 frequencies for the shape MLP. In volume rendering, the numbers of coarse and fine samples are 36 and 36 respectively.
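The MLP design above can be sketched in PyTorch as follows. The layer counts and widths follow the text; the Softplus activation (as in NeuS), the 5-channel material output (a stand-in for albedo plus specular properties), and all helper names are our assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

def positional_encoding(x, num_freqs):
    """NeRF-style encoding: concatenate x with sin/cos at frequencies 2^k."""
    feats = [x]
    for k in range(num_freqs):
        feats.append(torch.sin((2.0 ** k) * x))
        feats.append(torch.cos((2.0 ** k) * x))
    return torch.cat(feats, dim=-1)

def make_mlp(in_dim, out_dim, hidden_layers, width=256):
    """Plain MLP with the given number of 256-channel hidden layers."""
    layers = [nn.Linear(in_dim, width), nn.Softplus()]
    for _ in range(hidden_layers - 1):
        layers += [nn.Linear(width, width), nn.Softplus()]
    layers.append(nn.Linear(width, out_dim))
    return nn.Sequential(*layers)

# Shape MLP: encoded 3D point -> (SDF value, 256-d feature); 8 hidden layers.
POS_FREQS = 6
shape_in = 3 * (1 + 2 * POS_FREQS)          # coordinate + sin/cos bands
shape_mlp = make_mlp(shape_in, 1 + 256, hidden_layers=8)

# Material MLP: encoded point, surface normal, and shape feature as input;
# 4 hidden layers; output dimension of 5 is an assumption.
mat_in = shape_in + 3 + 256
material_mlp = make_mlp(mat_in, 5, hidden_layers=4)
```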

The architecture of the viewpoint encoder and the lighting encoder is described in Tab. 4, and that of the image encoder and the GAN encoder in Tab. 5. These architectures are designed for 256x256 input images. In our experiments, we use 256x256 resolution for most datasets except CelebA [27], for which 128x128 resolution is used. For 128x128 input images, the second convolution layer in Tab. 4 and the first ResBlock in Tab. 5 are removed, while the output channel of the first convolution layer is increased from 16 to 32. The abbreviations for the network layers are described below:

  • Conv(c_in, c_out, k, s, p): convolution with c_in input channels, c_out output channels, kernel size k, stride s, and padding p.

  • Avg_pool(s): average pooling with a stride of s.

  • ResBlock(c_in, c_out): residual block as defined in Tab. 6.


Encoder Output size

Conv(3, 16, 4, 2, 1) + ReLU 128
Conv(16, 32, 4, 2, 1) + ReLU 64
Conv(32, 64, 4, 2, 1) + ReLU 32
Conv(64, 128, 4, 2, 1) + ReLU 16
Conv(128, 256, 4, 2, 1) + ReLU 8
Conv(256, 512, 4, 2, 1) + ReLU 4
Conv(512, 512, 4, 1, 0) + ReLU 1
Conv(512, , 1, 1, 0) + Tanh 1


Table 4: Network architecture of the viewpoint net and the lighting net. The output channel size is 6 for the viewpoint net and 4 or 5 for the lighting net, depending on whether the negative shading term is used.
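The conv stack of Tab. 4 can be written down directly in PyTorch. This is a minimal sketch assuming 256x256 inputs; the factory-function name is ours.

```python
import torch
import torch.nn as nn

def viewpoint_or_lighting_encoder(out_channels):
    """Conv stack from Tab. 4: six stride-2 convolutions (256 -> 4 spatial),
    a 4x4 valid convolution down to 1x1, and a final 1x1 projection with Tanh.
    out_channels is 6 for the viewpoint net and 4 or 5 for the lighting net."""
    chans = [3, 16, 32, 64, 128, 256, 512]
    layers = []
    for c_in, c_out in zip(chans[:-1], chans[1:]):
        layers += [nn.Conv2d(c_in, c_out, 4, stride=2, padding=1), nn.ReLU()]
    layers += [nn.Conv2d(512, 512, 4, stride=1, padding=0), nn.ReLU()]  # 4x4 -> 1x1
    layers += [nn.Conv2d(512, out_channels, 1, stride=1, padding=0), nn.Tanh()]
    return nn.Sequential(*layers)
```

The Tanh keeps the predicted viewpoint/lighting parameters in a bounded range, which they can then be mapped from.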


Encoder Output size
Conv(3, 16, 4, 2, 1) + ReLU 128
ResBlock(16, 32) 64
ResBlock(32, 64) 32
ResBlock(64, 128) 16
ResBlock(128, 256) 8
ResBlock(256, 512) 4
Conv(512, 1024, 4, 1, 0) + ReLU 1
Conv(1024, 512, 1, 1, 0) 1


Table 5: Network architecture of image encoder and GAN encoder .


Residual path
ReLU + Conv(c_in, c_out, 3, 2, 1)
ReLU + Conv(c_out, c_out, 3, 1, 1)
Identity path
Avg_pool(2) + Conv(c_in, c_out, 1, 1, 0)


Table 6: Network architecture of the ResBlock(c_in, c_out) in Tab. 5. The outputs of the residual path and the identity path are added to form the final output.
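A sketch of this downsampling residual block in PyTorch. Since the residual path uses a stride-2 convolution, we assume the identity path includes a stride-2 average pooling (the Avg_pool abbreviation defined above) so that the two paths' spatial sizes match before addition.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Downsampling residual block per Tab. 6 (identity-path pooling assumed)."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.residual = nn.Sequential(
            nn.ReLU(), nn.Conv2d(c_in, c_out, 3, stride=2, padding=1),
            nn.ReLU(), nn.Conv2d(c_out, c_out, 3, stride=1, padding=1),
        )
        self.identity = nn.Sequential(
            nn.AvgPool2d(2), nn.Conv2d(c_in, c_out, 1, stride=1, padding=0),
        )

    def forward(self, x):
        # Residual and identity paths are added as the final output.
        return self.residual(x) + self.identity(x)
```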


Parameter Value/Range
(4, 100)
Learning rate for the shape and material MLPs
Learning rate in step 1
Learning rate for all encoders


Table 7: Hyper-parameters.


Joint Pre-training Value
Number of samples (1k/200/100)
Number of re-rendered samples (32/80/160)
Number of stages 5
Step 1 iterations (1st stage) 50k
Step 1 iterations (other stages) 30k
Step 2 iterations 8k
Step 3 iterations 50k
(0.3, 1.0)
(0.2, 0.1)
Instance-specific fine-tuning Value
Number of re-rendered samples (800/400/400)
Number of stages 3
Step 1 iterations 6k
Step 2 iterations 2k
Step 3 iterations 15k
(0.8, 1.0)
(0.2, 0.1)
Shading-based refinement Value
Iterations 1k
(1.0, 1.0)
(0.5, 0.2)


Table 8: Hyper-parameters for CelebA, CelebA-HQ, and AFHQ Cat datasets. (x/y/z) denote values for CelebA, CelebA-HQ, and AFHQ Cat respectively.


The hyperparameters used in our experiments are provided in Tab. 7, Tab. 8, and Tab. 9. For clarity, we denote the material network optimization process in Exploration as "step 1", the GAN reconstruction process that trains the GAN encoder as "step 2", and the Exploitation process as "step 3". "Number of stages" denotes how many times the exploration-and-exploitation process is repeated. For the chromaticity-based smoothness loss, we use different weights for the albedo and specularity maps.

Training Process. For the CelebA, CelebA-HQ, and AFHQ Cat datasets, we first pre-train our model on multiple samples jointly as mentioned in Sec. 3.2 of the main paper, and then perform instance-specific training for each individual sample. For LSUN Car, we do not perform joint training; instead, we first train on 128x128 resolution and then fine-tune on 256x256 resolution for each instance. For all datasets, shading-based refinement is applied at the end to further refine the results. Note that for the application of lifting a 2D GAN to a 3D GAN, only joint pre-training is involved, as there is no need for instance-specific training. For the application to real images, only joint pre-training and shading-based refinement are involved, because instance-specific fine-tuning on real images would require GAN inversion, which harms editability and is not very stable in practice.

Losses. Similar to [49, 54], we use an Eikonal term that encourages the gradient of the SDF to have unit norm at the sampled points; the weight of this regularization term is 0.1. For CelebA-HQ and LSUN Car, we also include a mask loss in the same way as [49], where the masks are obtained from off-the-shelf segmentation models.
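The Eikonal term can be computed with autograd as below; this is the standard formulation from [49, 54], with the function name ours.

```python
import torch

def eikonal_loss(sdf_fn, points):
    """Penalize deviation of |grad f| from 1 at the sampled points
    (the paper weights this term by 0.1)."""
    points = points.clone().requires_grad_(True)
    sdf = sdf_fn(points)
    # create_graph=True so the loss itself remains differentiable.
    grad = torch.autograd.grad(sdf.sum(), points, create_graph=True)[0]
    return ((grad.norm(dim=-1) - 1.0) ** 2).mean()
```

For a true signed distance function such as the distance to a point, the loss is zero, which is a quick sanity check.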

The training of Eq. 7 in the main paper is typically done by randomly sampling 512 pixels on the image and applying the losses to these pixels. However, the perceptual loss [61] cannot be applied to such scattered pixels, and rendering the whole image is infeasible due to heavy memory consumption. Therefore, in each training iteration we randomly choose between two pixel sampling strategies: the first is random sampling as mentioned above, and the second is to sample a contiguous image patch. The first strategy preserves the randomness of the sampled pixel positions, while the second allows the perceptual loss to be applied to the image patch. The perceptual loss has a weight of 0.1.
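The two-strategy pixel sampling can be sketched as follows. The 64x64 patch size and the 50/50 choice between strategies are assumptions for illustration.

```python
import torch

def sample_pixel_coords(h, w, n_random=512, patch_size=64):
    """Return (coords, is_patch): either 512 scattered pixel coordinates
    (for per-pixel losses) or a contiguous patch (so a perceptual loss
    can be applied)."""
    if torch.rand(()) < 0.5:
        ys = torch.randint(0, h, (n_random,))
        xs = torch.randint(0, w, (n_random,))
        return torch.stack([ys, xs], dim=-1), False
    # Sample the top-left corner of a patch fully inside the image.
    y0 = torch.randint(0, h - patch_size + 1, (1,)).item()
    x0 = torch.randint(0, w - patch_size + 1, (1,)).item()
    ys, xs = torch.meshgrid(
        torch.arange(y0, y0 + patch_size),
        torch.arange(x0, x0 + patch_size), indexing="ij")
    return torch.stack([ys.reshape(-1), xs.reshape(-1)], dim=-1), True
```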


Pre-train on 128 resolution Value
Number of re-rendered samples 800
Number of stages 4
Step 1 iterations (1st stage) 15k
Step 1 iterations (other stages) 10k
Step 2 iterations 4k
Step 3 iterations 25k
(0.3, 1.0)
(0.2, 0.1)
Fine-tune on 256 resolution Value
Number of re-rendered samples 400
Number of stages 3
Step 1 iterations 6k
Step 2 iterations 2k
Step 3 iterations 15k
(0.8, 1.0)
(0.2, 0.1)
Shading-based refinement Value
Iterations 1k
(1.0, 1.0)
(0.5, 0.2)


Table 9: Hyper-parameters for LSUN car dataset.

Other Training Details. Our implementation is based on PyTorch [34], and we use the Adam optimizer [22] in all experiments. The joint pre-training runs on 2 RTX 8000 GPUs, while all instance-specific training runs use a single RTX 8000 GPU.

For the CelebA, CelebA-HQ, and AFHQ Cat datasets, we adopt a symmetry assumption on object shape and material at step 3 of the first stage of joint pre-training. This is done by randomly flipping the shape and material during training, similar to [52]. This symmetry assumption helps to infer a canonical face pose.
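For an implicit representation, one simple way to impose such a flip is to mirror the network queries across the assumed symmetry plane (x = 0) with some probability. The helper below is a hypothetical sketch of that idea, not the paper's exact mechanism.

```python
import torch

def maybe_flip_query(points, normals=None, p_flip=0.5):
    """Randomly mirror query points (and normals) across the x = 0 plane,
    so the learned shape and material are encouraged to be symmetric."""
    if torch.rand(()) < p_flip:
        mirror = torch.tensor([-1.0, 1.0, 1.0])
        points = points * mirror
        if normals is not None:
            normals = normals * mirror
    return points, normals
```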

Note that the exploration process of our method involves randomly sampling multiple viewpoints and lighting conditions. Here we follow [31], where the viewpoints are sampled from a prior multivariate normal distribution and the lighting conditions are sampled from a prior uniform distribution.
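The viewpoint and lighting sampling can be sketched as below. The 2-d (yaw, pitch) viewpoint parameterization, its standard deviations, and the 4-d lighting parameterization are placeholders, not values from the paper.

```python
import torch

def sample_view_and_light(batch, view_std=(0.3, 0.15), light_low=0.0, light_high=1.0):
    """Sample viewpoints from a zero-mean multivariate normal prior and
    lighting parameters from a uniform prior, following the setup of [31]."""
    yaw_pitch = torch.randn(batch, 2) * torch.tensor(view_std)
    lighting = torch.rand(batch, 4) * (light_high - light_low) + light_low
    return yaw_pitch, lighting
```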

Figure 11: More qualitative comparison. This is an extension of Fig. 4 in the main paper.

6.2 Limitations

While our approach shows promising inverse rendering results, it also has some limitations. As mentioned in Sec. 4.1, some ambiguities remain in our results. For example, it can be seen from Fig. 7 that the car tires have unreasonably high shininess due to vanishing gradients, which calls for more regularization in areas of near-zero specular intensity. Besides, as the exploration step of our approach relies on a convex shape prior, it mainly works for roughly convex objects and is hard to apply to more complex objects (e.g., bicycles). This might be alleviated by recent advances in 3D-aware GANs that allow explicit camera pose control.

6.3 Qualitative Results

In this section, we provide more qualitative results of our method. Fig. 11 shows a further qualitative comparison with Unsup3d [52] and GAN2Shape [31]; we also show the albedo and normal maps of the supervised method Total Relighting [33] as a reference. More qualitative results are shown in Fig. 12.

Note that our method repeats the exploration-and-exploitation process for several stages. We show the effects of this progressive training in Fig. 13. It can be seen that the results get more accurate with more training stages. The shading-based refinement further refines the results to be more precise. In Fig. 14, we provide some examples of re-rendered images and projected images during training. It can be observed that the projected images have similar viewpoints and lighting conditions as the re-rendered images, but are more natural and thus provide useful information to refine the object intrinsics.

Figure 12: More qualitative results. This is an extension of Fig. 5 in the main paper.
Figure 13: Effects of progressive training and shading-based refinement (SBR).
Figure 14: Examples of (a) re-rendered images and (b) their corresponding projected images.