1 Introduction
Estimating lighting sources from an image is a fundamental problem in computer vision. In general, this is a particularly difficult task when the scene has unknown shape and reflectance properties. On the other hand, estimating the lighting of a human face, one of the most popular and well studied objects, is easier due to its approximately known geometry and near Lambertian reflectance. Lighting estimation can be used in applications such as image editing, 3D structure estimation, and image forgery detection. This paper focuses on estimating lighting from a single face image. We consider the most common face image type: near frontal pose. The same idea can be applied to face images with other poses.
There exist many approaches for lighting estimation from a single face image Barron and Malik (2015); Shahlaei and Blanz (2015); Heredia Conde et al. (2015); Peng et al. (2017); however, they are not learning-based and rely on complicated optimization at test time, making the process inefficient. Moreover, the performance of these methods (e.g., Barron and Malik (2015)) depends on the resolution of the face images, and they cannot give accurate predictions for low resolution images.
Witnessing the dominant success of neural network models in other computer vision problems such as image classification, we are interested in a supervised learning approach that directly regresses lighting parameters from a single face image. Given an input face image, the approach outputs low dimensional Spherical Harmonics coefficients
Basri and Jacobs (2003); Ramamoorthi and Hanrahan (2001) of its environment lighting condition. This is a very difficult problem, especially due to the scarcity of accurate ground truth lighting labels for real face images in the wild. In fact, building a dataset with realistic images and ground truth lighting parameters is extremely hard, and currently no such dataset exists. Lacking ground truth labels, we applied an existing method Barron and Malik (2015) to estimate lighting parameters of real face images. However, these lighting parameters are not the real “ground truth” as they contain unknown noise. Synthetic face images, on the other hand, have noise-free ground truth lighting labels. In this work, we show that such synthetic data with accurate labels can help train a deep CNN to regress the lighting of real face images, effectively “denoising” the unreliable labels.
The proposed method is based on two assumptions: (1) A deep CNN trained with synthetic data is accurate, i.e., it is not affected by any noise; (2) Ground truth labels for real data are noisy, but still contain useful information. We design the lighting regression deep CNN, which consists of two subnetworks: a feature net that extracts lighting related features and a lighting net that takes these features as input and predicts the Spherical Harmonics parameters. Based on the first assumption, the lighting net trained with synthetic data is accurate. However, this lighting net expects lighting related features for synthetic data as input. To make it work for real data, the lighting related features for real data should be mapped to the same space. For that purpose, we utilize the idea of Generative Adversarial Networks (GAN) Goodfellow et al. (2014). Specifically, a discriminator is trained to distinguish between lighting related features from synthetic data and real data, while the feature net (instead of a generator in the standard GAN) is trained to fool the discriminator. The discriminator and our feature net play a minimax two player game, with the objective of pulling the distribution of lighting related features of real data towards that of the synthetic data. Under the second assumption, we have an additional objective of reducing regression loss between predicted lightings and ground truth labels. Moreover, we design the network to take RGB face images so that it will work for low resolution face images.
Figure 1 (a) illustrates the proposed LDAN model. It consists of two steps during training: (1) train with synthetic data; (2) fix the feature net for synthetic data and the lighting net, and train another feature net for real data with a GAN loss and a regression loss. Tzeng et al. (2017) proposed similar ideas for unsupervised domain adaptation. One difference is that while learning to map the target domain to the source domain, they only use a GAN loss. We argue that such a mapping can be unexpectedly arbitrary: as illustrated by Figure 1 (b), several different point-to-point mappings can all make the source and target data have similar distributions. This may not be a big issue for classification tasks if the points that get swapped belong to the same class. However, for regression this is problematic, since every data point has its own unique label. As a result, using the regression loss for real data is critical in our regression problem: it regularizes the domain mapping function to have reasonable behavior. At the same time, the noise in the real data labels is suppressed by training with the GAN loss.
The main contributions of our work are: 1) we are the first to propose a lighting regression network for face images; 2) we propose a novel method, LDAN, to utilize accurate synthetic image lighting labels when training on real face images with noisy labels; 3) the proposed method increases the accuracy of Barron and Malik (2015) in quantitative evaluation and is thousands of times faster.
2 Related Work
Lighting Estimation from A Single Face Image. Estimating lighting conditions from a single face image is an interesting but difficult problem. Blanz and Vetter Blanz and Vetter (1999) proposed to estimate the ambient and directional light as a byproduct of fitting 3D Morphable Models (3DMM) to a single face image. Since then, several 3DMM based methods were proposed Aldrian and Smith (2013); Peng et al. (2017); Heredia Conde et al. (2015); Shahlaei and Blanz (2015); Wang et al. (2009). The performance of these methods relies on a good 3DMM of faces. However, existing 3DMMs are usually built with face images taken in a controlled environment, so their expressive power (especially the texture model) for faces in the wild is limited Booth et al. (2017). Barron and Malik proposed an optimization based method for estimating shape, albedo and lighting for general objects Barron and Malik (2015). To solve such an under-constrained problem, their method heavily relies on prior knowledge about the shape, albedo and lighting of general objects. Though they achieved promising results, their method is slow and may fail to give a reasonable result in some cases due to the non-convexity of the objective function. Kulkarni et al. (2015) proposed to use deep learning to disentangle representations of pose, lighting and identity from a face image. The authors only show the effectiveness of their method on synthetic images; whether it can be applied to real face images remains in doubt. Moreover, their representation of lighting has no physical meaning, making it difficult to use in other applications.
Learning with Noisy Labels. Learning with noisy labels has attracted the interest of researchers for a long time. Frénay and Kaban (2014) gives a comprehensive introduction to this problem. With the development of deep learning, many research studies have focused on how to train deep neural networks with noisy labels Mnih and Hinton (2012); Sukhbaatar et al. (2015); Azadi et al. (2016); Xiao et al. (2015); Patrini et al. (2017); Jindal et al. (2016). Mnih and Hinton (2012); Sukhbaatar et al. (2015); Patrini et al. (2017); Jindal et al. (2016) assume the probability of a noisy label depends only on the noise-free label and not on the input data, and try to model this conditional probability explicitly.
Xiao et al. (2015) models the type of noise as a hidden variable and proposes a novel probabilistic model to infer the true labels. Azadi et al. (2016) proposed to use CNNs pretrained with noise-free data to help select data with noisy labels in order to better handle the noise. All the above mentioned methods focus on classification problems, and a considerable portion of the data is assumed to have noise-free labels. However, estimating lighting from face images is a regression problem, and the transition probability from noise-free label to noisy label is much more difficult to model. Moreover, almost all the labels of our data are noisy. As a result, we are dealing with a much harder problem than the methods mentioned above.
GAN for Domain Adaptation. Since Goodfellow et al. Goodfellow et al. (2014) first proposed Generative Adversarial Networks, several works have used this idea for unsupervised domain adaptation Ganin and Lempitsky (2015); Tzeng et al. (2017); Sankaranarayanan et al. (2017); Saito et al. (2016). All these methods solve a problem in which the labels in the target domain are not enough to train a deep neural network. However, the problem we try to solve is intrinsically different from theirs in that the labels in the target domain are sufficient, but all of them are noisy. Moreover, all these methods apply domain adaptation to classification tasks, where a GAN loss is enough to achieve good performance. On the contrary, a GAN loss alone cannot work in our regression task. Though a GAN loss can map the distribution of data in the target domain to that of the source domain, for a single point in the target domain the mapping is arbitrary, which is problematic because every data point has its unique label in a regression task.
3 Proposed Method
3.1 Spherical Harmonics
Existing methods Basri and Jacobs (2003); Ramamoorthi and Hanrahan (2001) have shown that for convex objects with Lambertian reflectance and distant light sources, the lighting of the environment can be well estimated by 9 (gray scale) or 27 (color) dimensions of Spherical Harmonics (SH). In this paper, we use SH as the lighting representation, as it has been widely used to represent environmental lighting in face related applications, as suggested in Basri et al. (2007); Wang et al. (2009); Zhang and Samaras (2006); Barron and Malik (2015); Johnson and Farid (2007); Peng et al. (2017).
All 9 dimensions of SH can be fully recovered from an image if the pixels are equally distributed over a sphere. However, the pixels of a face image, loosely speaking, are distributed over a hemisphere. The SH that can be recovered from a face image, as discussed in Ramamoorthi (2002), lie in a lower dimensional subspace, and the SH for faces under different poses lie in different subspaces. As a result, we consider regressing the SH in a lower dimensional subspace instead of the original 9 dimensional SH, and focus on near frontal faces since most face images are taken under this pose.
Taking the red color channel as an example, we now show how to obtain the lower dimensional subspace of SH for near frontal faces. Let I be a column vector in which each element represents one pixel value of a face image for the red channel; then I = AYL. Here A is an n x n diagonal matrix whose entries are the albedos of the corresponding pixels, L is a 9 dimensional SH parameters vector, Y is an n x 9 matrix, and n is the number of pixels in the image. Each column of Y corresponds to one SH basis image whose elements are determined by the normal of the corresponding pixel (see Basri and Jacobs (2003)). Applying SVD to Y gives Y = U S V^T, so I = A U S V^T L. V is a 9 x 9 matrix whose columns span the entire 9 dimensions of SH. We use synthetic data to compute V, since there the ground truth normal of every pixel, and thus Y, is known. We then keep only the columns of V corresponding to the largest singular values, denoted V_s, since they capture most of the energy of the singular values. With V_s, we project all the SH to their lower dimensional subspace throughout the experiments.
3.2 Label Denoising Adversarial Network
Training a regression deep CNN needs a lot of data with ground truth labels. However, getting the ground truth lighting parameters of a realistic face image is extremely difficult: it usually requires a mirror ball or panorama camera, carefully set up to record an environment map relative to the position of the face. Thus, it is very difficult to collect enough labeled data to train a deep CNN to regress the lighting of a face. Instead, we adapted Barron and Malik (2015) to predict lighting parameters for a large number of face images. These parameters are then projected to the lower dimensional subspace discussed above and used as noisy ground truth labels. Together with the real face images, these (image, label) pairs are used to train a deep CNN to regress lighting parameters. One problem is that these labels are noisy; directly training a deep CNN on them cannot give the best performance.
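As a concrete sketch of the subspace construction in Section 3.1: the hemisphere normals, the unnormalized basis functions, and the 99% energy threshold below are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def sh_basis(normals):
    """First 9 real spherical harmonics evaluated at unit normals, (n, 3) -> (n, 9).

    Unnormalized basis following the ordering of Basri and Jacobs (2003),
    up to constant factors.
    """
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    return np.stack([
        np.ones_like(x),            # l = 0
        y, z, x,                    # l = 1
        x * y, y * z,               # l = 2
        3 * z ** 2 - 1,
        x * z, x ** 2 - y ** 2,
    ], axis=1)

# Stand-in for synthetic-face normals: points on the camera-facing hemisphere.
rng = np.random.default_rng(0)
n = rng.normal(size=(5000, 3))
n[:, 2] = np.abs(n[:, 2])                      # z >= 0: near-frontal surface
n /= np.linalg.norm(n, axis=1, keepdims=True)

Y = sh_basis(n)                                # n x 9 basis matrix
U, S, Vt = np.linalg.svd(Y, full_matrices=False)
V = Vt.T                                       # 9 x 9, spans the SH space

# Keep the columns capturing (here) 99% of the squared singular-value energy.
k = int(np.searchsorted(np.cumsum(S ** 2) / np.sum(S ** 2), 0.99)) + 1
V_s = V[:, :k]

L = rng.normal(size=9)                         # a 9-dim SH coefficient vector
L_low = V_s.T @ L                              # its low-dimensional projection
```

In the paper the same projection is applied both to the SH labels of synthetic images and to the noisy SH labels estimated by SIRFS.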
We propose to use synthetic face images, whose ground truth lighting parameters are known, to help train the deep CNN. The proposed deep CNN has two subnetworks: a feature network that extracts lighting related features, and a lighting network that takes those features as input and predicts the SH of the face image. The feature network and lighting network for synthetic data are trained using synthetic images whose ground truth labels are known, so they are accurate. The feature network and lighting network for real data, on the other hand, would both be affected by the noisy labels if trained directly on the noisy ground truth of real data. To alleviate the effect of noisy labels, we reuse the synthetic lighting network as the lighting network for real data, since it is not affected by noise. However, since this lighting network is trained on synthetic data, it only works if its input comes from the space of lighting related features of synthetic data. As a result, the feature network for real data needs to be trained so that the lighting related features of real images are mapped into the same space as those of synthetic images.
Given a set P of synthetic image pairs (s_1, s_2) and their ground truth labels, we train the synthetic feature net f_s and lighting net l_s through the following loss function:

    min_{f_s, l_s} sum_{(s_1, s_2) in P} lambda_1 ( ||l_s(f_s(s_1)) - L*||^2 + ||l_s(f_s(s_2)) - L*||^2 ) + lambda_2 ||f_s(s_1) - f_s(s_2)||^2    (1)

where s_1 and s_2 are a pair of synthetic images with the same SH lighting, different IDs, and different small random deviations from frontal pose, L* represents their shared ground truth label, P is the set containing all such pairs, and lambda_1 and lambda_2 are weight coefficients. Besides the regression loss, we also add an MSE feature loss that enforces the lighting related features of face images with the same SH to be the same. This encourages the lighting related features to contain no information about face ID and pose.
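A minimal sketch of this per-pair loss; the weight values, array shapes, and helper name `synthetic_pair_loss` are placeholders, not the paper's settings.

```python
import numpy as np

def synthetic_pair_loss(feat1, feat2, pred1, pred2, label, lam1=1.0, lam2=1.0):
    """Regression loss to the shared label plus MSE feature-matching loss
    for one pair of synthetic images rendered under the same SH lighting."""
    regression = np.sum((pred1 - label) ** 2) + np.sum((pred2 - label) ** 2)
    feature_match = np.sum((feat1 - feat2) ** 2)
    return lam1 * regression + lam2 * feature_match

# Toy example: identical features and perfect predictions give zero loss.
label = np.array([0.5, -0.2, 0.1])
zero = synthetic_pair_loss(np.ones(4), np.ones(4), label, label, label)
```

In the full model the same loss is summed over all pairs in the training set and minimized over the weights of both subnetworks.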
With the synthetic feature net f_s and lighting net l_s trained, we need to train the feature net f_r for real face images so that the lighting related features of real data, f_r(x), lie in the same space as those of synthetic data, f_s(s). Our idea is inspired by the recently proposed Generative Adversarial Networks (GAN) Goodfellow et al. (2014), which have proved very effective at synthesizing realistic images. In our setting, a discriminator D is trained to distinguish f_s(s) from f_r(x), while f_r is trained so that its outputs make D fail. By playing this minimax game, the distribution of f_r(x) is pulled close to that of f_s(s). Wasserstein GAN (WGAN) Arjovsky et al. (2017) is used as our training strategy since it alleviates the “mode dropping” problem and generates more realistic samples for image synthesis. However, making the distributions of f_r(x) and f_s(s) similar is not enough for our regression problem, since the point-wise mapping can still be arbitrary to some extent. As shown in Figure 1 (b), different mappings can make two sets of points have similar distributions, but they cannot all be correct, since every point has its unique label. Based on the assumption that the noisy ground truth of real data is reasonably close to the true labels, we use the noisy labels as “anchor points” during training. The final loss function for our problem is then defined as follows:

    min_{f_r} max_{D}  E_{z ~ P_s}[D(z)] - E_{x ~ P_r}[D(f_r(x))] + lambda sum_i ||l_s(f_r(x_i)) - L_i||^2    (2)

where P_s and P_r are the distributions of lighting related features for synthetic and real images respectively, and L_i is the noisy ground truth label of real image x_i.
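The alternating minimax training of Equation (2) can be sketched in a deliberately toy 1-D setting: the linear feature net, linear critic, identity lighting net, and all constants below are illustrative assumptions, with hand-derived gradients in place of a deep-learning framework.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins: synthetic lighting features are standard normal; real inputs
# are shifted, so their features start out misaligned with the synthetic ones.
syn_feat = rng.normal(0.0, 1.0, size=2000)
real_x = rng.normal(3.0, 1.0, size=2000)
# Noisy "anchor" labels for the real data (the true label would be real_x - 3).
noisy_label = real_x - 3.0 + rng.normal(0.0, 0.3, size=2000)

def lighting_net(z):
    # Stand-in for the frozen lighting net trained on synthetic data.
    return z

# Real-data feature net f_r(x) = a*x + b and linear critic D(z) = w*z.
a, b, w = 1.0, 0.0, 0.0
lr_d, lr_f, clip, lam = 0.05, 0.05, 0.5, 0.1

for _ in range(2000):
    # Critic step (WGAN): ascend E[D(syn)] - E[D(f_r(real))], then clip w.
    fr = a * real_x + b
    w = np.clip(w + lr_d * (syn_feat.mean() - fr.mean()), -clip, clip)
    # Feature-net step: fool the critic while staying near the noisy anchors.
    fr = a * real_x + b
    resid = lighting_net(fr) - noisy_label
    a -= lr_f * (-w * real_x.mean() + 2 * lam * np.mean(resid * real_x))
    b -= lr_f * (-w + 2 * lam * resid.mean())
```

After training, the real features a*real_x + b end up roughly centered like the synthetic ones while the regression term keeps the mapping anchored; the full model replaces these scalars with deep networks trained by RMSProp and Adadelta.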
Following Goodfellow et al. (2014); Arjovsky et al. (2017), the discriminator and feature net are trained alternately. RMSProp Hinton et al. (2012) is applied while training the discriminator, and Adadelta Zeiler (2012) is used to train the feature and lighting nets, as discussed in Arjovsky et al. (2017). The details of how to train the whole model are given in Algorithm 1.
4 Experiments
4.1 Data Collection
Real Face Images: The proposed LDAN requires a large number of both synthetic and real face images for training. To collect the real face images, we download images with faces from the Internet. The SIRFS method proposed by Barron and Malik Barron and Malik (2015) is then applied to these face images to get the noisy ground truth SH for lighting. Since SIRFS was proposed to estimate lighting for general objects, the prior it uses is not face-specific. To better constrain the face shape, we apply Discriminative Response Map Fitting (DRMF) Asthana et al. (2013) to estimate the facial landmarks and pose. Then, a 3DMM Blanz and Vetter (1999) is fitted to get an estimate of the face depth map, which is used as a prior to constrain the face shape estimation of SIRFS. We collected faces with noisy ground truth SH for training.
Synthetic Face Images: We apply the 3D face model provided by Paysan et al. (2009) to generate pairs of faces. Each of these face pairs is under the same lighting but with different identities and small random variations with respect to frontal pose.
MultiPie: The MultiPie dataset Gross et al. (2010) contains a large number of face images of different IDs taken under different poses and illumination conditions. From this data set, we choose frontal-pose face images of multiple IDs under multiple lighting conditions. Though the ground truth lighting parameters are not provided for these face images, the lighting condition group under which each face image was taken is given. This data is used only for evaluation in our experiments.
4.2 Implementation Details
We use the same network structure for both feature nets. We apply the ResNet structure He et al. (2016) to define a feature net; it takes an RGB face image as input and outputs a lighting related feature vector. We use several fully connected layers to define our lighting net and discriminator. The lighting net outputs the lighting parameters, and the discriminator outputs a score for its input being a lighting related feature of real data. Please refer to the supplementary material for details on the network structures.
While training the proposed model, we first train the discriminator for several epochs and then train the feature net for several epochs. We notice that in this way, the discriminator is fully trained to distinguish synthetic features from real features, and the feature net is trained so that the discriminator will fail. We alternate these two steps for a number of iterations. We choose the weight coefficients in Equation (2) so that the different losses are roughly balanced. Our algorithm is implemented using Keras Chollet et al. (2015) with TensorFlow Abadi et al. (2015) as the backend.
4.3 Evaluation Metric
Since ground truth lighting parameters for real face images are not available, it is difficult to evaluate the accuracy of the regressed lighting quantitatively. We propose an “indirect” quantitative evaluation metric based on classification, and test our method on the MultiPie data set, which contains face images taken under multiple lighting conditions. More specifically, after regressing the SH for each test face image, a subset of them is used to compute the mean SH for each lighting condition. Each of the remaining face images is then assigned to one of the lighting conditions based on the Euclidean distance between its estimated SH and the mean SH. We carry out cross validation for this classification measurement to make use of all the data.
4.4 Experimental Results
To show the effectiveness of the proposed method, we compare our results with the SIRFS Barron and Malik (2015) based method in this section. In SIRFS, the shading of a face is formulated in logarithm space, i.e., log S_i = Y_i L, where S_i is the shading at the i-th pixel, Y_i is the i-th row of the SH basis matrix Y, and L represents the SH. The L estimated in this way is not the correct SH. To estimate the correct SH lighting, we assume that the normal of each pixel estimated by SIRFS is in Euclidean space instead of logarithm space. This assumption is reasonable since we adapted the SIRFS method by estimating a face depth map using a 3DMM in Euclidean space and constraining the estimated face shape to be consistent with it. Supposing L* is the correct SH, the shading satisfies S_i = Y_i L*. Then L* can be obtained by solving the following equation:

    Y L* = exp(Y L)    (3)

This is an overdetermined linear system, as the number of pixels is larger than the dimension of the SH.
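Equation (3) is a least-squares problem; a small sketch with a random stand-in basis follows (a real Y would be built from the estimated normals, so the matrix and coefficient values here are assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)
Y = rng.normal(size=(4096, 9))          # stand-in n x 9 SH basis matrix
L_log = rng.normal(scale=0.1, size=9)   # SH estimated by SIRFS in log space

# Corrected SH: solve the overdetermined system Y @ L_star = exp(Y @ L_log)
# in the least-squares sense.
shading = np.exp(Y @ L_log)
L_star, *_ = np.linalg.lstsq(Y, shading, rcond=None)
```

Because the system has many more equations (pixels) than unknowns (9 SH coefficients), `lstsq` returns the unique minimum-residual solution.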
Table 1 compares the proposed method with the SIRFS based method using the classification measurement on the MultiPie data set. We denote the original output of the SIRFS method as SIRFS log, and use SIRFS SH to denote the corrected SH obtained with Equation (3). We test these two methods on the original resolution of the MultiPie data after cropping the faces. Note that at the low input resolution used by LDAN, the 3DMM we use cannot predict accurate face shapes, resulting in inaccurate estimation by SIRFS. REAL in Table 1 represents our baseline method, which uses SIRFS SH as ground truth to train a deep CNN without synthetic data. REAL and LDAN are trained multiple times and the mean accuracies are shown in Table 1. We notice that SIRFS SH, which solves Equation (3) based on SIRFS log, performs worse than SIRFS log. According to Equation (3), the accuracy of SIRFS SH depends not only on the accuracy of SIRFS log, but also on the accuracy of the estimated normals; noisy normal estimates make the estimated SIRFS SH noisier still. The performance of REAL is better than SIRFS SH, even though it is trained directly using the output of SIRFS SH as the ground truth label. This shows that by observing a large amount of data, a deep CNN can itself be robust to noise to some extent, an advantage of learning based methods over optimization based algorithms. LDAN outperforms REAL on top-1 accuracy, showing the effectiveness of the proposed method.
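The nearest-mean classification measurement of Section 4.3 can be sketched as follows; the clustered SH vectors are synthetic stand-ins and the helper name `nearest_mean_accuracy` is ours, not the paper's.

```python
import numpy as np

def nearest_mean_accuracy(sh_ref, cond_ref, sh_test, cond_test):
    """Top-1 accuracy of assigning each test SH vector to the lighting
    condition whose mean reference SH is closest in Euclidean distance."""
    conds = np.unique(cond_ref)
    means = np.stack([sh_ref[cond_ref == c].mean(axis=0) for c in conds])
    dists = np.linalg.norm(sh_test[:, None, :] - means[None, :, :], axis=2)
    return float(np.mean(conds[np.argmin(dists, axis=1)] == cond_test))

# Two well-separated toy lighting conditions; odd/even split into
# reference (mean-computing) and held-out test halves.
rng = np.random.default_rng(0)
sh = np.concatenate([rng.normal(0, 0.1, (50, 4)), rng.normal(5, 0.1, (50, 4))])
cond = np.array([0] * 50 + [1] * 50)
acc = nearest_mean_accuracy(sh[::2], cond[::2], sh[1::2], cond[1::2])
```

With cross validation, each image serves in turn as a held-out sample, as described above.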
We further propose two other models to compare with LDAN, as shown in Figure 2. Different from LDAN, Model B and Model C learn the feature nets for synthetic and real data simultaneously and map their lighting related features to the same space (please refer to the supplementary material for the details of how to train Model B and Model C). These two models are inspired by Ganin and Lempitsky (2015) and Saito et al. (2016). For Model B, synthetic and real data share the same feature net. Since synthetic data and real data are quite different from each other, it is difficult for a single feature net to make their lighting features have the same distribution, so we do not expect good performance. Model C defines different feature nets for synthetic and real data. The difference between Model C and LDAN is that Model C tries to map the lighting related features of synthetic and real data to a common space, which might differ from the space learned with synthetic data alone, whereas LDAN directly maps the lighting related features of real data to the space of the synthetic data. Intuitively, compared with LDAN, Model C is more easily affected by the noisy labels of real data, since the training of the feature net for synthetic data is influenced by the real data.
Model B and C are also trained multiple times and their mean accuracies are shown in Table 1 for comparison. We notice that Model B performs even worse than REAL, which shows that a single feature net for both synthetic and real data is not enough. LDAN and Model C outperform all other methods in Table 1. Moreover, LDAN performs slightly better than Model C, showing that it is more robust to the noise in the ground truth of real data.
To investigate the effectiveness of the GAN loss and the regression loss, we carry out ablation studies for LDAN. We train the feature net for real data without the GAN loss and without the regression loss, multiple times each, and compare the results with LDAN in Table 2. Without the GAN loss, the performance of LDAN is similar to REAL in Table 1, which means that the synthetic data could not help train a better deep CNN for regressing lighting in this case. Without the regression loss, on the other hand, the performance of LDAN drops dramatically. This is because the mapping from the distribution of lighting related features of real data to that of synthetic data can be arbitrary, as shown in Figure 1 (b). This is problematic for a regression task where each data point has its unique label. Having the noisy ground truth as “anchor points”, as we do in LDAN, alleviates this problem and gives much better results.
Figure 3 shows some face images synthesized using the SH parameters estimated by SIRFS SH, REAL and LDAN from MultiPie images. For Figure 3 (b), the lighting of the environment is distributed uniformly according to the MultiPie face image; however, SIRFS SH predicts that the lighting comes from the lower right. We notice that face images synthesized using the lighting estimated by REAL and LDAN are visually similar, but LDAN predicts more consistent lighting for face images taken under the same lighting condition, as shown by our classification results.
Figure 4: (a) Euclidean distance; (b) the measure of Johnson and Farid (2007).
Peng et al. Peng et al. (2017) proposed to estimate SH from face images for the application of image forgery detection and achieved state-of-the-art results. In their MultiPie experiments, they use two face images, one frontal and one profile, to estimate accurate face normals in order to predict SH, whereas our LDAN uses only one input face image. Their results are also achieved by testing on the original high resolution face images of the MultiPie data set; a 3DMM cannot predict accurate face normals for low resolution face images, leading to inaccurate lighting estimations for Peng et al. (2017) in that regime (personal communication with B. Peng, coauthor of Peng et al. (2017)). We compare the proposed method with their method (denoted as ‘3D lighting’) in Figure 4. Following the same setup as the experiment on MultiPie, pairs of face images are generated using the data provided by Peng et al. (2017). Half of these face image pairs are taken under the same lighting condition and the other half under different lighting conditions. We carry out a verification study on these image pairs. Euclidean distance and the measurement provided by Johnson and Farid (2007) are used for testing, and the ROC curves are shown in Figure 4 (a) and (b) respectively. We notice that using Euclidean distance, LDAN outperforms ‘3D lighting’; however, under the Johnson and Farid measure, ‘3D lighting’ performs much better. This measure (see the supplementary material for details) ignores the DC component of SH for image forgery purposes; though ‘3D lighting’ predicts the first and second order components of SH accurately, it cannot predict an accurate DC component. In fact, some of the DC components of SH predicted by ‘3D lighting’ are negative, which is impossible in reality. While the DC component may not be important for image forgery detection, it is crucial for a useful lighting representation, especially for applications such as image editing.
We also notice that under both measurements, LDAN outperforms SIRFS SH, which further confirms the effectiveness of the proposed method.
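The verification experiment above amounts to sweeping a threshold over pair distances; a sketch follows, with hypothetical distances rather than the actual MultiPie pairs.

```python
import numpy as np

def verification_roc(dist, same):
    """ROC curve for lighting verification: pairs with distance below a
    threshold are declared 'same lighting'. Returns (fpr, tpr) arrays,
    one point per threshold placed after each sorted pair."""
    order = np.argsort(dist)
    same = np.asarray(same, dtype=bool)[order]
    tpr = np.cumsum(same) / max(int(same.sum()), 1)
    fpr = np.cumsum(~same) / max(int((~same).sum()), 1)
    return fpr, tpr

# Perfectly separated toy distances: all same-lighting pairs are closer.
fpr, tpr = verification_roc([0.1, 0.2, 0.9, 1.0], [True, True, False, False])
```

The same sweep can be run with any pairwise score, e.g. the Johnson and Farid measure in place of Euclidean distance.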
4.5 Running Time
We run experiments on a workstation with Intel Xeon CPUs, and one NVIDIA GeForce TITAN X when running on GPU. For an RGB face image, SIRFS Barron and Malik (2015) takes on the order of seconds to predict the lighting parameters. The proposed deep CNN can process many face images per second on CPU and far more on GPU, so it is potentially thousands of times faster.
5 Conclusion
In this paper, we propose a lighting regression network to predict Spherical Harmonics of environment lighting from face images. Lacking the ground truth labels for real face images, we applied an existing method to get noisy ground truth. To alleviate the effect of noise, we propose to apply the idea of adversarial networks and use synthetic face images with known ground truth to help train a deep CNN for lighting regression. Compared with existing methods, the proposed method is more efficient and the experimental results show it improves the performance significantly.
References

Abadi et al. [2015]
Martín Abadi, Ashish Agarwal, Paul Barham, et al.
TensorFlow: Largescale machine learning on heterogeneous systems, 2015.
URL http://tensorflow.org/. Software available from tensorflow.org.  Aldrian and Smith [2013] O. Aldrian and W. A. P. Smith. Inverse rendering of faces with a 3d morphable model. IEEE Transactions on PAMI, 35(5), 2013.
 Arjovsky et al. [2017] M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein GAN. ArXiv e-prints, abs/1701.07875, 2017.
 Asthana et al. [2013] Akshay Asthana, Stefanos Zafeiriou, Shiyang Cheng, and Maja Pantic. Robust discriminative response map fitting with constrained local models. In CVPR, 2013.
 Azadi et al. [2016] Samaneh Azadi, Jiashi Feng, Stefanie Jegelka, and Trevor Darrell. Auxiliary image regularization for deep cnns with noisy labels. In ICLR, 2016.
 Barron and Malik [2015] J. T. Barron and J. Malik. Shape, illumination, and reflectance from shading. IEEE Transactions on PAMI, 37(8), 2015.
 Basri and Jacobs [2003] R. Basri and D. W. Jacobs. Lambertian reflectance and linear subspaces. IEEE Transactions on PAMI, 25(2):218–233, 2003.
 Basri et al. [2007] Ronen Basri, David Jacobs, and Ira Kemelmacher. Photometric stereo with general, unknown lighting. IJCV, 72(3):239–257, 2007.
 Blanz and Vetter [1999] Volker Blanz and Thomas Vetter. A morphable model for the synthesis of 3d faces. In SIGGRAPH, pages 187–194, 1999.
 Booth et al. [2017] James Booth, Epameinondas Antonakos, Stylianos Ploumpis, George Trigeorgis, Yannis Panagakis, and Stefanos Zafeiriou. 3d face morphable models "in-the-wild". ArXiv e-prints, abs/1701.05360, 2017.
 Chollet et al. [2015] François Chollet et al. Keras. https://github.com/fchollet/keras, 2015.
 Frénay and Kaban [2014] Benoît Frénay and Ata Kaban. A comprehensive introduction to label noise. In ESANN, 2014.

Ganin and Lempitsky [2015] Yaroslav Ganin and Victor Lempitsky. Unsupervised domain adaptation by backpropagation. In ICML, pages 1180–1189, 2015.
 Goodfellow et al. [2014] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, pages 2672–2680, 2014.
 Gross et al. [2010] Ralph Gross, Iain Matthews, Jeffrey Cohn, Takeo Kanade, and Simon Baker. Multipie. Image Vision Computing, 28(5), 2010.
 He et al. [2016] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, pages 770–778, 2016.
 Heredia Conde et al. [2015] Miguel Heredia Conde, Davoud Shahlaei, Volker Blanz, and Otmar Loffeld. Efficient and robust inverse lighting of a single face image using compressive sensing. In ICCV Workshops, 2015.
 Hinton et al. [2012] Geoffrey Hinton, Nitish Srivastava, and Kevin Swersky. Lecture 6a: Overview of mini-batch gradient descent. http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf, 2012.
 Jindal et al. [2016] I. Jindal, M. Nokleby, and X. Chen. Learning deep networks from noisy labels with dropout regularization. In ICDM, pages 967–972, 2016.
 Johnson and Farid [2007] M. K. Johnson and H. Farid. Exposing digital forgeries in complex lighting environments. IEEE Transactions on IFS, 2(3), 2007.
 Kulkarni et al. [2015] Tejas D Kulkarni, William F. Whitney, Pushmeet Kohli, and Josh Tenenbaum. Deep convolutional inverse graphics network. In NIPS, pages 2539–2547, 2015.
 Mnih and Hinton [2012] Volodymyr Mnih and Geoffrey E. Hinton. Learning to label aerial images from noisy data. In ICML, 2012.
 Patrini et al. [2017] Giorgio Patrini, Alessandro Rozza, Aditya Menon, Richard Nock, and Lizhen Qu. Making deep neural networks robust to label noise: a loss correction approach. In CVPR, 2017.

 Paysan et al. [2009] P. Paysan, R. Knothe, B. Amberg, S. Romdhani, and T. Vetter. A 3d face model for pose and illumination invariant face recognition. In AVSS, 2009.
 Peng et al. [2017] B. Peng, W. Wang, J. Dong, and T. Tan. Optimized 3d lighting environment estimation for image forgery detection. IEEE Transactions on IFS, 12(2), 2017.
 Ramamoorthi [2002] R. Ramamoorthi. Analytic pca construction for theoretical analysis of lighting variability in images of a lambertian object. IEEE Transactions on PAMI, 24(10):1322–1333, 2002.
 Ramamoorthi and Hanrahan [2001] R. Ramamoorthi and P. Hanrahan. On the relationship between radiance and irradiance: Determining the illumination from images of a convex lambertian object. JOSA, 2001.
 Saito et al. [2016] Kuniaki Saito, Yusuke Mukuta, Yoshitaka Ushiku, and Tatsuya Harada. DeMIAN: deep modality invariant adversarial network. ArXiv e-prints, abs/1612.07976, 2016.
 Sankaranarayanan et al. [2017] Swami Sankaranarayanan, Yogesh Balaji, Carlos D. Castillo, and Rama Chellappa. Generate to adapt: Aligning domains using generative adversarial networks. ArXiv e-prints, abs/1704.01705, 2017.
 Shahlaei and Blanz [2015] D. Shahlaei and V. Blanz. Realistic inverse lighting from a single 2d image of a face, taken under unknown and complex lighting. In FG, 2015.
 Sukhbaatar et al. [2015] Sainbayar Sukhbaatar, Joan Bruna, Manohar Paluri, Lubomir Bourdev, and Rob Fergus. Learning from noisy labels with deep neural networks. In ICLR, 2015.
 Tzeng et al. [2017] Eric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Darrell. Adversarial discriminative domain adaptation. ArXiv e-prints, abs/1702.05464, 2017.
 Wang et al. [2009] Y. Wang, L. Zhang, Z. Liu, G. Hua, Z. Wen, Z. Zhang, and D. Samaras. Face relighting from a single image under arbitrary unknown lighting conditions. IEEE Transactions on PAMI, 31(11):1968 –1984, 2009.
 Xiao et al. [2015] Tong Xiao, Tian Xia, Yi Yang, Chang Huang, and Xiaogang Wang. Learning from massive noisy labeled data for image classification. In CVPR, 2015.
 Zeiler [2012] Matthew D. Zeiler. ADADELTA: an adaptive learning rate method. ArXiv e-prints, abs/1212.5701, 2012.
 Zhang and Samaras [2006] Lei Zhang and Dimitris Samaras. Face recognition from a single training image under arbitrary unknown lighting using spherical harmonics. IEEE Transactions on PAMI, 28(3):351–363, 2006.
6 Supplementary Material
6.1 Spherical Harmonics
The 9-dimensional spherical harmonics basis, written in terms of the Cartesian coordinates $(x, y, z)$ of the unit surface normal, is:

$$Y_{00} = \frac{1}{\sqrt{4\pi}}, \qquad Y_{1-1} = \sqrt{\tfrac{3}{4\pi}}\, y, \quad Y_{10} = \sqrt{\tfrac{3}{4\pi}}\, z, \quad Y_{11} = \sqrt{\tfrac{3}{4\pi}}\, x,$$
$$Y_{2-2} = 3\sqrt{\tfrac{5}{12\pi}}\, xy, \quad Y_{2-1} = 3\sqrt{\tfrac{5}{12\pi}}\, yz, \quad Y_{20} = \tfrac{1}{2}\sqrt{\tfrac{5}{4\pi}}\,(3z^2 - 1), \quad Y_{21} = 3\sqrt{\tfrac{5}{12\pi}}\, xz, \quad Y_{22} = \tfrac{3}{2}\sqrt{\tfrac{5}{12\pi}}\,(x^2 - y^2). \quad (4)$$
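Under this convention, the basis can be evaluated numerically. The sketch below (function name and layout are ours; the constants follow the standard real SH convention of Ramamoorthi and Hanrahan (2001)) returns the nine basis values for one unit normal:

```python
import numpy as np

def sh_basis(n):
    """Evaluate the 9 real spherical harmonics at a unit normal n = (x, y, z)."""
    x, y, z = n
    c0 = 1.0 / np.sqrt(4.0 * np.pi)       # order 0
    c1 = np.sqrt(3.0 / (4.0 * np.pi))     # order 1
    c2 = 3.0 * np.sqrt(5.0 / (12.0 * np.pi))  # order 2, off-diagonal terms
    return np.array([
        c0,                                            # Y_00
        c1 * y, c1 * z, c1 * x,                        # Y_1-1, Y_10, Y_11
        c2 * x * y, c2 * y * z,                        # Y_2-2, Y_2-1
        0.5 * np.sqrt(5.0 / (4.0 * np.pi)) * (3.0 * z**2 - 1.0),  # Y_20
        c2 * x * z,                                    # Y_21
        0.5 * c2 * (x**2 - y**2),                      # Y_22
    ])
```

Stacking these rows over all pixel normals yields the per-pixel SH basis matrix used for rendering later in the supplementary.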
6.2 Details of Model B and Model C
We show the details of Model B and Model C from our paper. Figure 5 (a) and (b) illustrate the structures of Model B and Model C, respectively. The objective function for training Model B and Model C is the same as that for training LDAN, shown in Equation (5).
(5)  
The difference between these two models and LDAN is that both of them pull the distributions of lighting-related features of real and synthetic data towards each other, whereas LDAN pushes the distribution of lighting-related features of real data towards that of synthetic data. This difference is illustrated in Figure 6. Model B is inspired by Ganin and Lempitsky (2015): real and synthetic data share the same feature net, so for Model B the two feature nets are identical. Model C, on the other hand, is inspired by Saito et al. (2016) and defines two different feature nets for real and synthetic data.
Similar to LDAN, the lighting net in Model B and Model C is trained using only synthetic data, so that it is not affected by the noise in the labels of the real data. Algorithm 2 shows the details of how to train Model B and Model C.
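The training alternates between updating the discriminator and updating the feature/lighting nets. A minimal structural sketch of one such step for Model B, with hypothetical duck-typed stand-ins for the networks and optimizers (not the paper's actual implementation), is:

```python
# Hypothetical sketch of one Model B training step. F is the shared feature
# net, L the lighting net (trained on synthetic labels only), and D the
# discriminator separating synthetic from real features. All components are
# stand-ins supplied by the caller.

def train_step(F, L, D, opt_D, opt_FL, syn_x, syn_label, real_x):
    # 1) Discriminator update: score synthetic features as 1, real as 0.
    d_loss = D.loss(F(syn_x), 1) + D.loss(F(real_x), 0)
    opt_D.step(d_loss)
    # 2) Feature/lighting update: regress SH coefficients on synthetic data
    #    (so the lighting net never sees noisy real labels) while pushing
    #    real features towards the synthetic ones (fool D with target 1).
    g_loss = L.regression_loss(F(syn_x), syn_label) + D.loss(F(real_x), 1)
    opt_FL.step(g_loss)
    return d_loss, g_loss
```

Model C would differ only in using two separate feature nets, one for the synthetic and one for the real branch.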
6.3 Structure of Networks
We show the structure of our networks in this section. As discussed in the paper, we apply the structure of ResNet He et al. (2016) to define our feature net. Figure 7 (a) shows the structure of the feature net. A block like "3×3 conv, 16" means a convolutional layer with 16 filters, where each filter has size 3×3×c and c is the number of input channels. This convolutional layer is followed by a batch normalization layer and a ReLU layer. A block like "3×3 conv, 32, /2" has a similar meaning; the difference is that the stride of the convolution is 2, so it downsamples the data by a factor of 2. The output of the feature net is a low-dimensional lighting-related feature. Figure 7 (b) shows the structure of the lighting net. "FC ReLU 128" means a fully connected layer with 128 outputs followed by a ReLU layer. "Dropout" means a dropout layer. "FC, 18" means a fully connected layer with 18 outputs.
Figure 7 (c) shows the structure of the discriminator. "FC tanh, 1" means a fully connected layer with a single output followed by a tanh layer. The meaning of the remaining blocks is the same as those in the lighting net.
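As a sanity check on the layer sizes listed above, a minimal NumPy forward pass through the lighting net (FC+ReLU with 128 units, dropout disabled at inference time, then FC with 18 outputs) might look like the sketch below; the weights are random placeholders and the input feature dimension is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def lighting_net(feature, d_hidden=128, d_out=18):
    """Random-weight forward pass: FC+ReLU (128) -> (dropout off) -> FC (18)."""
    d_in = feature.shape[-1]
    # Placeholder weights; a trained model would load these instead.
    W1 = rng.standard_normal((d_in, d_hidden)) * 0.01
    W2 = rng.standard_normal((d_hidden, d_out)) * 0.01
    h = np.maximum(feature @ W1, 0.0)   # "FC ReLU 128"
    return h @ W2                       # "FC, 18": the regressed SH output
```

For a batch of features the output shape is (batch, 18), matching the final "FC, 18" block.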
6.4 Definition of Q-measure
Let $l_1$ and $l_2$ be two Spherical Harmonics (SH) vectors. Using these two SH to render two hemispheres and arranging the pixels into two vectors, we get:

$$b_1 = Y l_1, \qquad b_2 = Y l_2, \quad (6)$$

where $Y$ is an $N \times 9$ matrix and $N$ is the number of pixels. Each column of $Y$ corresponds to one SH base image. Let $\hat{b}_1$ and $\hat{b}_2$ be the vectors $b_1$ and $b_2$ after subtracting the mean of their elements:
$$\hat{b}_i = b_i - \mathrm{mean}(b_i)\,\mathbf{1}, \qquad i \in \{1, 2\}. \quad (7)$$
The correlation of $\hat{b}_1$ and $\hat{b}_2$ can be computed as:

$$\mathrm{corr}(\hat{b}_1, \hat{b}_2) = \frac{\hat{b}_1^\top \hat{b}_2}{\|\hat{b}_1\|\,\|\hat{b}_2\|}, \quad (8)$$

which can be rewritten in terms of the SH coefficients, using the mean-centered basis matrix $\bar{Y} = Y - \frac{1}{N}\mathbf{1}\mathbf{1}^\top Y$, as:

$$\mathrm{corr}(\hat{b}_1, \hat{b}_2) = \frac{l_1^\top \bar{Y}^\top \bar{Y}\, l_2}{\sqrt{l_1^\top \bar{Y}^\top \bar{Y}\, l_1}\,\sqrt{l_2^\top \bar{Y}^\top \bar{Y}\, l_2}}. \quad (9)$$
Please refer to Johnson and Farid (2007) for the details of how to get Equation (9) from Equation (8). The measure proposed in Johnson and Farid (2007) is defined as follows:
$$Q\text{-measure}(l_1, l_2) = \frac{l_1^\top Q\, l_2}{\sqrt{l_1^\top Q\, l_1}\,\sqrt{l_2^\top Q\, l_2}}, \quad (10)$$

where

$$Q = \bar{Y}^\top \bar{Y}, \qquad \bar{Y} = Y - \frac{1}{N}\mathbf{1}\mathbf{1}^\top Y. \quad (11)$$
Note that the definition of $Q$ is slightly different from that in Johnson and Farid (2007), due to the different arrangement of the elements in the SH.
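For reference, the correlation in Equations (6)-(8) can be approximated numerically by Monte Carlo sampling of the visible hemisphere. The sketch below assumes the SH ordering of Equation (4); the exact arrangement in the paper may differ, and the function name is ours:

```python
import numpy as np

def q_measure(l1, l2, n_samples=20000, seed=0):
    """Correlation between hemispheres rendered from two 9-d SH vectors.

    Numerically mirrors Eq. (6)-(8): sample unit normals with z >= 0,
    build the N x 9 SH basis matrix Y, render b_i = Y l_i, subtract the
    means, and correlate.
    """
    rng = np.random.default_rng(seed)
    v = rng.standard_normal((n_samples, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    v[:, 2] = np.abs(v[:, 2])                 # keep the visible hemisphere
    x, y, z = v.T
    c1 = np.sqrt(3.0 / (4.0 * np.pi))
    c2 = 3.0 * np.sqrt(5.0 / (12.0 * np.pi))
    Y = np.stack([np.full_like(x, 1.0 / np.sqrt(4.0 * np.pi)),
                  c1 * y, c1 * z, c1 * x,
                  c2 * x * y, c2 * y * z,
                  0.5 * np.sqrt(5.0 / (4.0 * np.pi)) * (3.0 * z**2 - 1.0),
                  c2 * x * z, 0.5 * c2 * (x**2 - y**2)], axis=1)
    b1, b2 = Y @ np.asarray(l1), Y @ np.asarray(l2)
    b1, b2 = b1 - b1.mean(), b2 - b2.mean()
    return float(b1 @ b2 / (np.linalg.norm(b1) * np.linalg.norm(b2)))
```

Two identical lighting vectors score 1, and less similar lightings score lower, which is what the Q-measure is used to quantify in the experiments.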
6.5 More Visual Results
We show more synthetic faces, rendered using the SH estimated by the proposed model, in Figure 8.