1 Introduction
Photometric stereo [27] seeks to estimate the shape of an object from images obtained from a static camera under varying lighting. While there has been remarkable progress in photometric stereo, the vast majority of techniques are devoted to scenes that exhibit simple reflectance properties. This creates a significant disconnect between theory and practice, since most real-life scenes involve materials with complex reflectance properties that interact with light in myriad ways.
In this paper, we present a method for recovering the surface normals and the reflectance of opaque objects with complex spatially-varying reflectance. The key challenge is that the reflectance, characterized in terms of its spatially-varying bidirectional reflectance distribution function (SVBRDF), and the shape, characterized in terms of surface normals, are coupled and need to be jointly estimated. Further, the SVBRDF is a very high-dimensional signal and, in the absence of additional assumptions, requires a large number of input images for robust estimation.
A common assumption for computational tractability is that the BRDF at each pixel is a weighted combination of a few reference BRDFs [14]. We then need to estimate only the reference BRDFs and their abundances at each pixel, which is a significant reduction in the dimensionality of the unknowns. Goldman et al. [8] use the isotropic Ward model, a parametric model, to characterize the reference BRDFs. Alldrin et al. [2] assume that the reference BRDFs are approximated by the so-called bivariate model, a non-parametric model that approximates the 4D BRDF as a 2D signal. In both cases, the problem of shape and SVBRDF estimation reduces to alternating minimization over the surface normals, the reference BRDFs, and the abundances of the reference BRDFs at each pixel. The drawback of these approaches is that the optimization is not just computationally expensive but also depends critically on finding a good initial solution, since the underlying problem is non-convex and riddled with local minima.
An alternate approach, called example-based photometric stereo [11, 20], introduces reference objects — objects with known shape — into the scene. These techniques rely on the concept of orientation consistency [11], which suggests that two surface elements with identical normal and BRDF will take on the same appearance when placed under the same illumination. If the reference object has the same BRDF as the target object, we can obtain the shape of the target simply by comparing the intensity profile observed at a pixel on the target to those observed on the reference object. When the target's BRDF is spatially-varying, it can be shown that two reference objects — one diffuse and one specular — are sufficient to recover the surface normals of the target by approximating the unknown BRDF at each pixel as a non-negative linear combination of the reference BRDFs [11]. While introducing reference objects is not always desirable, example-based photometric stereo produces precise shape estimates without requiring knowledge of the lighting.
The technique proposed in this paper relies on the core principle of example-based photometric stereo without actually introducing reference objects into the scene. Instead, given a dictionary whose atoms are BRDFs associated with a wide range of materials, we render virtual spheres, one for each atom in the dictionary, using knowledge of the scene illumination (typically a distant point light source). This provides a set of "virtual examples" that can be used to obtain a per-pixel estimate of the shape and reflectance of a scene with arbitrary spatially-varying BRDF (see Figure 2). The assumption that we make is that the unknown BRDF at each pixel lies in the non-negative span of the dictionary atoms. We show that the surface normals and the BRDFs can be estimated via a sequence of tractable linear inverse problems. This obviates the need for complex iterative optimization techniques as well as the careful initialization required to avoid convergence to local minima. The interplay of these ideas leads to a robust surface normal and SVBRDF estimation technique that provides state-of-the-art results on challenging real-life scenes (see Figure 1).
Contributions.
We make the following contributions.


[Model] We propose the use of a dictionary of BRDFs to regularize surface normal and SVBRDF estimation. The BRDF at each pixel of an object is assumed to lie in the non-negative span of the dictionary atoms.

[Normal estimation] We show that the surface normal at each pixel can be efficiently estimated using a coarse-to-fine search.

[SVBRDF estimation] Given the surface normals, we recover the BRDF at each pixel independently by solving a linear inverse problem that enforces sparsity in the occurrence of the reference BRDFs at the pixel.

[Validation] We showcase the accuracy of the shape and SVBRDF estimation technique on a wide range of simulated and real scenes and demonstrate that the proposed technique outperforms the state of the art.
2 Prior work
In this section, we review some of the key techniques for non-Lambertian shape estimation.
The diffuse + specular model.
It is well known that the collection of images of a convex Lambertian object typically lies close to a low-dimensional subspace [3, 19]. This naturally leads to techniques [28, 13, 29, 30] that robustly fit a low-dimensional subspace, capturing the Lambertian component while isolating non-Lambertian components, such as specularities, as sparse outliers. However, these techniques make restrictive assumptions on the range of BRDFs to which they apply and, more importantly, miss out on powerful cues to the shape of the object that are often present in specular highlights.
Parametric BRDFs.
Parametric models such as the Blinn-Phong [4], Ward [25], Oren-Nayar [16], and Cook-Torrance [6] models are based on macro-scale behavior derived from specific micro-facet models of the materials, and have been widely used in computer graphics. In the context of shape and SVBRDF estimation, Goldman et al. [8] utilize the isotropic Ward model [25] to reduce the dimensionality of the inverse problem. Oxholm and Nishino [17, 18] further extend this idea by introducing a probabilistic formulation to estimate the BRDFs from a single image under natural lighting conditions. However, parametric models are inherently limited in their ability to provide precise approximations to the true BRDFs and, further, lead to challenging and ill-conditioned optimization problems.
Isotropic BRDFs.
Isotropic materials exhibit a form of symmetry wherein the reflectance of the material is unchanged when the incident and outgoing directions are jointly rotated about the surface normal. This enables the representation of isotropic BRDFs as a function of three, as opposed to four, angles. In the context of photometric stereo, Alldrin and Kriegman [1] observe that, for isotropic materials, the surface normal at each point can be restricted to lie on a plane. When the isotropic BRDF has a single dominant lobe, Shi et al. [24] resolve the planar ambiguity and show that the surface normals can be uniquely determined. Higo et al. [12] utilize properties of isotropy, visibility, and monotonicity to restrict the solution space of the surface normal at each pixel; this enables a framework for shape estimation without the need for radiometric calibration. Finally, a bivariate approximation for isotropic materials is used in Romeiro et al. [21, 22] to estimate the BRDF of a known shape from a single image, without knowledge of the scene illumination.
Reference basis model.
As mentioned in the introduction, a common assumption for scenes with SVBRDF is that the per-pixel BRDF is generated from a few unknown reference BRDFs [14, 8, 2, 5]. Invariably, this leads to a multilinear optimization in high-dimensional variables (the reference BRDFs) that is highly dependent on initial conditions. In contrast, our proposed technique avoids high-dimensional optimization altogether by invoking knowledge of a dictionary of BRDFs.
3 Problem setup
Setup.
We make the following assumptions. First, the camera is orthographic and hence the viewing direction v is constant across all scene points. Second, the scene illumination is assumed to come from a distant point light source. The light sources are assumed to be of constant brightness (equivalently, their calibration is known) and of known direction; we denote by ℓ_j the lighting direction in the j-th image. For a light stage, this information is typically obtained by a one-off calibration. Third, the effects of long-range illumination, such as cast shadows and inter-reflections, are assumed to be negligible; this is satisfied for objects with a convex shape. Finally, the radiometric response of the camera is assumed to be known.
BRDF representation.
We follow the isotropic BRDF representation of [23], in which a three-angle coordinate system based on half-angles is used. Specifically, the BRDF is expressed as a function ρ(θ_h, θ_d, φ_d) with θ_h, θ_d ∈ [0, π/2] and φ_d ∈ [0, 2π). However, by Helmholtz reciprocity, the BRDF exhibits the symmetry ρ(θ_h, θ_d, φ_d) = ρ(θ_h, θ_d, φ_d + π), and hence it is sufficient to restrict φ_d to [0, π). Following [26], we use a uniform sampling of each angle. As a consequence, a BRDF is represented as a point in a finite, but very high-dimensional, space. When we deal with color images, we have a BRDF for each color channel and hence the dimensionality of the BRDF goes up proportionally.
Consider a scene element with BRDF ρ and surface normal n, illuminated by a point light source from a direction ℓ and viewed from a direction v. For this configuration of normal, incident light, and viewing direction, the BRDF value is simply a linear functional of the vector ρ:

ρ(n, ℓ, v) = b(n, ℓ, v)^T ρ,

where b(n, ℓ, v) is a vector that encodes the geometry of the configuration. In essence, the vector b samples the appropriate entry from ρ, interpolating when the required value is off the sampling grid.
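To make the role of b concrete, here is a minimal Python sketch of a nearest-neighbour sampling vector over a uniform half-angle grid. The grid dimensions are illustrative assumptions; a real MERL-style table uses a non-uniform θ_h mapping and interpolation rather than one-hot lookup.

```python
import numpy as np

# Illustrative grid: 1-degree sampling of (theta_h, theta_d, phi_d),
# with phi_d folded to [0, 180) degrees by reciprocity.
N_TH, N_TD, N_PD = 90, 90, 180
DIM = N_TH * N_TD * N_PD

def sampling_vector(theta_h, theta_d, phi_d):
    """Nearest-neighbour version of the geometry vector b: a one-hot
    vector such that b @ rho reads off the BRDF value for this
    configuration (angles in radians)."""
    i = min(int(np.degrees(theta_h)), N_TH - 1)
    j = min(int(np.degrees(theta_d)), N_TD - 1)
    k = min(int(np.degrees(phi_d) % 180.0), N_PD - 1)
    b = np.zeros(DIM)
    b[(i * N_TD + j) * N_PD + k] = 1.0
    return b

rho = np.random.rand(DIM)            # a BRDF as a flat vector
b = sampling_vector(0.3, 0.2, 1.0)
value = b @ rho                      # = rho at the sampled grid cell
```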
Problem formulation.
Our goal is to recover the surface normals and the SVBRDF in the context of photometric stereo, i.e., from multiple images of an object obtained from a static camera under varying lighting. The intensity value observed at pixel p under lighting ℓ_j can be written as

i_{p,j} = b(n_p, ℓ_j, v)^T ρ_p · max(⟨n_p, ℓ_j⟩, 0),   (1)

where ρ_p and n_p are the BRDF and the surface normal at pixel p, respectively, and the term max(⟨n_p, ℓ_j⟩, 0) accounts for shading.
Given Q intensity values at pixel p, one for each lighting direction ℓ_j, we can stack them to write

i_p = [i_{p,1}, …, i_{p,Q}]^T = B(n_p) ρ_p,   (2)

where the j-th row of B(n_p) is b(n_p, ℓ_j, v)^T max(⟨n_p, ℓ_j⟩, 0). Given the intensities i_p observed at a pixel and knowledge of the lighting directions {ℓ_j}, we seek to estimate the surface normal n_p and the BRDF ρ_p at the pixel. This problem is intractable without additional assumptions that constrain the BRDF to a lower-dimensional space.
Model for BRDF.
The key assumption that we make is that the BRDF at a pixel lies in the non-negative span of the atoms of a BRDF dictionary. Specifically, given a dictionary D = [ρ_1, …, ρ_M] of M reference BRDFs, we assume that the BRDF at pixel p can be written as

ρ_p = D c_p, with c_p ≥ 0,

where c_p ∈ R^M collects the abundances of the dictionary atoms.
In essence, we have now constrained the BRDF to lie in an M-dimensional cone.^1 This provides an immense reduction in the dimensionality of the unknowns at the expense of introducing a model-misfit error. Indeed, the success of this model relies on having a dictionary that is sufficiently rich to cover a wide range of interesting materials. Figure 3 shows the accuracy of various BRDF models on the MERL BRDF database [26].

^1 A more appropriate model is that the BRDF itself is non-negative, i.e., D c_p ≥ 0. However, this leads to significantly higher-dimensional constraints. We instead use a sufficient condition to achieve this, c_p ≥ 0.
We also assume that c_p is sparse, suggesting that the BRDF at each pixel is a linear combination of only a few dictionary atoms. The sparsity constraint avoids overfitting to the intensity measurements and provides regularization for under-determined problems.
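As a small illustration of the model (not the authors' code), a BRDF lying in the non-negative span of the dictionary can be recovered exactly by non-negative least squares; here D is a made-up random matrix standing in for a dictionary of measured BRDF atoms.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
d, M = 500, 20                   # toy BRDF dimension / dictionary size
D = rng.random((d, M))           # made-up stand-ins for BRDF atoms
c_true = np.zeros(M)
c_true[[3, 7]] = [0.6, 0.4]      # a sparse, non-negative mixture
rho = D @ c_true                 # a BRDF inside the cone spanned by D

c_hat, resid = nnls(D, rho)      # project back onto the cone
```

Since rho is in the cone by construction, the non-negative fit has (numerically) zero residual; a BRDF outside the cone would instead incur the model-misfit error discussed above.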
Solution outline.
An estimate of the surface normal and BRDF at pixel p can be obtained by solving

min_{n, c ≥ 0, ‖n‖_2 = 1}  ‖i_p − B(n) D c‖_2^2 + λ ‖c‖_1.   (3)

The ℓ_1 penalty serves to enforce sparse solutions, with λ determining the level of sparsity. The optimization problem in (3) is non-convex due to the unit-norm constraint on the surface normal as well as the bilinear term B(n) D c. Our solution methodology consists of two steps: (i) Surface normal estimation. We perform an efficient multi-scale search that provides a precise estimate of the surface normal n_p at pixel p (see Section 4); and (ii) BRDF estimation. We solve (3) only over c with the normal fixed to obtain the BRDF at p (see Section 5).
4 Surface normal estimation
In this section, we describe an efficient per-pixel surface normal estimation algorithm.
4.1 Virtual example-based normal estimation
Our surface normal estimation extends the method proposed in [11], where two spheres — one diffuse and one specular — are introduced into the scene along with the target object. To obtain the surface normal at a pixel p on the target, the intensity profile observed at p, i_p, is matched to those on the reference spheres. The reference spheres provide a sampling of the space of normals and hence we can simply treat them as a collection of candidate normals N. By orientation consistency, surface normal estimation reduces to finding the candidate normal that best explains the intensity profile i_p. Given a candidate normal n_k, we have two intensity profiles, i_d(n_k) and i_s(n_k), one each for the diffuse and specular spheres, respectively. The estimate of the surface normal at pixel p is given as

n̂_p = argmin_{n_k ∈ N}  min_{c_1, c_2 ≥ 0}  ‖i_p − c_1 i_d(n_k) − c_2 i_s(n_k)‖_2^2.

In [11], this is solved by scanning over all the pixels/candidate normals on the reference spheres.
Rendering virtual spheres.
We rely on the same approach as [11], with the key difference that we virtually render the reference spheres, as follows. Given the lighting directions {ℓ_j} and the BRDF dictionary D, for each candidate normal n_k ∈ N we render a matrix I_{n_k} ∈ R^{Q×M} whose (j, m) entry is the intensity observed at a surface with normal n_k and BRDF ρ_m under lighting ℓ_j; note that I_{n_k} = B(n_k) D. We render one such matrix for each candidate normal in N. Given these virtually rendered spheres, we can solve (3) by searching over all candidate normals.
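This rendering step can be sketched as below, with a toy one-hot stand-in b_fn for the half-angle sampling vector of Section 3 (hypothetical; a real implementation would evaluate the Rusinkiewicz coordinates and interpolate).

```python
import numpy as np

def render_candidate(n, L, D, b_fn, view=(0.0, 0.0, 1.0)):
    """I[j, k] = intensity of a surface patch with normal n and
    dictionary BRDF D[:, k], lit from L[j] (orthographic camera)."""
    view = np.asarray(view)
    Q, M = L.shape[0], D.shape[1]
    I = np.zeros((Q, M))
    for j in range(Q):
        shade = max(0.0, float(n @ L[j]))    # attached-shadow term
        b = b_fn(n, L[j], view)              # geometry vector (Sec. 3)
        I[j] = shade * (b @ D)               # one intensity per atom
    return I

def b_fn(n, l, v, dim=64):
    """Toy one-hot stand-in for the half-angle sampling vector."""
    idx = int((1.0 + float(n @ l)) / 2.0 * (dim - 1))
    b = np.zeros(dim)
    b[idx] = 1.0
    return b

L = np.array([[0.0, 0.0, 1.0], [0.6, 0.0, 0.8]])   # two light directions
D = np.random.rand(64, 10)                         # toy dictionary
I = render_candidate(np.array([0.0, 0.0, 1.0]), L, D, b_fn)
```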
Brute-force search.
For computational efficiency, we drop the sparsity constraint in (3); we empirically observed that this made little difference to the estimated surface normals. Now, given the intensity profile i_p at pixel p and noting that ρ = D c, solving (3) reduces to

n̂_p = argmin_{n_k ∈ N}  min_{c ≥ 0}  ‖i_p − B(n_k) D c‖_2^2.   (4)

The unit-norm constraint on the surface normal is absorbed into the candidate normals, which are unit-norm by construction. The optimization problem in (4) requires solving a set of non-negative least squares (NNLS) sub-problems, one for each element of N. For the results in the paper, we used the lsqnonneg function in MATLAB to solve the NNLS sub-problems.
The accuracy and the computational cost of solving (4) depend solely on the cardinality |N| of the candidate set, which we obtain by uniform or equiangular sampling on the sphere [10]. As a consequence, the accuracy of the normal estimates, on average, cannot be better than half the angular spacing of the candidate set. Yet, the smaller the angular spacing of N, the larger its cardinality: a coarse equiangular sampling over the hemisphere requires approximately 250 candidates, while a fine sampling requires about 20,000 candidates. Given that the time complexity of the brute-force search is linear in |N|, the computational cost of obtaining very precise normal estimates can be overwhelming. To alleviate this, we outline a coarse-to-fine search strategy that is remarkably faster than the brute-force approach with little loss in accuracy.
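The brute-force search can be sketched as follows, using scipy.optimize.nnls in place of MATLAB's lsqnonneg; the candidate normals and rendered matrices here are synthetic stand-ins for the virtual spheres above.

```python
import numpy as np
from scipy.optimize import nnls

def brute_force_normal(i_p, candidates, rendered):
    """Return the candidate normal whose rendered matrix B(n_k) D best
    explains the observed profile i_p under a non-negative fit (Eq. 4)."""
    best_n, best_err = None, np.inf
    for n_k, I_k in zip(candidates, rendered):
        _, err = nnls(I_k, i_p)        # one NNLS sub-problem per candidate
        if err < best_err:
            best_n, best_err = n_k, err
    return best_n, best_err

# Synthetic stand-ins: 5 candidate normals, Q = 20 images, M = 8 atoms.
rng = np.random.default_rng(1)
cands = [rng.standard_normal(3) for _ in range(5)]
rendered = [rng.random((20, 8)) for _ in cands]
c_true = rng.random(8)
i_p = rendered[2] @ c_true             # pixel truly matches candidate 2
n_hat, err = brute_force_normal(i_p, cands, rendered)
```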
4.2 Coarse-to-fine search
Figure 4 shows the value of the residual in (4) as a function of the candidate normal for a few examples. In our simulations, we observed a gradual increase in the error value as we moved away from the global minimum. We exploit this to design a coarse-to-fine search strategy in which we first evaluate the candidate normals at a coarse sampling and subsequently search in the vicinity of the current solution at a finer sampling.
Specifically, let N_δ be an equiangular sampling of the unit sphere with angular spacing δ degrees. Given a candidate normal n, we define N_δ(n, β) as the subset of N_δ within β degrees of n. In the first iteration, we initialize the candidate set to N_{δ_1}. At the k-th iteration, we solve (4) over the current candidate set, and let n̂^{(k)} be the candidate normal at which the minimum occurs. The candidate set for the (k+1)-th iteration is constructed as N_{δ_{k+1}}(n̂^{(k)}, δ_k), with δ_{k+1} < δ_k. That is, the candidate set is simply the set of all candidates at a finer angular sampling that are no farther than the current angular spacing from the current estimate. This is repeated until we reach the finest resolution of candidate normals. For the results in this paper, we fix the sequence of angular spacings {δ_k} in advance and, for efficient implementation, pre-render the matrices B(n_k) D for the candidates at every sampling level.
The computational gains obtained via this coarse-to-fine strategy are immense. Table 1 shows the runtime and precision of both the brute-force and the coarse-to-fine normal estimation strategies for different levels of angular sampling in the generation of the candidate normal set. As expected, the runtime of the brute-force algorithm is linear in the number of candidates. In contrast, the coarse-to-fine strategy requires a tiny fraction of this time while achieving nearly the same precision.
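The coarse-to-fine loop can be sketched as below. The lat-long hemisphere sampler and the spacing schedule (20°, 10°, 5°, 2.5°) are illustrative stand-ins for the equiangular sampling of [10] and the paper's actual schedule, and err_fn stands in for the inner NNLS residual of (4); here we fake it with the angular distance to a known ground-truth normal.

```python
import numpy as np

def sphere_samples(step_deg):
    """Roughly equiangular lat-long samples on the upper hemisphere
    (a simple stand-in for the sampling schemes of [10])."""
    pts = []
    for el in np.arange(0.0, 90.0 + 1e-9, step_deg):
        n_az = max(1, int(round(360.0 * np.cos(np.radians(el)) / step_deg)))
        for az in np.linspace(0.0, 360.0, n_az, endpoint=False):
            e, a = np.radians(el), np.radians(az)
            pts.append([np.cos(e) * np.cos(a), np.cos(e) * np.sin(a), np.sin(e)])
    return np.array(pts)

def coarse_to_fine(err_fn, steps=(20.0, 10.0, 5.0, 2.5)):
    """Minimise err_fn over normals by refining around the current best:
    at each level, keep only candidates within the previous spacing."""
    cands = sphere_samples(steps[0])
    best = cands[np.argmin([err_fn(n) for n in cands])]
    for prev, step in zip(steps, steps[1:]):
        grid = sphere_samples(step)
        near = grid[grid @ best >= np.cos(np.radians(prev))]
        best = near[np.argmin([err_fn(n) for n in near])]
    return best

target = np.array([0.3, 0.4, np.sqrt(1.0 - 0.25)])   # ground-truth normal
n_hat = coarse_to_fine(lambda n: -float(n @ target))
```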
While the solution to (4) also produces an estimate of the BRDF at the pixel, this estimate is often poor due to the lack of the sparse regularizer that avoids overfitting to the observed intensities. In the next section, we use this normal estimate to obtain a per-pixel BRDF estimate.
5 Reflectance estimation
Given the surface normal estimate n̂_p, we obtain an estimate of the BRDF at each pixel, individually, by solving

ĉ_p = argmin_{c ≥ 0}  ‖i_p − B(n̂_p) D c‖_2^2 + λ ‖c‖_1.   (5)

The ℓ_1 regularizer promotes sparse solutions and primarily helps avoid overfitting to the observed intensities. The optimization problem in (5) is convex, and we used CVX [9], a general-purpose convex solver, to obtain solutions. The estimate of the BRDF at pixel p is then ρ̂_p = D ĉ_p. The value of λ was manually tuned for best performance. For color imagery, we solve for the coefficients associated with each color channel separately.
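In place of a general-purpose solver such as CVX, this convex problem can also be solved with a few lines of proximal gradient descent; the sketch below assumes the objective min_{c ≥ 0} ‖y − A c‖² + λ Σ_k c_k (for non-negative c, the ℓ_1 norm is just the sum of entries) and is not the authors' implementation.

```python
import numpy as np

def sparse_nnls(A, y, lam=1e-3, iters=5000):
    """Solve min_{c >= 0} ||y - A c||_2^2 + lam * sum(c) by proximal
    gradient descent."""
    t = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2)    # safe step size
    c = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = 2.0 * A.T @ (A @ c - y)
        c = np.maximum(0.0, c - t * (grad + lam))  # prox of penalty + c>=0
    return c

# Toy problem: recover a sparse, non-negative coefficient vector.
rng = np.random.default_rng(2)
A = rng.random((50, 12))                           # stands in for B(n) D
c_true = np.zeros(12)
c_true[[2, 9]] = [1.0, 0.5]
y = A @ c_true
c_hat = sparse_nnls(A, y)
```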
When we know a priori that multiple pixels share the same BRDF, we can solve (5) simply by concatenating their corresponding intensity profiles and their respective matrices. As is to be expected, pooling intensities observed at multiple pixels significantly improves the quality of the estimates. Yet, while spatial averaging or spatial priors improve estimate quality, they inherently require the object to exhibit smooth spatial variations in its BRDF. The advantage of our per-pixel BRDF estimation framework is the ability to handle arbitrarily complex spatial variations in the BRDF, at the cost of noisier estimates. In the next section, we carefully characterize the performance of our approach using synthetic and real examples.
6 Results
We characterize the performance of our technique using both synthetic and real datasets.
6.1 Synthetic experiments
We use the BRDFs in the MERL database [26] in a leave-one-out scheme to test the accuracy of our proposed algorithms for surface normal and BRDF estimation. Specifically, when we simulate a test object using a particular material, the dictionary comprises the BRDFs of the remaining materials in the database. We used the light-stage configuration described in [7] for our collection of lighting directions.
Varying number of images.
Figure 5 characterizes the errors in surface normal and BRDF estimation for a varying number of input images or, equivalently, lighting directions. We report the average error computed by randomly generating 20,000 normals per material and varying across all material BRDFs in the database. This experiment is similar in setup to the one reported in [24], which, to our knowledge, is one of the most accurate techniques for photometric stereo on isotropic BRDFs. For 200 images, [24] reports the error in estimating the elevation angle when the azimuth is known; in contrast, the proposed technique achieves a smaller error in estimating the full surface normal without any prior knowledge of the azimuth.
Varying BRDF.
In Figure 6, we evaluate the performance of surface normal estimation for varying material BRDFs, with the number of input images held fixed. Shown are aggregate statistics computed over 50,000 randomly generated surface normals. The worst-case error is small and, for most materials, the error tapers down to the angular spacing of the finest sampling that we used for generating candidate normals. This can presumably be reduced further by either choosing a finer sampling grid or using gradient descent techniques.
Comparisons.
Figure 7 showcases the performance of several photometric stereo techniques on different objects: a black-obsidian bunny and a gold-painted elephant, each with 253 input images. Photometric stereo under the Lambertian model fails to recover precise normal maps, indicating the presence of non-Lambertian components. The robust PCA-based solver [28] produces better normal maps than traditional photometric stereo but over-smooths the estimates; this can be attributed to the removal of non-Lambertian cues that are invaluable for precise normal estimation. The solution of Alldrin et al. [2], while significantly better than the Lambertian technique, still produces noticeably larger errors. In contrast, the proposed method returns reliable normal estimates for both scenes, indicating the robustness of the underlying solution. We also simulated the performance of example-based photometric stereo, which is identical to the proposed technique applied with a two-material (white-diffuse and chrome) dictionary. As expected, the larger dictionary of BRDFs used in the proposed technique provides significant improvements in surface normal estimation.
Performance of BRDF estimation.
Given a test BRDF, we generated 100 surface normals with random orientations and rendered their appearance under the light-stage lighting directions. Assuming knowledge of the true surface normals, we estimate the BRDF using the optimization in Section 5.
We characterize the performance of the per-pixel BRDF estimate as well as the error in the BRDF estimate when the information at the 100 normals is pooled. We use the relative BRDF error [15] to quantify the accuracy of the estimate. Given the true BRDF ρ and its estimate ρ̂, the relative BRDF error is given as

E(ρ, ρ̂) = sqrt( Σ_ω w(ω) (ρ̂(ω) − ρ(ω))² / Σ_ω w(ω) ρ(ω)² ),   (6)

where ω ranges over the sampled BRDF entries and the weight w(ω) is set equal to 1 for convenience.
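Under one plausible reading of this metric (a weighted relative L2 distance; the exact weighting used in [15] may differ), it can be computed as:

```python
import numpy as np

def relative_brdf_error(rho_true, rho_hat, w=None):
    """Weighted relative L2 error between two BRDF vectors; the weight
    w defaults to 1 for every entry, as in the text."""
    w = np.ones_like(rho_true) if w is None else w
    num = np.sum(w * (rho_hat - rho_true) ** 2)
    den = np.sum(w * rho_true ** 2)
    return float(np.sqrt(num / den))
```

For example, an estimate that uniformly over-predicts the true BRDF by 10% yields a relative error of 0.1.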
6.2 Real data
Real images present a layer of difficulty well beyond simulations, introducing inter-reflections, subsurface scattering, cast shadows, and imprecise light-source localization. We test the performance of our shape and BRDF recovery algorithm on a wide range of datasets. Specifically, we use images from two sources — the light stage data from [7] and the gourd data from [2].
Figures 1, 10, 11 and 12 showcase the performance of our algorithm on the real datasets. The results in Figures 1, 11 and 12 were obtained from 250 input images, and the results for “gourd1” in Figure 10 were obtained from 100 input images. The recovered shape and BRDF (as visualized via rendered images) are in agreement with the results in [2]; however, our algorithm is significantly simpler and employs a per-pixel formulation that can be easily parallelized.
The robustness of the per-pixel BRDF estimate is tested in Figure 11, where there are not just a wide variety of unique materials (the helmet, the breastplate, the chain, and the red scabbard, to name a few) but also significant modeling deviations (inter-reflections, cast shadows). In spite of this, our approach produces a faithful rendition of the scene. The per-pixel BRDF estimation allows us to handle objects with complex spatial variations; in contrast, methods that assume the presence of just a few reference BRDFs, as in [8, 2], would not scale easily to such scenes. We refer the reader to the supplemental videos highlighting the relighting results.
7 Discussions
We present a photometric stereo technique for per-pixel normal and BRDF estimation for visually complex objects. We demonstrate that the use of a BRDF dictionary significantly simplifies the inverse problem and not only provides state-of-the-art results in normal and BRDF estimation but also works robustly on a wide range of real scenes. A hallmark of our approach is the ability to obtain per-pixel BRDF estimates without any of the spatial smoothness assumptions endemic to state-of-the-art SVBRDF estimation techniques [8, 2]; this makes it applicable to scenes with a large number of unique materials. Finally, our per-pixel framework is ripe for further speedups by solving for the shape and reflectance at each pixel in parallel.
Limitations.
While the use of virtual examples provides flexibility beyond [11], we require light calibration; hence, our method is best suited to shape and reflectance acquisition in light stages, where the light sources are fixed and calibration is a one-time effort. The accuracy of our coarse-to-fine normal estimation is lower-bounded by the finest sampling of our candidate normals. This can potentially be improved by refining the estimates using a gradient descent scheme initialized with our solution, though at additional computational expense. The SVBRDF produced by our approach can be noisy, especially since we recover the BRDF at each pixel independently. If we know a priori that the scene has a limited number of unique materials, then enforcing this could lead to more robust SVBRDF estimates; it can be easily incorporated into our framework by enforcing the matrix of sparse coefficients to be low-rank. Finally, it is important that the scene's BRDFs lie in the linear span of our dictionary; when this fails, our results can be unpredictable. Here, a larger dictionary encompassing hundreds, if not thousands, of materials would be invaluable for the broader applicability of our method.
References
 [1] N. Alldrin and D. Kriegman. Toward reconstructing surfaces with arbitrary isotropic reflectance: A stratified photometric stereo approach. In ICCV, 2007.
 [2] N. Alldrin, T. Zickler, and D. Kriegman. Photometric stereo with non-parametric and spatially-varying reflectance. In CVPR, 2008.
 [3] R. Basri and D. Jacobs. Lambertian reflectance and linear subspaces. IEEE Trans. Pattern Analysis and Machine Intelligence (PAMI), 25:218–233, 2003.
 [4] J. Blinn and M. Newell. Texture and reflection in computer generated images. Comm. ACM, 19:542–547, 1976.
 [5] M. Chandraker and R. Ramamoorthi. What an image reveals about material reflectance. In ICCV, 2011.
 [6] R. Cook and K. Torrance. A reflectance model for computer graphics. ACM Trans. Graphics (TOG), 1:7–24, 1982.
 [7] P. Einarsson, C. Chabert, A. Jones, W. Ma, B. Lamond, T. Hawkins, M. Bolas, S. Sylwan, and P. Debevec. Relighting human locomotion with flowed reflectance fields. In Rendering techniques, 2006.
 [8] D. Goldman, B. Curless, A. Hertzmann, and S. Seitz. Shape and spatially-varying BRDFs from photometric stereo. In ICCV, 2005.
 [9] M. Grant and S. Boyd. CVX: Matlab software for disciplined convex programming, version 2.1. http://cvxr.com/cvx, 2014.

 [10] R. Harman and V. Lacko. On decompositional algorithms for uniform sampling from n-spheres and n-balls. Journal of Multivariate Analysis, 101:2297–2304, 2010.
 [11] A. Hertzmann and S. Seitz. Example-based photometric stereo: Shape reconstruction with general, varying BRDFs. PAMI, 27:1254–1264, 2005.
 [12] T. Higo, Y. Matsushita, and K. Ikeuchi. Consensus photometric stereo. In CVPR, 2010.
 [13] S. Ikehata, D. Wipf, Y. Matsushita, and K. Aizawa. Robust photometric stereo using sparse regression. In CVPR, 2012.
 [14] J. Lawrence, A. BenArtzi, C. DeCoro, W. Matusik, H. Pfister, R. Ramamoorthi, and S. Rusinkiewicz. Inverse shade trees for nonparametric material representation and editing. TOG, 25:735–745, 2006.
 [15] A. Ngan, F. Durand, and W. Matusik. Experimental analysis of BRDF models. In Euro. Conf. Rendering Tech., 2005.

 [16] M. Oren and S. Nayar. Generalization of the Lambertian model and implications for machine vision. Intl. J. Computer Vision, 14:227–251, 1995.
 [17] G. Oxholm and K. Nishino. Shape and reflectance from natural illumination. In ECCV, 2012.
 [18] G. Oxholm and K. Nishino. Multiview shape and reflectance from natural illumination. In CVPR, 2014.
 [19] R. Ramamoorthi. Analytic PCA construction for theoretical analysis of lighting variability in images of a Lambertian object. PAMI, 24:1322–1333, 2002.
 [20] P. Ren, J. Wang, J. Snyder, X. Tong, and B. Guo. Pocket reflectometry. TOG, 30:45, 2011.
 [21] F. Romeiro, Y. Vasilyev, and T. Zickler. Passive reflectometry. In ECCV, 2008.
 [22] F. Romeiro and T. Zickler. Blind reflectometry. In ECCV, 2010.
 [23] S. Rusinkiewicz. A new change of variables for efficient BRDF representation. In Rendering techniques, pages 11–22. 1998.
 [24] B. Shi, P. Tan, Y. Matsushita, and K. Ikeuchi. Elevation angle from reflectance monotonicity: Photometric stereo for general isotropic reflectances. In ECCV, 2012.
 [25] G. Ward. Measuring and modeling anisotropic reflection. TOG, 26:265–272, 1992.
 [26] W. Matusik, H. Pfister, M. Brand, and L. McMillan. A data-driven reflectance model. TOG, 22:759–769, 2003.
 [27] R. Woodham. Photometric method for determining surface orientation from multiple images. Opt. Eng, 1980.
 [28] L. Wu, A. Ganesh, B. Shi, Y. Matsushita, Y. Wang, and Y. Ma. Robust photometric stereo via low-rank matrix completion and recovery. In ACCV, 2011.
 [29] C. Yu, Y. Seo, and S. Lee. Photometric stereo from maximum feasible Lambertian reflections. In ECCV, 2010.
 [30] L. Yu, S. Yeung, Y. Tai, D. Terzopoulos, and T. Chan. Outdoor photometric stereo. In ICCP, 2013.