1 Introduction
A recent trend in photometric [7, 13, 12, 26, 23, 20] and physics-based [24] shape recovery has been to develop methods that solve directly for surface height, rather than first estimating surface normals and then integrating them into a height map. Such methods are attractive since: 1. they need only solve for a single height value at each pixel (as opposed to the two components of surface orientation), 2. integrability is guaranteed, 3. errors do not accumulate through a two step pipeline of shape estimation and integration, and 4. they enable combination with cues that provide depth information directly [10]. In both photometric stereo [13, 12, 23] and, recently, in shape-from-polarisation (SfP) [24], such a direct solution was made possible by deriving equations that are linear in the unknown surface gradient.
In this paper, we explore the combination of SfP constraints with photometric constraints (i.e. photopolarimetric shape estimation) provided by one or two light sources. Photometric stereo with three or more light sources is a very well studied problem with robust solutions available under a range of different assumptions. Two source photometric stereo is still considered a difficult problem [21], even when the illumination is calibrated and the albedo is known. We show that various formulations of one and two source photopolarimetric stereo lead to the same general problem (in terms of surface height), that illumination can be estimated and that certain combinations of constraints lead to an albedo invariant formulation. Hence, with only modest additional data capture requirements (a polarisation image rather than an intensity image), we arrive at an approach for uncalibrated two source photometric stereo. We make a number of novel contributions, described in the following sections.
1.1 Related Work
The polarisation state of light reflected by a surface provides a cue to the material properties of the surface and, via a relationship with surface orientation, the shape. Polarisation has been used for a number of applications, including early work on material segmentation [28] and diffuse/specular reflectance separation [18]. Recently, there has been a resurgence of interest [24, 10, 25, 19] in using polarisation information for shape estimation.
Shape-from-polarisation
The degree to which light is linearly polarised and the orientation associated with maximum reflection are related to the two degrees of freedom of surface orientation. In theory, this polarisation information alone restricts the surface normal at each pixel to two possible directions. Both Atkinson and Hancock [1] and Miyazaki et al. [15] solve the problem of disambiguating these polarisation normals via propagation from the boundary under an assumption of global convexity. Huynh et al. [8] also disambiguate polarisation normals with a global convexity assumption but estimate the refractive index in addition. These works all used a diffuse polarisation model, while Morel et al. [16] use a specular polarisation model for metals. Recently, Taamazyan et al. [25] introduced a mixed specular/diffuse polarisation model. All of these methods estimate surface normals that must be integrated into a height map. Moreover, since they rely entirely on the weak shape cue provided by polarisation and do not enforce integrability, the results are extremely sensitive to noise.
Photopolarimetric methods
There have been a number of attempts to combine photometric constraints with polarisation cues. Mahmoud et al. [11] used a shape-from-shading cue with assumptions of known light source direction, known albedo and Lambertian reflectance to disambiguate the polarisation normals. Atkinson and Hancock [3] used calibrated, three source Lambertian photometric stereo for disambiguation while avoiding an assumption of known albedo. Smith et al. [24] showed how to express polarisation and shading constraints directly in terms of surface height, leading to a robust and efficient linear least squares solution. They also show how to estimate the illumination, up to a binary ambiguity, making the method uncalibrated. However, they require known or uniform albedo. We explore variants of this method by introducing additional constraints that arise when a second light source is introduced, allowing us to relax the uniform albedo assumption. We also give an explanation for why the matrix they consider is full-rank except in a unique case. Recently, Ngo et al. [19] derived constraints that allow surface normals, light directions and refractive index to be estimated from polarisation images under varying lighting. However, this approach requires at least 4 lights.
All of the above methods operate on single channel images and do not exploit the information available in colour images.
Polarisation with additional cues
Rahmann and Canterakis [22] combined a specular polarisation model with stereo cues. Similarly, Atkinson and Hancock [2] used polarisation normals to segment an object into patches, simplifying stereo matching. Stereo polarisation cues have also been used for transparent surface modelling [14]. Huynh et al. [9] extended their earlier work to use multispectral measurements to estimate both shape and refractive index. Drbohlav and Šára [6] showed how the Bas-relief ambiguity [4] in uncalibrated photometric stereo can be resolved using polarisation. However, this approach requires a polarised light source. Recently, Kadambi et al. [10] proposed an interesting approach in which a single polarisation image is combined with a depth map obtained by an RGBD camera. The depth map is used to disambiguate the normals and provide a base surface for integration.
2 Representing Polarisation Information
We place a camera at the origin of a three-dimensional coordinate system (Oxyz) in such a way that Oxy coincides with the image plane and Oz with the optical axis. In Sec. 4 we propose a unified formulation for a variety of methods, all of which assume a) orthographic projection and b) known refractive index η of the surface. Other assumptions will be given later on, depending on the specific problem at hand. We denote by v = [v₁, v₂, v₃]ᵀ the viewer direction and by s = [s₁, s₂, s₃]ᵀ a general light source direction, with ‖v‖ = ‖s‖ = 1. We only require the third components of these unit vectors to be greater than zero (all the vectors belong to the upper hemisphere). We will denote by s̄ = [s̄₁, s̄₂, s̄₃]ᵀ a second light source where required. We parametrise the unknown surface height by the function z(u), where u = (x, y) is an image location, and the unit normal to the surface at the point u is given by:

n(u) = n̄(u) / |n̄(u)|,   with   n̄(u) = [−z_x(u), −z_y(u), 1]ᵀ,   (1)

where n̄(u) is the outward-pointing normal vector and z_x, z_y denote the partial derivatives of z w.r.t. x and y, respectively, so that ∇z(u) = [z_x(u), z_y(u)]ᵀ. We now introduce relevant polarisation theory, describing how we can estimate a polarisation image from multichannel data.
2.1 Polarisation image
When unpolarised light is reflected by a surface it becomes partially polarised [27]. A polarisation image can be estimated by capturing a sequence of images in which a linear polarising filter in front of the camera lens is rotated through a sequence of P ≥ 3 different angles ϑ_j, j ∈ {1, ..., P}. The measured intensity at a pixel u varies sinusoidally with the polariser angle:

i_{ϑ_j}(u) = i_un(u) [1 + ρ(u) cos(2ϑ_j − 2φ(u))].   (2)

The polarisation image is thus obtained by decomposing the sinusoid at every pixel location into three quantities [27]: the phase angle φ(u), the degree of polarisation ρ(u) and the unpolarised intensity i_un(u). The parameters of the sinusoid can be estimated from the captured image sequence using nonlinear least squares [1], linear methods [8] or via a closed form solution [27] for the specific case of P = 3 polariser angles.
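As a concrete illustration of the linear fitting route, the following sketch (the function name and array conventions are ours, not the paper's) recovers i_un, ρ and φ at a single pixel via the change of variables that makes the sinusoid model linear:

```python
import numpy as np

def fit_polarisation_pixel(intensities, polariser_angles):
    """Fit i(theta_j) = i_un * (1 + rho * cos(2*theta_j - 2*phi)) at one pixel.

    Change of variables x = [i_un, i_un*rho*cos(2*phi), i_un*rho*sin(2*phi)]
    turns the model into an ordinary linear least squares problem
    (cf. the linear method of [8]).
    """
    thetas = np.asarray(polariser_angles, dtype=float)
    A = np.column_stack([np.ones_like(thetas),
                         np.cos(2 * thetas),
                         np.sin(2 * thetas)])
    x, *_ = np.linalg.lstsq(A, np.asarray(intensities, dtype=float), rcond=None)
    i_un = x[0]
    rho = np.hypot(x[1], x[2]) / i_un      # degree of polarisation
    phi = 0.5 * np.arctan2(x[2], x[1])     # phase angle, recovered modulo pi
    return i_un, rho, phi
```

With P ≥ 3 distinct angles in [0, π) the design matrix has full column rank, so the fit is unique.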
2.2 Multichannel polarisation image estimation
A polarisation image is usually computed by fitting the sinusoid in (2) to observed data in a least squares sense. Hence, from P measurements we estimate the three quantities φ(u), ρ(u) and i_un(u). In practice, we may have access to multichannel measurements. For example, we may capture colour images (3 channels), polarisation images with two different light source directions (2 channels) or both (6 channels). Since ρ(u) and φ(u) depend only on surface geometry (assuming that, in the case of colour images, the refractive index does not vary with wavelength), we expect these quantities to be constant over the channels. On the other hand, i_un(u) will vary between channels, either because of a shading change caused by the different lighting or because the albedo or light source intensity differs between colour channels. Hence, in a multichannel setting with K channels, we have K + 2 unknowns and KP observations. If we use information across all channels simultaneously, the system is more constrained and the solution will be more robust to noise. Moreover, we do not need to make an arbitrary choice about the channel from which we estimate the polarisation image. This idea shares something in common with that of Narasimhan et al. [17], though their material/shape separation was not in the context of polarisation.
Specifically, we can express the multichannel observation at pixel u in channel c ∈ {1, ..., K} with polariser angle ϑ_j as

i_{c,ϑ_j}(u) = i_un,c(u) [1 + ρ(u) cos(2ϑ_j − 2φ(u))].   (3)

The system of equations is linear in the unpolarised intensities and, by a change of variables (k₁ = ρ cos 2φ, k₂ = ρ sin 2φ), can be made linear in ρ(u) and φ(u) [8]. Hence, we wish to solve a bilinear system and do so in a least squares sense using interleaved alternating minimisation. Specifically, we a) fix ρ(u) and φ(u) and then solve linearly for the unpolarised intensity in each channel and b) then fix the unpolarised intensities and solve linearly for ρ(u) and φ(u) using all channels simultaneously. Concretely, for a single pixel, we obtain the unpolarised intensities across channels by solving:

(c ⊗ I_K) i_un = i,   (4)

where c ∈ Rᴾ is given by

c = [1 + ρ cos(2ϑ₁ − 2φ), ..., 1 + ρ cos(2ϑ_P − 2φ)]ᵀ,   (5)

with I_K denoting the K × K identity matrix, i_un = [i_un,1, ..., i_un,K]ᵀ and i ∈ R^{KP} is the vector of stacked measurements (all K channels for the first polariser angle, then the second, and so on). Then, with the unpolarised intensities fixed, we solve for ρ(u) and φ(u) using the following linearisation:

G [k₁, k₂]ᵀ = d,   (6)

where k₁ = ρ cos 2φ, k₂ = ρ sin 2φ, and the row of G ∈ R^{KP×2} corresponding to channel c and polariser angle ϑ_j is given by

[i_un,c cos 2ϑ_j,  i_un,c sin 2ϑ_j],   (7)

and the corresponding entry of d ∈ R^{KP} is given by:

i_{c,ϑ_j} − i_un,c.   (8)

We estimate ρ(u) and φ(u) from the linear parameters using ρ = √(k₁² + k₂²) and φ = ½ arctan2(k₂, k₁).
[Figure 1: Input and comparison of single channel versus multichannel polarisation image estimation.]
We initialise by computing a polarisation image from one channel using linear least squares, as in [8], and then use the estimated ρ(u) and φ(u) to begin the alternating optimisation by solving for the unpolarised intensities across channels. We alternate the two steps until convergence. In practice, we find that this approach not only dramatically reduces noise in the polarisation images but also removes the ad hoc step of choosing an arbitrary channel to process. We show an example of the results obtained in Figure 1. The multichannel result is visibly less noisy than the single channel result.
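The alternating scheme above can be sketched as follows for a single pixel; this is an illustrative implementation under our own naming, assuming a (C, P) array of intensities for C channels and P polariser angles:

```python
import numpy as np

def multichannel_polarisation(I, angles, n_iter=50):
    """Interleaved alternating bilinear least squares at one pixel.

    I : (C, P) measured intensities for C channels and P polariser angles.
    Shared unknowns k = (rho*cos 2phi, rho*sin 2phi); per-channel i_un.
    """
    C, P = I.shape
    cos2, sin2 = np.cos(2 * angles), np.sin(2 * angles)
    # initialise (rho, phi) from the first channel via a linear fit, as in [8]
    A0 = np.column_stack([np.ones(P), cos2, sin2])
    x0, *_ = np.linalg.lstsq(A0, I[0], rcond=None)
    k = x0[1:] / x0[0]
    for _ in range(n_iter):
        # (a) fix (rho, phi): closed-form LS for each channel's intensity
        c = 1.0 + k[0] * cos2 + k[1] * sin2          # shared sinusoid, (P,)
        i_un = I @ c / (c @ c)                        # (C,)
        # (b) fix intensities: solve for k using all channels at once
        G = np.vstack([np.column_stack([iu * cos2, iu * sin2]) for iu in i_un])
        d = (I - i_un[:, None]).ravel()
        k, *_ = np.linalg.lstsq(G, d, rcond=None)
    rho = np.hypot(k[0], k[1])
    phi = 0.5 * np.arctan2(k[1], k[0])
    return i_un, rho, phi
```

On noiseless data the first iteration is already exact; with noise, the shared (ρ, φ) estimate pools evidence from all channels.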
3 Photopolarimetric height constraints
In this section we describe the different constraints provided by photopolarimetric information and then show how to combine them to arrive at linear equations in the unknown surface height.
3.1 Degree of polarisation constraint
A polarisation image provides a constraint on the surface normal direction at each pixel. The exact nature of the constraint depends on the polarisation model used. In this paper we consider diffuse polarisation, due to subsurface scattering (see [1] for more details). The degree of diffuse polarisation ρ(u) at each point u can be expressed in terms of the refractive index η and the surface zenith angle θ(u) as follows (cf. [1]):

ρ(u) = (η − 1/η)² sin²θ(u) / [2 + 2η² − (η + 1/η)² sin²θ(u) + 4 cos θ(u) √(η² − sin²θ(u))].   (9)

Recall that the zenith angle is the angle between the unit surface normal vector n(u) and the viewing direction v. If we know the degree of polarisation ρ(u) and the refractive index η (or have good estimates of them at hand), equation (9) can be rewritten with respect to the cosine of the zenith angle and expressed in terms of a function f(ρ(u), η) that depends on the measured degree of polarisation and the refractive index:

cos θ(u) = n(u) · v = f(ρ(u), η),   (10)

where we drop the dependency of f on u for brevity.
3.2 Shading constraint
The unpolarised intensity provides an additional constraint on the surface normal direction via an appropriate reflectance model. We assume that pixels have been labelled as diffuse or specular dominant and restrict consideration to diffuse shading. In practice, we deal with specular pixels in the same way as [24] and simply assume that they point in the direction of the halfway vector between v and s. For the diffuse pixels, we therefore assume that light is reflected according to Lambert's law. Hence, the unpolarised intensity is related to the surface normal by:

i_un(u) = α(u) (n(u) · s),   (11)

where α(u) is the albedo. Writing n(u) in terms of the gradient of z as reported in (1), (11) can be rewritten as follows:

i_un(u) = α(u) (n̄(u) · s) / |n̄(u)|,   (12)

with n̄(u) = [−z_x(u), −z_y(u), 1]ᵀ. This is a nonlinear equation, but we will see in Sec. 3.4 and 3.5 how it is possible to remove the nonlinearity by using the ratios technique.
3.3 Phase angle constraint
An additional constraint comes from the phase angle, which determines the azimuth angle of the surface normal up to a π ambiguity. This constraint can be rewritten as a collinearity condition [24] that is satisfied by either of the two possible azimuth angles implied by the phase angle measurement. Specifically, for diffuse pixels we require the projection of the surface normal into the x-y plane, [n₁(u), n₂(u)]ᵀ, and a vector in the image plane pointing in the phase angle direction, [cos φ(u), sin φ(u)]ᵀ, to be collinear. This corresponds to requiring

n₁(u) sin φ(u) − n₂(u) cos φ(u) = 0.   (13)

In terms of the surface gradient, using (1), this is equivalent to

z_x(u) sin φ(u) − z_y(u) cos φ(u) = 0.   (14)

A similar expression can be obtained for specular pixels by substituting in the phase angles shifted by π/2. The advantage of doing this will become clear in Sec. 4.2.
3.4 Degree of polarisation ratio constraint
Combining the two constraints illustrated in Sec. 3.1 and 3.2, we can arrive at a linear equation, that we refer to as the DOP ratio constraint. Recall that and that we can express in terms of the gradient of by using (1), then isolating the nonlinear term in (10) we obtain
(15) 
where . On the other hand, considering the shading information contained in (12), and again isolating the nonlinearity we arrive at the following
(16) 
Note that we are supposing , and , . Inspecting Eqs. (15) and (16) we obtain
(17) 
We thus arrive at the following partial differential equation (PDE):
(18) 
where
(19) 
and
(20) 
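A minimal sketch of computing the per-pixel PDE coefficients of (18) from maps of i_un, f(ρ, η) and albedo (the function name and array shapes are our own assumptions, following the derivation above):

```python
import numpy as np

def dop_ratio_coefficients(i_un, f, albedo, s, v):
    """Per-pixel coefficients b(u), h(u) of the linear PDE (18).

    i_un, f, albedo : (H, W) maps of unpolarised intensity, f(rho, eta)
    and albedo; s, v : unit light and viewer direction 3-vectors.
    """
    af = albedo * f
    b = np.stack([af * s[0] - i_un * v[0],
                  af * s[1] - i_un * v[1]], axis=-1)   # (H, W, 2)
    h = af * s[2] - i_un * v[2]                        # (H, W)
    return b, h
```

By construction, b(u) · ∇z(u) − h(u) equals the difference between the two sides of (17), so the PDE residual vanishes exactly when the DOP and shading constraints agree.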
3.5 Intensity ratio constraint
Finally, we construct an intensity ratio constraint by considering two unpolarised images, , taken from two different light source directions, . We construct our constraint equation by applying (11) twice, once for each light source. We can remove the nonlinearity as before and take a ratio, arriving at the following equation:
(21) 
The above equation is independent of albedo, light source intensity and nonlinear normalisation term. Again as before, we can rewrite (21) as a PDE in the form of (18) with
(22) 
where , and
(23) 
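The albedo invariant coefficients can be sketched the same way (function name ours); for noiseless Lambertian renders the resulting constraint holds exactly, whatever the albedo:

```python
import numpy as np

def intensity_ratio_coefficients(i1, i2, s, s_bar):
    """Per-pixel coefficients of the albedo invariant PDE, in the form of
    (18): b_bar(u) . grad z(u) = h_bar(u), given unpolarised images i1
    (under light s) and i2 (under light s_bar)."""
    b_bar = np.stack([i2 * s[0] - i1 * s_bar[0],
                      i2 * s[1] - i1 * s_bar[1]], axis=-1)
    h_bar = i2 * s[2] - i1 * s_bar[2]
    return b_bar, h_bar
```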
4 A unified PDE formulation
Commencing from the constraints introduced in Sec. 3, in this section we show how to solve several different problems in photopolarimetric shape estimation. The common feature is that these are all linear in the unknown height, and are expressed in a unified formulation in terms of a system of PDEs in the same general form:
A(u) ∇z(u) = g(u),   ∀u ∈ Ω,   (24)

where A(u) ∈ R^{m×2} and g(u) ∈ Rᵐ, denoting by Ω ⊂ R² the reconstruction domain and with m being 2 or 3 depending on the case. (24) is a compact and general equation, suitable for describing several cases in a unified differential formulation that solves directly for surface height.
Different combinations of the three constraints described in Sec. 3 that are linear in the surface gradient can be used in the formulation of (24). Each corresponds to different assumptions and has different pros and cons. We explore three variants and show that [24] is a special case of our formulation. We summarise the alternative formulations in Tab. 1.
Table 1: Constraints used by each formulation.

Method      | Phase angle | DOP ratio | Intensity ratio
[24]        |      ✓      |     ✓     |
Proposed 1  |      ✓      |           |        ✓
Proposed 2  |             |     ✓     |        ✓
Proposed 3  |      ✓      |     ✓     |        ✓
4.1 Single light and polarisation formulation
This case has been studied in [24]. It uses a single polarisation image, requires known illumination (though [24] show how this can be estimated if unknown) and assumes that albedo is known or uniform. This last assumption is quite restrictive, since it can only be applied to objects with homogeneous surfaces. With just a single illumination condition, only the phase angle and DOP ratio constraints are available. This thus becomes a special case of our general unified formulation (24), where A(u) and g(u) are defined as

A(u) = [b(u)ᵀ; (sin φ(u), −cos φ(u))],   g(u) = [h(u); 0],   (25)

with b(u) and h(u) as in (19) and (20).
4.2 Proposed 1: Albedo invariant formulation
Our first proposed method uses the phase angle constraint (14) and two unpolarised images, taken from two different light source directions, obtained through (12) and combined as in (21). In this case the problem studied is described by the system of PDEs (24) with

A(u) = [(sin φ(u), −cos φ(u)); b̄(u)ᵀ],   g(u) = [0; h̄(u)],   (26)

where b̄(u) and h̄(u) are defined as in (22) and (23). The phase angle does not depend on the albedo and the intensity ratio constraint is invariant to albedo. As a result, this formulation is particularly powerful because it allows albedo invariant height estimation. Moreover, the light source directions in the two images can be estimated (again, in an albedo invariant manner) using the method in Sec. 6.
Once surface height has been estimated, we can compute the surface normal at each pixel and it is then straightforward to estimate an albedo map using (11). Where we have two diffuse observations, we can compute albedo from two equations of the form of (11) in a least squares sense. In real data, where we have specular pixel labels, we use only the diffuse observations at each pixel. To avoid artifacts at the boundary of specular regions, we introduce a gradient consistency term into the albedo estimation. We encourage the gradient of the albedo map to match the gradients of the intensity image for diffuse pixels.
4.3 Proposed 2: Phase invariant formulation
Our second proposed method uses only the DOP ratio and the intensity ratio constraints. This means that phase angle estimates are not used. The advantage of this is that phase angles are subject to a shift of π/2 at specular reflections when compared to diffuse reflections. The phase angle constraint therefore relies upon having accurate per-pixel specularity labels, which classify reflections as either dominantly specular or diffuse (or alternatively upon using a mixed polarisation model [25] with a four-way ambiguity). In this case we need a) two unpolarised intensity images, taken with two different light source directions s and s̄, obtained through (12), b) polarisation information from the function f(ρ(u), η) and c) knowledge of the albedo map. We need v, s and s̄ to be non-coplanar in order for the matrix field A(u) not to be singular. Note that the function f(ρ(u), η), obtained from polarisation information (as in (10)), is the same for the two required images: it does not depend on the light source directions but only on the viewer direction, which does not change. This formulation can be deduced starting from (21) and (17), arriving at a PDE system as in (24) with

A(u) = [b(u)ᵀ; b̄(u)ᵀ],   g(u) = [h(u); h̄(u)],   (27)

using (19), (20), (22) and (23) to define the vector fields b(u), b̄(u) and the scalar fields h(u), h̄(u) that appear in A(u) and g(u).
4.4 Proposed 3: Most constrained formulation
Our final proposed method combines all of the previous constraints, leading to a problem of the form (24) with
A(u) = [b(u)ᵀ; (sin φ(u), −cos φ(u)); b̄(u)ᵀ],   g(u) = [h(u); 0; h̄(u)].   (28)
This formulation uses the most information and so is potentially the most robust method. However, it requires known albedo in order to use the DOP ratio constraint. Nevertheless, it is possible to first apply proposed method 1, estimate the albedo and then re-estimate surface height using the maximally constrained formulation and the estimated albedo map. In fact, the best performance is obtained by iterating these two steps, alternately using the surface height estimate to compute albedo and then using the updated albedo to recompute surface height.
4.5 Extension to colour images
We now consider how to extend the above systems of equations when colour information is available. If a surface is lit by a coloured point source, then each pixel provides three equations of the form in (11). In principle, this provides no more information than a grayscale observation since the surface normal and light source direction are fixed across colour channels. However, in the presence of noise using all three observations improves robustness. In particular, if the albedo value at a pixel is lower in one colour channel, the signal to noise ratio will be worse in that channel than the others. For a multicoloured object, it is impossible to choose a single colour channel that provides the best signal to noise ratio across the whole object. For this reason, we propose to use information from all colour channels where available.
We already exploit colour information in the estimation of the polarisation image in Sec. 2.2. Hence, the phase angle estimates have already benefited from the improved robustness. Both the DOP ratio and intensity ratio constraints can also exploit colour information by repeating each constraint three times, once for each colour channel. In the case of the intensity ratio, the colour albedo once again cancels if ratios are taken between the same colour channels under different light source directions.
5 Height estimation via linear least squares
We have seen that each of the variants illustrated in the previous section, each with different advantages, can be written as a PDE system (24). Denoting by N the number of pixels, we discretise the gradient in (24) via finite differences, arriving at the following linear system in the unknown heights:

M z = g,   (29)

where M = Ā G ∈ R^{mN×N}, with G ∈ R^{2N×N} the matrix of finite difference gradients and Ā ∈ R^{mN×2N} the discrete per-pixel version of the matrix field A(u) (m = 2 for (25), (26) and (27), m = 3 for (28)). g ∈ R^{mN} is the discrete per-pixel version of the function g(u), and z ∈ Rᴺ is the vector of the unknown height values. The resulting discrete system is large, since we have mN linear equations in N unknowns, but sparse, since M has few nonzero values in each row. The per-pixel matrix A(u) is full-rank for each choice of A(u) that comes from the proposed formulations in Sec. 4, under the different assumptions specified for each case. The per-pixel matrix related to [24] is full-rank except in one case: when the first two components of the light source vector are nonzero (s₁, s₂ ≠ 0) and the phase angle satisfies tan φ(u) = −s₁/s₂ at at least one pixel. In that case, the matrix has a rank deficiency (though in practice a phase angle attaining exactly this value, up to numerical tolerance, is unlikely).
We want to find a solution of (29) in the least-squares sense, i.e. find a vector z* such that

z* = argmin_z ‖M z − g‖².   (30)

Considering the associated system of normal equations

Mᵀ M z = Mᵀ g,   (31)

it is well known that if there exists z* that satisfies (31), then z* is also a solution of the least-squares problem, i.e. satisfies (30). Since M is a full-rank matrix, the matrix Mᵀ M is not singular, hence there exists a unique solution of (31) for each data term g. Since neither A(u) nor g(u) depend on z in (24), the solution can be computed only up to an additive constant (which is consistent with the orthographic projection assumption). To resolve the unknown constant, knowledge of z at just one pixel is sufficient. In our implementation, we remove the height of one pixel from the variables and substitute its zero value elsewhere.
6 Two source lighting estimation
Our three proposed shape estimation methods require knowledge of the two light source directions. Previously, Smith et al. [24] showed that a single polarisation image can be used to estimate illumination conditions up to a binary ambiguity. However, to do so, they assumed that the albedo was known or uniform, and they also worked only with a single colour channel. In a two source setting, we show that it is possible to estimate both light source directions simultaneously, and to do so in an albedo invariant manner. Moreover, we can exploit information across different colour channels to improve robustness to noise. Hence, our three methods can be used in an uncalibrated setting.
The intensity ratio (21) provides one equation per pixel relating the unpolarised intensities, the surface gradient and the light source directions. Given two polarisation images with different light directions, we have one such equation per pixel and six unknowns in total (the components of s and s̄). We assume that ambiguous surface gradient estimates are known from φ(u) and f(ρ(u), η), and then use (21) to estimate the light source directions.
The intensity ratio (21) is homogeneous in s and s̄ and so has the trivial solution s = s̄ = 0. If we assume that the intensity of the light source remains constant in each colour channel across the two images, then this intensity divides out when taking an intensity ratio and so the length of the light source vectors is arbitrary. We therefore constrain them to unit length (avoiding the trivial solution) and represent them by spherical coordinates (θ_s, φ_s) and (θ_s̄, φ_s̄), such that s = [cos φ_s sin θ_s, sin φ_s sin θ_s, cos θ_s]ᵀ and similarly for s̄. This reduces the number of unknowns to four. We can now write the residual at each pixel given an estimate of the light source directions. There are two possible residuals, depending on which of the two ambiguous polarisation normals we use. From the phase angle φ(u) and the zenith angle estimated from the degree of polarisation using (10), we have two possible surface normal directions at each pixel and therefore two possible gradients: ∇z⁺(u) and ∇z⁻(u). Hence, writing n̄^±(u) = [−∇z^±(u)ᵀ, 1]ᵀ, the residuals at pixel u in channel c are given by either

r_c⁺(u) = i_c¹(u) (n̄⁺(u) · s̄) − i_c²(u) (n̄⁺(u) · s)   or   r_c⁻(u) = i_c¹(u) (n̄⁻(u) · s̄) − i_c²(u) (n̄⁻(u) · s).

We can now write a minimisation problem for light source direction estimation by summing the minimum of the two residuals over all pixels and colour channels:

min over (θ_s, φ_s, θ_s̄, φ_s̄) of Σ_c Σ_u min[(r_c⁺(u))², (r_c⁻(u))²].

The minimum of two convex functions is not itself convex and so this optimisation is nonconvex. However, we find that, even with a random initialisation, it almost always converges to the global minimum. As in [24], the solution is still subject to a binary ambiguity: if (s, s̄) is a solution then (Ts, Ts̄) is also a solution, with T = diag(−1, −1, 1), corresponding to the convex/concave ambiguity. We resolve this simply by choosing the maximal solution when surface height is later recovered.
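The lighting estimation can be sketched as follows, minimising the sum of per-pixel minima of the two squared residuals over the four spherical coordinates (an illustrative implementation; the optimiser choice and the interface are our own assumptions, not the paper's):

```python
import numpy as np
from scipy.optimize import minimize

def estimate_lights(i1, i2, g_plus, g_minus, p0=None):
    """Estimate two unit light directions from the intensity-ratio residuals.

    i1, i2 : (K,) unpolarised intensities under lights s and s_bar;
    g_plus, g_minus : (K, 2) the two ambiguous gradient candidates per pixel;
    p0 : optional initial [theta_s, phi_s, theta_sb, phi_sb] (random if None).
    Returns the two estimated directions and the final cost.
    """
    def unit(theta, phi):
        return np.array([np.cos(phi) * np.sin(theta),
                         np.sin(phi) * np.sin(theta),
                         np.cos(theta)])

    def ndots(g, d):               # n_bar . d for n_bar = [-z_x, -z_y, 1]
        return -g[:, 0] * d[0] - g[:, 1] * d[1] + d[2]

    def cost(p):
        s, sb = unit(p[0], p[1]), unit(p[2], p[3])
        r_plus = i1 * ndots(g_plus, sb) - i2 * ndots(g_plus, s)
        r_minus = i1 * ndots(g_minus, sb) - i2 * ndots(g_minus, s)
        return np.sum(np.minimum(r_plus ** 2, r_minus ** 2))

    if p0 is None:
        p0 = np.random.default_rng().uniform(0.0, np.pi / 2, 4)
    res = minimize(cost, p0, method='Nelder-Mead',
                   options={'xatol': 1e-12, 'fatol': 1e-14,
                            'maxiter': 20000, 'maxfev': 40000})
    return unit(res.x[0], res.x[1]), unit(res.x[2], res.x[3]), res.fun
```

For noiseless data the global minimum has zero cost, attained at the true lights (and at their convex/concave twin).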
7 Experiments
[Figure 3: Qualitative results on the Mozart data: inputs with uniform and varying albedo, ground truth height, and reconstructions by [24] and the proposed methods (Prop. 1, 2, 3 and 1+3).]
We begin by using synthetic data generated from the Mozart height map (Fig. 3). We differentiate to obtain surface normals and compute unpolarised intensities by rendering the surface using two light sources s and s̄ according to (11). We experiment with both uniform albedo and varying albedo, for which we use a checkerboard pattern. We simulate the effect of polarisation according to (2), rotating the polariser angle through a sequence of evenly spaced increments. Next, we corrupt this data by adding Gaussian noise with zero mean and standard deviation σ, then saturate and quantise to 8 bits. This noisy data provides the input to our reconstruction. First, we estimate a polarisation image using the method in Sec. 2.2, then apply each of the proposed methods or the state-of-the-art comparison method [24] to recover the height map.

In Tab. 2 we report the Root Mean Square (RMS) error in the surface height (in pixels) and the mean angular error (in degrees) in the surface normals obtained by differentiating the estimated surface height. In Fig. 3 we show a sample of qualitative results from this experiment. In all cases, more than one of our proposed methods outperforms [24]. When albedo is uniform, our phase invariant (Prop. 2) or maximally constrained (Prop. 3) solution provides the best results. When albedo is nonuniform, the albedo invariant method (Prop. 1) provides much better performance. Although the combination of the albedo invariant method followed by the maximally constrained method (Prop. 1+3) does not give quantitatively the best performance, we find that on real world data containing more complex noise and specular reflections, this approach is the most robust.
In Fig. 2 we show qualitative results on two real objects with spatially varying albedo. From left to right we show: an image from the input sequence; the surface normals of the estimated height map (the inset sphere shows how orientation is visualised as colour); the estimated albedo map; a re-rendering of the estimated surface and albedo map under novel lighting with Blinn-Phong reflectance [5]; a rotated view of the estimated surface; and, for comparison, reconstructions of the same surfaces using [24]. The results of [24] are highly distorted in the presence of varying albedo. Our approach avoids transfer of albedo details into the recovered shape, leading to convincing relighting results.
Table 2: RMS surface height error (pix) and mean angular error of surface normals (deg), reported for three noise levels of increasing severity (left to right).

Setting                             Method     Height Normal  Height Normal  Height Normal
                                               (pix)  (deg)   (pix)  (deg)   (pix)  (deg)
Uniform albedo, known lighting      [24]        1.12   2.85    1.68   4.48    5.06  11.28
                                    Prop. 1     1.78   2.52    1.94   3.30    3.49   7.22
                                    Prop. 2     0.23   1.45    0.70   1.70    6.50   5.33
                                    Prop. 3     0.42   1.03    0.52   1.74    1.53   4.73
                                    Prop. 1+3   3.37   3.22    3.62   4.03    5.82   9.15
Uniform albedo, estimated lighting  [24]        1.10   2.84    1.55   4.36    4.94  11.16
                                    Prop. 1     1.77   2.51    1.88   3.23    3.04   6.86
                                    Prop. 2     0.23   1.45    0.71   1.71    5.87   5.68
                                    Prop. 3     0.41   1.02    0.49   1.74    1.47   4.88
                                    Prop. 1+3   3.36   3.21    3.57   3.97    5.73   8.93
Unknown albedo, known lighting      [24]       22.50  28.03   21.63  27.76   20.76  26.74
                                    Prop. 1     2.74   4.18    3.28   5.76    6.65  13.11
                                    Prop. 2   141.19  59.69  140.04  59.49  131.16  57.69
                                    Prop. 3    18.62  16.76   18.58  16.85   17.33  16.82
                                    Prop. 1+3   5.22   9.59    5.80  11.26    7.56  16.50
Unknown albedo, estimated lighting  [24]        7.78  18.10    8.20  18.82    9.93  22.68
                                    Prop. 1     2.73   4.17    3.19   5.62    6.53  12.98
                                    Prop. 2   140.56  59.58  133.76  58.31   91.24  47.88
                                    Prop. 3    18.66  16.79   19.02  17.15   20.34  18.76
                                    Prop. 1+3   5.21   9.57    5.75  11.09    8.84  19.56
8 Conclusions
In this paper we have introduced a unifying formulation for recovering height from photopolarimetric data and proposed a variety of methods that use different combinations of linear constraints. We proposed a more robust way to estimate a polarisation image from multichannel data and showed how to estimate lighting from two source photopolarimetric images. Together, our methods provide uncalibrated, albedo invariant shape estimation with only two light sources. Since our unified differential formulation does not depend on a specific camera setup or a chosen reflectance model, the most obvious targets for future work are to move to a perspective projection, to consider more complex reflectance models and to better exploit the information available in specular reflection and polarisation. In addition, since our methods directly estimate surface height, it would be straightforward to incorporate positional constraints, for example provided by binocular stereo.
Acknowledgements
This work was supported mainly by the "GNCS-INdAM", in part by ONR grant N000141512013 and the UC San Diego Center for Visual Computing. W. Smith was supported by EPSRC grant EP/N028481/1.
References
 [1] G. A. Atkinson and E. R. Hancock. Recovery of surface orientation from diffuse polarization. IEEE Transactions on Image processing, 15(6):1653–1664, 2006.
 [2] G. A. Atkinson and E. R. Hancock. Shape estimation using polarization and shading from two views. IEEE Trans. Pattern Anal. Mach. Intell., 29(11):2001–2017, 2007.
 [3] G. A. Atkinson and E. R. Hancock. Surface reconstruction using polarization and photometric stereo. In Proc. CAIP, pages 466–473, 2007.
 [4] P. N. Belhumeur, D. J. Kriegman, and A. Yuille. The Bas-relief ambiguity. Int. J. Comput. Vision, 35(1):33–44, 1999.
 [5] J. F. Blinn. Models of light reflection for computer synthesized pictures. Computer Graphics, 11(2):192–198, 1977.
 [6] O. Drbohlav and R. Šára. Unambiguous determination of shape from photometric stereo with unknown light sources. In Proc. ICCV, pages 581–586, 2001.
 [7] A. Ecker and A. D. Jepson. Polynomial shape from shading. In Proc. CVPR, pages 145–152. IEEE, 2010.
 [8] C. P. Huynh, A. RoblesKelly, and E. Hancock. Shape and refractive index recovery from singleview polarisation images. In Proc. CVPR, pages 1229–1236, 2010.
 [9] C. P. Huynh, A. RoblesKelly, and E. R. Hancock. Shape and refractive index from singleview spectropolarimetric images. Int. J. Comput. Vision, 101(1):64–94, 2013.
 [10] A. Kadambi, V. Taamazyan, B. Shi, and R. Raskar. Polarized 3D: Highquality depth sensing with polarization cues. In Proc. ICCV, 2015.
 [11] A. H. Mahmoud, M. T. ElMelegy, and A. A. Farag. Direct method for shape recovery from polarization and shading. In Proc. ICIP, pages 1769–1772, 2012.
 [12] R. Mecca and M. Falcone. Uniqueness and approximation of a photometric shapefromshading model. SIAM Journal on Imaging Sciences, 6(1):616–659, 2013.
 [13] R. Mecca and Y. Quéau. Unifying diffuse and specular reflections for the photometric stereo problem. In Proc. WACV, 2016.
 [14] D. Miyazaki, M. Kagesawa, and K. Ikeuchi. Transparent surface modeling from a pair of polarization images. IEEE Trans. Pattern Anal. Mach. Intell., 26(1):73–82, 2004.
 [15] D. Miyazaki, R. T. Tan, K. Hara, and K. Ikeuchi. Polarizationbased inverse rendering from a single view. In Proc. ICCV, pages 982–987, 2003.
 [16] O. Morel, F. Meriaudeau, C. Stolz, and P. Gorria. Polarization imaging applied to 3D reconstruction of specular metallic surfaces. In Proc. EI 2005, pages 178–186, 2005.
 [17] S. G. Narasimhan, V. Ramesh, and S. Nayar. A class of photometric invariants: Separating material from shape and illumination. In Proc. ICCV, volume 2, pages 1387 – 1394, 2003.
 [18] S. Nayar, X. Fang, and T. Boult. Separation of reflection components using color and polarization. Int. J. Comput. Vision, 21(3):163–186, 1997.
 [19] T. T. Ngo, H. Nagahara, and R. Taniguchi. Shape and light directions from shading and polarization. In Proc. CVPR, pages 2310–2318, 2015.
 [20] Y. Quéau, R. Mecca, and J.-D. Durou. Unbiased photometric stereo for colored surfaces: A variational approach. In Proc. CVPR, pages 4359–4368, 2016.
 [21] Y. Quéau, R. Mecca, J.-D. Durou, and X. Descombes. Photometric stereo with only two images: A theoretical study and numerical resolution. Image and Vision Computing, 57:175–191, 2017.
 [22] S. Rahmann and N. Canterakis. Reconstruction of specular surfaces using polarization imaging. In Proc. CVPR, 2001.
 [23] W. Smith and F. Fang. Height from photometric ratio with modelbased light source selection. Computer Vision and Image Understanding, 145:128 – 138, 2016.
 [24] W. A. P. Smith, R. Ramamoorthi, and S. Tozza. Linear depth estimation from an uncalibrated, monocular polarisation image. In Computer Vision  ECCV 2016, volume 9912 of Lecture Notes in Computer Science, pages 109–125, 2016.
 [25] V. Taamazyan, A. Kadambi, and R. Raskar. Shape from mixed polarization. arXiv preprint arXiv:1605.02066, 2016.
 [26] S. Tozza, R. Mecca, M. Duocastella, and A. Del Bue. Direct differential photometric stereo shape recovery of diffuse and specular surfaces. Journal of Mathematical Imaging and Vision, 56(1):57–76, 2016.
 [27] L. B. Wolff. Polarization vision: a new sensory approach to image understanding. Image Vision Comput., 15(2):81–93, 1997.
 [28] L. B. Wolff and T. E. Boult. Constraining object features using a polarization reflectance model. IEEE Trans. Pattern Anal. Mach. Intell., 13(7):635–657, 1991.