1 Introduction
The reconstruction of three-dimensional (3D) information from two-dimensional images is a classic problem in computer vision. Many approaches exist to tackle the task, as documented by a rich literature and a number of excellent monographs, among them [14, 43, 45]. Let us also mention the survey [16] on 3D reconstruction methods, which is more oriented towards the computer graphics community. Following [45], one may distinguish approaches based on the point spread function, as in depth from focus or defocus [49], triangulation-based methods such as stereo vision [8] or structure from motion [41], and intensity-based or photometric methods such as shape from shading or photometric stereo [14]. An abundance of specific approaches exists that may be roughly classified along the lines of this distinction. Generally speaking, these may be distinguished by the type of image data, the number of acquired input images, or whether the camera or objects in the scene may move. As an example, let us mention techniques based on specular flow [2, 10, 32], which rely on relative motion between a specular object and its environment.
Focusing on photometric approaches, as noted by Woodham [46] and Ihrke et al. [16], these typically employ a static viewpoint and variations in illumination to obtain the 3D structure. While shape from shading is the corresponding photometric technique classically making use of just one input image, cf. [14], photometric stereo (PS) allows the depth map of a static scene to be reconstructed from several input images taken from a fixed viewpoint under different illumination conditions. The pioneer of the PS problem was Woodham in 1978 [46], see also Horn et al. [15]. Woodham derived the underlying image irradiance equation as a relation between the image intensity and the reflectance map. It has been shown that the Lambertian surface orientation can be uniquely determined from the resulting appearance variations provided that the surface is illuminated by at least three known, non-coplanar light sources, each for an individual input image [47].
As also recognized in [16], most of the later approaches have followed Woodham’s idea and kept two simplifying assumptions. The first one, of particular importance, supposes that the surface reflects light according to Lambert’s law [18]. This simple reflectance model can still be a reasonable assumption for certain types of materials, when the scene is composed of matte surfaces, but it fails for shiny objects that concentrate the reflected light. Such surfaces can readily be observed in real-world situations. It is well established that a rough surface illuminated by a light source reflects a significant part of the light as described by a non-Lambertian reflectance model [35, 6, 4]. In such models the intensity of the reflected light depends not only on the light direction but also on the viewing angle, and the light is reflected in a mirror-like way accompanied by a specular lobe. The second assumption in classic PS models is that scene points are projected orthographically during the photographic process. This is a reasonable assumption if objects are far away from the camera, but not if they are close, in which case perspective effects grow to be important. The importance of using the perspective projection in such a situation has been demonstrated in the computer vision literature; in the context of photometric methods let us refer for instance to the work [39], where a corresponding example is discussed in detail.
Many studies in PS considered the non-Lambertian effects as outliers and tried to remove them. Mukaigawa et al. [26] suggested a random sample consensus based approach where only diffuse reflection is selected from among the candidates. Mallick et al. [20] introduced a rotation transformation for transforming the RGB color channel to a SUV color channel with the specular channel S and diffuse channels UV. The specular channel S is then used for removing specularities. Chanki et al. [50] introduced a strategy based on a maximum feasible subsystem approach. In their method, the maximum subset of images satisfying the Lambertian constraint is obtained among the whole set of PS images that include non-Lambertian effects like specularities. A median filtering technique is presented by Miyazaki et al. [25] to evade the influence of specular reflections, which they considered as outliers. Another method relying on this concept is presented by Tang et al. [36], who proposed a coupled Markov random field based on treating the specularities and shadows as noise. Wu et al. [48] considered the 3D recovery problem using a convex optimization technique for separating specularities as deviations from the basic Lambertian assumption in the objective function. Smith and Fang [33] used a model-based approach that excludes observations that do not fit the Lambertian image formation model. Hertzmann and Seitz [13] employed reference objects, which are considered to be of homogeneous material for simplicity, meaning that purely specular or purely diffuse materials are addressed. In some other works more complex appearance models are fitted to estimated data, relying e.g., as in the work of Goldman et al. [11], on a convex combination of a small number of known materials, or, as in the paper of Oxholm and Nishino [27], on a probabilistic formulation linking geometry and lighting estimation by introducing priors.
Regarding the perspective projection, one of the first works combining this technique with PS was performed by Galo and Tozzi [9]. Their work relies on considering point light sources proximate to the illuminated object surface. A perspective PS model based on Lambertian reflection was also proposed by Tankus and Kiryati [37]. A technically different perspective method for Lambertian PS using hyperbolic partial differential equations (PDEs) is presented by Mecca et al. [24]. Turning to the use of non-Lambertian surface reflectance to account for specular highlights in photometric methods, we may note that a shape-from-shading method using the Phong model has been shown to give very reasonable results when employed within a useful process chain [44]. Therefore it seems apparent that an extension to PS making use of a similar image irradiance equation may yield even better results, given that in PS more input images are at hand than in shape from shading.
Concerning perspective PS techniques that may also deal with non-Lambertian effects, the recent works of Mecca et al. [23, 22] should be mentioned. In these approaches, an individual model for PS is suggested by considering separated purely specular and purely Lambertian reflections using five and ten input images, respectively. The separated processing of the reflectance models requires input images with a minimum value of saturation [42], which may lead to cumbersome limitations for some real-world applications, e.g. in the case of spatially varying materials. When solving the resulting hyperbolic PDEs, the authors of [23] rely on the fast marching method. In order to apply this technique, the unknown depth value of a certain surface point must be given in advance. However, this information is not always available, especially in real-world applications. Let us note that a similar approach as in the works of Mecca and his coauthors is also applied in the orthographic PS method in [42], which is based on dividing the surface into two different, purely specular and purely diffuse parts, a difficult task as also mentioned in [42].
Our contributions.
The novel method we propose combines the conceptual advantages of considering perspective projection and non-Lambertian reflectance simultaneously, based on the complete Blinn-Phong model known from computer graphics [5, 30]. By taking into account the complete reflection model, our method does not rely on a separation of specular and diffuse reflection at any stage of the computation. In particular, no surface or scene dividing task or previous knowledge of the depth of the scene is required. As a side effect, our method is inherently able to handle objects with spatially varying materials without modification. In addition, it is worth noting that we will use three input images in all our experiments, which is the minimum number of inputs necessary for the classic orthographic PS framework with the Lambertian reflectance model. In recent work it has been discussed that the use of three input images can be advantageous [42].
Involving the mentioned model assumptions leads to a concrete PS algorithm as sketched in Fig. 1. By the combination of the mentioned benefits we propose a method that is more robust and effectively easier to use than those in previous literature. As a side note, since the complete Blinn-Phong model we employ is extensively studied in computer graphics, the surface reflectance in input images as well as expected computational results are potentially easier to interpret than in methods that rely on complex preprocessing steps. Conceptually, our work extends the approach presented by Khanian et al. [17]. A main point of the latter conference article is to study the effect of lighting directions on numerical stability, while the presentation there is restricted to one spatial dimension.
In addition to presenting an appealing alternative PS approach, we investigate two different methods of realizing the perspective projection. The first method is to compute the normal field and then to modify the gradient field based on the perspective projection, as also proposed in [31, 28]. As it manipulates the normal vectors, we refer to this technique as the perspective projection based on normal field (PPN) method. The second method is to consider a perspective parameterization of the photographed object surfaces for obtaining the gradient field of the surface. We call this approach the perspective projection based on surface parameterization (PPS) method. Furthermore, we investigate the effect of modeling a camera with a charge-coupled device sensor (CCD camera) on the reconstruction process and the quality of the results.
2 Perspective Projection
In this section we introduce the two different techniques applied for obtaining the perspective projection. As we will also consider a Lambertian perspective PS model for experimental comparison, we recall its construction here.
A Lambertian scene with albedo ρ_d is illuminated from directions l_i, where i ∈ {1, …, m}, by corresponding point light sources at infinity, with diffuse intensity i_d, so that the following reflectance equation [14] is satisfied:

    I_i(x, y) = i_d ρ_d ⟨n(x, y), l_i⟩,   i = 1, …, m,   (1)

where ρ_d is the diffuse material parameter, and I_i(x, y) and n(x, y) are the intensity and the surface normal at pixel (x, y), respectively.
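As an illustration of Eq. (1), classic orthographic Lambertian PS can be solved pixel-wise by linear least squares; the following minimal sketch (function name and stacking convention are our own) recovers unit normals and the albedo from m ≥ 3 images:

```python
import numpy as np

def lambertian_ps(I, L, i_d=1.0):
    """Classic orthographic Lambertian PS in the sense of Woodham.

    I : (m, H, W) stack of m >= 3 images, L : (m, 3) unit light
    directions as rows, i_d : diffuse light intensity.
    Returns per-pixel unit normals (H, W, 3) and albedo (H, W).
    """
    m, H, W = I.shape
    b = I.reshape(m, -1) / i_d                 # (m, H*W) right-hand sides
    # Least-squares solution of L @ (rho_d * n) = I / i_d per pixel
    g, *_ = np.linalg.lstsq(L, b, rcond=None)  # (3, H*W) scaled normals
    rho = np.linalg.norm(g, axis=0)            # albedo = length of g
    n = g / np.maximum(rho, 1e-12)             # normalize to unit normals
    return n.T.reshape(H, W, 3), rho.reshape(H, W)
```

With three non-coplanar lights the system is uniquely solvable per pixel, matching the classic result cited above.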
2.1 Modifying Normal Vectors
The first perspective projection method deals with processing the field of normal vectors n = (n_1, n_2, n_3)^T. Once the normal map is reconstructed from the orthographic image irradiance equations, the depth map is recovered by passing the following components, given in Eq. (2) and Eq. (3), to the integrator:

    u_x = −n_1 / d,   u_y = −n_2 / d,   u = ln z,   (2)

where d for a camera with the focal length f is:

    d = x n_1 + y n_2 + f n_3.   (3)

In what follows, we denote the perspective projection realised via projection of the normal vector by PPN.
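A minimal sketch of the PPN conversion may look as follows; the conversion formula mapping the normal map to the gradient of the log-depth u = ln z is an assumption consistent with derivations in the style of [31, 28], and the pixel-coordinate centering is our own choice:

```python
import numpy as np

def ppn_gradients(n, f):
    """Convert a normal map from orthographic PS into the gradient of
    log-depth u = ln z under perspective projection (PPN sketch).

    n : (H, W, 3) unit normals, f : focal length in pixel units.
    Assumed relation: (u_x, u_y) = (-n1, -n2) / (x*n1 + y*n2 + f*n3),
    with (x, y) image coordinates centered at the principal point.
    """
    H, W, _ = n.shape
    y, x = np.mgrid[0:H, 0:W].astype(float)
    x -= (W - 1) / 2.0                          # center the coordinates
    y -= (H - 1) / 2.0
    d = x * n[..., 0] + y * n[..., 1] + f * n[..., 2]
    d = np.where(np.abs(d) < 1e-12, 1e-12, d)   # guard against division by zero
    return -n[..., 0] / d, -n[..., 1] / d
```

The resulting log-depth gradient field is then handed to the integrator, and exponentiating the integrated u recovers the depth up to a global scale.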
2.2 Direct Perspective Surface Parameterization
Another approach to apply the perspective projection is via a corresponding surface parameterization, shown in Fig. 2. In order to project the real-world point P = (X, Y, Z)^T to the point p = (x, y)^T on the image plane, we consider the Thales theorem in both the horizontal (red) and the vertical (blue) triangle. So we have:

    x = f X / Z,   y = f Y / Z.   (4)
On the other hand, in reality the image plane lies behind the lens. Therefore, the surface is parameterized using the following formulation, where f is the focal length. For all points (x, y) in the image plane Ω:

    S(x, y) = z(x, y)/f · (x, y, f)^T.   (5)

From this surface parameterization, we can extract the partial derivatives of the surface:

    S_x(x, y) = 1/f · (z + x z_x, y z_x, f z_x)^T,   (6)
    S_y(x, y) = 1/f · (x z_y, z + y z_y, f z_y)^T.   (7)

Finally, we get the surface normal vector as the cross product of the partial derivatives of the surface:

    n(x, y) = S_x × S_y = z/f² · (−f z_x, −f z_y, z + x z_x + y z_y)^T.   (8)
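The cross-product computation in this derivation can be verified symbolically, e.g. with sympy; the parameterization S(x, y) = z/f · (x, y, f)^T used below is our assumption about the convention, so the sketch illustrates the structure of the check rather than a fixed sign choice:

```python
import sympy as sp

x, y, f = sp.symbols('x y f', positive=True)
z = sp.Function('z')(x, y)

# Assumed perspective surface parameterization S(x, y) = z/f * (x, y, f)^T
S = sp.Matrix([x * z / f, y * z / f, z])

# Tangent vectors and their cross product give an (unnormalized) normal
n = S.diff(x).cross(S.diff(y))

zx, zy = z.diff(x), z.diff(y)
expected = sp.Matrix([-z * zx / f,
                      -z * zy / f,
                      z * (z + x * zx + y * zy) / f**2])
```

Simplifying `n - expected` to the zero vector confirms the normal direction up to the positive factor z/f².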
So, in this case, the obtained surface normal (8) will be used in the image irradiance equation. We recall here the Lambertian perspective image irradiance equation [37], as this will be extended in our model. In order to remove the dependency of the image irradiance equation on the unknown depth z, it is substituted by means of u = ln z, so that we have to apply z = e^u to obtain the depth out of our new unknown u. This yields:

    I_i(x, y) = i_d ρ_d ⟨n̄, l_i⟩ / |n̄|,   n̄ = (−f u_x, −f u_y, 1 + x u_x + y u_y)^T.   (9)
A closed-form solution for the gradient field is obtained in [37]. For completeness of the presentation, we now recall the main points of its construction.
Let us consider three input images (the minimum needed inputs in classic PS). By solving the first image irradiance equation in (9) for the albedo, and replacing it in the second and third image irradiance equations, a linear system of equations has to be solved for the unknown vector (u_x, u_y)^T:

    [ α_2  β_2 ] [ u_x ]   =   [ γ_2 ]
    [ α_3  β_3 ] [ u_y ]       [ γ_3 ],   (10)

where, writing l_i = (l_i^1, l_i^2, l_i^3)^T, we have for k = 2, 3:

    a_i = x l_i^3 − f l_i^1,   (11)
    b_i = y l_i^3 − f l_i^2,   (12)
    α_k = I_k a_1 − I_1 a_k,   (13)
    β_k = I_k b_1 − I_1 b_k,   (14)
    γ_k = I_1 l_k^3 − I_k l_1^3.   (15)

The explicit solutions are:

    u_x = (γ_2 β_3 − γ_3 β_2) / (α_2 β_3 − α_3 β_2),   u_y = (α_2 γ_3 − α_3 γ_2) / (α_2 β_3 − α_3 β_2).   (16)
Now, we can obtain the albedo of the surface by plugging the resulting gradient vector for instance into the first equation of (9):

    ρ_d = I_1(x, y) |n̄| / (i_d ⟨n̄, l_1⟩).   (17)
2.3 Sensitivity of the Solution
Let us try to assess the sensitivity of the solution with respect to the lighting directions, which may lead to conditions on the illumination. To this end, the non-singularity condition of the matrix of coefficients introduced in the previous paragraph should be explored. After computing the determinant of this matrix and considering the non-singularity condition, non-singularity can be assured in virtually all cases by ensuring that the contributing terms are not zero. This idea leads to the indicator:
(18)
The first three expressions imply the linear independence of the light directions, which can also be obtained from the non-singularity condition of the light directions matrix. The other resulting expressions are different, and satisfying all of them may not be an easy task. Consequently, the sensitivity of the solution to the lighting can be higher than in the PPN approach.
[Fig. 3: specular reflection, diffuse reflection, and the Blinn-Phong model]
3 Blinn-Phong Reflectance Model
Let us introduce the Blinn-Phong reflectance model for addressing the issue of specular reflections of non-Lambertian materials. A useful reflectance model giving an approximation of real-world surface reflectance considers, in addition to the Lambertian reflectance, a specular term, as introduced in the model of Blinn-Phong [5, 30]. We stress the words ”in addition” since in reality most objects show both of these reflections in different areas, cf. Fig. 3. Therefore, they include both reflection models at the same time. In the Blinn-Phong model, the angle of incidence and also the angle between the normal vector n and the halfway vector h of the light and viewing directions are employed, as shown in Fig. 3. Now we consider the Blinn-Phong model under the perspective projection. To this end, we apply again the two different perspective approaches mentioned above.
The basic and complete Blinn-Phong image irradiance equation is defined as:

    I_i(x, y) = i_d ρ_d ⟨n̂, l_i⟩ + i_s ρ_s ⟨n̂, h_i⟩^α,   h_i = (l_i + v) / |l_i + v|,   (19)

where n̂ is the unit surface normal and v the unit viewing direction. Here ρ_s is the specular material parameter, i_s is the specular light source intensity, and the exponent α is also called the specular sharpness or shininess.
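For concreteness, a pointwise evaluation of a Blinn-Phong-type irradiance might be sketched as follows; the function and parameter names are our own, with i_d and i_s denoting the light intensities:

```python
import numpy as np

def blinn_phong_intensity(n, l, v, rho_d, rho_s, alpha, i_d=1.0, i_s=1.0):
    """Evaluate a Blinn-Phong-type irradiance for a unit normal n,
    unit light direction l and unit viewing direction v.

    rho_d, rho_s : diffuse and specular material parameters,
    alpha        : shininess (specular sharpness) exponent.
    """
    n, l, v = (np.asarray(a, dtype=float) for a in (n, l, v))
    h = l + v
    h /= np.linalg.norm(h)                         # halfway vector
    diffuse = i_d * rho_d * max(np.dot(n, l), 0.0)
    specular = i_s * rho_s * max(np.dot(n, h), 0.0) ** alpha
    return diffuse + specular
```

Large values of the shininess exponent concentrate the specular lobe into a narrow highlight, which is the regime relevant for the strongly specular inputs considered later.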
A corresponding orthographic model has been investigated in the shape-from-shading context in [7]. To develop the perspective Blinn-Phong PS model, we focus on the surface parameterization and plug the perspective normal (8) into (19). Considering m input images for m corresponding lighting directions, this yields after some computation the perspective Blinn-Phong reflectance equations as:

    I_i(x, y) = i_d ρ_d ⟨n̄, l_i⟩ / |n̄| + i_s ρ_s (⟨n̄, h_i⟩ / |n̄|)^α,   i = 1, …, m,   (20)

where

    n̄ = (−f u_x, −f u_y, 1 + x u_x + y u_y)^T,   (21)
    u = ln z,   (22)
    h_i = (l_i + v) / |l_i + v|,   (23)
    |n̄| = √(f² u_x² + f² u_y² + (1 + x u_x + y u_y)²).   (24)
3.1 Numerical approach
Now we present the numerical procedure which can be applied for addressing such a highly nonlinear system of equations. Recalling the description of a system of equations as F(w) = 0, where F is given by the equations from (20), we discuss our solution procedure.
In order to cope with such a nonlinear system of equations, we applied the Levenberg-Marquardt method introduced in [19, 21] as a combination of the Gauß-Newton method and the steepest descent technique. In this method, if w_k is the point at iteration k, the next iterate can be computed as:

    w_{k+1} = w_k + d_k,   (25)
    d_k = −(J_k^T J_k + μ_k I)^{−1} J_k^T F(w_k),   (26)

with μ_k > 0. The matrix J_k^T J_k + μ_k I is positive definite and d_k is well-defined. In addition, this method does not need conditions such as the invertibility of the Jacobian matrix J_k or of the Hessian matrix.
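A minimal sketch of such a Levenberg-Marquardt iteration for F(w) = 0 could look as follows; the damping update strategy shown here is one common choice, not necessarily the one used in our implementation:

```python
import numpy as np

def levenberg_marquardt(F, J, w0, mu0=1e-2, tol=1e-10, max_iter=100):
    """Minimal Levenberg-Marquardt iteration for the system F(w) = 0.

    F : residual function, J : its Jacobian, w0 : initial guess.
    The damping mu is decreased after a successful step (toward
    Gauss-Newton) and increased otherwise (toward steepest descent).
    """
    w, mu = np.asarray(w0, dtype=float), mu0
    for _ in range(max_iter):
        r, Jk = F(w), J(w)
        # d = -(J^T J + mu I)^{-1} J^T F(w), cf. Eq. (26)
        d = -np.linalg.solve(Jk.T @ Jk + mu * np.eye(w.size), Jk.T @ r)
        w_new = w + d
        if np.linalg.norm(F(w_new)) < np.linalg.norm(r):
            w, mu = w_new, mu * 0.5    # accept step, trust the model more
        else:
            mu *= 2.0                  # reject step, increase damping
        if np.linalg.norm(d) < tol:
            break
    return w
```

The damped normal matrix stays positive definite for any mu > 0, so each step is well-defined even where the plain Gauss-Newton system would be singular.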
Our numerical approach for the PPS method is based on the following problem formulation. Recalling the perspective Blinn-Phong reflectance equations (20), dividing the three equations pairwise (corresponding to the three used images in our method) leads to a nonlinear system of equations, with equations like the following equation (27) as obtained for dividing the i-th and j-th images:
(27)
with
(28)
(29)
It should be noted that, even in this case of existing specularities, and in the process of solving the perspective PS system for the Blinn-Phong model (20), we still follow Woodham and make use of only three input images.
Furthermore, as for the case of Lambertian PS, we will also deal with the Blinn-Phong model using the perspective version based on transforming the normal vectors (PPN method), i.e. after orthographic Blinn-Phong PS. Finally, the obtained gradient fields are processed by the Poisson integrator; see e.g. [3] for a recent account of surface normal integration.
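One classic way to realize such a Poisson integration is the Fourier-domain least-squares solution of Frankot and Chellappa; the following sketch assumes periodic boundary conditions and is meant as an illustration, not as the specific integrator of [3]:

```python
import numpy as np

def frankot_chellappa(p, q):
    """Integrate a gradient field (p, q) = (z_x, z_y) into a depth map
    via the Fourier-domain least-squares (Poisson) solution.
    Assumes periodic boundaries; the mean depth is a free constant.
    """
    H, W = p.shape
    wx = np.fft.fftfreq(W) * 2.0 * np.pi       # angular frequencies in x
    wy = np.fft.fftfreq(H) * 2.0 * np.pi       # angular frequencies in y
    u, v = np.meshgrid(wx, wy)
    P, Q = np.fft.fft2(p), np.fft.fft2(q)
    denom = u**2 + v**2
    denom[0, 0] = 1.0                          # avoid dividing the DC term by zero
    Z = (-1j * u * P - 1j * v * Q) / denom
    Z[0, 0] = 0.0                              # fix the free integration constant
    return np.real(np.fft.ifft2(Z))
```

For non-rectangular domains or non-periodic boundaries, more careful integrators such as those surveyed in [3] are preferable.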
4 CCD Cameras
We will also investigate the modeling of the CCD camera. In the case of CCD cameras, the following projection mapping is used, as presented in [12].
The matrix

    K = [ f/s_x   τ       x_0 ]
        [ 0       f/s_y   y_0 ]
        [ 0       0       1   ]   (30)

contains the intrinsic parameters of the camera, namely the focal lengths in x and y direction, equal to f/s_x and f/s_y, with the sensor element sizes s_x and s_y, and the principal point (x_0, y_0). The parameter τ is called the skew parameter. Here, we neglect this parameter since it is zero for most normal cameras [12]. Using this matrix, we introduce the following transformation to convert the dimensionless pixel coordinates (x_p, y_p) to the image coordinates (x, y):

    x = (x_p − x_0) s_x,   y = (y_p − y_0) s_y.   (31)

By applying the above-mentioned transformation, the following representation for the projected point is obtained:

    p = (x, y)^T = (f X/Z, f Y/Z)^T.   (32)

The effect of this modeling can be potentially interesting, since this information is not always accessible. The above transformation is called centerizing in the experiments.
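To illustrate the intrinsic-parameter handling, the following sketch builds a CCD intrinsic matrix, projects a 3D point, and applies the centerizing shift; the names and the exact conventions are our assumptions in the style of the intrinsics of [12]:

```python
import numpy as np

def intrinsic_matrix(f, sx, sy, x0, y0, skew=0.0):
    """CCD intrinsic matrix: focal lengths f/sx, f/sy in pixel units,
    principal point (x0, y0); skew is zero for most normal cameras."""
    return np.array([[f / sx, skew, x0],
                     [0.0, f / sy, y0],
                     [0.0, 0.0, 1.0]])

def project(K, X):
    """Perspective projection of a 3D point X onto pixel coordinates."""
    Xa = np.asarray(X, dtype=float)
    x = K @ (Xa / Xa[2])        # normalize by depth, then apply intrinsics
    return x[:2]

def centerize(p_pix, x0, y0, sx, sy):
    """Convert pixel coordinates back to metric image coordinates by
    shifting by the principal point and scaling by the sensor sizes."""
    p = np.asarray(p_pix, dtype=float)
    return np.array([(p[0] - x0) * sx, (p[1] - y0) * sy])
```

Applying centerize after project recovers the metric image coordinates (fX/Z, fY/Z), which is the quantity the perspective PS models above operate on.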
Table 1: MSE comparing the gray values of the reprojected and input images (Beethoven).

    Perspective method | MSE for 1st input | MSE for 2nd input | MSE for 3rd input
    PPN method         | 0.004239          | 0.003297          | 0.007535
    PPS method         | 0.008042          | 0.021409          | 0.007644
5 Experiments
This section describes our experiments performed with the proposed model and approaches. In a first test we confirm the observation of Tankus et al. [39] that the use of an orthographic camera model may yield apparent distortions in the reconstruction, while a perspective model may take the geometry better into account; see the experiment documented in Fig. 4. This justifies the use of the perspective camera model. Note that in the figure the object of interest is relatively close to the camera.
In a series of tests we now turn to quantitative evaluations of the proposed computational models. To this end we consider the set of test images for use in the next experiments as shown in Fig. 6. The Beethoven test images depict a real-world scene; both the Bunny and the Sphere are rendered using Blender. The 3D model of the Stanford Bunny is obtained from the Stanford 3D Scanning Repository [1]. The 3D model of the face presented in Fig. 9 is taken from [34]. For comparing our results, the ground truth depth maps are extracted, and we make use of the Mean Squared Error (MSE) to assess the accuracy.
After considering the mentioned test settings, we demonstrate the applicability of our method on real-world medical test images from gastroendoscopy and discuss its superior reconstruction capabilities compared to previous models.
5.1 Tests of accuracy
In the first evaluation, we compare the results of the two mentioned perspective techniques, PPN and PPS, applied on the specular Sphere in Fig. 6 (c) with different values of the focal length. The MSE results of these 3D reconstructions are shown in Fig. 5. While the results of the described perspective methods are close to each other for some low values of the focal length, the PPN perspective strategy outperforms PPS as the focal length increases.
In the second experiment, concerned with the Beethoven image set, we investigate the difference between the two mentioned perspective approaches on a more complex real-world object scene. To this end, we give in Table 1 the MSE comparing the gray value data of the reprojected and input images. Since in this case the ground truth depth map is not available, we reconstruct the reprojected images by obtaining the gradient fields from the mentioned perspective approaches and inserting them into the Lambertian reflectance equation. It can be deduced from Table 1 that reprojecting from the PPS method reaches a comparable accuracy regarding the third input image, while the PPN approach achieves higher accuracy in terms of the first and especially the second input image.
5.2 Perspective methods and CCD camera model
Table 2 and Fig. 8 present the results of our 3D reconstructions for highly specular input images as shown in Fig. 6 (b) and Fig. 6 (c), respectively. In order to produce such images, we set nonzero intensities for the diffuse and also the specular light. Furthermore, the objects include both diffuse and specular reflections.
Table 2: Reconstruction by perspective Blinn-Phong PS.

    Reconstruction by perspective Blinn-Phong PS | ρ_d | ρ_s | i_d | i_s | shininess | centerizing | no centerizing
    MSE of PPN method for Bunny                  | 0.6 | 0.4 | 1.2 | 1.2 | 50        | 0.006355    | 0.042082
    MSE of PPS method for Bunny                  | 0.6 | 0.4 | 1.2 | 1.2 | 50        | 0.012318    | 0.011318
    MSE of PPN method for Sphere                 | 0.5 | 0.5 | 1.2 | 1.2 | 150       | 0.008264    | 0.022568
    MSE of PPS method for Sphere                 | 0.5 | 0.5 | 1.2 | 1.2 | 150       | 0.008431    | 0.007716
Table 3: MSE of the 3D reconstructions associated with the experiments in Fig. 9.

    Depth reconstruction approach | ρ_d | ρ_s | i_d | i_s | shininess | MSE
    Proposed method               | 0.3 | 0.7 | 1.2 | 1.2 | 50        | 0.004019
    Mecca [23]                    | 0   | 0.7 | 0   | 1.2 | 50        | 0.056586
The MSE values of the 3D reconstruction show the high accuracy of our depth reconstructions obtained by applying the complete Blinn-Phong model accompanied by the two presented perspective schemes. On the other hand, while the results of the recovered depth map for the Sphere are close to some extent, the depth map computed for the Bunny based on the PPN method attains higher accuracy. However, the table also illustrates the higher sensitivity of the PPN perspective scheme to the centerizing transformation compared to the PPS perspective method.
Finally, we compare our approach with the Lambertian model, which is the most common model applied in PS, and also with the method presented by Mecca et al. [23]. Fig. 8 shows the outcome of applying the Lambertian model. The deviation from a faithful reconstruction over the specular area of the surface can be seen clearly.
The comparison between our approach and [23] is also shown in Fig. 9. As already indicated, our method applies the complete perspective Blinn-Phong model to three images including both diffuse and specular reflections and lights, while the method in [23] uses the specular term of the Blinn-Phong model to handle four purely specular images. The excellent result of the proposed method presented in Fig. 9 (b) over the highly specular region, with the absence of any deviation or artifact, shows that the proposed method outperforms state-of-the-art approaches such as [23]. The MSE values of the 3D reconstruction associated with the experiments in Fig. 9 are also given in Table 3.
5.3 Tests of applicability on realworld test images
This section describes experiments conducted with the proposed approach on realistic images. Let us first turn to some real-world medical test images. It should be noticed that we may also call these images realistic because we did not benefit from a controlled setup or additional laboratory facilities. We used just the images that are available in any kind of medical (or many other real-world) experiments. Let us note that experiments with endoscopic images are well known to yield a challenging test for photometric methods, and they are widely accepted for indicating possible medical applications of photometric approaches, see e.g. [38, 40]. As for our work, the usefulness of the computational results for the indicated, concrete medical application was confirmed via collaboration with specialized medical doctors.¹

¹ Let us mention as a reference the collaboration with Dr. Mohammad Karami H. (Dr.mokaho@skums.ac.ir), who is a gastroenterologist and internal medicine specialist at Shahrekord University of Medical Science (Iran).
We have performed trials on endoscopic images, in which the existence of strong specularities is unavoidable. The input images, endoscopies of the upper gastrointestinal system, are presented in Fig. 10 (a) and Fig. 10 (b). Their 3D reconstructions are shown in Fig. 11 and Fig. 12.
As in all previous experiments, only three input images are used. All outputs are displayed from an identical viewpoint enabling their visual comparison. The first column in Fig. 11 is indicative of the deviation in the Lambertian result. As visible in the cropped region in Fig. 10 (a), marked by the rectangle, the beginning and end points of all three folds (marked by A, B and C) should be at about the same level; instead, a drastic deviation shows up at the left side of the surface in the results obtained with the Lambertian reflectance model, as also indicated by the blue area in the corresponding depth map.
However, this deviation is rectified by applying the complete Blinn-Phong model accompanied by PPS, as can be seen in the second column of Fig. 11, and is entirely corrected using this model with the PPN approach, shown in the third column. Furthermore, the three folds of the surface are reconstructed very well in the Blinn-Phong outcomes. This obviously desirable complete reconstruction of those folds cannot be seen in the Lambertian output.
Finally, as the color alteration in the second row of Fig. 11 also shows, high-frequency details are recovered as well in the Blinn-Phong outputs, especially in the case of the PPN approach.
These reconstruction aspects are again clearly observable in another endoscopic depth reconstruction in Fig. 12, which shows the depths resulting from the inputs in Fig. 10 (b). Once more, a deviation from the desirable output shape appears in the Lambertian outcome, especially at the left corner. This part of the surface, marked by (C) in the input and 3D result images, has a cavity oriented upwards in reality, which is reconstructed well by the Blinn-Phong outputs in contrast to the Lambertian result. The latter apparently provides a reconstruction oriented in the opposite direction for this region of the original surface. Let us pay attention also to the second row in Fig. 12. A curved line of the upper corrugated region (A) is obtained in the right corner of the Blinn-Phong outputs, whereas this region is just a straight line in the right corner of the Lambertian outcome. The heights of the corrugated regions are obviously more faithfully reconstructed in the Blinn-Phong results compared to the Lambertian one.
Last but not least, it is worth mentioning that the viewing angle of endoscopic cameras is very narrow. Using cropped parts of those images in our experiments makes this a highly challenging task of 3D reconstruction. The success of our approach in reconstructing such a tiny range of depth values without any knowledge about the photographic conditions reveals the capability of our proposed method in challenging real-world applications.
In another test with real-world input images, we compared our method with the approach used in [37] by making use of the input images depicted as Fig. 2 (a), Fig. 2 (b) and Fig. 2 (c) in [37]. The surface is a plastic mannequin head, and the plastic material itself shows specularities. It is well known in computer graphics that plastic is a material that can be readily rendered using the Blinn-Phong model [29].
The depth reconstructions obtained by our technique and by the method of Tankus et al. for those real-world images are presented in Fig. 13. Once again, the deviation from a natural shape in the Lambertian result can be clearly observed in the output in Fig. 13 (b), shown in an identical view with our result in Fig. 13 (a). In addition, let us note that the output of the Blinn-Phong model is very clean and smooth, also at highlights. The inhomogeneous recovery of the shape when using the Lambertian model is cropped in some regions such as the chin and the tip of the nose, cf. Fig. 13 (c), where we had to turn the Lambertian result to show these regions. The curved line appearing at the chin and the sharp point at the nose in the Lambertian reconstruction are also visible in [37]. Moreover, as reported in [37], the authors could not process the eyes in the images due to their specularities, while we succeeded in recovering a faithful 3D shape even with the eyes using the complete Blinn-Phong model, as presented in Fig. 14.
6 Summary and Conclusion
A new framework in PS considering the complete perspective Blinn-Phong reflectance, including strong specular highlights, has been presented. The advantages of our method over state-of-the-art PS methods and also over the Lambertian model are demonstrated via a variety of experiments. The model includes a perspective camera projection. Furthermore, two different techniques for applying the perspective projection are evaluated. In addition, we have also evaluated the modeling of the CCD camera. All results are obtained using the minimum necessary number of input images, which is an aspect of practical relevance in different applications and makes PS an interesting technique for close-to-real-time reconstruction, where a minimal set of images is required. We have also demonstrated experimentally the merits of our PS model for possible challenging real-world applications, where we recover the surface with a high degree of detail. Let us also comment that our computational times are very reasonable, i.e. in the order of a few seconds in all experiments.
Concerning possible limitations, as with all approaches that rely on a parametric representation of surface reflectance, the corresponding additional parameters in the reflectance function have to be fixed. This issue may pose challenging numerical aspects in the optimization. Also, while the Blinn-Phong model already gives reasonable results, as we demonstrated, other more sophisticated reflectance models may be adequate for handling highly complicated surfaces, which may be a possible issue of future research.
This work is supported by the Deutsche Forschungsgemeinschaft under grant number BR2245/4–1.
References
 [1] The Stanford 3D Scanning Repository. http://graphics.stanford.edu/data/3Dscanrep/. Accessed: 2016-01-21.
 [2] Y. Adato, Y. Vasilyev, T. Zickler, and O. Ben-Shahar. Shape from specular flow. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(11):2054–2070, 2010.
 [3] M. Bähr, M. Breuß, Y. Quéau, A. S. Boroujerdi, and J. D. Durou. Fast and accurate surface normal integration on non-rectangular domains. Computational Visual Media, 3(2):107–129, 2017.
 [4] P. Beckmann and A. Spizzichino. The scattering of electromagnetic waves from rough surfaces. Proceedings of the IEEE, 52(11):1389–1390, 1964.
 [5] J. F. Blinn. Models of light reflection for computer synthesized pictures. In ACM SIGGRAPH Computer Graphics, volume 11, pages 192–198, 1977.
 [6] W. M. Brandenberg and J. T. Neu. Unidirectional reflectance of imperfectly diffuse surfaces. Journal of the Optical Society of America, 56(1):97–103, 1966.
 [7] F. Camilli and S. Tozza. A unified approach to the well-posedness of some non-Lambertian models in shape-from-shading. SIAM Journal on Imaging Sciences, 10(1):26–46, 2017.
 [8] O. Faugeras. Three-Dimensional Computer Vision. The MIT Press, 1993.
 [9] M. Galo and C. L. Tozzi. Surface reconstruction using multiple light sources and perspective projection. In International Conference on Image Processing, volume 2, pages 309–312, 1996.
 [10] C. Godard, P. Hedman, W. Li, and G. J. Brostow. Multi-view reconstruction of highly specular surfaces in uncontrolled environments. In 3DV, 2015.
 [11] D. B. Goldman, B. Curless, A. Hertzmann, and S. M. Seitz. Shape and spatially-varying BRDFs from photometric stereo. In CVPR, volume 1, pages 341–348, 2005.
 [12] R. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2004.
 [13] A. Hertzmann and S. M. Seitz. Shape and materials by example: a photometric stereo approach. In CVPR, volume 1, pages 533–540, 2003.
 [14] B. K. P. Horn. Robot Vision. The M.I.T. Press, 1986.
 [15] B. K. P. Horn, R. J. Woodham, and W. M. Silver. Determining shape and reflectance using multiple images. M.I.T. Artificial Intelligence Laboratory, Memo 490, 1978.
 [16] I. Ihrke, K. N. Kutulakos, H. P. A. Lensch, M. Magnor, and W. Heidrich. Transparent and specular object reconstruction. Computer Graphics Forum, 29(8):2400–2426, 2010.
 [17] M. Khanian, A. Sharifi Boroujerdi, and M. Breuß. Perspective photometric stereo beyond Lambert. In QCAV, volume 9534, pages 95341F–95341F–8, 2015.
 [18] J. Lambert and D. L. DiLaura. Photometry, or, On the Measure and Gradations of Light, Colors, and Shade: Translation from the Latin of Photometria, sive, de mensura et gradibus luminis, colorum et umbrae. Illuminating Engineering Society of North America, 2001.
 [19] K. Levenberg. A method for the solution of certain problems in least squares. Quarterly of Applied Mathematics, 5:164–168, 1944.
 [20] S. P. Mallick, T. E. Zickler, D. J. Kriegman, and P. N. Belhumeur. Beyond Lambert: reconstructing specular surfaces using color. In CVPR, volume 2, pages 619–626, 2005.
 [21] D. Marquardt. An algorithm for least-squares estimation of nonlinear parameters. Journal of the Society for Industrial and Applied Mathematics, 11(2):431–441, 1963.
 [22] R. Mecca and Y. Quéau. Unifying diffuse and specular reflections for the photometric stereo problem. In WACV, pages 1–9, 2016.
 [23] R. Mecca, E. Rodolà, and D. Cremers. Realistic photometric stereo using partial differential irradiance equation ratios. Computers and Graphics, 51:8–16, 2015.
 [24] R. Mecca, A. Tankus, and A. F. Bruckstein. Two-image perspective photometric stereo using shape-from-shading. In ACCV, volume 7727, pages 110–121, 2012.
 [25] D. Miyazaki, K. Hara, and K. Ikeuchi. Median photometric stereo as applied to the Segonko Tumulus and museum objects. International Journal of Computer Vision, 86(2):229–242, 2010.
 [26] Y. Mukaigawa, Y. Ishii, and T. Shakunaga. Analysis of photometric factors based on photometric linearization. Journal of the Optical Society of America A, 24(10):3326–3334, 2007.
 [27] G. Oxholm and K. Nishino. Multiview shape and reflectance from natural illumination. In CVPR, pages 2163–2170, 2014.
 [28] T. Papadhimitri and P. Favaro. A new perspective on uncalibrated photometric stereo. In CVPR, pages 1474–1481, 2013.
 [29] M. Pharr and G. Humphreys. Physically Based Rendering: From Theory to Implementation. Morgan Kaufmann Publishers Inc, 2010.
 [30] B. T. Phong. Illumination for computer generated pictures. Communications of the ACM, 18(6):311–317, 1975.
 [31] Y. Quéau and J.-D. Durou. Edge-preserving integration of a normal field: weighted least squares, TV and L1 approaches. In SSVM, volume 9087, pages 576–588, 2015.
 [32] A. C. Sankaranarayanan, A. Veeraraghavan, O. Tuzel, and A. Agrawal. Specular surface reconstruction from sparse reflection correspondences. In CVPR, pages 1245–1252, 2010.
 [33] W. A. P. Smith and F. Fang. Height from photometric ratio with modelbased light source selection. Computer Vision and Image Understanding, 145:128–138, 2016.
 [34] R. W. Sumner and J. Popović. Deformation transfer for triangle meshes. In ACM SIGGRAPH, volume 4, pages 399–405, 2004.
 [35] H. D. Tagare and R. J. P. DeFigueiredo. A framework for the construction of general reflectance maps for machine vision. CVGIP: Image Understanding, 57(3):265–282, 1993.
 [36] K. L. Tang, C. K. Tang, and T. T. Wong. Dense photometric stereo using tensorial belief propagation. In CVPR, volume 1, pages 132–139, 2005.
 [37] A. Tankus and N. Kiryati. Photometric stereo under perspective projection. In ICCV, volume 1, pages 611–616, 2005.
 [38] A. Tankus, N. Sochen, and Y. Yeshurun. Reconstruction of medical images by perspective shape-from-shading. In ICPR, pages 778–781, 2004.
 [39] A. Tankus, N. Sochen, and Y. Yeshurun. Shape-from-shading under perspective projection. International Journal of Computer Vision, 63(1):21–43, 2005.
 [40] K. Tatematsu, Y. Iwahori, T. Nakamura, S. Fukui, R. J. Woodham, and K. Kasugai. Shape from endoscope image based on photometric and geometric constraints. Procedia Computer Science, 22:1285–1293, 2013.
 [41] C. Tomasi and T. Kanade. Shape and motion from image streams under orthography: a factorization method. International Journal of Computer Vision, 9(2):137–154, 1992.
 [42] S. Tozza, R. Mecca, M. Duocastella, and A. Del Bue. Direct differential photometric stereo shape recovery of diffuse and specular surfaces. Journal of Mathematical Imaging and Vision, 56:57–76, 2016.
 [43] E. Trucco and A. Verri. Introductory Techniques for 3-D Computer Vision. Prentice-Hall, 1998.
 [44] O. Vogel, L. Valgaerts, M. Breuß, and J. Weickert. Making shape from shading work for real-world images. In DAGM Pattern Recognition, volume 5748, pages 191–200. Springer Berlin Heidelberg, 2009.
 [45] C. Wöhler. 3D Computer Vision. Springer-Verlag, 2013.
 [46] R. J. Woodham. Photometric stereo: a reflectance map technique for determining surface orientation from image intensity. In Image Understanding Systems and Industrial Applications, SPIE, volume 0155, pages 136–143, 1978.
 [47] R. J. Woodham. Photometric method for determining surface orientation from multiple images. Optical Engineering, 19(1):134–144, 1980.
 [48] L. Wu, A. Ganesh, B. Shi, Y. Matsushita, Y. Wang, and Y. Ma. Robust photometric stereo via low-rank matrix completion and recovery. In ACCV, volume 6494, pages 703–717, 2010.
 [49] Y. Xiong and S. Shafer. Depth from focusing and defocusing. In CVPR, pages 68–73, 1993.
 [50] C. Yu, Y. Seo, and S. Lee. Photometric stereo from maximum feasible Lambertian reflections. In ECCV, volume 6314, pages 115–126, 2010.