1 Introduction
Depth estimation has been researched for decades. A common and important assumption is photo-consistency, i.e., the colors of a point observed from different views ought to be similar. This assumption holds for non-occluded points. However, it fails for occluded points, as these points cannot be observed from all views.
Much work has been done to handle occlusions. Kolmogorov et al. [5] encode the visibility constraint and introduce an occlusion term to smooth it. Woodford et al. [18] add additional second-order smoothness terms and use Quadratic Pseudo-Boolean Optimization to solve the resulting energy. Bleyer et al. [11] apply an asymmetric occlusion model to improve depth estimation. However, due to the wide baseline and the small number of views, these works only capture a partial image of the occlusion, i.e., the occlusion point is visible in the reference view and invisible in the other views, and heavy occlusion cannot be well handled.
Light field cameras from Lytro [7] and Raytrix [10] obtain a 4D light field by inserting a microlens array into a traditional camera [9], which provides hundreds of views in a single shot. Moreover, the baseline between these views is very small, which means no aliasing occurs and consistent correspondences still hold in this case [19]. Combining the multi-view and micro-baseline properties, a more detailed and complete representation of occlusion emerges, which renews the means to handle occlusion.
Wanner et al. [15, 16] apply the structure tensor to analyze the Epipolar Plane Image (EPI). This method only takes advantage of angular samples in one dimension, and the tensor field becomes too random to analyze under heavy occlusion. Yu et al. [20] encode the constraints of 3D lines and introduce Line Assisted Graph Cuts (LAGC) to improve depth estimation. However, the 3D lines are partitioned into small and incoherent segments under heavy occlusion, which leads to wrong estimates. Chen et al. [4] propose to select the unoccluded views using a bilateral metric in angular space. However, this selection lacks the guidance of a physical model, and the number of unoccluded views is a predefined parameter. Wang et al. [13, 14] analyze the formation of occlusion and find a consistency between the spatial patch and the angular patch at occlusion boundaries; they select the unoccluded views according to the edges in the spatial patch. However, their method fails in multi-occluder areas, where the local patch cannot be divided into two regions by a straight line, leading to over-smoothed results in these areas (Fig. 7, 8). Although Wang et al. have a more recent work [12] on depth estimation, it aims at handling shading, which is not applicable in our case.

In this paper, we explore the light field occlusion theory for multi-occluder occlusion, and propose an algorithm to regularize the depth map. Our main contributions are:

The light field occlusion theory for multi-occluder occlusions.

An algorithm that accurately selects the unoccluded views in angular space under the guidance of the occlusion theory, without any predefined parameters.

A depth estimation algorithm that is robust to multi-occluder occlusion.
In Section 2, we model the multi-occluder occlusion in the light field and derive the occluder-consistency between the spatial and angular spaces, i.e., the occluded views in angular space are projections of the occluder in spatial space. With the guidance of occluder-consistency, we select the unoccluded views for each candidate occlusion point using a partition of the spatial patch in Section 3.1, and obtain an initial depth map using the unoccluded views in Section 3.2. Then, we detect the occlusion points from the initial depth map according to the visibility constraint in Section 4.1. Finally, the occlusion map and the unoccluded views are used to build an anti-occlusion energy function to refine the depth in Section 4.3. In Section 5, we provide complete comparisons with other state-of-the-art algorithms, both quantitative and qualitative, showing great advantages over previous works [15, 16, 20, 4, 13].
2 The Light Field Occlusion Model
In this section, we analyze the formation of occlusion based on a physical model, and explore the occlusion theory for multi-occluder occlusion.
2.1 Definitions and Notations
Before building the light field occlusion model, we first give some definitions and notations of the light field model.
Definition 1.
Single-occluder occlusion refers to occlusion in which the occluded views and unoccluded views can be divided half-and-half.
Definition 2.
Multi-occluder occlusion refers to occlusion in which there are more occluded views than unoccluded views.
A point in the 4D light field describes a ray emitted from a world point that intersects two parallel planes, namely the camera plane and the image plane, which we refer to as the angular and spatial spaces, respectively. Tab. 1 presents a list of terms used throughout this section.
Term  Definition
$(u, v)$  angular/camera plane coordinates
$(x, y)$  spatial/image plane coordinates
$(X, Y, Z)$  world coordinates
$\vec{v}_{uv}$  a vector in the angular coordinate system
$\vec{v}_{xy}$  a vector in the spatial coordinate system
$\vec{v}_{XYZ}$  a vector in the world coordinate system
2.2 Occlusion Model
Previous work [13] has proved the occluder-consistency for single-occluder occlusion, i.e., when refocused to the correct depth, the edge that separates the unoccluded and occluded views in the angular patch has the same orientation as the occlusion edge in the spatial patch. This property is useful for single-occluder occlusions; however, it fails in the multi-occluder situation, as the unoccluded and occluded pixels in the angular patch cannot be divided into two regions by a straight line.
We first consider a simple multi-occluder occlusion (Fig. 1). Consider a pixel on the focal plane (the left image in Fig. 1), with an occluder intersecting at it. Note that the occluder has two edges, and the directional vectors of these two edges in the focal plane are,
(1)  
The larger angle between these two vectors spans the occluded area (the golden areas in Fig. 1). Without loss of generality, we fix the ordering of the two vectors.
Any other pixel on the focal plane will be observed by the view iff it satisfies the following inequalities,
(2)  
We then project these inequalities from the world coordinate system to the image coordinate system (the right image in Fig. 1), where the directional vectors of the two edges map to their image-space counterparts up to a scale factor that denotes the scaling relationship between the world coordinate system and the image coordinate system. Any other point on the image is a background point iff,
(3)  
Then consider the main lens plane (the left image in Fig. 1; the light field is refocused to the depth of the considered pixel). Any other view on the main lens plane can capture the pixel iff
(4)  
where the directional vectors in angular space correspond to those in the image up to a scale factor that denotes the scaling relationship between the image coordinate system and the angular coordinate system.
Revisiting Eqn. 3 and 4, we notice that they contain the same inequalities, whose terms are in one-to-one correspondence; as the corresponding directional vectors lie on the same line, the following proposition can be obtained:
Proposition 1.
The occluded views in angular space are the projection of the occluder in spatial space.
In other words, in a local spatial patch, the views corresponding to the occluder are the occluded views, and the views corresponding to the background are the unoccluded views. We refer to this proposition as occluder-consistency in the following.
Note that the proposition above is obtained under a simple multi-occluder assumption. For a more complex multi-occluder, the boundaries of the occluder can be divided into small straight segments, following the idea of calculus, and Eqn. 3 and 4 will contain more inequalities. No matter how many inequalities there are, their numbers in Eqn. 3 and 4 are equal and they remain in one-to-one correspondence, so Prop. 1 always holds.
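As a sanity check, Prop. 1 can be verified numerically in a flatland (2D light field) setup. The following sketch is illustrative only: the geometry (camera plane at Z = 0, a single background point, a half-line occluder) and all numbers are our own assumptions, not taken from the paper.

```python
# Flatland (2D light field) sanity check of Proposition 1, under assumed
# geometry: camera plane at Z = 0, background point P at (0, Z_b), and an
# occluder half-line {X <= X0} at depth Z_o. All numbers are made up.
import numpy as np

Z_b, Z_o, X0 = 2.0, 1.0, -0.1   # background depth, occluder depth, edge position

def view_is_occluded(u):
    """A view at (u, 0) is occluded iff its ray to P crosses the occluder."""
    # Lateral position of the ray u -> P when it passes depth Z_o.
    x_at_occ = u * (1.0 - Z_o / Z_b)
    return x_at_occ <= X0

def patch_pixel_is_occluder(s):
    """A central-view pixel looking at background offset s shows the occluder
    iff the ray 0 -> (s, Z_b) crosses the occluder."""
    x_at_occ = s * (Z_o / Z_b)
    return x_at_occ <= X0

# Sample views u and spatial offsets s on matching grids. Per Prop. 1, the
# occluded-view mask equals the occluder mask of the spatial patch, up to
# the scale factor relating the two coordinate systems; here the grids are
# chosen so that a view u corresponds to the offset s = u * (Z_b / Z_o - 1).
u = np.linspace(-0.5, 0.5, 101)
s = u * (Z_b / Z_o - 1.0)
occluded_views = view_is_occluded(u)
occluder_pixels = patch_pixel_is_occluder(s)
assert np.array_equal(occluded_views, occluder_pixels)
```

The scale factor (Z_b / Z_o − 1) plays the role of the scaling relationship between the spatial and angular coordinate systems mentioned above.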
2.3 Projection Radius
For occluded points, the occluded views in angular space are the projection of the occluder in spatial space within a local patch. We now derive the radius of this patch in a 2D light field. In Fig. 2, the purple lines denote the background at depth $Z_b$, the orange lines denote the occluder at depth $Z_f$, the blue lines denote the camera plane, one marked point denotes a pixel in the background, and the remaining marks denote different views in the light field.
Firstly, the light field is refocused to the background depth $Z_b$ (Fig. 2) [8],
(5) 
where $L$ is the input light field, $\bar{L}_{Z_b}$ is the light field refocused to depth $Z_b$, and the central view of the light field is taken as reference. It is noticed that the light rays from part of the views converge to the background pixel, while the rays from the remaining views are blocked by the occluder. These are the occluded views in angular space, and their images come from points on the occluder. In other words, the horizontal distance between these points and the background pixel is the projection radius.
Then, the light field is refocused to the foreground depth $Z_f$ (Fig. 2),
(6) 
where $\bar{L}_{Z_f}$ is the refocused light field at depth $Z_f$. It can be seen that the light rays from all views converge to a single point on the occluder. As the corresponding horizontal distance is 0 in this case, the projection radius is obtained,
(7)  
3 Depth Initialization
In this section, we show how to use the occluder-consistency between the spatial and angular spaces to select the unoccluded views, and how to obtain an initial depth estimate using these unoccluded views.
3.1 Unoccluded Views Selection
We first state an important assumption of the proposed algorithm about occlusion: the occluder has a different color from the occluded point. For the situation where the occluder is similar in color to the occluded point, as far as we know, no existing work can handle it. Based on this assumption, the following proposition holds:
Proposition 2.
An occlusion point is an edge point, but an edge point may not be an occlusion point.
With Prop. 2, the Canny edge detector is first applied to find the candidate occlusion points. Then, K-means clustering [1] is applied to the local image patch^1 centered at each candidate occlusion point (the feature is the RGB color, and the number of clusters is 2). For each patch, the pixels that share the same label with the center pixel are labeled as background, i.e., unoccluded, points. According to the occluder-consistency stated in Prop. 1, the corresponding views in the angular patch of the center pixel are labeled as unoccluded views. These pixels are candidate occlusion pixels in the central view (Fig. 3). For pixels occluded in other views, the unoccluded views are voted from their neighborhood system (Fig. 4).

^1 The patch size is initially set to half of the angular resolution of the light field, since no depth map is available at this stage.

3.2 Depth Estimation
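Before estimating depth, the view-selection procedure of Sec. 3.1 can be sketched as follows. This is a minimal illustration, not the authors' implementation: the patch, its colors, the plain 2-means routine, and the assumption that the spatial patch and the angular patch have the same size are all ours.

```python
# A minimal sketch of the unoccluded-view selection in Sec. 3.1: cluster the
# RGB patch around a candidate occlusion point into two groups, take the
# group containing the center pixel as background, and, by occluder-
# consistency (Prop. 1), mark the corresponding angular views as unoccluded.
import numpy as np

def two_means(colors, iters=20):
    """Plain 2-means on an (N, 3) array of colors; returns a label per row."""
    c0 = colors[0]
    c1 = colors[np.argmax(np.linalg.norm(colors - c0, axis=1))]
    labels = np.zeros(len(colors), dtype=int)
    for _ in range(iters):
        d0 = np.linalg.norm(colors - c0, axis=1)
        d1 = np.linalg.norm(colors - c1, axis=1)
        labels = (d1 < d0).astype(int)
        if labels.all() or not labels.any():
            break                      # avoid an empty cluster
        c0 = colors[labels == 0].mean(0)
        c1 = colors[labels == 1].mean(0)
    return labels

def select_unoccluded_views(patch):
    """patch: (H, W, 3) spatial patch centered at a candidate occlusion point.
    Returns an (H, W) boolean mask; True marks views treated as unoccluded
    (spatial and angular patches are assumed to have the same size)."""
    h, w, _ = patch.shape
    labels = two_means(patch.reshape(-1, 3).astype(float)).reshape(h, w)
    return labels == labels[h // 2, w // 2]   # same cluster as the center

# Toy 9x9 patch: left third is a red occluder, the rest green background.
patch = np.zeros((9, 9, 3))
patch[:, :, 1] = 1.0               # green background
patch[:, :3] = (1.0, 0.0, 0.0)     # red occluder on the left
mask = select_unoccluded_views(patch)
assert mask[4, 4] and not mask[4, 0]
```

In practice K-means from a standard library would be used; the hand-rolled routine above only keeps the sketch self-contained.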
With the unoccluded view selection, a robust initial depth estimate is obtained based on classical photo-consistency over the unoccluded views.
We refocus the light field to different depths $\alpha$,
(8) $\bar{L}_\alpha(x, y, u, v) = L\big(x + u(1 - \tfrac{1}{\alpha}),\; y + v(1 - \tfrac{1}{\alpha}),\; u,\; v\big)$
where $L$ is the input 4D light field, $\bar{L}_\alpha$ is the refocused light field at depth $\alpha$, $(x, y)$ are the spatial coordinates, and $(u, v)$ are the angular coordinates. Then, the matching cost of each pixel is defined as,
(9) $C_\alpha(p) = \frac{1}{|\mathcal{U}_p|} \sum_{(u,v) \in \mathcal{U}_p} \big\| \bar{L}_\alpha(p, u, v) - I(p) \big\|$
where $\mathcal{U}_p$ is the unoccluded view set of the point $p$ (Sec. 3.1), $|\mathcal{U}_p|$ denotes the size of the set, $\bar{L}_\alpha(p, u, v)$ denotes the angular sample of pixel $p$ at depth $\alpha$ (in other words, $p = (x, y)$ are the spatial coordinates of the pixel), and $I(p)$ is the color of pixel $p$ in the central view.
Then, the initial depth estimate of each pixel is obtained,
(10) $\alpha^*(p) = \arg\min_{\alpha} C_\alpha(p)$
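The refocus-and-match procedure of Eqn. 8–10 can be illustrated on a toy 1-D example. The sketch below assumes a synthetic scanline "light field" in which each view at angular offset u is the central view shifted by u·d pixels for a true disparity d; the view set, image size, and candidate range are made up.

```python
# Toy illustration of Eqns. 8-10: refocusing as a per-view shift, a matching
# cost averaged over the unoccluded views only, and a winner-take-all argmin.
import numpy as np

rng = np.random.default_rng(0)
central = rng.random(64)                 # central-view scanline I(x)
true_d = 2                               # ground-truth disparity (pixels/view)
# View at angular offset u: central image shifted by u * true_d pixels.
views = {u: np.roll(central, -u * true_d) for u in range(-3, 4)}
unoccluded = [-3, -2, -1, 0, 1]          # pretend views +2, +3 are occluded

def cost(x, d):
    """Matching cost C_d(x): mean |refocused sample - central| over the
    unoccluded views, with the shear of Eqn. 8 acting as a shift by u*d."""
    samples = [views[u][(x - u * d) % len(central)] for u in unoccluded]
    return float(np.mean(np.abs(np.asarray(samples) - central[x])))

def initial_depth(x, candidates=range(0, 5)):
    """Eqn. 10: pick the candidate disparity with minimal matching cost."""
    return min(candidates, key=lambda d: cost(x, d))

assert initial_depth(30) == true_d       # the argmin recovers the disparity
```

Averaging only over the unoccluded views is exactly what keeps the cost minimum at the true disparity when some views are contaminated by an occluder.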
4 Depth Regularization
In this section, we show how to find the occlusions and regularize the depth map with a global energy function.
4.1 Occlusion Detection
We find the occlusion points using the visibility constraint, i.e., an occlusion point is visible in the reference view and invisible in other views. In other words, if the difference of the disparities of two neighboring pixels is larger than 1 pixel, there is an occlusion point. In a light field, due to the multiple views, this constraint can only find the occlusion points in the central view, and the threshold on the disparity difference ought to be relaxed to find the occlusions in other views,
(11) 
where $N$ is the angular resolution of the light field used in our experiments.
As the initial depth estimates at occluded points are unreliable, and sometimes too smooth and noisy to distinguish an occlusion point accurately from only two neighboring pixels, we select a disparity patch centered at each candidate occlusion point (the points detected by the Canny operator) and use K-means clustering to divide the patch into 2 classes; the difference of disparities is then determined by the subtraction of the two cluster centers,
(12) $\Delta d = |c_1 - c_2|$
where $c_i$ is the center of the $i$-th class.
Finally, whether the candidate occlusion point is a true occlusion point is determined by comparing $\Delta d$ with the relaxed threshold of Eqn. 11,
(13) 
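The detection step above can be sketched as follows, assuming a simple 1-D 2-means on the disparity patch (Eqn. 12 as the absolute difference of the two cluster centers) and a hand-picked threshold standing in for the relaxed threshold of Eqn. 11.

```python
# A sketch of the occlusion test in Sec. 4.1: cluster a local disparity patch
# into two groups and flag the center as occlusion if the cluster centers
# differ by more than a (relaxed, here assumed) threshold.
import numpy as np

def disparity_jump(patch, iters=20):
    """Split a disparity patch into 2 clusters and return |c1 - c2|."""
    d = np.asarray(patch, dtype=float).ravel()
    c = np.array([d.min(), d.max()])     # initial cluster centers
    for _ in range(iters):
        labels = np.abs(d[:, None] - c[None, :]).argmin(axis=1)
        if (labels == 0).any() and (labels == 1).any():
            c = np.array([d[labels == 0].mean(), d[labels == 1].mean()])
    return abs(c[0] - c[1])

def is_occlusion(patch, threshold=0.5):
    """Flag the patch center as an occlusion point if the two-cluster
    disparity difference exceeds the (relaxed) threshold."""
    return disparity_jump(patch) > threshold

# A ~1-pixel disparity discontinuity is flagged; a smooth ramp is not.
occluding = np.array([[2.0, 2.0, 3.1], [2.1, 2.0, 3.0], [2.0, 3.0, 3.1]])
smooth = np.full((3, 3), 2.0) + 0.01 * np.arange(9).reshape(3, 3)
assert is_occlusion(occluding) and not is_occlusion(smooth)
```

Clustering the whole patch, rather than differencing two neighboring pixels, is what makes the test robust to the noisy initial estimates mentioned above.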
4.2 Unoccluded Views Re-selection
For each occlusion point, we apply K-means clustering to its local depth patch to find the background depth and the occluder depth, and the projection radius is determined using Eqn. 7. Then, each patch is resized to the angular resolution of the light field. The remaining procedure is the same as in Sec. 3.1.
4.3 Final Depth Regularization
Finally, given the occlusion cues, we regularize the depth with a Markov Random Field (MRF) to obtain the final depth map,
(14) $E(\alpha) = \sum_{p} E_{\mathrm{data}}(p, \alpha_p) + \lambda \sum_{(p,q) \in \mathcal{N}} E_{\mathrm{smooth}}(p, q)$
where $\alpha_p$ is the depth of pixel $p$, $(p, q)$ are neighboring pixels, and $\lambda$ (=0.35 in our experiments) controls the smoothness term.
The data term measures the photo-consistency over the unoccluded views,
(15) $E_{\mathrm{data}}(p, \alpha_p) = 1 - \exp\big(-\tfrac{C_{\alpha_p}(p)}{2\sigma^2}\big)$
where $\sigma$ (=3 in our experiments) controls the sensitivity of the function to large distances, and the definition of $C_\alpha(p)$ can be found in Eqn. 9.
The smooth term encodes the smoothness constraint between two neighboring pixels,
(16)  
where the edge map of the central view image and three weighting factors enter the weighting function. Compared with previous works, we introduce the occlusion term and the edge term into the weighting function to preserve occlusion boundaries and to keep the depths along occlusion boundaries similar.
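To make the data/smoothness trade-off concrete, the following sketch evaluates an energy of the shape of Eqn. 14 on a tiny 1-D chain and minimizes it by brute force. The paper uses an MRF solver on the full image; the data costs, the labels, and the down-weighted edge at the occlusion are invented for illustration.

```python
# Brute-force minimization of a tiny MRF-style energy: a data term per pixel
# and label, plus a weighted truncated-free smoothness term along the chain.
import itertools
import numpy as np

labels = [0, 1, 2]                 # candidate disparities
data_cost = np.array([             # data term per pixel and label (invented)
    [0.0, 0.8, 0.9],
    [0.1, 0.7, 0.9],
    [0.9, 0.6, 0.0],
    [0.8, 0.7, 0.1],
])
lam = 0.35                         # smoothness weight, as in the paper
occluded_edges = {1}               # the edge between pixels 1 and 2 crosses
                                   # an occlusion boundary -> cheaper jumps

def energy(assignment):
    e = sum(data_cost[p, a] for p, a in enumerate(assignment))
    for p in range(len(assignment) - 1):
        w = 0.1 if p in occluded_edges else 1.0
        e += lam * w * abs(assignment[p] - assignment[p + 1])
    return float(e)

best = min(itertools.product(labels, repeat=4), key=energy)
assert best == (0, 0, 2, 2)        # the disparity jump survives at the edge
```

Down-weighting the smoothness penalty at detected occlusion edges is precisely what lets the optimum keep a sharp depth discontinuity instead of smearing it.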
The full description of the proposed algorithm is given in Algo. 1. First, edge detection is applied to the central view image to find all candidate occlusion points. Then, the unoccluded views of each candidate occlusion point are selected based on K-means clustering. After that, an initial depth map is estimated using the unoccluded views, and the occlusion map is detected from this initial estimate. Finally, based on the unoccluded views and the occlusion map, the depth map is regularized with an MRF energy function.
5 Experimental Results
We compare our results with the globally consistent depth labeling (GCDL) of Wanner et al. [15], the line-assisted graph cuts (LAGC) of Yu et al. [20], the bilateral consistency metric (BCM) of Chen et al. [4], and the occlusion-aware depth estimation (OADE) of Wang et al. [13]. Note that the results of GCDL come from their published papers [17], the results of LAGC and OADE are obtained by running their published codes or executable files, and the results of BCM are provided by the authors.
The performance of the proposed algorithm is evaluated on the most popular light field datasets [17]. These datasets are synthesized with Blender, and each one includes a light field and its ground-truth depth. Regarding runtime, on a 3.4 GHz Intel i7 machine with 16 GB RAM, our MATLAB implementation takes about 1 hour per color light field. Considering the precision of the results and the low speed of MATLAB, this time cost is acceptable.
5.1 Unoccluded Views Selection
A consensus in depth estimation is that more effective views lead to more accurate depth estimation, so the precision and the recall of the selected unoccluded views are important. We compute the F-measure (the harmonic mean of precision and recall with respect to the ground truth) of the unoccluded views in occlusion areas selected by our algorithm, and compare it with previous work [13]. The quantitative comparisons are listed in Tab. 2, and the qualitative comparisons are shown in Fig. 5. Our selection method outperforms previous work in unoccluded view selection, and this advantage is especially obvious in multi-occluder areas. In Fig. 5, our method always selects accurate unoccluded views, whereas the method of Wang et al. [13] also selects occluded views, which leads to over-smoothed results in occlusion areas (Fig. 7, 8). Our method performs less well on Horses. The reason is that there are many textures near the occlusion boundaries in the background, and K-means clustering based on color cannot divide the background and occluder accurately in such complex texture areas.

  Buddha  Buddha2  Horses  Medieval  Mona  Papillon  StillLife
OADE [13]  0.71  0.73  0.66  0.58  0.69  0.59  0.59
Our method  0.71  0.74  0.62  0.60  0.79  0.79  0.72
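For reference, the F-measure reported above is the harmonic mean of precision and recall; a generic sketch of its computation (with invented evaluation masks) is:

```python
# F-measure of a selected unoccluded-view mask against a ground-truth mask.
import numpy as np

def f_measure(selected, ground_truth):
    """selected, ground_truth: boolean arrays marking unoccluded views."""
    tp = np.logical_and(selected, ground_truth).sum()
    if tp == 0:
        return 0.0
    precision = tp / selected.sum()
    recall = tp / ground_truth.sum()
    return 2 * precision * recall / (precision + recall)

gt = np.array([True, True, True, False, False, False])
sel = np.array([True, True, False, True, False, False])
# precision = 2/3, recall = 2/3 -> F = 2/3
assert abs(f_measure(sel, gt) - 2 / 3) < 1e-9
```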
5.2 Occlusion Boundaries
For each dataset, we detect its occlusion boundaries using the depth map and compute the F-measure; we then compare it with other state-of-the-art algorithms. The quantitative comparisons are listed in Tab. 3, and the qualitative comparisons are shown in Fig. 6. Our algorithm outperforms the previous works. Note that the results of GCDL [15] and BCM [4] are not included, as it is difficult to run their codes in our experimental environment; however, as previous works [20, 13] have demonstrated their advantages over [15] and [4], these comparisons remain convincing. Our method performs less well on StillLife (the third row in Fig. 6). That is because there are many weak occlusions (where the difference of disparities is small) in StillLife: the disparity difference along the boundaries of the bee is small, so the occlusion detection method in Sec. 4.1 cannot handle these occlusions well.
Buddha  Buddha2  Horses  Medieval  Mona  Papillon  StillLife  

LAGC[20]  0.54  0.41  0.55  0.32  0.64  0.53  0.49 
OADE[13]  0.71  0.70  0.75  0.47  0.75  0.65  0.82 
Our method  0.75  0.85  0.80  0.56  0.81  0.76  0.71 
Buddha  Buddha2  Horses  Medieval  Mona  Papillon  StillLife  

GCDL[15]  0.079  0.094  0.163  0.111  0.096  0.158  0.184 
LAGC[20]  0.134  0.179  0.188  0.144  0.119  0.406  0.150 
BCM[4]  0.057  0.139  0.122  0.129  0.077  0.108  0.113 
OADE[13]  0.095  0.107  0.140  0.115  0.089  0.125  0.212 
Our method  0.069  0.051  0.074  0.101  0.071  0.148  0.110 
5.3 Depth Maps
The quantitative comparisons of the RMS errors of the recovered disparity maps are listed in Tab. 4. Note that all results are obtained with the same parameter settings. Our algorithm outperforms previous state-of-the-art algorithms on almost all datasets.
The qualitative comparisons of the recovered disparity maps are shown in Fig. 7, 8. It can be seen that our algorithm yields sharper occlusion boundaries. As our selection method for unoccluded views always finds them accurately (Fig. 5, Tab. 2) without selecting occluded views, our results remain sharp in the multi-occluder areas.
It is noticed that the proposed algorithm does not perform as well as OADE in the unoccluded view selection on Horses, yet we obtain better results in the depth map; this is because our energy function preserves occlusion boundaries better. Moreover, we obtain the best depth estimation results on StillLife even though the F-measure of the detected boundaries is not the best there; the reason is that our algorithm performs much better than OADE at multi-occluder boundaries, which have a larger difference of disparities.
Apart from heavy occlusion, the proposed algorithm also performs well under shading (Fig. 7(b)), although shading is not taken into account in our model. Compared with Buddha (Fig. 7(a)), there is more shading in Buddha2 (Fig. 7(b)). Although all algorithms perform well on Buddha, only our algorithm maintains the same level on Buddha2. The reason for this phenomenon is worth further study.
However, our algorithm cannot handle the situation where the background has a texture or color similar to the occluder. In Fig. 8(b) (the green box), as the color of the cloth in the background is similar to the foreground, it is difficult to recover the true depth. In addition, our algorithm cannot handle textureless areas, just like all previous methods.
6 Conclusion and Future Work
In this paper, we propose a new anti-occlusion depth estimation algorithm by modeling the formation of occlusion. The model reveals an important property: the occluders are consistent between the spatial and angular spaces. Utilizing this property, we improve depth estimation in occlusion areas in two ways. First, the unoccluded views are accurately selected by clustering in spatial space, and classical photo-consistency is enforced over these views. Second, the occlusion map is detected using the edges and the initial depth map, and is then combined into the smoothness term of the MRF energy function to keep the occlusion boundaries sharp. We have demonstrated the advantages of the proposed algorithm over other state-of-the-art algorithms on synthetic datasets.
As mentioned in Sec. 5.3, our algorithm produces unexpectedly good results under shading, although shading is not considered in our model; this phenomenon is worth investigating. Furthermore, as light fields captured by real light field cameras contain more noise than the synthetic datasets, more experiments on real data are essential to better evaluate the performance of the proposed algorithm.
References
 [1] Yuichiro Anzai. Pattern Recognition & Machine Learning. Elsevier, 2012.
 [2] Yuri Boykov and Vladimir Kolmogorov. An experimental comparison of mincut/maxflow algorithms for energy minimization in vision. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 26(9):1124–1137, 2004.
 [3] Yuri Boykov, Olga Veksler, and Ramin Zabih. Fast approximate energy minimization via graph cuts. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 23(11):1222–1239, 2001.
 [4] Can Chen, Haiting Lin, Zhan Yu, Sing Kang, and Jingyi Yu. Light field stereo matching using bilateral statistics of surface cameras. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1518–1525, 2014.
 [5] Vladimir Kolmogorov and Ramin Zabih. Multicamera scene reconstruction via graph cuts. In Computer Vision – ECCV 2002, pages 82–96. Springer, 2002.
 [6] Vladimir Kolmogorov and Ramin Zabih. What energy functions can be minimized via graph cuts? Pattern Analysis and Machine Intelligence, IEEE Transactions on, 26(2):147–159, 2004.
 [7] Lytro. Lytro redefines photography with light field cameras. http://www.lytro.com, 2011.
 [8] Ren Ng. Digital light field photography. PhD thesis, stanford university, 2006.
 [9] Ren Ng, Marc Levoy, Mathieu Brédif, Gene Duval, Mark Horowitz, and Pat Hanrahan. Light field photography with a handheld plenoptic camera. Computer Science Technical Report CSTR, 2(11), 2005.
 [10] Raytrix. Raytrix light field camera. http://www.raytrix.de, 2012.
 [11] Carsten Rother, Vladimir Kolmogorov, Victor Lempitsky, and Martin Szummer. Optimizing binary mrfs via extended roof duality. In Computer Vision and Pattern Recognition, 2007. CVPR’07. IEEE Conference on, pages 1–8. IEEE, 2007.
 [12] TingChun Wang, Manmohan Chandraker, Alexei Efros, and Ravi Ramamoorthi. Svbrdfinvariant shape and reflectance estimation from lightfield cameras. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
 [13] TingChun Wang, Alexei A Efros, and Ravi Ramamoorthi. Occlusionaware depth estimation using lightfield cameras. In Proceedings of the IEEE International Conference on Computer Vision, pages 3487–3495, 2015.
 [14] TingChun Wang, Alexei Alyosha Efros, and Ravi Ramamoorthi. Depth estimation with occlusion modeling using lightfield cameras. Pattern Analysis and Machine Intelligence, IEEE Transactions on (In press), 2016.
 [15] Sven Wanner and Bastian Goldluecke. Globally consistent depth labeling of 4d light fields. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 41–48. IEEE, 2012.

 [16] Sven Wanner and Bastian Goldluecke. Variational light field analysis for disparity estimation and superresolution. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 36(3):606–619, 2014.
 [17] Sven Wanner, Stephan Meister, and Bastian Goldluecke. Datasets and benchmarks for densely sampled 4d light fields. In VMV, pages 225–226. Citeseer, 2013.
 [18] Oliver Woodford, Philip Torr, Ian Reid, and Andrew Fitzgibbon. Global stereo reconstruction under secondorder smoothness priors. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 31(12):2115–2128, 2009.
 [19] Zhaolin Xiao, Qing Wang, Guoqing Zhou, and Jingyi Yu. Aliasing detection and reduction in plenoptic imaging. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3326–3333, 2014.
 [20] Zhan Yu, Xinqing Guo, Haibing Lin, Andrew Lumsdaine, and Jingyi Yu. Line assisted light field triangulation and stereo matching. In Proceedings of the IEEE International Conference on Computer Vision, pages 2792–2799, 2013.