Depth estimation has been researched for decades. A common and important assumption is the photo-consistency assumption, i.e., the color of a point observed from different views ought to be similar. This assumption holds for non-occluded points. However, it fails for occluded points, as these points cannot be observed from all views.
Much work has been done to handle occlusions. Kolmogorov et al. encode the visibility constraint and introduce an occlusion term to smooth it. Woodford et al. add an additional second-order smoothness term and use Quadratic Pseudo-Boolean Optimization to solve it. Later, Bleyer et al. apply an asymmetric occlusion model to improve depth estimation. However, due to the wide baseline and the small number of views, these works only capture the simplest form of occlusion, i.e., the occluded point is visible in the reference view and invisible in the other views, and heavy occlusion cannot be handled well.
Light-field cameras from Lytro and Raytrix obtain a 4D light field by inserting a micro-lens array into a traditional camera, which provides hundreds of views in a single shot. In addition, the baseline between these views is very small, which means no aliasing occurs and consistent correspondences still hold in this case. Combining the multiple views and the micro-baseline yields a more detailed and complete representation of occlusion, which renews the way occlusion can be handled.
Wanner et al. apply the structure tensor to analyze the Epipolar Plane Image (EPI). This method only takes advantage of angular samples in one dimension, and the tensor field becomes too random to analyze under heavy occlusion. Yu et al. encode the constraints of 3D lines and introduce Line Assisted Graph Cuts (LAGC) to improve depth estimation. However, under heavy occlusion the 3D lines are partitioned into small, incoherent segments, which leads to wrong estimates. Chen et al. propose to select the un-occluded views using a bilateral metric in angular space. However, this selection lacks the guidance of a physical model, and the number of un-occluded views is a predefined parameter. Wang et al. [13, 14] analyze the formation of occlusion and find a consistency between the spatial patch and the angular patch at occlusion boundaries. They select the un-occluded views according to the edges in the spatial patch. However, their method fails in multi-occluder areas, where the local patch cannot be divided into two regions by a straight line, which leads to over-smoothed results in these areas (Fig. 7, 8). Although Wang et al. have a more recent work on depth estimation, it aims at handling shading, which is not applicable in our case.
In this paper, we explore the light field occlusion theory for multi-occluder occlusion, and propose an algorithm to regularize the depth map. Our main contributions are:
The light field occlusion theory for multi-occluder occlusions.
An algorithm to accurately select the un-occluded views in angular space, guided by the occlusion theory, without any predefined parameters.
A depth estimation algorithm which is robust to multi-occluder occlusion.
In Section 2, we model the multi-occluder occlusion in the light field and derive the occluder-consistency between the spatial and angular spaces, i.e., the occluded views in angular space are projections of the occluder in spatial space. With the guidance of occluder-consistency, we select the un-occluded views for each candidate occlusion point using a partition of the spatial patch in Section 3.1, and obtain an initial depth map using the un-occluded views in Section 3.2. Then, we detect the occlusion points from the initial depth map according to the visibility constraint in Section 4.1. Finally, the occlusion map and the un-occluded views are used to build an anti-occlusion energy function to refine the depth in Section 4.3. In Section 5, we provide complete comparisons with other state-of-the-art algorithms, both quantitative and qualitative, showing clear advantages over previous works [15, 16, 20, 4, 13].
2 The Light Field Occlusion Model
In this section, we analyze the formation of occlusion based on a physical model, and explore the occlusion theory for multi-occluder occlusion.
2.1 Definitions and Notations
Before building the light field occlusion model, we first give some definitions and notations of the light field model.
Single-occluder occlusion refers to occlusion in which the occluded and un-occluded views can be divided half-and-half.
Multi-occluder occlusion refers to occlusion in which there are more occluded views than un-occluded views.
Point in 4D light field describes a ray emitted from a world point that intersects two parallel planes, namely the camera plane and the image plane, which we refer to as angular and spatial space, respectively. Tab. 1 presents a list of terms used throughout this section.
|angular/camera plane coordinates|
|spatial/image plane coordinates|
|the vector in the angular coordinate system|
|the vector in the spatial coordinate system|
|the vector in the world coordinate system|
2.2 Occlusion Model
Previous work has proved the occluder-consistency for single-occluder occlusion, i.e., when the light field is refocused to the correct depth, the edge that separates the un-occluded and occluded views in the angular patch has the same orientation as the occlusion edge in the spatial patch. This property is useful for single-occluder occlusions; however, it fails in the multi-occluder situation, as the un-occluded and occluded pixels in the angular patch can no longer be divided into two regions by a straight line.
We first consider a simple multi-occluder occlusion (Fig. 1). Consider a pixel on the focal plane (the left image in Fig. 1) and an occluder intersecting it at (). Note that the occluder has two edges, and the directional vectors of these two edges in the plane are,
The larger angle between these two vectors spans the occluded area (the golden areas in Fig. 1). Without loss of generality, we assume .
Any other pixel on the focal plane will be observed by the view iff it satisfies the following inequalities,
We then project these inequalities from the world coordinate system to the image coordinate system (the right image in Fig. 1). The corresponding directional vectors of the and are and , respectively. and , where is a scale factor denoting the scaling relationship between the world coordinate system and the image coordinate system. Any other point on the image is a background point iff,
Then consider the main lens plane (the left image in Fig. 1; the light field is refocused to depth ). Any other view on the main lens plane can capture the pixel iff
where , , , and is a scale factor denoting the scaling relationship between the image coordinate system and the angular coordinate system.
The occluded views in angular space are projections of the occluder in spatial space.
In other words, in a local spatial patch, the views corresponding to the occluder are the occluded views, and the views corresponding to the background are the un-occluded views. We refer to this proposition as occluder-consistency in the following.
Note that the proposition above is obtained under a simple multi-occluder assumption. For a more complex multi-occluder, the boundaries of the occluder can be divided into more small straight segments, following the idea of calculus, and Eqns. 3 and 4 will contain more inequalities. No matter how many inequalities there are, the numbers of inequalities in Eqns. 3 and 4 are equal and the inequalities are in one-to-one correspondence, so Prop. 1 always holds.
2.3 Projection Radius
For occluded points, the occluded views in angular space are projections of the occluder in spatial space within a local patch. We derive the radius of this patch in a 2D light field. In Fig. 2, the purple lines denote the background at depth , the orange lines denote the occluder at depth , the blue lines denote the camera plane, denotes a pixel in the background, and denote different views of the light field.
where is the input light field, is the refocused light field at depth , and is the central view of the light field. Notice that the light from views converges to the point , while the light from are blocked by the occluder. are the occluded views in angular space, and the images in these two views come from the points . In other words, the horizontal distance between and is the projection radius.
Then, the light field is refocused to the foreground at depth (Fig. 2)
where is the refocused light field at depth . It can be seen that the light from all views converges to the point . As the horizontal distance between and is 0, the projection radius is obtained,
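The radius formula itself did not survive extraction; under standard refocusing geometry one plausible form, shown here purely as an assumption with our own names, scales the disparity gap between occluder and background by the largest angular offset:

```python
def projection_radius(d_occluder, d_background, max_angular_offset):
    """When the light field is refocused to the background disparity, a view at
    angular offset u sees the occluder displaced by (d_occluder - d_background) * u
    pixels; the extreme view therefore bounds the radius of the spatial patch
    onto which the occluder projects. All names here are our own."""
    return abs(d_occluder - d_background) * max_angular_offset
```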
3 Depth Initialization
In this section, we show how to use the occluder-consistency between the spatial and angular spaces to select the un-occluded views, and how to obtain an initial depth estimate using these views.
3.1 Un-Occluded Views Selection
We first state an important assumption of the proposed algorithm: the occluder has a different color from the occluded point. For the situation where the occluder is similar in color to the occluded point, to the best of our knowledge, no existing work can handle it. Based on this assumption, the following proposition holds,
An occlusion point is an edge point, but an edge point may not be an occlusion point.
With Prop. 2, the Canny edge detector is first applied to find the candidate occlusion points. Then K-means clustering is applied to the local image patch (the patch size is initially set to half of the angular resolution of the light field, since no depth map is available yet) centered at each candidate occlusion point (the features are the RGB colors, and the number of clusters is 2). For each patch, the pixels which share the same label as the center pixel are labeled as background, i.e., un-occluded, points. According to the occluder-consistency stated in Prop. 1, the corresponding views of the center pixel in the angular patch are labeled as un-occluded views. These pixels are candidate occlusion pixels in the central view (Fig. 3). For pixels occluded in other views, the un-occluded views are voted from their neighborhood system (Fig. 4).
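This selection step can be sketched as follows, assuming an RGB numpy patch around a candidate occlusion point; the tiny 2-means here stands in for a library clusterer, and all names are our own:

```python
import numpy as np

def two_means(features, iters=10):
    """Tiny 2-means clustering; returns a boolean label per feature row."""
    centers = features[[0, -1]].astype(float)   # crude init: first and last rows
    labels = np.zeros(len(features), dtype=bool)
    for _ in range(iters):
        d0 = np.linalg.norm(features - centers[0], axis=1)
        d1 = np.linalg.norm(features - centers[1], axis=1)
        labels = d1 < d0
        for k, mask in enumerate((~labels, labels)):
            if mask.any():
                centers[k] = features[mask].mean(axis=0)
    return labels

def unoccluded_mask(patch):
    """Label each pixel of a local RGB patch as same-side-as-center (un-occluded)
    or other-side (occluder): by occluder-consistency, pixels sharing the center
    pixel's color cluster mark the un-occluded views in the angular patch."""
    h, w, _ = patch.shape
    feats = patch.reshape(-1, 3).astype(float)
    labels = two_means(feats)
    center_label = labels[(h // 2) * w + w // 2]
    return (labels == center_label).reshape(h, w)
```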
3.2 Depth Estimation
With the un-occluded view selection, a robust initial depth estimate is obtained based on classical photo-consistency over the un-occluded views.
We refocus the light field to different depth ,
where is the input 4D light field, is the refocused light field at depth , are the spatial coordinates, and are the angular coordinates. Then, the matching cost of each pixel is defined as,
where is the un-occluded view set of the point (Sec. 3.1), denotes the size of the set , denotes the angular image of pixel at depth (in other words, , are the spatial coordinates of pixel ), and is the color of pixel in the central view.
Then, the initial depth estimate of each pixel is obtained,
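Since the original equations are lost in extraction, the refocusing and winner-take-all steps can be sketched with our own notation: nearest-neighbour per-view shifts, a list of candidate depths, and a boolean mask of un-occluded views; none of these names come from the paper.

```python
import numpy as np

def refocus(L, disparity):
    """Shift each sub-aperture view of a 4D light field L[u, v, y, x] toward the
    central view by `disparity` pixels per unit of angular offset. Integer,
    wrap-around shifts via np.roll -- adequate for a sketch, not production."""
    U, V, H, W = L.shape
    uc, vc = U // 2, V // 2
    out = np.empty_like(L)
    for u in range(U):
        for v in range(V):
            dy = int(round((u - uc) * disparity))
            dx = int(round((v - vc) * disparity))
            out[u, v] = np.roll(np.roll(L[u, v], dy, axis=0), dx, axis=1)
    return out

def initial_depth(stacks, center, masks):
    """Winner-take-all initialization. stacks: list over candidate depths of
    refocused view stacks [n_views, H, W]; center: central view [H, W];
    masks: boolean [n_views, H, W] marking the un-occluded views of each pixel.
    The cost is the mean absolute deviation from the central view over the
    un-occluded views only."""
    costs = []
    for S in stacks:
        diff = np.abs(S - center[None])
        n = masks.sum(axis=0).clip(min=1)      # view-set size, guarded against 0
        costs.append((diff * masks).sum(axis=0) / n)
    return np.argmin(np.stack(costs), axis=0)  # index of the best candidate depth
```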
4 Depth Regularization
In this section, we show how to detect occlusions and regularize the depth with a global energy function.
4.1 Occlusion Detection
We find occlusion points using the visibility constraint, i.e., an occlusion point is visible in the reference view and invisible in the other views. In other words, if the difference between the disparities of two neighboring pixels is larger than 1 pixel, there is an occlusion point there. In a light field, due to the multiple views, this constraint can only find the occlusion points in the central view, and the threshold on the disparity difference ought to be relaxed to find the occlusions in other views,
where is the angular resolution. (The angular resolution of the light field used in our experiments is .)
As the initial depth estimates at occluded points are unreliable, and sometimes too smooth and noisy to distinguish an occlusion point accurately from only two neighboring pixels, we select a disparity patch centered at each candidate occlusion point (the points detected by the Canny operator) and use K-means clustering to divide the patch into 2 classes; the difference of disparities is then determined by subtracting the two class centers,
where is the center of the -th class.
Finally the candidate occlusion point is determined by comparing the and ,
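A minimal sketch of this patch test (our own naming; a tiny 1-D 2-means replaces a library clusterer, and the relaxed threshold is passed in as a parameter since it depends on the angular resolution):

```python
import numpy as np

def is_occlusion_point(disp_patch, threshold):
    """Split a local disparity patch into two classes with a tiny 1-D 2-means
    and flag an occlusion when the gap between the class centers exceeds the
    (relaxed) threshold."""
    d = disp_patch.ravel().astype(float)
    c0, c1 = d.min(), d.max()                 # initialize centers at the extremes
    for _ in range(10):
        near0 = np.abs(d - c0) <= np.abs(d - c1)
        if near0.all() or not near0.any():
            break                             # degenerate patch: one class only
        c0, c1 = d[near0].mean(), d[~near0].mean()
    return bool(abs(c1 - c0) > threshold)
```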
4.2 Un-occluded Views Re-Selection
For each occlusion point , we apply K-means clustering to its local depth patch to find the background depth and the occluder depth , and the projection radius is determined using Eqn. 7. Then, each patch is resized to the angular resolution of the light field. The remaining procedure is the same as in Sec. 3.1.
4.3 Final Depth Regularization
Finally, given the occlusion cues, we regularize the depth with a Markov Random Field (MRF) to obtain the final depth map.
where is the depth of pixel , and are neighboring pixels, and (=0.35 in our experiments) controls the smoothness term.
The data term measures the photo-consistency over the un-occluded views,
where (=3 in our experiments) controls the sensitivity of the function to large distances, and the definition of can be found in Eqn. 9.
The smooth term encodes the smoothness constraint between two neighboring pixels,
where is the edge map of the central view image , and are three weighting factors. Compared with previous works, we introduce the occlusion term and the edge term into the weighting function to preserve occlusion boundaries and keep the depth along occlusion boundaries consistent.
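As an illustration only, the energy being minimized can be evaluated as below; `lam` and `sigma` play the roles of the 0.35 and 3 parameters from the text, while the exact pairwise weighting function is simplified to an edge/occlusion damping of our own design:

```python
import numpy as np

def mrf_energy(depth, cost_volume, edge_map, occ_map, lam=0.35, sigma=3.0):
    """Evaluate E(d) = sum_p D_p(d_p) + lam * sum_{(p,q)} w_pq |d_p - d_q| on a
    4-connected grid. cost_volume[k, y, x] is the photo-consistency cost of
    depth label k; the robust data term 1 - exp(-cost / sigma) saturates for
    large costs, and the pairwise weight w_pq is damped across edges and
    occlusions so smoothing does not bleed over boundaries."""
    H, W = depth.shape
    ys, xs = np.mgrid[0:H, 0:W]
    data = (1.0 - np.exp(-cost_volume[depth, ys, xs] / sigma)).sum()
    smooth = 0.0
    for dy, dx in ((0, 1), (1, 0)):                   # right and down neighbours
        dp = depth[:H - dy, :W - dx]
        dq = depth[dy:, dx:]
        w = np.exp(-(edge_map[dy:, dx:] + occ_map[dy:, dx:]))
        smooth += (w * np.abs(dp - dq)).sum()
    return data + lam * smooth
```

In practice such an energy would be minimized with graph cuts rather than evaluated directly; this sketch only makes the two terms concrete.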
The full description of the proposed algorithm is given in Algo. 1. First, edge detection is applied to the central view image to find all candidate occlusion points. Then, the un-occluded views of each candidate occlusion point are selected based on K-means clustering. Next, an initial depth map is estimated using the un-occluded views. The occlusion map is then detected using this initial estimate. Finally, based on the un-occluded views and the occlusion map, the depth map is regularized with an MRF energy function.
5 Experimental Results
We compare our results with the globally consistent depth labeling (GCDL) by Wanner et al., the line-assisted graph cuts (LAGC) by Yu et al., the bilateral consistency metric (BCM) by Chen et al., and the occlusion-aware depth estimation (OADE) by Wang et al. Note that the results of GCDL come from their published papers, the results of LAGC and OADE are obtained by running their published code or executables, and the results of BCM were provided by the authors.
The performance of the proposed algorithm is evaluated on the most popular light field datasets. These datasets are synthesized with Blender, and each scene includes a light field and its ground-truth depth. Regarding runtime, on a 3.4 GHz Intel i7 machine with 16 GB RAM, our MATLAB implementation takes about 1 hour on a color light field. Considering the precision of the results and the low speed of MATLAB, this time cost is acceptable.
5.1 Un-Occluded Views Selection
A consensus in depth estimation is that more effective views lead to more accurate depth estimation, so the precision and recall of the selected un-occluded views are important. We compute the F-measure (the harmonic mean of precision and recall against the ground truth) of the un-occluded views in occluded areas using our algorithm, and compare it with previous work. The quantitative comparisons are listed in Tab. 2, and the qualitative comparisons are shown in Fig. 5. Our selection method outperforms previous work in un-occluded view selection, and the advantage is especially pronounced in multi-occluder areas. In Fig. 5, our method consistently selects accurate un-occluded views, whereas the method of Wang et al. tends to select more occluded views, and these selections lead to over-smoothed results in occlusion areas (Fig. 7, 8). Note that our method does not perform well on Horses. The reason is that there are many textures near the occlusion boundaries in the background, and K-means clustering based on color cannot divide the background and occluder accurately in such complex texture areas.
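For completeness, the F-measure itself is straightforward; a minimal sketch over view-index sets:

```python
def f_measure(selected, ground_truth):
    """Harmonic mean of precision and recall for a predicted set of
    un-occluded views against the ground-truth set."""
    tp = len(selected & ground_truth)
    if tp == 0:
        return 0.0
    precision = tp / len(selected)
    recall = tp / len(ground_truth)
    return 2 * precision * recall / (precision + recall)
```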
5.2 Occlusion Boundaries
For each dataset, we detect the occlusion boundaries using the depth map and compute the F-measure. Then, we compare against other state-of-the-art algorithms. The quantitative comparisons are listed in Tab. 3, and the qualitative comparisons are shown in Fig. 6. Our algorithm outperforms the previous works. Note that the results of GCDL and BCM are not included, as it is difficult to run their code in our experimental environment; however, as previous works [20, 13] have demonstrated their advantages over them, these comparisons remain convincing. Our method does not perform well on StillLife (the third row in Fig. 6). This is because there are many weak occlusions (where the difference of disparities is small) in StillLife; the disparity difference at the boundaries of the bee is small, and the occlusion detection method in Sec. 4.1 cannot handle these occlusions well.
5.3 Depth Maps
The quantitative comparisons of the RMS errors of the recovered disparity maps are listed in Tab. 4. Note that all results are obtained with the same parameter settings. Our algorithm outperforms previous state-of-the-art algorithms on almost all datasets.
The qualitative comparisons of the recovered disparity maps are shown in Figs. 7 and 8. Our algorithm yields sharper occlusion boundaries. As our selection method for un-occluded views is consistently accurate (Fig. 5, Tab. 2) and does not select occluded views, our results remain sharp in multi-occluder areas.
Notice that the proposed algorithm performs worse than OADE in the un-occluded view selection for Horses, yet we obtain better results in the depth map. This is because our energy function preserves occlusion boundaries better. Moreover, we obtain the best results in the depth estimation for StillLife, although the F-measure of the detected boundaries is not the best. The reason is that our algorithm performs much better than OADE on multi-occluder boundaries, which have a larger difference of disparities.
Apart from heavy occlusion, the proposed algorithm also performs well under shading (Fig. 7(b)), although shading is not taken into account in our model. Compared with Buddha (Fig. 7(a)), there is more shading in Buddha2 (Fig. 7(b)). Although all algorithms perform well on Buddha, only our algorithm maintains the same level of quality on Buddha2. The reason for this phenomenon is worth further study.
However, our algorithm cannot handle the situation where the background has a texture or color similar to the occluder. In Fig. 8(b) (the green box), as the color of the cloth in the background is similar to the foreground, it is difficult to recover the true depth. In addition, our algorithm cannot handle textureless areas, just like all previous methods.
6 Conclusion and Future Work
In this paper, we propose a new anti-occlusion depth estimation algorithm by modeling the formation of occlusion. The model reveals an important property: the occluders are consistent between the spatial and angular spaces. Utilizing this property, we improve depth estimation in occluded areas in two ways. First, the un-occluded views are accurately selected by clustering in spatial space, and classical photo-consistency is enforced over these views. Second, the occlusion map is detected using the edges and the initial depth map, and is then incorporated into the smoothness term of the MRF energy function to keep occlusion boundaries sharp. We have demonstrated the advantages of the proposed algorithm compared with other state-of-the-art algorithms on synthetic datasets.
As mentioned in Sec. 5.3, our algorithm produces unexpectedly good results under shading, although shading is not considered in our model; this phenomenon is worth investigating. Furthermore, as light fields captured by real light field cameras contain more noise than the synthetic datasets, more experiments on real data are essential to better evaluate the performance of the proposed algorithm.
-  Yuichiro Anzai. Pattern Recognition & Machine Learning. Elsevier, 2012.
-  Yuri Boykov and Vladimir Kolmogorov. An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 26(9):1124–1137, 2004.
-  Yuri Boykov, Olga Veksler, and Ramin Zabih. Fast approximate energy minimization via graph cuts. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 23(11):1222–1239, 2001.
-  Can Chen, Haiting Lin, Zhan Yu, Sing Kang, and Jingyi Yu. Light field stereo matching using bilateral statistics of surface cameras. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1518–1525, 2014.
-  Vladimir Kolmogorov and Ramin Zabih. Multi-camera scene reconstruction via graph cuts. In Computer Vision – ECCV 2002, pages 82–96. Springer, 2002.
-  Vladimir Kolmogorov and Ramin Zabih. What energy functions can be minimized via graph cuts? Pattern Analysis and Machine Intelligence, IEEE Transactions on, 26(2):147–159, 2004.
-  Lytro. Lytro redefines photography with light field cameras. http://www.lytro.com, 2011.
-  Ren Ng. Digital light field photography. PhD thesis, stanford university, 2006.
-  Ren Ng, Marc Levoy, Mathieu Brédif, Gene Duval, Mark Horowitz, and Pat Hanrahan. Light field photography with a hand-held plenoptic camera. Computer Science Technical Report CSTR, 2(11), 2005.
-  Raytrix. Raytrix light field camera. http://www.raytrix.de, 2012.
-  Carsten Rother, Vladimir Kolmogorov, Victor Lempitsky, and Martin Szummer. Optimizing binary mrfs via extended roof duality. In Computer Vision and Pattern Recognition, 2007. CVPR’07. IEEE Conference on, pages 1–8. IEEE, 2007.
-  Ting-Chun Wang, Manmohan Chandraker, Alexei Efros, and Ravi Ramamoorthi. Svbrdf-invariant shape and reflectance estimation from light-field cameras. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
-  Ting-Chun Wang, Alexei A Efros, and Ravi Ramamoorthi. Occlusion-aware depth estimation using light-field cameras. In Proceedings of the IEEE International Conference on Computer Vision, pages 3487–3495, 2015.
-  Ting-Chun Wang, Alexei Alyosha Efros, and Ravi Ramamoorthi. Depth estimation with occlusion modeling using light-field cameras. Pattern Analysis and Machine Intelligence, IEEE Transactions on (In press), 2016.
-  Sven Wanner and Bastian Goldluecke. Globally consistent depth labeling of 4d light fields. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 41–48. IEEE, 2012.
-  Sven Wanner and Bastian Goldluecke. Variational light field analysis for disparity estimation and super-resolution. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 36(3):606–619, 2014.
-  Sven Wanner, Stephan Meister, and Bastian Goldluecke. Datasets and benchmarks for densely sampled 4d light fields. In VMV, pages 225–226. Citeseer, 2013.
-  Oliver Woodford, Philip Torr, Ian Reid, and Andrew Fitzgibbon. Global stereo reconstruction under second-order smoothness priors. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 31(12):2115–2128, 2009.
-  Zhaolin Xiao, Qing Wang, Guoqing Zhou, and Jingyi Yu. Aliasing detection and reduction in plenoptic imaging. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3326–3333, 2014.
-  Zhan Yu, Xinqing Guo, Haibing Lin, Andrew Lumsdaine, and Jingyi Yu. Line assisted light field triangulation and stereo matching. In Proceedings of the IEEE International Conference on Computer Vision, pages 2792–2799, 2013.