Depth sensors have become increasingly popular in recent years and have attracted considerable attention from researchers. Their low cost and real-time acquisition facilitate many vision tasks, such as image rendering, 3D reconstruction, and image localization. However, in most existing systems the acquired depth image suffers from excessive noise, which degrades performance and results in inaccurate estimation.
Many researchers have proposed algorithms to address this issue. For example, bilateral-filter-based methods have been used to remove noise while preserving edges, and similar techniques appear elsewhere in the literature. However, this type of method fails on depth images of complex scenes because it wipes out many weak edges, making the resulting image less faithful to the original scene. Other researchers have cast the denoising process as an image in-painting problem by adopting an exemplar-copy scheme. In more recent papers, probabilistic-framework-based methods were introduced that treat depth denoising as a labeling process that clusters the image into multiple regions. These methods demonstrate favorable results, but they are too expensive for real-time applications.
II. Our Solution
Our approach to depth denoising involves three main phases. First, we use an edge detector to identify salient edges across the image; that is, we apply the traditional Canny detector to extract all possible distinct edges from the image. Second, we apply a joint bilateral filter that smooths regions with continuous texture values while skipping the parts with distinct structures. This step lets us cluster the whole image into regions delimited by the extracted salient edges.
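The first two phases can be sketched in NumPy as follows. This is a minimal illustrative sketch, not the paper's implementation: the simple gradient-threshold mask stands in for the Canny detector, and all function names, the window radius, and the sigma parameters are our own illustrative assumptions.

```python
import numpy as np

def edge_mask(depth, thresh=8.0):
    """Gradient-magnitude edge mask (a simple stand-in for the Canny
    detector used in the paper): marks pixels with strong local change."""
    gy, gx = np.gradient(depth.astype(float))
    return np.hypot(gx, gy) > thresh

def joint_bilateral(depth, guide, mask, radius=2, sigma_s=2.0, sigma_r=10.0):
    """Joint bilateral filter: spatial Gaussian on pixel distance, range
    Gaussian on the guide image.  Pixels on salient edges (mask == True)
    are skipped, so distinct structures are left untouched."""
    h, w = depth.shape
    out = depth.astype(float).copy()
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))
    dpad = np.pad(depth.astype(float), radius, mode='edge')
    gpad = np.pad(guide.astype(float), radius, mode='edge')
    for y in range(h):
        for x in range(w):
            if mask[y, x]:              # skip distinct structures
                continue
            dwin = dpad[y:y + 2*radius + 1, x:x + 2*radius + 1]
            gwin = gpad[y:y + 2*radius + 1, x:x + 2*radius + 1]
            rng = np.exp(-(gwin - float(guide[y, x]))**2 / (2.0 * sigma_r**2))
            wgt = spatial * rng
            out[y, x] = (wgt * dwin).sum() / wgt.sum()
    return out
```

A production implementation would vectorize the double loop or use a library filter; the loop form here is kept for clarity only.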
The filter takes the standard joint bilateral form

\[
\tilde{D}_p = \frac{1}{W_p} \sum_{q \in \Omega_p} G_{\sigma_s}(\|p - q\|)\, G_{\sigma_r}(|I_p - I_q|)\, D_q
\]

where \(p\) and \(q\) represent the center and neighboring pixel locations on the image for the Gaussian kernels \(G_{\sigma_s}\) and \(G_{\sigma_r}\), \(W_p\) is the normalizing factor, and \(\sigma_s\) is the spatial range. After this step, we use an exemplar-based scheme to find an optimal patch from the available depth regions for each target region. During the filling procedure, only patches from the same region enclosed by the extracted structures are used for the depth inference. Here we adopt the isophote-driven sampling strategy proposed by Criminisi et al.
The fill order is governed by the patch priority

\[
P(p) = C(p)\, D(p)
\]

where the metrics \(C(p)\) and \(D(p)\) are the confidence term and data term of the priority definition. The difference between our method and the original exemplar-based approach is that only patches clustered into the same region are considered for in-painting the target region. Each patch is assigned a priority, which determines the order in which the target region is filled. As in the original scheme, we search the source region for potential textures to fill the target region, but we search only patches clustered onto the same ground by the edges from step 1. Our patch search is formally defined as

\[
\Psi_{\hat{q}} = \arg\min_{\Psi_q \in \Phi'} d\big(\Psi_{\hat{p}}, \Psi_q\big)
\]

where \(\Phi'\) denotes the portion of the source region lying in the same edge-delimited cluster as the target patch \(\Psi_{\hat{p}}\), and \(d(\cdot,\cdot)\) measures patch dissimilarity over the already-known pixels.
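The priority computation can be sketched as follows. This is a hedged illustration of the Criminisi-style terms, assuming a (2r+1)×(2r+1) patch; the function names, the representation of the isophote and front normal as 2-vectors, and the normalization constant alpha = 255 (for 8-bit images) are our illustrative choices, not code from the paper.

```python
import numpy as np

def confidence_term(conf, known, p, r=1):
    """C(p): sum of confidence over already-known pixels in the
    (2r+1)x(2r+1) patch around p, divided by the patch area."""
    y, x = p
    win_c = conf[y - r:y + r + 1, x - r:x + r + 1]
    win_k = known[y - r:y + r + 1, x - r:x + r + 1]
    return float((win_c * win_k).sum() / win_c.size)

def data_term(isophote, normal, alpha=255.0):
    """D(p) = |isophote . normal| / alpha: strength of the image
    structure flowing into the fill front along its normal."""
    return abs(isophote[0] * normal[0] + isophote[1] * normal[1]) / alpha

def priority(conf, known, isophote, normal, p, r=1):
    """Fill-order priority P(p) = C(p) * D(p); the front pixel with
    the highest priority is in-painted first."""
    return confidence_term(conf, known, p, r) * data_term(isophote, normal)
```

Restricting the candidate patches to the same edge-delimited cluster, as the text describes, would then amount to filtering the source patches by their cluster label before the dissimilarity search.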
Solving this optimization problem yields a satisfactory depth map. In addition, to speed up the computation, we apply histogram-based clustering to the image before the edge extraction, which makes the resulting structures less distorted.
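One plausible form of the histogram-based pre-clustering step is uniform quantization to histogram-bin centers, sketched below. The bin count and function name are our assumptions; the paper does not specify them.

```python
import numpy as np

def histogram_quantize(depth, n_bins=16):
    """Quantize depth values to the centers of a uniform histogram.
    Collapsing near-identical depths into one level suppresses spurious
    weak edges, so the subsequent edge extraction recovers the dominant
    structures with less distortion."""
    lo, hi = float(depth.min()), float(depth.max())
    edges = np.linspace(lo, hi, n_bins + 1)
    # np.digitize returns 1-based bin indices; shift and clamp so the
    # maximum value falls into the last bin.
    idx = np.clip(np.digitize(depth, edges) - 1, 0, n_bins - 1)
    centers = (edges[:-1] + edges[1:]) / 2.0
    return centers[idx]
```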
III. The Results
Our proposed method was tested on two public datasets: the Tsukuba Stereo dataset, collected by Tsukuba University, and the "Ballet" frame sequence from Microsoft Research. Since ground truth is provided, we can evaluate our results on both the accuracy of noise removal and the computational cost. All experiments were carried out on a PC with an Intel(R) Core(TM) CPU E5-2620 v2 @ 3.50GHz and 24.0GB RAM. First, we applied various patch sizes for the in-painting, as Table I shows. The table presents 8 samples from more than 1800 images, and the method leads to a significant improvement in PSNR (peak signal-to-noise ratio). As can be seen from the table, our algorithm achieves high accuracy across different patch sizes. Furthermore, according to our experiments, its performance is stable across different samples and patch sizes.
| Image ID | Patch size (pixels) | Patch size (pixels) | Patch size (pixels) |
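The PSNR metric used for the evaluation above is standard; a minimal sketch (assuming 8-bit depth values, so peak = 255):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB of `test` against the
    ground-truth image `ref`: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    if mse == 0.0:
        return float('inf')   # identical images
    return 10.0 * np.log10(peak**2 / mse)
```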
-  K. Khoshelham, S. Elberink. "Accuracy and Resolution of Kinect Depth Data for Indoor Mapping Applications". Sensors, vol. 12, pp. 1437-1454, 2012.
-  J. Shen, W. Tan. “Image-based indoor place-finder using image to plane matching”. IEEE International Conference on Multimedia and Expo (ICME), San Jose, 2013.
-  J. Shen, J. Zhao, and S.-C. Cheung. “Virtual Mirror By Fusing Multiple RGB-D Cameras”. In APSIPA Annual Summit & Conference, 2012.
-  S. Fleishman, I. Drori, and D. Cohen-Or. "Bilateral Mesh Denoising". Proc. of ACM SIGGRAPH 2003, vol. 22, no. 3, New York, pp. 950-953, July 2003.
-  J. Fu, D. Miao, W.R. Yu, S.Q. Wang, Y. Lu, and S.P. Li, “Kinect-Like Depth Data Compression”. IEEE Transactions on Multimedia, vol. 15, no.6, pp. 1340-1352, 2013.
-  L. Alvarez, R. Deriche, J. Sanchez, and J. Weickert, “Dense Disparity Map Estimation Respecting Image Discontinuities: A PDE and Scale-Space Based Approach”. Journal of Visual Communication and Image Representation, vol. 13, no.1-2, pp. 3-21, 2002.
-  R. Khoshabeh, S.H. Chan, and T.Q. Nguyen. "Spatio-temporal consistency in video disparity estimation". Proc. of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Prague, Czech Republic, pp. 885-888, 2011.
-  J. Shen, P. -C. Su, S.-C. Cheung, and J. Zhao. “Virtual mirror rendering with stationary rgb-d cameras and stored 3-d background”. IEEE Transactions on Image Processing, vol. 22, pp. 3433-3448, 2013.
-  J. Shen, S.-C. Cheung. "Layer depth denoising and completion for structured-light rgb-d cameras". IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1187-1194, 2013.
-  C. Xiao, J. Gan. “Fast image dehazing using guided joint bilateral filter”. The Visual Computer, vol. 28, issue. 6, pp. 713-721, 2012.
-  A. Criminisi, P. Perez, and K. Toyama. "Object Removal by Exemplar-Based Inpainting". Proc. of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Wisconsin, vol. 2, 2003.
-  R. Achanta, S. Hemami, F. Estrada, S. Susstrunk. “Frequency-tuned salient region detection”. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1597-1604, 2009.