An Approach for Noise Removal on Depth Images

02/16/2016 · by Rashi Chaudhary, et al.

Image-based rendering is a fundamental problem in computer vision and graphics. Modern techniques often rely on depth images for 3D reconstruction. However, for most existing depth cameras, large and unpredictable noise can be problematic and can cause noticeable artifacts in the rendered results. In this paper, we propose an effective method for depth-image noise removal that can be applied to most RGB-D systems. The proposed solution benefits many subsequent vision problems such as 3D reconstruction, novel view rendering, and object recognition. Our experimental results demonstrate its efficacy and accuracy.


I. Introduction

Nowadays, depth sensors are becoming increasingly popular and have received considerable attention from researchers [1]. The low cost and real-time capability of depth sensors facilitate many vision tasks, such as image rendering, 3D reconstruction, and image localization [2]. However, in most existing systems, the acquired depth image often suffers from excessive noise, which degrades performance and results in inaccurate estimation [3].

Many researchers have proposed algorithms to resolve this issue. For example, in [4], the authors use a bilateral-filter-based method to remove noise while preserving edges. Similar techniques can also be found in [5]. However, this type of method fails for depth images of complex scenes, as it wipes out many weak edges, making the resulting image less faithful to the original scene. Other researchers cast the denoising process as an image in-painting problem by adopting an exemplar-copy scheme [6][7]. In recent papers, probabilistic-framework-based methods were introduced that treat depth denoising as a labeling process clustering the image into multiple regions [8][9]. This type of method demonstrates favorable results but is too expensive for real-time applications.

II. Our Solution

Our approach to depth denoising involves three main phases. First, we use an edge detector to identify salient edges across the image, i.e., we apply the traditional Canny detector to extract all possible distinct edges from the image. Second, we apply a joint bilateral filter that smooths regions with continuous texture values while skipping the parts with distinct structures [10]. After this step, we can cluster the whole image using the extracted salient edges [12]:
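As an illustration of the first phase, a gradient-magnitude edge map can serve as a simplified stand-in for the Canny detector (this sketch uses a Sobel operator and a relative threshold of our choosing, not the paper's exact settings):

```python
import numpy as np

def salient_edges(gray, thresh=0.3):
    """Sobel gradient-magnitude edge map; a simplified stand-in for the
    Canny detector used in the paper. `gray` is a float image in [0, 1]."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
    ky = kx.T
    pad = np.pad(gray, 1, mode='edge')
    h, w = gray.shape
    gx = np.zeros((h, w)); gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()   # horizontal gradient
            gy[i, j] = (win * ky).sum()   # vertical gradient
    mag = np.hypot(gx, gy)
    # keep pixels whose gradient magnitude exceeds a fraction of the maximum
    return mag > thresh * mag.max()
```

A real implementation would add the non-maximum suppression and hysteresis steps that distinguish Canny from plain gradient thresholding.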

$$\hat{D}(p) \;=\; \frac{1}{W_p}\sum_{q \in \Omega} G_{\sigma_s}\!\left(\lVert p-q \rVert\right)\, G_{\sigma_r}\!\left(\lvert I(p)-I(q) \rvert\right)\, D(q) \qquad (1)$$

where $p$ and $q$ represent the center location and its neighbors in the window $\Omega$ for the Gaussian kernels $G_{\sigma_s}$ and $G_{\sigma_r}$; $W_p$ is the normalizing factor; $\sigma_s$ is the spatial range and $\sigma_r$ the intensity range. After this step, we use the exemplar-based scheme to find an optimal patch from the available depth regions for the target region. During the filling procedure, only patches from the same region enclosed by the extracted structures are used for depth inference. Here we adopt the isophote-driven sampling strategy proposed by Criminisi [11].
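As a concrete illustration, the joint bilateral filtering of Eq. (1) can be sketched in a few lines of NumPy (a direct, unoptimized implementation; the function name and parameter defaults are ours, not the paper's):

```python
import numpy as np

def joint_bilateral_filter(depth, guide, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Smooth `depth` using range weights taken from the guidance image
    `guide` (grayscale, float); a direct, unoptimized sketch of Eq. (1)."""
    h, w = depth.shape
    out = np.zeros((h, w), dtype=np.float64)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))   # G_sigma_s
    pad_d = np.pad(depth, radius, mode='edge')
    pad_g = np.pad(guide, radius, mode='edge')
    for i in range(h):
        for j in range(w):
            win_d = pad_d[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            win_g = pad_g[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = np.exp(-(win_g - guide[i, j]) ** 2 / (2 * sigma_r ** 2))  # G_sigma_r
            wgt = spatial * rng
            out[i, j] = (wgt * win_d).sum() / wgt.sum()   # divide by W_p
    return out
```

Because the range weights come from the RGB guide rather than the depth itself, depth is smoothed within regions of uniform color while discontinuities at color edges are preserved.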

$$P(p) = C(p)\, D(p) \qquad (2)$$

where $C(p)$ and $D(p)$ are the confidence term and the data term that define the priority $P(p)$ of the patch centered at $p$. The difference between our method and [11] is that only patches clustered into the same region are considered for in-painting the target region. Each patch is assigned a priority $P(p)$, which determines the filling order of the target region. Similar to [11], we search the source region for potential textures to fill the target region, but we only search patches clustered into the same segment by the edges from step 1. Our matching criterion is formally defined as:

$$\Psi_{\hat{q}} = \operatorname*{arg\,min}_{\Psi_{q} \in \Phi}\; d\!\left(\Psi_{\hat{p}}, \Psi_{q}\right) \qquad (3)$$

where $\Psi_{\hat{p}}$ is the target patch, $\Phi$ is the set of candidate source patches, and $d(\cdot,\cdot)$ is the distance over the already-known pixels of the target patch.
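Assuming Eq. (3) uses the standard sum-of-squared-differences patch distance of [11], restricted here to patches from the same segment, the search could be sketched as follows (all names and the patch size are illustrative):

```python
import numpy as np

def best_source_patch(depth, valid, labels, target_center, half=4):
    """Among fully-known source patches whose center shares the target's
    segment label, return the one minimizing SSD against the known pixels
    of the target patch. A sketch of Eq. (3); names are illustrative."""
    h, w = depth.shape
    ti, tj = target_center
    tgt = depth[ti - half:ti + half + 1, tj - half:tj + half + 1]
    known = valid[ti - half:ti + half + 1, tj - half:tj + half + 1]
    best, best_cost = None, np.inf
    for i in range(half, h - half):
        for j in range(half, w - half):
            if labels[i, j] != labels[ti, tj]:
                continue                 # restrict search to the same region
            cand_valid = valid[i - half:i + half + 1, j - half:j + half + 1]
            if not cand_valid.all():
                continue                 # source patches must be fully known
            cand = depth[i - half:i + half + 1, j - half:j + half + 1]
            cost = np.sum((tgt[known] - cand[known]) ** 2)  # SSD on known pixels
            if cost < best_cost:
                best, best_cost = (i, j), cost
    return best, best_cost
```

The same-label test is the paper's key restriction: candidates from a different depth layer never bleed into the target region, even if their texture happens to match.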

Solving this optimization problem yields a satisfactory depth map. In addition, to speed up the computation, we apply histogram-based clustering to the image before edge extraction, which leaves the resulting structures with less distortion.
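The histogram-based clustering step is not specified in detail; a minimal stand-in that quantizes the depth range into a fixed number of histogram bins might look like this (bin count is our assumption):

```python
import numpy as np

def histogram_cluster(depth, n_bins=8):
    """Quantize the depth range into n_bins clusters by histogram binning,
    a simplified stand-in for the paper's pre-clustering step."""
    lo, hi = depth.min(), depth.max()
    edges = np.linspace(lo, hi, n_bins + 1)
    # digitize against the interior bin edges, then clamp to valid labels
    labels = np.clip(np.digitize(depth, edges[1:-1]), 0, n_bins - 1)
    return labels
```

Quantizing first means the subsequent edge extraction only sees a handful of piecewise-constant levels, which suppresses low-amplitude noise before any edges are traced.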

III. The Results

Our proposed method has been tested on two public datasets: the Tsukuba Stereo dataset, collected by the University of Tsukuba, and the frame sequence of “Ballet” captured by Microsoft Research. Since ground truth is provided, we can evaluate our results on both noise-removal accuracy and computational cost. All experiments were carried out on a PC with an Intel(R) Core(TM) CPU E5-2620 v2 @ 3.50GHz and 24.0GB RAM. First, we applied various patch sizes for the in-painting, as Table I shows. The table reports 8 samples out of more than 1800 images and demonstrates a significant improvement in PSNR (peak signal-to-noise ratio). As can be seen from the table, our algorithm achieves high accuracy across different patch sizes, and according to our experiments its performance is stable across different samples.
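For reference, the PSNR metric reported in Table I can be computed as follows (a peak value of 255 is assumed here for 8-bit depth maps):

```python
import numpy as np

def psnr(reference, estimate, peak=255.0):
    """Peak signal-to-noise ratio (dB) between a ground-truth depth map
    and a denoised estimate."""
    diff = reference.astype(np.float64) - estimate.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float('inf')   # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

The values in Table I would then be the difference in PSNR between the denoised and the raw depth map against the same ground truth.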

Image ID        Patch size (px)   Patch size (px)   Patch size (px)
Sample 206      5.262             5.231             5.520
Sample 12       5.712             5.801             5.725
Sample 1152     5.692             5.820             6.234
Sample 99       5.291             5.013             6.102
Sample 328      5.233             5.702             5.913
Sample 1602     5.234             5.133             5.913
Sample 5        5.203             5.238             6.110
TABLE I: Improvement in PSNR (dB) by using different patch sizes

Figures 1 and 2 demonstrate the results of running our proposed depth-denoising algorithm on the two public datasets. In terms of time performance, we tested 200 images of various sizes; the algorithm runs reasonably fast, with an average processing time of ms for an image of size .

Fig. 1: Demo of depth-denoising results: left: original RGB image; middle: original depth image; right: our result.

Fig. 2: Demo of reconstructed virtual images from the “Ballet” sequences: top row: two depth image frames processed by our algorithm; second row: the rendered results using our refined depth images.

References

  • [1] K. Khoshelham, S. Elberink. “Accuracy and Resolution of Kinect Depth Data for Indoor Mapping Applications”. Sensors, vol. 12, pp. 1437-1454, 2012.
  • [2] J. Shen, W. Tan. “Image-based indoor place-finder using image to plane matching”. IEEE International Conference on Multimedia and Expo (ICME), San Jose, 2013.
  • [3] J. Shen, J. Zhao, and S.-C. Cheung. “Virtual Mirror By Fusing Multiple RGB-D Cameras”. In APSIPA Annual Summit & Conference, 2012.
  • [4] S. Fleishman, I. Drori, and D. Cohen-Or, “Bilateral Mesh Denoising”. Proc.of ACM SIGGRAPH 2003, vol. 22, no.3, New York, pp. 950-953, July 2003.
  • [5] J. Fu, D. Miao, W.R. Yu, S.Q. Wang, Y. Lu, and S.P. Li, “Kinect-Like Depth Data Compression”. IEEE Transactions on Multimedia, vol. 15, no.6, pp. 1340-1352, 2013.
  • [6] L. Alvarez, R. Deriche, J. Sanchez, and J. Weickert, “Dense Disparity Map Estimation Respecting Image Discontinuities: A PDE and Scale-Space Based Approach”. Journal of Visual Communication and Image Representation, vol. 13, no.1-2, pp. 3-21, 2002.
  • [7] R. Khoshabeh, S.H. Chan, and T.Q. Nguyen, “Spatio-temporal consistency in video disparity estimation”. Proc. of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Prague, Czech Republic, pp. 885-888, 2011.
  • [8] J. Shen, P. -C. Su, S.-C. Cheung, and J. Zhao. “Virtual mirror rendering with stationary rgb-d cameras and stored 3-d background”. IEEE Transactions on Image Processing, vol. 22, pp. 3433-3448, 2013.
  • [9] J. Shen, S.-C. Cheung. “Layer depth denoising and completion for structured-light rgb-d cameras”. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1187-1194, 2013.

  • [10] C. Xiao, J. Gan. “Fast image dehazing using guided joint bilateral filter”. The Visual Computer, vol. 28, issue. 6, pp. 713-721, 2012.
  • [11] A. Criminisi, P. Pérez, and K. Toyama. “Object Removal by Exemplar-Based Inpainting”. Proc. of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Wisconsin, vol. 2, 2003.
  • [12] R. Achanta, S. Hemami, F. Estrada, S. Susstrunk. “Frequency-tuned salient region detection”. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1597-1604, 2009.