Fast High-Dimensional Bilateral and Nonlocal Means Filtering

11/06/2018 ∙ by Pravin Nair, et al. ∙ Indian Institute of Science

Existing fast algorithms for bilateral and nonlocal means filtering mostly work with grayscale images. They cannot easily be extended to high-dimensional data such as color and hyperspectral images, patch-based data, flow-fields, etc. In this paper, we propose a fast algorithm for high-dimensional bilateral and nonlocal means filtering. Unlike existing approaches, where the focus is on approximating the data (using quantization) or the filter kernel (via analytic expansions), we locally approximate the kernel using weighted and shifted copies of a Gaussian, where the weights and shifts are inferred from the data. The algorithm emerging from the proposed approximation essentially involves clustering and fast convolutions, and is easy to implement. Moreover, a variant of our algorithm comes with a guarantee (bound) on the approximation error, which is not enjoyed by existing algorithms. We present some results for high-dimensional bilateral and nonlocal means filtering to demonstrate the speed and accuracy of our proposal. Moreover, we also show that our algorithm can outperform state-of-the-art fast approximations in terms of accuracy and timing.







I Introduction

Smoothing images while preserving structures (edges, corners, lines, etc.) is a fundamental task in image processing. A classic example in this regard is the diffusion framework of Perona and Malik [1]. In the last few decades, several filtering-based approaches have been proposed for this task. Prominent examples include bilateral filtering [2, 3, 4], mean shift filtering [5], weighted least squares [6], domain transform [7], guided filtering [8], and nonlocal means [9]. The brute-force implementation of most of these filters is computationally prohibitive and cannot be used for real-time applications. To address this problem, researchers have come up with approximation algorithms that can significantly accelerate the filtering without compromising the quality. Unfortunately, for bilateral and nonlocal means filtering, most of these algorithms can be used only for grayscale images. It is difficult to extend them even to color filtering while preserving their efficiency. The situation is more challenging for multispectral and hyperspectral images, where the range dimension is much larger. Of course, any algorithm for grayscale filtering can be used for channel-by-channel processing of high-dimensional images. However, by working in the combined intensity space, we can exploit the strong correlation between channels [4].

I-A High-Dimensional Filtering

The focus of the present work is on two popular smoothers, the bilateral and the nonlocal means filters, and their application to high-dimensional data. The former is used for edge-preserving smoothing in a variety of applications [10], while the latter is primarily used for denoising. Though nonlocal means has limited denoising capability compared to state-of-the-art denoisers [11, 12], it continues to be of interest due to its simplicity and the availability of low-complexity algorithms [13, 14, 15]. The connection between these filters is that they can be interpreted as a multidimensional Gaussian filter operating in the joint spatio-range space [16]. The term high-dimensional filtering is used when the dimension of the spatio-range space is large [14, 17, 18]. In this paper, we will use this term when the range dimension is greater than one.

A unified formulation of high-dimensional bilateral and nonlocal means filtering is as follows. Suppose that the input data is $f : \Omega \to \mathbb{R}^n$, where $\Omega \subset \mathbb{Z}^2$ is the spatial domain, $\mathbb{R}^n$ is the range space, and $2$ and $n$ are the dimensions of the domain and range of $f$. Let $g$ be the guide image, which is used to control the filtering [10]. The output $h : \Omega \to \mathbb{R}^n$ is given by

$$h(i) = \frac{1}{\eta(i)} \sum_{j \in W} \omega(j)\, \phi\big(g(i-j) - g(i)\big)\, f(i-j), \tag{1}$$

where

$$\eta(i) = \sum_{j \in W} \omega(j)\, \phi\big(g(i-j) - g(i)\big). \tag{2}$$

(a) Lena ()
(b) Proposed, 274ms, 55.36dB.
(c) GCS [19], ms, dB.
(d) AM [14], ms, dB.
(e) Brute-force, sec.
(f) Error: (b) - (e).
(g) Error: (c) - (e).
(h) Error: (d) - (e).
Fig. 1: Bilateral filtering of a color image [20]. The same number of clusters is used for GCS and the proposed method; the number of manifolds was automatically set in AM. The run-time and PSNR (see definition in (21)) are indicated alongside each method. In the second row, we show the error between a particular method (first row) and the brute-force implementation for just the red channel.

The aggregations in (1) and (2) are performed over a window $W = [-S, S]^2$ around the pixel of interest, where $S$ is the window length. We call $\omega$ the spatial kernel and $\phi$ the range kernel [4], and we denote the dimension of the range space of $g$ by $\rho$. The spatial kernel is used to measure proximity in the spatial domain (as in classical linear filtering). On the other hand, the range kernel is used to measure proximity in the range space of $g$.

For (joint) bilateral filtering, $\omega$ and $\phi$ are generally Gaussian. While $f$ and $g$ are identical for the bilateral filter ($g = f$), they are different for the joint bilateral filter [10]. In the original proposal for nonlocal means [9], $g(i)$ collects the spatial neighbors of pixel $i$ (extracted from a patch around $i$), and hence $\rho$ equals the number of pixels in the patch (times the number of channels); $\phi$ is a multidimensional Gaussian, and $\omega$ is a box filter. Later, it was shown in [21] that by reducing the dimension of each patch (e.g., by applying PCA to the collection of patches), we can improve the speed and denoising performance. Similar to [14, 17, 18], we have also considered PCA-based nonlocal means in this work. Needless to say, the proposed algorithm can also work with full patches.

The brute-force computation of (1) and (2) clearly requires $O(S^2)$ operations per pixel. In particular, the complexity scales quadratically with the window length $S$, which can be large for some smoothing applications. In the last decade, several fast algorithms have been proposed for bilateral filtering of grayscale images [20, 22, 23, 24, 25, 26]. The complexity can be cut down from $O(S^2)$ to $O(1)$ per pixel using these algorithms. Similarly, fast algorithms have been proposed for nonlocal means [13, 15] that reduce the per-pixel complexity from $O(S^2 k^2)$ to $O(S^2)$, where $k$ is the patch length.

I-B Previous Work

Quantization methods. Durand et al. [23] proposed a novel framework for approximating (1) and (2) using clustering and interpolation. In terms of our notation, their approximation of (1) and (2) can be expressed as

$$h(i) \approx \frac{1}{\eta(i)} \sum_{k=1}^{K} c_k(i) \sum_{j \in W} \omega(j)\, \phi\big(g(i-j) - \mu_k\big)\, f(i-j), \tag{3}$$

where

$$\eta(i) = \sum_{k=1}^{K} c_k(i) \sum_{j \in W} \omega(j)\, \phi\big(g(i-j) - \mu_k\big). \tag{4}$$
The centers $\mu_1, \ldots, \mu_K$ are obtained by uniformly sampling the range space of $g$, whereas the interpolation coefficients $c_k(i)$ are determined from the distance between $g(i)$ and $\mu_k$. Notice that the inner summations in (3) and (4) can be expressed as convolutions, which are computed using FFT in [23]. Furthermore, the entire processing is performed on the subsampled image, following which the output image is upsampled. The approximation turns out to be effective for grayscale images. This is because the range space is one-dimensional in this case, and hence a good approximation can be achieved using a small number of samples $K$. Moreover, it suffices to interpolate between just the two neighboring samples. Paris et al. [20] showed that the accuracy of [23] can be improved by downsampling the intermediate images (instead of the input image) involved in the convolutions. Chen et al. [16] proposed to accelerate [20] by performing convolutions in the higher-dimensional spatio-range space. Yang et al. [26] observed that this framework can also be used with non-Gaussian range kernels, and that convolutions can be used to improve the computational complexity. As the range dimension increases, these methods however become computationally inefficient. In particular, an exponentially large $K$ is required to achieve a decent approximation, and linear interpolation requires $2^\rho$ neighboring samples. In [27], Yang et al. extended this method to perform bilateral filtering of color images, and proposed a fast scheme for linear interpolation. However, [27] is based on uniform quantization, which is not optimal given the strong correlation between the color channels. In fact, a more efficient algorithm for color filtering was later proposed in [19], where clustering is used to find the centers. Moreover, instead of interpolation, they used a local statistics prior to improve the accuracy. However, the algorithm is used only for color bilateral filtering, and its application to other forms of high-dimensional filtering is not reported in [19].
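To make the quantization framework concrete, the following Python/NumPy sketch (our own illustration, not the authors' code; the uniform level placement, two-level linear interpolation, and replicate boundary handling are our choices) implements the essence of (3) and (4) for a grayscale image:

```python
import numpy as np

def _gaussian_blur(img, sigma):
    # Separable Gaussian convolution with replicate (edge) padding.
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    for axis in (0, 1):
        img = np.apply_along_axis(
            lambda m: np.convolve(np.pad(m, r, mode="edge"), k, "valid"),
            axis, img)
    return img

def bilateral_quantized(f, sigma_s, sigma_r, K):
    """Fast bilateral filter via K uniformly spaced intensity levels.

    For each level mu, the inner sums in (3)-(4) become Gaussian
    convolutions; the filtered outputs at the two levels bracketing
    f(i) are then linearly interpolated.
    """
    levels = np.linspace(f.min(), f.max(), K)
    num = np.empty((K,) + f.shape)
    den = np.empty((K,) + f.shape)
    for k, mu in enumerate(levels):
        w = np.exp(-(f - mu)**2 / (2 * sigma_r**2))  # range weights at level mu
        num[k] = _gaussian_blur(w * f, sigma_s)      # convolution (numerator)
        den[k] = _gaussian_blur(w, sigma_s)          # convolution (denominator)
    t = (f - levels[0]) / (levels[1] - levels[0])
    lo = np.clip(t.astype(int), 0, K - 2)            # lower bracketing level
    frac = t - lo
    ii, jj = np.indices(f.shape)
    out_lo = num[lo, ii, jj] / den[lo, ii, jj]
    out_hi = num[lo + 1, ii, jj] / den[lo + 1, ii, jj]
    return (1 - frac) * out_lo + frac * out_hi
```

The cost is $K$ blurs regardless of the window size, which is the source of the speedup; in higher range dimensions this scheme degrades exactly as described above, since the number of levels and interpolation neighbors grows exponentially.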

Fig. 2: Flowchart of our algorithm with the main modules. Also shown are the timings for the bilateral filtering of a megapixel color image; the total run-time and PSNR for the chosen number of clusters are indicated. The symbols used in the flowchart are defined in Section II (also see Algorithm 1).

Splat-blur-slice method. Another line of work is [14, 17, 18], which is based on a slightly different form of approximation. More precisely, they are based on the “splat-blur-slice” framework, which involves data partitioning (clustering or tessellation), fast convolutions, and interpolation as the core operations. These are considered to be the state-of-the-art fast algorithms for high-dimensional bilateral and nonlocal means filtering. The idea, originally proposed in [16], is based on the observation that [20] corresponds to convolutions in the joint spatio-range space. The general idea is to sample the input pixels in a different space (splatting), perform Gaussian convolutions (blurring), and resample the result back to the original pixel space (slicing). Adams et al. tessellated the domain and performed blurring using the k-d tree in [17] and the permutohedral lattice in [18]. Gastal et al. [14] divided and resampled the domain into non-linear manifolds, and performed blurring on them. This was shown to be faster than all other methods within the splat-blur-slice framework.

Kernel approximation. In an altogether different direction, it was shown in [22, 24, 25, 28] that fast algorithms for bilateral filtering can be obtained by approximating the range kernel using so-called shiftable functions, which include polynomials and trigonometric functions. The above mentioned algorithms provide excellent accelerations for grayscale images, and are generally superior to algorithms based on data approximation. Unfortunately, it is difficult to approximate a high-dimensional Gaussian using low-order shiftable functions. For example, the Fourier expansion in [25] is quite effective in one dimension (grayscale images), but its straightforward extension to even three dimensions (color images) results in exponentially many terms. This is referred to as the “curse of dimensionality”.

I-C Contributions

Existing algorithms have focused on either approximating the data [14, 17, 18, 19, 26] or the range kernel [22, 24, 25, 29, 30]. In this paper, we combine the ideas of shiftability [22, 24, 28, 31] and data approximation [23, 26] within a single framework to approximate (1) and (2). In particular, we locally approximate the range kernel in (1) and (2) using weighted and shifted copies of a Gaussian, where the weights and shifts are determined from the data. More specifically, we use clustering to determine the shifts, whereby the correlation between data channels is taken into account. Once the shifts have been fixed, we determine the weights (coefficients) using least-squares fitting, where the data is again taken into account. An important technical point is that we are required to solve a least-squares problem at each pixel, which can be expensive. We show how this can be done using just one matrix inversion. In summary, the key distinctions of our method in relation to previous approaches are as follows:

  • We use data-dependent methods to calculate both the centers $\mu_k$ and the coefficients $c_k(i)$ in (3) and (4). The latter is done in a heuristic fashion in [14, 17, 18]. We note that our approximation also reduces to the form in (3) and (4). However, the notion of shiftable approximation (in a sense which will be made precise in Section II) plays an important role. Namely, it allows us to interpret the coefficients in (3) and (4) from an approximation-theoretic point of view. This in turn forms the basis of the proposed optimization used to compute the coefficients.

  • Unlike [19, 20, 23, 26, 27], our approach is scalable in the dimensions of the input $f$ and the guide $g$.

  • An important difference with [22, 24, 25, 28] is that a different shiftable approximation is used at each pixel in our proposal, while the approximation is global in [22, 24, 25, 28]. This will be made precise in Section II.

  • Similar to [14, 17, 18], our method also involves splatting (clustering), blurring (convolutions) and slicing (weighted recombinations). However, the important difference is that we perform slicing in an optimal manner, which is done in a heuristic fashion in these methods.

We explain the proposed shiftable approximation in detail in Section II. The algorithm emerging from the approximation is conceptually simple and easy to code. The flowchart of the processing pipeline is shown in Figure 2. The computation-intensive components are “Cluster Range Space” and “Fast Convolutions”, for which highly optimized routines are available. Excluding clustering, the complexity is $O(K)$ operations per pixel, where $K$ is the number of clusters. Notice that there is no dependence on $S$. This is a big advantage compared to fast nonlocal means [13, 15], where the scaling is $O(S^2)$. The highlight of our proposal is that we are able to derive a bound on the approximation error for a particular variant of our algorithm [32]. In particular, the error is guaranteed to vanish if we use more clusters. This kind of guarantee is not offered by state-of-the-art algorithms for high-dimensional filtering [14, 17, 18], partly due to the complex nature of their formulation.

We use the proposed fast algorithm for various applications such as smoothing [4], denoising [9], low-light denoising [33], hyperspectral filtering [34], and flow-field denoising [35]. In particular, we demonstrate that our algorithm is competitive with existing fast algorithms for color bilateral and nonlocal means filtering (cf. Figures 1 and 11). We note that although our original target was high-dimensional filtering, our algorithm outperforms [26], which is considered the state-of-the-art for bilateral filtering of grayscale images (cf. Figure 5). Finally, we note that we have not used the particular form of the Gaussian kernel in the derivation of the fast algorithm. Therefore, it can also be used for non-Gaussian kernels, such as the exponential and the Lorentz kernels [23, 36].

I-D Organization

The rest of the paper is organized as follows. In Section II, we describe the proposed approximation and the resulting fast algorithm. We also derive error bounds for a particular variant of the approximation. In Section III, we apply the fast algorithm for high-dimensional bilateral and nonlocal means filtering, and compare its performance (timing and accuracy) with state-of-the-art algorithms. We conclude the paper with a summary of the results in Section IV.

II Proposed Method

Notice that, if the guide $g$ has the same intensity value at each pixel, then (1) and (2) reduce to linear convolutions. This observation is essentially at the core of the fast algorithms in [20, 23, 26]. On the other hand, the fast algorithms in [22, 25, 28] are derived using a completely different idea, where the univariate Gaussian kernel is approximated using a polynomial or a trigonometric function. These functions are shiftable in the following sense: there exist basis functions $\varphi_1, \ldots, \varphi_M$ such that, for any translation $\tau$, we can write

$$\phi(x - \tau) = \sum_{m=1}^{M} c_m(\tau)\, \varphi_m(x), \tag{5}$$

where the coefficients $c_1(\tau), \ldots, c_M(\tau)$ depend only on $\tau$. One can readily see that polynomials and trigonometric functions, e.g.,

$$(x - \tau)^N = \sum_{m=0}^{N} \binom{N}{m} (-\tau)^{N-m} x^m \quad \text{and} \quad \cos\big(\omega_0 (x - \tau)\big) = \cos(\omega_0 \tau)\cos(\omega_0 x) + \sin(\omega_0 \tau)\sin(\omega_0 x), \tag{6}$$

are shiftable. The utility of expansion (5) is that it allows us to factor the translation $\tau$ out of the kernel. In particular, by replacing the range kernel $\phi$ with its shiftable approximation, it was shown in [22] that we can compute the bilateral filter using fast convolutions. The Taylor polynomial of $\phi$ was used in [24], and the Fourier approximation of $\phi$ was used in [22, 25, 28].
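The trigonometric case of (6) is easy to check numerically; the short sketch below (ours, purely for illustration) verifies that the translation factors out, leaving two basis functions that do not depend on $\tau$:

```python
import numpy as np

# cos(w0*(x - tau)) = cos(w0*tau)*cos(w0*x) + sin(w0*tau)*sin(w0*x):
# the basis functions cos(w0*x), sin(w0*x) are independent of tau,
# and the coefficients cos(w0*tau), sin(w0*tau) are independent of x.
w0, tau = 0.7, 1.3
x = np.linspace(-4, 4, 101)
lhs = np.cos(w0 * (x - tau))
rhs = np.cos(w0 * tau) * np.cos(w0 * x) + np.sin(w0 * tau) * np.sin(w0 * x)
assert np.allclose(lhs, rhs)
```

This is exactly what makes the filtering fast: the $x$-dependent factors can be convolved once, independently of the per-pixel translation.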

II-A Shiftable Approximation

As mentioned earlier, it is rather difficult to obtain shiftable approximations of high-dimensional Gaussians that are practically useful. We can use separable extensions of (6) to generate such approximations. However, the difficulty is that the number of terms grows as $(N+1)^\rho$ in this case, where $N$ is the order in (6). We address this fundamental problem using a data-centric approach. First, we do not use a shiftable (trigonometric or polynomial) approximation of $\phi$ [22, 25]. Instead, for some fixed pixel $i$, we consider the multidimensional Gaussian $\phi(\cdot - g(i))$ appearing in (1) and (2). Similar to (5), we wish to find basis functions $\varphi_1, \ldots, \varphi_K$ and coefficients $c_1(i), \ldots, c_K(i)$ such that

$$\phi\big(x - g(i)\big) \approx \sum_{k=1}^{K} c_k(i)\, \varphi_k(x), \tag{7}$$

where the coefficients depend only on $g(i)$. The key distinction with [22, 24, 25, 28] is that the functional approximation in (7) is defined locally, namely, the approximation is different at each pixel. Moreover, (7) is neither a polynomial nor a trigonometric function. In spite of this, we continue to use the term “shiftable” to highlight the fact that the approximation is based on “shifts” of the basis functions. In this work, we propose to use translates of the original kernel as the basis functions, i.e., we set $\varphi_k = \phi(\cdot - \mu_k)$ for some shifts $\mu_1, \ldots, \mu_K$. One can in principle use different basis functions, but we will not investigate this possibility in this paper.

The important consideration is that the shifts are set globally. Since the range kernel acts on the guide $g$, we propose to cluster the range space of $g$ and use the cluster centroids for the shifts. That is, we partition the range space

$$\Theta = \big\{ g(i) : i \in \Omega \big\} \tag{8}$$

into clusters $C_1, \ldots, C_K$, and set $\mu_k$ to be the centroid of $C_k$. In summary, for each $i \in \Omega$, we require that

$$\phi\big(x - g(i)\big) \approx \sum_{k=1}^{K} c_k(i)\, \phi(x - \mu_k). \tag{9}$$

At this point, notice the formal resemblance between (5) and (9). Hence, we will refer to (9) as a shiftable approximation in the rest of the discussion, even though it is not shiftable in the sense of (5).

Note that we need a good approximation in (9) only for $x \in \Theta$. This is because the samples appearing in (1) and (2) are $g(i-j)$, where $i-j$ takes values in $\Omega$. To determine the coefficients, it is therefore natural to consider the following problem:

$$\min_{c_1(i), \ldots, c_K(i)} \; \sum_{x \in \Theta} \Big( \phi\big(x - g(i)\big) - \sum_{k=1}^{K} c_k(i)\, \phi(x - \mu_k) \Big)^2. \tag{10}$$
Fig. 3: Proposed approximation of the Gaussian range kernel in (9) for (left) and (right). The shifts are obtained by clustering a grayscale image whose histogram is shown above. As expected, the approximation is better for order compared to . In particular, the approximation is quite accurate over the entire dynamic range when . However, notice that for , the approximation is better in the interval , where the density is higher. As the approximation is data-driven, the error gets distributed in tune with the underlying histogram.
Fig. 4: Error rates for the proposed approximation (see text for the definition of ), where is the dimension of the range space of the guide image . We notice that the approximation error falls off with the increase in order . The error is averaged over the color images () from the Kodak dataset and hyperspectral images () from [37].

The difficulty is that this requires us to solve an expensive least-squares problem (the matrix to be inverted is large) for each pixel. More importantly, we have a different inversion at each pixel. This is time consuming and impractical. On the other hand, while the cluster centers most likely do not belong to $\Theta$, they are quantized representatives of the larger set $\Theta$. Therefore, instead of fitting over all of $\Theta$ as in (10), we consider the surrogate problem of fitting over the centroids $\mu_1, \ldots, \mu_K$:

$$\min_{c_1(i), \ldots, c_K(i)} \; \sum_{l=1}^{K} \Big( \phi\big(\mu_l - g(i)\big) - \sum_{k=1}^{K} c_k(i)\, \phi(\mu_l - \mu_k) \Big)^2. \tag{11}$$
In matrix notation, we can compactly write (11) as

$$\min_{\mathbf{c}(i)} \; \big\| \mathbf{A}\, \mathbf{c}(i) - \mathbf{b}(i) \big\|^2,$$

where $\mathbf{A} \in \mathbb{R}^{K \times K}$ and $\mathbf{b}(i) \in \mathbb{R}^{K}$ are given by

$$\mathbf{A}_{lk} = \phi(\mu_l - \mu_k) \quad \text{and} \quad \mathbf{b}_l(i) = \phi\big(\mu_l - g(i)\big). \tag{12}$$

The solution of (11) is $\mathbf{c}(i) = \mathbf{A}^{+} \mathbf{b}(i)$, where $\mathbf{A}^{+}$ is the pseudo-inverse of $\mathbf{A}$. In particular, we need to compute the pseudo-inverse just once; the coefficients at each pixel are then obtained using a matrix-vector multiplication. In Figure 6, we show that solving (11) is much faster than solving (10), and that the coefficients from (11) are close to those from (10).

Approximation results for univariate Gaussians are shown in Figure 3. Notice that our approximation depends on the distribution of $g$ (because of the clustering), and the coefficients are obtained by solving a least-squares problem at each pixel. Hence, in Figure 3, a particular distribution is selected and the approximations for two different orders are shown. The error rates (error vs. $K$) for multidimensional Gaussians of different dimensions are shown in Figure 4. The error for a fixed $K$ is simply the objective in (10) averaged over all pixels.

To summarize, the steps in the proposed shiftable approximation are as follows:

  • Cluster the range space $\Theta$ and use the cluster centers for the shifts $\mu_1, \ldots, \mu_K$.

  • Set the basis functions as $\varphi_k = \phi(\cdot - \mu_k)$.

  • Set up $\mathbf{A}$ using (12) and compute its pseudo-inverse $\mathbf{A}^{+}$.

  • At each pixel $i$, set up $\mathbf{b}(i)$ using (12) and compute $\mathbf{c}(i) = \mathbf{A}^{+} \mathbf{b}(i)$.

We note that one can freely choose different shifts (and basis functions) in (9) for different applications. That is, (9) offers a broad approximation framework, where one can consider other rules for fixing the parameters (shifts and coefficients).
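The coefficient-fitting steps above reduce to a single pseudo-inverse followed by one matrix product over all pixels. A NumPy sketch (our notation and function names; `phi` is assumed to be an isotropic Gaussian, and the centroids are assumed given) is:

```python
import numpy as np

def fit_coefficients(g_pixels, mu, sigma_r):
    """Shiftable-approximation coefficients c(i) = A^+ b(i) for all pixels.

    g_pixels : (N, rho) range values of the guide
    mu       : (K, rho) cluster centroids (the shifts)
    Returns  : (N, K) array whose i-th row is c(i)
    """
    phi = lambda d2: np.exp(-d2 / (2 * sigma_r**2))
    # A[l, k] = phi(||mu_l - mu_k||), built once for the whole image
    A = phi(((mu[:, None, :] - mu[None, :, :])**2).sum(-1))
    # b_l(i) = phi(||mu_l - g(i)||), one row per pixel
    B = phi(((g_pixels[:, None, :] - mu[None, :, :])**2).sum(-1))  # (N, K)
    # c(i) = A^+ b(i) for every pixel via a single matrix product
    return B @ np.linalg.pinv(A).T
```

A useful sanity check: when $g(i)$ coincides with a centroid $\mu_k$ (and $\mathbf{A}$ is invertible, which holds for a Gaussian kernel on distinct centroids), $\mathbf{b}(i)$ is the $k$-th column of $\mathbf{A}$, so the fitted coefficient vector is the $k$-th standard basis vector.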

II-B Fast Algorithm

We now develop a fast algorithm by replacing the kernel in (1) and (2) with the approximation in (9). For $k = 1, \ldots, K$, define $F_k$ and $\eta_k$ to be

$$F_k(i) = \sum_{j \in W} \omega(j)\, \phi\big(g(i-j) - \mu_k\big)\, f(i-j) \tag{13}$$

and

$$\eta_k(i) = \sum_{j \in W} \omega(j)\, \phi\big(g(i-j) - \mu_k\big). \tag{14}$$

On replacing $\phi(g(i-j) - g(i))$ in (1) and (2) with the approximation in (9), after some manipulations (see Appendix V-A), (1) and (2) are approximated as

$$h(i) \approx \frac{1}{\hat{\eta}(i)} \sum_{k=1}^{K} c_k(i)\, F_k(i), \tag{15}$$

where

$$\hat{\eta}(i) = \sum_{k=1}^{K} c_k(i)\, \eta_k(i). \tag{16}$$

The advantage of (15) and (16) is clear once we notice that (13) and (14) can be expressed as convolutions. Indeed, by defining $w_k(i)$ to be $\phi(g(i) - \mu_k)$, which is exactly $\mathbf{b}_k(i)$ in (12), we can write

$$F_k = \omega \ast (w_k \odot f) \quad \text{and} \quad \eta_k = \omega \ast w_k,$$

Input: input $f$, guide $g$, kernels $\omega$ and $\phi$;
Parameter: number of clusters $K$;
Output: approximation of (1) given by (15);
1: Cluster the range space $\Theta$ in (8) into $C_1, \ldots, C_K$ using bisecting $k$-means, and set $\mu_k$ to be the centroid of $C_k$;
2: Set up $\mathbf{A}$ in (12) using $\mu_1, \ldots, \mu_K$, and compute $\mathbf{A}^{+}$;
3: for $k = 1, \ldots, K$ do
4:     $w_k(i) \leftarrow \phi(g(i) - \mu_k)$ for every pixel $i$;
5:     $F_k \leftarrow \omega \ast (w_k \odot f)$ and $\eta_k \leftarrow \omega \ast w_k$;
6: end for
7: for every pixel $i$ do
8:     $\mathbf{c}(i) \leftarrow \mathbf{A}^{+} \mathbf{b}(i)$;
9: end for
10: Initialize $P$ and $Q$ with zeros;
11: for $k = 1, \ldots, K$ do
12:     $P \leftarrow P \oplus (c_k \odot F_k)$ and $Q \leftarrow Q \oplus (c_k \odot \eta_k)$;
13: end for
14: Output $h = P \oslash Q$;
Algorithm 1 Fast High-Dimensional Filtering.

where $\ast$ denotes linear convolution and $\odot$ denotes pointwise multiplication. In summary, using (9), we have been able to approximate the high-dimensional filtering using weighted combinations of fast convolutions. The overall process is summarized in Algorithm 1, in which $\oplus$, $\odot$, and $\oslash$ represent pointwise addition, multiplication, and division. The dominant computations in Algorithm 1 are the convolutions. Clustering and the inversion of $\mathbf{A}$ are performed just once. Notice that $\mathbf{b}(i)$, which is used for computing the coefficients, is in any case required to form the intermediate images $w_k$. The flowchart of our algorithm, along with typical timings for a megapixel image, is shown in Figure 2. Note that the main steps in Figure 2 (resp. loops in Algorithm 1) can be parallelized: clustering the range space, computing the coefficients, and performing the convolutions.
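Algorithm 1 can be sketched end-to-end in Python/NumPy as follows. This is our own illustrative implementation, not the authors' MATLAB code: the centroids are passed in as an argument (in the paper they come from bisecting k-means), the convolutions use a plain truncated Gaussian rather than Young's recursive filter, and boundary handling is replicate padding.

```python
import numpy as np

def _blur(img, sigma):
    # Separable Gaussian convolution (the spatial kernel omega),
    # with replicate padding at the boundary.
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    for axis in (0, 1):
        img = np.apply_along_axis(
            lambda m: np.convolve(np.pad(m, r, mode="edge"), k, "valid"),
            axis, img)
    return img

def fast_hd_bilateral(f, g, sigma_s, sigma_r, mu):
    """Shiftable fast high-dimensional bilateral filter, per (13)-(16).

    f  : (H, W, n)  input image
    g  : (H, W, rho) guide image
    mu : (K, rho)   cluster centroids of the range space of g
    """
    H, W, n = f.shape
    K = len(mu)
    phi = lambda d2: np.exp(-d2 / (2 * sigma_r**2))
    # A and its pseudo-inverse, computed once
    A = phi(((mu[:, None, :] - mu[None, :, :])**2).sum(-1))
    # b(i) for all pixels; note w_k(i) = phi(g(i) - mu_k) = b_k(i)
    B = phi(((g[:, :, None, :] - mu[None, None, :, :])**2).sum(-1))  # (H, W, K)
    # coefficients c(i) = A^+ b(i), one matrix product for the whole image
    c = (B.reshape(-1, K) @ np.linalg.pinv(A).T).reshape(H, W, K)
    num = np.zeros((H, W, n))
    den = np.zeros((H, W))
    for k in range(K):
        wk = B[..., k]
        den += c[..., k] * _blur(wk, sigma_s)          # omega * w_k
        for ch in range(n):
            num[..., ch] += c[..., k] * _blur(wk * f[..., ch], sigma_s)
    return num / den[..., None]
```

When every pixel of the guide coincides with some centroid, the coefficients reduce to indicator vectors and the output matches the brute-force filter, which is a convenient correctness check.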

II-C Implementation

We have used bisecting $k$-means [38] for iteratively clustering the range space. In particular, the cluster with the largest variance is divided into two parts (using $k$-means clustering) at every iteration. This ensures that we do not end up with a few large clusters at the end. We initialize the $k$-means clustering using the pair of points with the largest separation. We picked bisecting $k$-means over other clustering algorithms as its cost is linear in the number of points, and it is generally faster than other clustering algorithms [38]. Importantly, its computational overhead is negligible compared to the time required for the convolutions. Needless to say, we can use any efficient clustering method and the fast algorithm would still go through.
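A minimal sketch of the bisecting strategy just described (ours, for illustration; the variance criterion and farthest-pair seeding follow the text, while the iteration caps and data layout are our choices):

```python
import numpy as np

def _two_means(X):
    # 2-means seeded with the pair of points having the largest separation.
    d2 = ((X[:, None, :] - X[None, :, :])**2).sum(-1)
    i, j = np.unravel_index(np.argmax(d2), d2.shape)
    centers = np.stack([X[i], X[j]])
    for _ in range(100):
        labels = ((X[:, None, :] - centers[None])**2).sum(-1).argmin(1)
        new = np.stack([X[labels == k].mean(0) for k in (0, 1)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels

def bisecting_kmeans(X, K):
    """Repeatedly split the cluster with the largest total squared
    deviation until K clusters remain; return the K centroids."""
    clusters = [X]
    while len(clusters) < K:
        sse = [((C - C.mean(0))**2).sum() for C in clusters]
        C = clusters.pop(int(np.argmax(sse)))
        labels = _two_means(C)
        clusters += [C[labels == 0], C[labels == 1]]
    return [C.mean(0) for C in clusters]
```

Splitting the highest-variance cluster is what prevents a few large clusters from surviving, as noted above; the farthest-pair seeding makes each split deterministic.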

We have used the Matlab routine pinv to compute $\mathbf{A}^{+}$. As is well known, when $\omega$ is a box or Gaussian, the convolutions can be performed using $O(1)$ operations per pixel, i.e., the cost does not depend on the size of the spatial filter [39, 40]. The Gaussian convolutions are implemented using Young’s algorithm [40]. For box convolutions, we used a standard recursive procedure which requires just four operations per pixel for any box size. Since the leading computations are the convolutions, and there are $O(K)$ of them, the overall complexity is $O(K)$ per pixel; the additional cost is for computing (12). Importantly, we obtain a significant speedup over the brute-force computation of (1) and (2). The storage complexity of Algorithm 1 is clearly linear in $K$, as well as in the image size. A MATLAB implementation of our algorithm is publicly available.
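The recursive box convolution mentioned above is easy to illustrate in one dimension: with a running (cumulative) sum, each output sample costs a fixed number of operations regardless of the box size. The sketch below is ours (replicate padding is our choice):

```python
import numpy as np

def box_filter_1d(x, r):
    """Box filter of radius r via a running sum: O(1) per sample,
    independent of the box size 2r + 1."""
    pad = np.pad(x, r, mode="edge")
    s = np.concatenate([[0.0], np.cumsum(pad)])
    # sum over the window [i - r, i + r] is s[i + 2r + 1] - s[i]
    return (s[2 * r + 1:] - s[:-(2 * r + 1)]) / (2 * r + 1)
```

A 2D box convolution is obtained by applying this along the rows and then the columns; the same constant-per-pixel behavior is what makes the overall filtering cost independent of the spatial filter size.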

II-D Approximation Accuracy

We note that the algorithm in [32] is a variant of our method. Indeed, if we set the coefficients in (9) to be

$$c_k(i) = \begin{cases} 1, & g(i) \in C_k, \\ 0, & \text{otherwise}, \end{cases} \tag{17}$$

then we obtain the fast algorithm in [32]. In other words, the filtering is performed on a cluster-by-cluster basis in this case. Correspondingly, (15) reduces to

$$h(i) \approx \frac{F_k(i)}{\eta_k(i)}, \tag{18}$$

assuming that $g(i)$ is in cluster $C_k$.

In (15) and (16), we weight the results from the clusters, where the weights are obtained using the optimization in (11). We will demonstrate in Section III that the weighting helps in reducing the approximation error. Unfortunately, it also makes the analysis of the algorithm difficult. Intuitively though, one would expect the error from (15) to be less than that from (18). For the latter approximation, we have the following result [32].

Theorem 1 (Error bound)

Suppose that $\omega$ and $\phi$ are non-negative, and that the latter is Lipschitz continuous, i.e., for some $L > 0$, $|\phi(x) - \phi(y)| \leqslant L \|x - y\|$ for any $x$ and $y$. Then, for some constant $C$,

$$\max_{i \in \Omega} \big\| h(i) - \hat{h}(i) \big\| \leqslant C\, \varepsilon_K, \tag{19}$$

where $\hat{h}$ is given by (18), and $\varepsilon_K$ is the clustering error:

$$\varepsilon_K = \max_{1 \leqslant k \leqslant K} \; \max_{x \in C_k} \; \|x - \mu_k\|. \tag{20}$$
We note that the assumptions in Theorem 1 are valid when $\omega$ is a box or Gaussian, and $\phi$ is a multidimensional Gaussian. For completeness, we provide the derivation of Theorem 1 in Appendix V-B. For several clustering methods, the clustering error vanishes as $K \to \infty$ [38]. For any of these methods, we can therefore approximate (1) with arbitrary accuracy provided $K$ is sufficiently large. We note that such a guarantee is not available for existing fast approximation algorithms for high-dimensional filtering [14, 17, 18].
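The limiting case of Theorem 1 is easy to verify numerically: when every pixel value is itself a centroid, the clustering error (20) is zero and the cluster-by-cluster variant (18) must agree with the brute-force filter. The sketch below (our code; 1D grayscale range, hard nearest-centroid assignment per (17)) checks exactly this:

```python
import numpy as np

def _blur(img, sigma):
    # Separable Gaussian convolution with replicate padding.
    r = int(3 * sigma)
    xs = np.arange(-r, r + 1)
    k = np.exp(-xs**2 / (2 * sigma**2))
    k /= k.sum()
    for axis in (0, 1):
        img = np.apply_along_axis(
            lambda m: np.convolve(np.pad(m, r, mode="edge"), k, "valid"),
            axis, img)
    return img

def cluster_variant(f, sigma_s, sigma_r, mus):
    """Variant (18): hard-assign each pixel to its nearest centroid
    and filter cluster-by-cluster (no least-squares weighting)."""
    nearest = np.abs(f[..., None] - mus[None, None, :]).argmin(-1)
    out = np.zeros_like(f)
    for k, mu in enumerate(mus):
        w = np.exp(-(f - mu)**2 / (2 * sigma_r**2))
        hk = _blur(w * f, sigma_s) / _blur(w, sigma_s)
        out[nearest == k] = hk[nearest == k]
    return out

def clustering_error(f, mus):
    # eps_K in (20): largest distance from a range value to its centroid
    return np.abs(f[..., None] - mus[None, None, :]).min(-1).max()
```

With generic data the variant is only approximate, and (19) says the error shrinks with the clustering error, which is the behavior exploited by using more clusters.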

III Numerical Results

We now report some numerical results for high-dimensional filtering. We have used the Matlab implementation of Algorithm 1. For fair timing comparisons with Adaptive Manifolds (AM) [14] and Global Color Sparseness (GCS) [19], we have used the Matlab code provided by the authors. The timing analysis was performed on an Intel 4-core 3.4 GHz machine with 32 GB memory. The grayscale and color images used for our experiments were obtained from standard databases. The infrared and hyperspectral images are the ones used in [14] and [37, 41]. To compare the filtering accuracy with existing methods, we have matched the timings by adjusting the respective approximation order. The objective of this section is to demonstrate that, in spite of its simple formulation, our algorithm is competitive with existing fast approximations of bilateral and nonlocal means filtering.

We have used an isotropic Gaussian range kernel for bilateral and nonlocal means filtering:

$$\phi(x) = \exp\!\left( -\frac{\|x\|^2}{2\sigma_r^2} \right),$$

where the standard deviation $\sigma_r$ is used to control the effective width (influence) of the kernel. We recall from (1) and (2) that the argument of $\phi$ is the difference $g(i-j) - g(i)$. In one of the experiments, $\phi$ is an anisotropic Gaussian; we have explicitly mentioned this at the right place. The spatial kernel for bilateral filtering is also Gaussian:

$$\omega(j) = \exp\!\left( -\frac{\|j\|^2}{2\sigma_s^2} \right),$$

where $\sigma_s$ is again the standard deviation. For nonlocal means (NLM), $\omega$ is a box filter, namely, no spatial weighting is used within the window. The filter width for bilateral filtering is set in proportion to $\sigma_s$, while that for NLM is explicitly mentioned.

Following [24, 26], the error between the outputs of the brute-force implementation $B$ and the fast algorithm $\hat{B}$ is measured using the peak signal-to-noise ratio (PSNR). This is given by $\mathrm{PSNR} = 10 \log_{10}\!\big(255^2 / \mathrm{MSE}\big)$, where

$$\mathrm{MSE} = \frac{1}{N} \sum_{i \in \Omega} \big\| B(i) - \hat{B}(i) \big\|^2, \tag{21}$$

and $N$ is the number of pixels. Note that PSNR is also used to measure denoising performance; in this case, $B$ and $\hat{B}$ in (21) are the denoised and clean images. It should be evident from the context what is being measured using PSNR. We also use SSIM [42] for measuring visual quality.
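The PSNR metric in (21) is one line of code; the sketch below (ours) makes the peak value and averaging explicit:

```python
import math
import numpy as np

def psnr(b, b_hat, peak=255.0):
    """PSNR = 10 log10(peak^2 / MSE), with the MSE in (21)
    averaged over all N pixels (and channels)."""
    mse = np.mean((np.asarray(b, float) - np.asarray(b_hat, float))**2)
    return 10.0 * math.log10(peak**2 / mse)
```

For instance, a constant error of 5 gray levels gives $\mathrm{MSE} = 25$ and hence $\mathrm{PSNR} = 20 \log_{10}(255/5) \approx 34.2$ dB.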

III-A Grayscale Bilateral Filtering

As mentioned earlier, although the proposed method is targeted at high-dimensional filtering, we can outperform [26], which is considered the state-of-the-art for bilateral filtering of grayscale images. We demonstrate this in Table I and Figure 5 using a couple of examples. In particular, notice in Figure 5 that artifacts are visible in Yang’s result (when $\sigma_r$ is small). Our PSNR is significantly higher, and our output does not show any artifacts. A detailed PSNR comparison is provided in Table I. We notice that, for the same timing, our PSNR is consistently better than [26]. Note that we have used the same number of clusters (resp. quantization bins) for our method (resp. [26]).

(a) Input ().
(b) Brute-force filtering.
(c) Proposed (43.5 dB).
(d) Yang ( dB).
Fig. 5: Visual comparison of bilateral filtering for the House image. For a fair comparison, we have used four clusters for the proposed method and four bins for Yang’s method [26]. Notice that our result is better than Yang’s, both visually (compare boxed areas) and in terms of PSNR. This is because we use non-uniform quantization (clustering) and a data-driven approximation (see Figure 3).
TABLE I: Comparison of approximation accuracy (PSNR) between Yang [26] and the proposed method for bilateral filtering of Barbara, at matched average timings.

For completeness, we have reported the timings for different $\sigma_s$ in Table II, where we have used Young’s fast algorithm for performing the Gaussian convolutions [40]. As expected, the timing essentially does not change with $\sigma_s$.

TABLE II: Timings (ms) for bilateral filtering of Barbara for varying $\sigma_s$.
(a) Input ().
(b) Brute-force ( ms).
(c) Using (10) ( sec, dB).
(d) Using (11) (27 ms, 32.6 dB).
Fig. 6: Bilateral filtering of a grayscale image. Parameters: and . The coefficients are obtained from (10) and (11) in (c) and (d).

In Figure 6, we have compared the bilateral filtering results obtained using (10) and (11). Since (11) is an approximation of (10), we see a drop in PSNR using (11), but importantly there is a significant speedup. In fact, the time required using (10) is more than that of the brute-force implementation.

III-B Color Bilateral Filtering

We next perform fast bilateral filtering of color images, for which the state-of-the-art methods are AM [14] and GCS [19]. We have also compared with our previous work [32], which we simply refer to as “Clustering”. In Figure 7, we have studied the effect of the number of clusters $K$ on the filtering accuracy, for our method, GCS, AM and [32]. While GCS performs better for small $K$ (coarse approximation), our method outperforms GCS for larger $K$. Note that the number of manifolds in AM can only take dyadic (discrete) values [14].

Fig. 7: Scaling of filtering accuracy (average ) with number of clusters (manifolds for AM), for bilateral filtering of color images from the Kodak dataset. The parameters are and .
TABLE: Filtering accuracy (PSNR) for color images at different resolutions and number of clusters $K$.
Fig. 8: Timing (seconds) vs. resolution (MP) for color filtering.

In the table above, we report the filtering accuracy for full HD and ultra HD resolutions by varying the number of clusters $K$. For a given resolution and $K$, we have averaged the PSNR over six high-resolution images of that resolution. Notice that, irrespective of the image resolution, high PSNR values are attained with just a few clusters. In Figure 8, the timings for filtering color images are plotted for varying image sizes. This plot verifies the linear dependence of the timing on the image size, irrespective of the number of clusters used.

Fig. 9: Approximation accuracy () for bilateral filtering of color Barbara (), at different values averaged over . The average timings are: Proposed: ms, Global Color Sparseness: ms, Adaptive Manifolds: ms and Clustering [32]: ms.

In Figure 9, we have compared the PSNR and timings for bilateral filtering of a color image, at different $\sigma_s$ and $\sigma_r$ values. We have used the same number of clusters for the proposed method, GCS, and [32], and the default settings for the number of manifolds in AM. Notice that our PSNR is consistently higher, while the timings are roughly the same. In particular, notice the PSNR improvement that we obtain over [32] by optimizing the coefficients, i.e., by using (15) in place of (18). Interestingly, [32] can outperform GCS and AM at large parameter values.

A visual comparison for color bilateral filtering was already shown in Figure 1, along with the error images, timings, and accuracy values. In particular, notice that the error of our method is smaller than that of AM and GCS. This is interesting, as our method is conceptually simpler than these algorithms: AM has a complex formulation, and GCS is based on two-pass filtering.

In Figure 10, a visual result is provided to highlight how the approximation accuracy scales with . Notice that the accuracy improves significantly as increases from to . In fact, the approximation already resembles the result of brute-force filtering when is .

(a) ( ms, dB).
(b) ( ms, dB).
(c) ( ms, dB).
(d) ( ms, dB).
(e) Brute-force.
Fig. 10: Bilateral filtering of Peppers for different values of ( and ). Notice how the accuracy improves with .
(a) Clean/Noisy ( dB, ).
(b) [21] ( min, dB, ).
(c) Ours (1.7 sec, 31.2 dB, 0.92).
(d) AM ( sec, dB, ).
(e) [12] (sec, dB, ).
(f) Clean/Noisy ( dB, ).
(g) [21] ( min, dB, ).
(h) Ours (2.7 sec, 26.8 dB, 0.82).
(i) AM ( sec, dB, ).
(j) [12] ( sec, dB, ).
Fig. 11: Color image denoising (Gaussian noise) using PCA-NLM [21], where the patch and search sizes are and . For PCA-NLM, AM, and our method, PCA was used to reduce the range dimension from to . We used clusters for our method. The number of manifolds was automatically set to in the AM code [14]. The run-time, PSNR, and SSIM for AM with manifolds are sec, dB, and (top image), and sec, dB, and (bottom image). For comparison, we have also shown the result from a state-of-the-art Gaussian denoiser [12].

III-C Color Nonlocal Means

In NLM, is the noisy image (corrupted with additive Gaussian noise) and is a square patch of pixels around . In particular, if the patch is , then the dimension of is for a color image. A popular trick to accelerate NLM (called PCA-NLM) is to project the patch onto a lower-dimensional space using PCA [21]. We note that PCA was used for reducing the patch dimension (as explained earlier) for both AM and our algorithm.
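As an illustration of this PCA step, the sketch below flattens the color patches of an image and projects them onto their top principal components; the function name, patch radius, and output dimension are illustrative choices, not the paper's exact settings.

```python
import numpy as np

def reduce_patch_dim(img, patch_rad=1, d=6):
    """Project flattened color patches onto their top-d principal components.

    img: (H, W, 3) float array; returns an (H, W, d) feature image whose
    channels are the PCA coefficients of each (2r+1)x(2r+1)x3 patch.
    """
    H, W, C = img.shape
    pad = np.pad(img, ((patch_rad, patch_rad), (patch_rad, patch_rad), (0, 0)),
                 mode='reflect')
    k = 2 * patch_rad + 1
    # one column block per in-patch offset: rows index pixels, columns index
    # the flattened patch dimension (C * k * k in total)
    cols = [pad[dy:dy + H, dx:dx + W, :].reshape(H * W, C)
            for dy in range(k) for dx in range(k)]
    X = np.concatenate(cols, axis=1)
    Xc = X - X.mean(axis=0)                   # center before PCA
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return (Xc @ Vt[:d].T).reshape(H, W, d)   # top-d PCA coefficients
```

The reduced features can then be used as the guide for the fast filter in place of the full patch vectors.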

A visual comparison is provided in Figure 11 for low and high noise, where the superior performance of our method over AM is evident. We have also compared with a state-of-the-art Gaussian denoiser (CPU implementation) based on deep neural nets [12]. It is not surprising that the result from [12] is better both in terms of PSNR and visual quality. However, our method is somewhat faster. Figure 11 also suggests that our result is visually closer to PCA-NLM at both low and high noise levels, albeit with a significant speedup ( min to sec).

In Figure 12, we have reported Gaussian denoising results on the Kodak dataset for different noise levels (). While the proposed method and AM perform similarly for small , the former is more robust when is large. In fact, the denoising performance of AM degrades rapidly with the increase in . Another important point is that our result is close to PCA-NLM in terms of PSNR and SSIM for all noise levels.

Fig. 12: Comparison of the denoising performances of PCA-NLM [21], Adaptive Manifolds, and the proposed method. The noise level is denoted by . The PCA dimension was set to for all methods. The PSNR and SSIM [42] values are averaged over the images from the Kodak dataset. The parameters used are , and .

III-D Hyperspectral Denoising

We next perform denoising of hyperspectral images using the bilateral filter. For such images, the range dimension (i.e., the number of spectral bands) is high. In Figure 13, denoising results are shown primarily to highlight the speedup of our approximation over the brute-force implementation for a -band image. Notice the dramatic reduction in timing compared to the brute-force implementation. Moreover, the denoising quality of our method is in fact quite good (compare the text in the boxed regions).

(a) Input [37].
(b) 440nm band.
(c) Brute-force ( min).
(d) Proposed (15 sec).
Fig. 13: Denoising of a hyperspectral image using bilateral filtering ( and ). The image in (b) shows one of the noisy bands; the same band after filtering is shown in (c) and (d). We have shown one of the bands just for visualization; the filtering was performed on the entire hyperspectral image and not on a band-by-band basis.
(a) Infrared input.
(b) Noisy low-light input.
(c) Without infrared.
(d) With infrared data.
(e) [43].
Fig. 14: Nonlocal means denoising of a low-light image using [43] and the proposed method, with and without infrared data [33]. Notice that we obtain better denoising by taking the infrared information into account (see zoomed sections).
(a) Clean/Noisy().
(b) Proposed.
(c) [44].
(d) [45].
Fig. 15: Hyperspectral denoising of a natural image corrupted with Gaussian noise. Image size: ( bands). For our method, , and . We used iterations for [44] and iteration for [45]. The (timing, MPSNR, MSSIM) for (b), (c), and (d) are (36 sec, 31.29 dB, 0.87), ( sec, dB, ) and ( sec, dB, ).

Moreover, we also compare with recent optimization-based denoising methods [44, 45], with their parameters tuned accordingly. Visual and quantitative comparisons are shown in Figure 15 (Pavia dataset). For the quantitative comparison, we have used MPSNR and MSSIM, which are simply the PSNR and SSIM values averaged over the spectral bands. Notice that the restoration obtained using our method is better than that of [44, 45] (source code made public by the authors), which is supported by the metrics shown in the figure. The same is visually evident from a comparison of the boxed regions in Figure 15. In particular, the color is not restored properly by [44], and grains can be seen in the output of [45]. As expected, we are much faster than these iterative methods, since we perform the filtering in one shot.
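For reference, MPSNR is simply the per-band PSNR averaged over all spectral bands; a minimal sketch (the helper name and peak value are our choices) is:

```python
import numpy as np

def mpsnr(clean, denoised, peak=1.0):
    """Mean PSNR: the PSNR of each spectral band, averaged over all bands.

    clean, denoised: (H, W, B) hyperspectral images with values in [0, peak].
    """
    psnrs = []
    for b in range(clean.shape[2]):
        mse = np.mean((clean[..., b] - denoised[..., b]) ** 2)
        psnrs.append(10.0 * np.log10(peak ** 2 / mse))
    return float(np.mean(psnrs))
```

MSSIM is computed analogously, averaging the per-band SSIM [42].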

III-E Low-light Denoising

We next use our fast algorithm for NLM denoising of low-light images using additional infrared data [33]. In Figure 14, we show a visual result comparing our method, both with and without infrared data, against [43] (an optimization-based method). We set the patch and search window sizes as and . We used an anisotropic Gaussian kernel, with for the low-light data and for the infrared data. The dimension was reduced from to using PCA, and the infrared data was added as the seventh dimension. We used clusters for our method. Notice in Figure 14 that some salient features are lost if we use the low-light input alone as the guide. If we instead add the infrared data to the guide, the restored image is sharper and the features are more apparent compared to [43] and to NLM without infrared data.
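An anisotropic range kernel of this kind can be folded into the guide itself: dividing each range dimension by its own σ turns the anisotropic Gaussian into an isotropic one (σ = 1) on the rescaled features, so the fast isotropic machinery applies unchanged. A sketch of this idea (the helper name and array shapes are our assumptions):

```python
import numpy as np

def whiten_guide(lowlight_feats, infrared, sigmas_ll, sigma_ir):
    """Fold an anisotropic range kernel into the guide by per-dimension scaling.

    exp(-sum_i d_i^2 / (2 s_i^2)) on the raw features equals a unit-sigma
    isotropic Gaussian on features divided by their sigmas.
    lowlight_feats: (H, W, d) PCA features; infrared: (H, W) extra channel.
    Returns an (H, W, d+1) guide for an isotropic high-dimensional filter.
    """
    return np.concatenate([lowlight_feats / sigmas_ll,
                           infrared[..., None] / sigma_ir], axis=-1)
```

This keeps the filtering code oblivious to the anisotropy; only the guide construction changes.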

III-F Flow-field Denoising

Finally, we apply the proposed method to flow-field denoising. In particular, NLM can remove noise while preserving sharp directional changes in the flow much better than the Gaussian-filtering baseline. An instance of flow-field denoising [35] is shown in Figure 16 and compared with Gaussian smoothing.
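To make the comparison concrete, the sketch below is a brute-force NLM for a 2-D vector field; it is for illustration only (the paper's experiments use the fast clustering-based approximation, and the window sizes and smoothing parameter here are our choices). Vectors across a directional boundary produce dissimilar patches, hence near-zero weights, which is why the boundary survives.

```python
import numpy as np

def nlm_vector_field(V, search_rad=3, patch_rad=1, h=0.3):
    """Brute-force NLM on a 2-D vector field.

    V: (H, W, 2) flow. Each pixel is replaced by a patch-similarity-weighted
    average of vectors in its search window, so sharp directional boundaries
    are preserved better than with linear Gaussian smoothing.
    """
    H, W, _ = V.shape
    pr = patch_rad
    pad = np.pad(V, ((pr, pr), (pr, pr), (0, 0)), mode='edge')
    out = np.zeros_like(V)
    for i in range(H):
        for j in range(W):
            p = pad[i:i + 2 * pr + 1, j:j + 2 * pr + 1]
            num, den = np.zeros(2), 0.0
            for di in range(-search_rad, search_rad + 1):
                for dj in range(-search_rad, search_rad + 1):
                    ii = min(max(i + di, 0), H - 1)
                    jj = min(max(j + dj, 0), W - 1)
                    q = pad[ii:ii + 2 * pr + 1, jj:jj + 2 * pr + 1]
                    w = np.exp(-((p - q) ** 2).sum() / (2 * h ** 2))
                    num += w * V[ii, jj]
                    den += w
            out[i, j] = num / den
    return out
```

On a field with a sharp left/right directional split, the output stays close to the clean vectors on either side of the boundary, whereas an isotropic Gaussian would blend the two directions.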

(a) Clean flow-field.
(b) Noisy flow-field ().
(c) NLM output.
(d) Gaussian smoothing.
Fig. 16: Denoising of a synthetic flow-field using the proposed NLM approximation and Gaussian smoothing. The synthetic flow-field used is and . The patch and search window sizes for NLM are and , and . We have used PCA to reduce the dimension of the patch from to . Notice that isotropic Gaussian smoothing fails to preserve sharp changes in the flow direction.

IV Conclusion

We proposed a framework for fast high-dimensional filtering by approximating both the data and the kernel. In particular, we derived an algorithm that fuses the scalability of the former with the approximation capability of the latter. At the core of our algorithm is the concept of shiftable approximation, which allows us to interpret the coefficients in the framework of Durand et al. [23] from an approximation-theoretic point of view. We proposed an efficient method for determining the shifts (centers) and the coefficients using -means clustering (inspired by [19]) and data-driven optimization. Though the proposed algorithm is conceptually simple and easy to implement (about lines of code), it was shown to yield promising results for diverse applications. In particular, our algorithm was shown to be competitive with state-of-the-art methods for fast bilateral and nonlocal means filtering of color images.


The authors thank the editor and the anonymous reviewers for their comments and suggestions. We thank the authors of [12, 14, 19, 44] for distributing their code, and Jingxiang Yang for helping us debug the code of [45].

V Appendix

V-A Derivation of (15) and (16)

We recall that the basis of the approximation is the following: For , we replace in (1) and (2) with . That is, we approximate the numerator of (1) with


Exchanging the sums, we can write (22) as

where is defined in (13). Similarly, we can write (23) as , where is defined in (14). This completes the derivation of (15) and (16).

V-B Proof of Theorem 1

For notational convenience, let , where

and is as defined in (2). For some fixed , assume that where . Then, following (18),


Note that we can write

By the triangle inequality, we can bound using


where we have used the fact that . This follows from (13), (14), and (24). Now

Note that we can assume that each is in . This is simply because the weights appear in both the numerator and denominator in (1) and (24). Moreover, since the range of is , using the triangle inequality again, we obtain




Now, since and are non-negative, . Using the fact that along with (25), (26) and (27), we obtain


where . On squaring (28) and summing over all pixels, we arrive at (19).


  • [1] P. Perona and J. Malik, “Scale-space and edge detection using anisotropic diffusion,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 7, pp. 629–639, 1990.
  • [2] V. Aurich and J. Weule, “Non-linear Gaussian filters performing edge preserving diffusion,” Mustererkennung, pp. 538–545, 1995.
  • [3] S. M. Smith and J. M. Brady, “SUSAN: A new approach to low level image processing,” International Journal of Computer Vision, vol. 23, no. 1, pp. 45–78, 1997.
  • [4] C. Tomasi and R. Manduchi, “Bilateral filtering for gray and color images,” Proc. IEEE International Conference on Computer Vision, pp. 839–846, 1998.
  • [5] D. Comaniciu and P. Meer, “Mean shift analysis and applications,” Proc. IEEE International Conference on Computer Vision, vol. 2, pp. 1197–1203, 1999.
  • [6] Z. Farbman, R. Fattal, D. Lischinski, and R. Szeliski, “Edge-preserving decompositions for multi-scale tone and detail manipulation,” ACM Transactions on Graphics, vol. 27, no. 3, p. 67, 2008.
  • [7] E. S. Gastal and M. M. Oliveira, “Domain transform for edge-aware image and video processing,” ACM Transactions on Graphics, vol. 30, no. 4, pp. 69:1–69:12, 2011.
  • [8] K. He, J. Sun, and X. Tang, “Guided image filtering,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 6, pp. 1397–1409, 2013.
  • [9] A. Buades, B. Coll, and J.-M. Morel, “A non-local algorithm for image denoising,” Proc. IEEE Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 60–65, 2005.
  • [10] S. Paris, P. Kornprobst, J. Tumblin, and F. Durand, “Bilateral filtering: Theory and Applications,” Foundations and Trends® in Computer Graphics and Vision, vol. 4, no. 1, pp. 1–73, 2009.
  • [11] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising with block-matching and 3D filtering,” Proceedings of SPIE, vol. 6064, p. 606414, 2006.
  • [12] K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising,” IEEE Transactions on Image Processing, vol. 26, no. 7, pp. 3142–3155, 2017.
  • [13] J. Darbon, A. Cunha, T. F. Chan, S. Osher, and G. J. Jensen, “Fast nonlocal filtering applied to electron cryomicroscopy,” Proc. IEEE International Symposium on Biomedical Imaging: From Nano to Macro, pp. 1331–1334, 2008.
  • [14] E. S. Gastal and M. M. Oliveira, “Adaptive manifolds for real-time high-dimensional filtering,” ACM Transactions on Graphics, vol. 31, no. 4, pp. 33:1–33:13, 2012.
  • [15] J. Wang, Y. Guo, Y. Ying, Y. Liu, and Q. Peng, “Fast non-local algorithm for image denoising,” Proc. IEEE International Conference on Image Processing, pp. 1429–1432, 2006.
  • [16] J. Chen, S. Paris, and F. Durand, “Real-time edge-aware image processing with the bilateral grid,” ACM Transactions on Graphics, vol. 26, no. 3, p. 103, 2007.
  • [17] A. Adams, J. Baek, and M. A. Davis, “Fast high-dimensional filtering using the permutohedral lattice,” Computer Graphics Forum, vol. 29, no. 2, pp. 753–762, 2010.
  • [18] A. Adams, N. Gelfand, J. Dolson, and M. Levoy, “Gaussian kd-trees for fast high-dimensional filtering,” ACM Transactions on Graphics, vol. 28, no. 3, p. 21, 2009.
  • [19] M. G. Mozerov and J. Van De Weijer, “Global color sparseness and a local statistics prior for fast bilateral filtering,” IEEE Transactions on Image Processing, vol. 24, no. 12, pp. 5842–5853, 2015.
  • [20] S. Paris and F. Durand, “A fast approximation of the bilateral filter using a signal processing approach,” Proc. European Conference on Computer Vision, pp. 568–580, 2006.
  • [21] T. Tasdizen, “Principal neighborhood dictionaries for nonlocal means image denoising,” IEEE Transactions on Image Processing, vol. 18, no. 12, pp. 2649–2660, 2009.
  • [22] K. N. Chaudhury, D. Sage, and M. Unser, “Fast O(1) bilateral filtering using trigonometric range kernels,” IEEE Transactions on Image Processing, vol. 20, no. 12, pp. 3376–3382, 2011.
  • [23] F. Durand and J. Dorsey, “Fast bilateral filtering for the display of high-dynamic-range images,” ACM Transactions on Graphics, vol. 21, no. 3, pp. 257–266, 2002.
  • [24] F. Porikli, “Constant time O(1) bilateral filtering,” Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–8, 2008.
  • [25] K. Sugimoto and S. I. Kamata, “Compressive bilateral filtering,” IEEE Transactions on Image Processing, vol. 24, no. 11, pp. 3357–3369, 2015.
  • [26] Q. Yang, K. H. Tan, and N. Ahuja, “Real-time O(1) bilateral filtering,” Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 557–564, 2009.
  • [27] Q. Yang, N. Ahuja, and K.-H. Tan, “Constant time median and bilateral filtering,” International Journal of Computer Vision, vol. 112, no. 3, pp. 307–318, 2015.
  • [28] K. N. Chaudhury and S. D. Dabhade, “Fast and provably accurate bilateral filtering,” IEEE Transactions on Image Processing, vol. 25, no. 6, pp. 2519–2528, 2016.
  • [29] S. Ghosh and K. N. Chaudhury, “Fast bilateral filtering of vector-valued images,” Proc. IEEE International Conference on Image Processing, pp. 1823–1827, 2016.
  • [30] ——, “Color bilateral filtering using stratified Fourier sampling,” Proc. IEEE Global Conference on Signal and Information Processing, 2018.
  • [31] K. N. Chaudhury, “Acceleration of the shiftable O(1) algorithm for bilateral filtering and nonlocal means,” IEEE Transactions on Image Processing, vol. 22, no. 4, pp. 1291–1300, 2013.
  • [32] P. Nair and K. N. Chaudhury, “Fast high-dimensional filtering using clustering,” Proc. IEEE International Conference on Image Processing, pp. 240–244, 2017.
  • [33] S. Zhuo, X. Zhang, X. Miao, and T. Sim, “Enhancing low light images using near infrared flash images,” Proc. IEEE International Conference on Image Processing, pp. 2537–2540, 2010.
  • [34] Q. Yuan, L. Zhang, and H. Shen, “Hyperspectral image denoising employing a spectral–spatial adaptive total variation model,” IEEE Transactions on Geoscience and Remote Sensing, vol. 50, no. 10, pp. 3660–3677, 2012.
  • [35] M. A. Westenberg and T. Ertl, “Denoising 2-D vector fields by vector wavelet thresholding,” Journal of WSCG, pp. 33–40, 2005.
  • [36] Q. Yang, “Hardware-efficient bilateral filtering for stereo matching,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 5, pp. 1026–1032, 2014.
  • [37] D. H. Foster, K. Amano, S. M. C. Nascimento, and M. J. Foster, “Frequency of metamerism in natural scenes,” Journal of the Optical Society of America A, vol. 23, pp. 2359–2372, 2006.
  • [38] P.-N. Tan, M. Steinbach, and V. Kumar, Introduction to Data Mining.   Addison-Wesley Longman Publishing Co., Inc., 2005.
  • [39] R. Deriche, “Recursively implementating the Gaussian and its derivatives,” INRIA, Research Report RR-1893, 1993.
  • [40] I. T. Young and L. J. Van Vliet, “Recursive implementation of the Gaussian filter,” Signal Processing, vol. 44, no. 2, pp. 139–151, 1995.
  • [41] “Hyperspectral image database.” [Online]. Available:
  • [42] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.
  • [43] X. Yang and X. Lin, “Brightening and denoising lowlight images,” Proc. International Conference on Information, Communications and Signal Processing, pp. 1–5, 2015.
  • [44] F. Fan, Y. Ma, C. Li, X. Mei, J. Huang, and J. Ma, “Hyperspectral image denoising with superpixel segmentation and low-rank representation,” Information Sciences, vol. 397, pp. 48–68, 2017.
  • [45] Y.-Q. Zhao and J. Yang, “Hyperspectral image denoising via sparse representation and low-rank constraint,” IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 1, pp. 296–308, 2015.