Smoothing images while preserving structures (edges, corners, lines, etc.) is a fundamental task in image processing. A classic example in this regard is the diffusion framework of Perona and Malik [1]. In the last few decades, several filtering-based approaches have been proposed for this task. Prominent examples include bilateral filtering [2, 3, 4], mean shift filtering [5], weighted least squares [6], the domain transform [7], guided filtering [8], and nonlocal means [9]. The brute-force implementation of most of these filters is computationally prohibitive and cannot be used for real-time applications. To address this problem, researchers have come up with approximation algorithms that significantly accelerate the filtering without compromising quality. Unfortunately, for bilateral and nonlocal means filtering, most of these algorithms can be used only for grayscale images. It is difficult to extend them even to color filtering while preserving their efficiency. The situation is more challenging for multispectral and hyperspectral images, where the range dimension is much larger. Of course, any algorithm for grayscale filtering can be applied channel-by-channel to high-dimensional images. However, by working in the combined intensity space, we can exploit the strong correlation between the channels.
I-A High-Dimensional Filtering
The focus of the present work is on two popular smoothers, the bilateral and nonlocal means filters, and their application to high-dimensional data. The former is used for edge-preserving smoothing in a variety of applications, while the latter is primarily used for denoising. Though nonlocal means has limited denoising capability compared to state-of-the-art denoisers [11, 12], it continues to be of interest due to its simplicity and the availability of low-complexity algorithms [13, 14, 15]. The connection between these filters is that both can be interpreted as a multidimensional Gaussian filter operating in the joint spatio-range space. The term high-dimensional filtering is used when the dimension of the spatio-range space is large [14, 17, 18]. In this paper, we will use this term when the range dimension is greater than one.
A unified formulation of high-dimensional bilateral and nonlocal means filtering is as follows. Suppose that the input data is $f : \Omega \to \mathcal{R}$, where $\Omega \subset \mathbb{Z}^2$ is the spatial domain and $\mathcal{R} \subseteq \mathbb{R}^n$ is the range space of $f$. Let $g$ be the guide image, which is used to control the filtering. The output is given by
\[ h(i) = \frac{1}{W(i)} \sum_{j \in \Omega_S} \omega(j)\, \phi\big(g(i-j) - g(i)\big)\, f(i-j), \tag{1} \]
\[ W(i) = \sum_{j \in \Omega_S} \omega(j)\, \phi\big(g(i-j) - g(i)\big). \tag{2} \]
The aggregations in (1) and (2) are performed over a window around the pixel of interest, i.e., over $\Omega_S$, a square window of length $S$ centered at the pixel, where $S$ is the window length. We call $\omega$ the spatial kernel and $\phi$ the range kernel, and we denote the dimension of the range space of the guide $g$ by $\rho$. The spatial kernel is used to measure proximity in the spatial domain (as in classical linear filtering). On the other hand, the range kernel is used to measure proximity in the range space of $g$.
For (joint) bilateral filtering, $\omega$ and $\phi$ are generally Gaussian. While $f$ and $g$ are identical for the bilateral filter ($g = f$), they are different for the joint bilateral filter. In the original proposal for nonlocal means, the guide at pixel $i$ collects the spatial neighbors of $i$ (extracted from a patch around $i$), and hence the range dimension is the number of channels times the size of the image patch; $\phi$ is a multidimensional Gaussian, and $\omega$ is a box filter. Later, it was shown that by reducing the dimension of each patch (e.g., by applying PCA to the collection of patches), we can improve both the speed and the denoising performance. Similar to [14, 17, 18], we have also considered PCA-based nonlocal means in this work. Needless to say, the proposed algorithm can also work with full patches.
The brute-force computation of (1) and (2) requires $O(S^2)$ operations per pixel. In particular, the complexity scales quadratically with the window length $S$, which can be large for some smoothing applications. In the last decade, several fast algorithms have been proposed for bilateral filtering of grayscale images [20, 22, 23, 24, 25, 26]. The per-pixel complexity can be cut down from $O(S^2)$ to $O(1)$ using these algorithms. Similarly, fast algorithms have been proposed for nonlocal means [13, 15] that reduce the per-pixel complexity from $O(S^2 k^2)$ to $O(S^2)$, where $k$ is the patch length.
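As a point of reference, the brute-force computation of (1) and (2) can be sketched as follows for a grayscale image. This is an illustrative sketch, not the paper's implementation; the function name, Gaussian kernels, and edge padding are our own choices.

```python
import numpy as np

def bilateral_bruteforce(f, g, sigma_s, sigma_r, S):
    """Brute-force (joint) bilateral filter, O(S^2) per pixel.

    f: input image (H, W); g: guide image (H, W); for the classical
    bilateral filter, pass g = f.  S is the (odd) window length.
    """
    H, W = f.shape
    r = S // 2
    # spatial kernel on the (2r+1) x (2r+1) window
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    w_spatial = np.exp(-(x**2 + y**2) / (2 * sigma_s**2))
    fp = np.pad(f, r, mode='edge')
    gp = np.pad(g, r, mode='edge')
    out = np.zeros_like(f, dtype=float)
    for i in range(H):
        for j in range(W):
            fw = fp[i:i + S, j:j + S]
            gw = gp[i:i + S, j:j + S]
            # range kernel: proximity in the range space of the guide
            w_range = np.exp(-((gw - g[i, j])**2) / (2 * sigma_r**2))
            w = w_spatial * w_range
            out[i, j] = np.sum(w * fw) / np.sum(w)
    return out
```

The double loop over pixels with an inner window sum makes the $O(S^2)$ per-pixel cost explicit; the fast algorithms discussed next remove exactly this inner cost.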
I-B Previous Work
Data approximation. Durand and Dorsey proposed to approximate (1) and (2) using clustering and interpolation. In terms of our notation, their approximation can be expressed as
\[ h(i) \approx \frac{1}{W(i)} \sum_{k=1}^{K} c_k(i) \sum_{j \in \Omega_S} \omega(j)\, \phi\big(g(i-j) - \mu_k\big)\, f(i-j), \tag{3} \]
\[ W(i) \approx \sum_{k=1}^{K} c_k(i) \sum_{j \in \Omega_S} \omega(j)\, \phi\big(g(i-j) - \mu_k\big). \tag{4} \]
The centers $\mu_1,\dots,\mu_K$ are obtained by uniformly sampling the range space of the guide, whereas the interpolation coefficients $c_k(i)$ are determined from the distance between $g(i)$ and the centers. Notice that the inner summations in (3) and (4) can be expressed as convolutions, which are computed using FFT. Furthermore, the entire processing is performed on a subsampled image, following which the output image is upsampled. The approximation turns out to be effective for grayscale images. This is because the range space is one-dimensional in this case, and hence a good approximation can be achieved using a small number of centers. Moreover, it suffices to interpolate between just the two nearest centers. Paris et al. showed that the accuracy can be improved by downsampling the intermediate images (instead of the input image) involved in the convolutions. Chen et al. proposed a further acceleration by performing the convolutions in the higher-dimensional spatio-range space. Yang et al. observed that this framework can also be used with non-Gaussian range kernels, and that fast convolutions can be used to improve the computational complexity. As the range dimension increases, however, these methods become computationally inefficient. In particular, an exponentially large number of centers is required to achieve a decent approximation, and linear interpolation requires $2^n$ neighboring samples in $n$ dimensions. Yang et al. later extended this method to perform bilateral filtering of color images, and proposed a fast scheme for linear interpolation. However, this scheme is based on uniform quantization, which is not optimal given the strong correlation between the color channels. In fact, a more efficient algorithm for color filtering was later proposed in which clustering is used to find the centers. Moreover, instead of interpolation, a local statistics prior is used to improve the accuracy. However, that algorithm has been applied only to color bilateral filtering, and its use for other forms of high-dimensional filtering has not been reported.
Splat-blur-slice method. Another line of work is [14, 17, 18], which is based on a slightly different form of approximation. More precisely, these methods are based on the “splat-blur-slice” framework, which involves data partitioning (clustering or tessellation), fast convolutions, and interpolation as its core operations. They are considered to be the state-of-the-art fast algorithms for high-dimensional bilateral and nonlocal means filtering. The idea, originally proposed in [16], is based on the observation that (1) corresponds to a convolution in the joint spatio-range space. The general idea is to resample the input pixels in a different space (splatting), perform Gaussian convolutions (blurring), and resample the result back to the original pixel space (slicing). Adams et al. tessellated the domain and performed the blurring using the kd-tree in [18] and the permutohedral lattice in [17]. Gastal et al. [14] divided and resampled the domain into non-linear manifolds, and performed the blurring on them. This was shown to be faster than all other methods within the splat-blur-slice framework.
Kernel approximation. In an altogether different direction, it was shown in [22, 24, 25, 28] that fast algorithms for bilateral filtering can be obtained by approximating the range kernel using so-called shiftable functions, which include polynomials and trigonometric functions. The above-mentioned algorithms provide excellent accelerations for grayscale images, and are generally superior to algorithms based on data approximation. Unfortunately, it is difficult to approximate a high-dimensional Gaussian using low-order shiftable functions. For example, a Fourier expansion of the Gaussian is quite effective in one dimension (grayscale images), but its straightforward extension to even three dimensions (color images) results in exponentially many terms. This is referred to as the “curse of dimensionality”.
Existing algorithms have focused on either approximating the data [14, 17, 18, 19, 26] or the range kernel [22, 24, 25, 29, 30]. In this paper, we combine the ideas of shiftability [22, 24, 28, 31] and data approximation [23, 26] within a single framework to approximate (1) and (2). In particular, we locally approximate the range kernel in (1) and (2) using weighted and shifted copies of a Gaussian, where the weights and shifts are determined from the data. More specifically, we use clustering to determine the shifts, whereby the correlation between data channels is taken into account. Once the shifts have been fixed, we determine the weights (coefficients) using least-squares fitting, where the data is again taken into account. An important technical point is that we are required to solve a least-squares problem at each pixel, which can be expensive. We show how this can be done using just one matrix inversion. In summary, the key distinctions of our method in relation to previous approaches are as follows:
Both the shifts and the coefficients are optimized for the data at hand (via clustering and least-squares fitting). The latter is done in a heuristic fashion in [14, 17, 18]. We note that our approximation also reduces to the form in (3) and (4). However, the notion of shiftable approximation (in a sense which will be made precise in Section II) plays an important role. Namely, it allows us to interpret the coefficients in (3) and (4) from an approximation-theoretic point of view. This in turn forms the basis of the proposed optimization used to compute the coefficients.
We explain the proposed shiftable approximation in detail in Section II. The algorithm emerging from the approximation is conceptually simple and easy to code. The flowchart of the processing pipeline is shown in Figure 2. The computation-intensive components are “Cluster Range Space” and “Fast Convolutions”, for which highly optimized routines are available. Excluding clustering, the per-pixel complexity is $O(K)$, where $K$ is the number of clusters. Notice that there is no dependence on the window length $S$. This is a big advantage compared to fast nonlocal means [13, 15], where the per-pixel scaling is $O(S^2)$. The highlight of our proposal is that we are able to derive a bound on the approximation error for a particular variant of our algorithm. In particular, the error is guaranteed to vanish as more clusters are used. This kind of guarantee is not offered by state-of-the-art algorithms for high-dimensional filtering [14, 17, 18], partly due to the complex nature of their formulations.
We use the proposed fast algorithm for various applications such as smoothing, denoising, low-light denoising, hyperspectral filtering, and flow-field denoising. In particular, we demonstrate that our algorithm is competitive with existing fast algorithms for color bilateral and nonlocal means filtering (cf. Figures 1 and 11). We note that although our original target was high-dimensional filtering, our algorithm also outperforms the method of Yang et al., which is considered the state-of-the-art for bilateral filtering of grayscale images (cf. Figure 5). Finally, we note that we have not used the particular form of the Gaussian kernel in the derivation of the fast algorithm. Therefore, it can also be used with non-Gaussian kernels, such as the exponential and the Lorentz kernels [23, 36].
The rest of the paper is organized as follows. In Section II, we describe the proposed approximation and the resulting fast algorithm. We also derive error bounds for a particular variant of the approximation. In Section III, we apply the fast algorithm for high-dimensional bilateral and nonlocal means filtering, and compare its performance (timing and accuracy) with state-of-the-art algorithms. We conclude the paper with a summary of the results in Section IV.
II Proposed Method
Notice that, if the guide has a constant intensity value at each pixel, then (1) and (2) reduce to linear convolutions. This observation is essentially at the core of the fast algorithms in [20, 23, 26]. On the other hand, the fast algorithms in [22, 25, 28] are derived using a completely different idea, where the univariate Gaussian kernel is approximated using a polynomial or a trigonometric function. These functions are shiftable in the following sense: there exist basis functions $\phi_1,\dots,\phi_K$ such that, for any shift $\tau$, we can write
\[ \phi(x - \tau) = \sum_{k=1}^{K} c_k(\tau)\, \phi_k(x), \tag{5} \]
where the coefficients $c_k(\tau)$ depend only on $\tau$. One can readily see that polynomials and trigonometric functions, e.g.,
\[ \cos(\omega_0 x) \quad \text{and} \quad \sin(\omega_0 x), \tag{6} \]
are shiftable. The utility of expansion (5) is that it allows us to factor the pixel-dependent shift out of the kernel. In particular, by replacing the range kernel with its shiftable approximation, the bilateral filter can be computed using fast convolutions. The Taylor polynomial of the Gaussian was used for this purpose in [24], and the Fourier approximation was used in [22, 25, 28].
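For instance, the addition formula makes the cosine shiftable with just the two basis functions in (6): the shift appears only in the coefficients. A quick numerical check (illustrative, not from the paper):

```python
import math

# Shiftability of the cosine: cos(x - tau) is a fixed combination of the
# basis functions cos(x) and sin(x), with coefficients cos(tau), sin(tau)
# that depend only on the shift tau (the angle-addition formula).
def shifted_cos(x, tau):
    return math.cos(tau) * math.cos(x) + math.sin(tau) * math.sin(x)
```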
II-A Shiftable Approximation
As mentioned earlier, it is rather difficult to obtain shiftable approximations of high-dimensional Gaussians that are practically useful. We can use separable extensions of (6) to generate such approximations. However, the difficulty is that the number of terms grows as $K_0^n$ in this case, where $K_0$ is the order of the one-dimensional approximation in (6) and $n$ is the range dimension. We address this fundamental problem using a data-centric approach. First, we do not use a shiftable (trigonometric or polynomial) approximation of the Gaussian as in [22, 25]. Instead, for some fixed pixel $i$, we consider the multidimensional Gaussian centered at $g(i)$ appearing in (1) and (2). Similar to (5), we wish to find coefficients $c_1(i),\dots,c_K(i)$ such that
\[ \phi\big(x - g(i)\big) \approx \sum_{k=1}^{K} c_k(i)\, \phi_k(x), \tag{7} \]
where the coefficients depend only on $i$. The key distinction from [22, 24, 25, 28] is that the functional approximation in (7) is defined locally, namely, the approximation is different at each pixel. Moreover, (7) is neither a polynomial nor a trigonometric expansion. In spite of this, we continue to use the term “shiftable” to highlight the fact that the approximation is based on “shifts” of the basis functions. In this work, we propose to use translates of the original kernel as the basis functions, i.e., we set $\phi_k(x) = \phi(x - \mu_k)$ for shifts $\mu_1,\dots,\mu_K$. One can in principle use different basis functions, but we will not investigate this possibility in this paper.
The important consideration is that the shifts are set globally. Since the range kernel is defined via the guide $g$, we propose to cluster the range space of $g$ and use the cluster centroids as the shifts. That is, we partition the range space
\[ \Theta = \big\{ g(i) : i \in \Omega \big\} \tag{8} \]
into clusters $C_1,\dots,C_K$. We set $\mu_k$ to be the centroid of $C_k$. In summary, for each pixel $i$, we require that
\[ \phi\big(x - g(i)\big) \approx \sum_{k=1}^{K} c_k(i)\, \phi(x - \mu_k). \tag{9} \]
At this point, notice the formal resemblance between (5) and (9). Hence, we will refer to (9) as a shiftable approximation in the rest of the discussion, even though it is not shiftable in the sense of (5).
Note that we need a good approximation in (9) only for $x$ in the range space $\Theta$ of the guide. This is because the samples appearing in (1) and (2) are $g(i - j)$, where $j$ takes values in the window. To determine the coefficients, it is therefore natural to consider the following least-squares problem:
\[ \min_{c_1(i),\dots,c_K(i)} \ \sum_{t \in \Theta} \Big( \phi\big(t - g(i)\big) - \sum_{k=1}^{K} c_k(i)\, \phi(t - \mu_k) \Big)^2. \tag{10} \]
The difficulty is that this requires us to solve an expensive least-squares problem (the matrix to be inverted is large) at each pixel. More importantly, we have a different inversion at each pixel, which is time-consuming and impractical. Notice, however, that each cluster is contained in $\Theta$, so we could perform the fitting over a smaller subset of $\Theta$. In particular, we choose to fit over the centroids $\mu_1,\dots,\mu_K$, which are quantized representatives of the points in $\Theta$. In short, while the cluster centers most likely do not belong to $\Theta$, they are representative of this larger set. Therefore, instead of (10), we consider the surrogate problem:
\[ \min_{c_1(i),\dots,c_K(i)} \ \sum_{l=1}^{K} \Big( \phi\big(\mu_l - g(i)\big) - \sum_{k=1}^{K} c_k(i)\, \phi(\mu_l - \mu_k) \Big)^2. \tag{11} \]
In matrix notation, we can compactly write this as
\[ \min_{\mathbf{c}(i)} \ \big\| \mathbf{A}\, \mathbf{c}(i) - \mathbf{b}(i) \big\|^2, \]
where $\mathbf{A} \in \mathbb{R}^{K \times K}$ and $\mathbf{b}(i) \in \mathbb{R}^{K}$ are given by
\[ \mathbf{A}_{lk} = \phi(\mu_l - \mu_k) \quad \text{and} \quad \mathbf{b}(i)_l = \phi\big(\mu_l - g(i)\big). \tag{12} \]
The solution of (11) is $\mathbf{c}(i) = \mathbf{A}^{+}\mathbf{b}(i)$, where $\mathbf{A}^{+}$ is the pseudo-inverse of $\mathbf{A}$. Importantly, $\mathbf{A}$ does not depend on the pixel, so we need to compute the pseudo-inverse just once; the coefficients at each pixel are then obtained using a matrix-vector multiplication. In Figure 6, we show that solving (11) is much faster than solving (10), and that the coefficients from (11) are close to those from (10).
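The coefficient fitting can be sketched as follows. The cluster centers `mu` and the kernel width `sigma` are made-up illustrative values, and the function names are our own; only the structure (one pseudo-inverse shared by all pixels, one matrix-vector product per pixel) reflects the method described above.

```python
import numpy as np

def gauss(t, sigma):
    # univariate Gaussian range kernel phi
    return np.exp(-t**2 / (2 * sigma**2))

mu = np.array([0.0, 0.5, 1.0])   # shifts = cluster centroids (assumed values)
sigma = 0.3                      # range-kernel standard deviation (assumed)

# A[l, k] = phi(mu_l - mu_k): depends only on the centers, so its
# pseudo-inverse is computed just once for the whole image.
A = gauss(mu[:, None] - mu[None, :], sigma)
A_pinv = np.linalg.pinv(A)

def coefficients(t):
    """Per-pixel coefficients c(i) = A^+ b(i), with b_l = phi(mu_l - t)."""
    b = gauss(mu - t, sigma)
    return A_pinv @ b

def shiftable_approx(x, t):
    """Approximate phi(x - t) by sum_k c_k phi(x - mu_k), as in (9)."""
    c = coefficients(t)
    return np.sum(c * gauss(x - mu, sigma))
```

When the pixel value `t` coincides with one of the centers, the fit recovers a one-hot coefficient vector and the approximation is exact; for intermediate values, the least-squares fit blends the neighboring translates.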
Approximation results for univariate Gaussians are shown in Figure 3. Notice that our approximation depends on the distribution of the guide (because of the clustering), and the coefficients are obtained by solving a least-squares problem at each pixel. Hence, in Figure 3, a particular distribution is selected and the resulting approximations are shown. The error plots (error versus $K$) for multidimensional Gaussians corresponding to different dimensions are shown in Figure 4. The error for a fixed $K$ is simply the objective in (10) averaged over all pixels.
To summarize, the steps in the proposed shiftable approximation are as follows:
Cluster the range space of the guide and use the centroids as the shifts $\mu_1,\dots,\mu_K$.
Set the basis functions as $\phi_k(x) = \phi(x - \mu_k)$.
Set up the matrix $\mathbf{A}$ using (12) and compute its pseudo-inverse $\mathbf{A}^{+}$.
At each pixel $i$, set up $\mathbf{b}(i)$ using (12) and compute $\mathbf{c}(i) = \mathbf{A}^{+}\mathbf{b}(i)$.
We note that one can freely choose different shifts (and basis functions) in (9) for different applications. That is, (9) offers a broad approximation framework, where one can consider other rules for fixing the parameters (shifts and coefficients).
II-B Fast Algorithm
Define, for $1 \le k \le K$, the intermediate images
\[ h_k(j) = \phi\big(g(j) - \mu_k\big)\, f(j) \quad \text{and} \quad w_k(j) = \phi\big(g(j) - \mu_k\big). \tag{13–14} \]
Substituting (9) into (1) and (2) and exchanging the sums, the numerator and denominator are approximated as
\[ \sum_{k=1}^{K} c_k(i)\, (\omega \ast h_k)(i) \quad \text{and} \quad \sum_{k=1}^{K} c_k(i)\, (\omega \ast w_k)(i), \tag{15–16} \]
where $\ast$ denotes linear convolution. In summary, using (9), we have been able to approximate the high-dimensional filtering using weighted combinations of fast convolutions. The overall process is summarized in Algorithm 1, in which pointwise addition, multiplication, and division of images are used. The dominant computation in Algorithm 1 is the convolutions; the clustering and the inversion of $\mathbf{A}$ are performed just once. Notice that the kernel evaluations used for computing the coefficients are anyway required to form the intermediate images. The flowchart of our algorithm, along with typical timings for a megapixel image, is shown in Figure 2. Note that the main steps in Figure 2 (resp. loops in Algorithm 1) can be parallelized: clustering the range space, computing the coefficients, and performing the convolutions.
We cluster the range space using bisecting k-means. In particular, the cluster with the largest variance is divided into two parts (using k-means clustering) at every iteration. This ensures that we do not end up with a few large clusters at the end. We initialize each k-means split using a pair of points with the largest separation. We picked bisecting k-means over other clustering algorithms as its cost is linear in the number of points, and it is generally faster than other clustering algorithms. Importantly, its computational overhead is negligible compared to the time required for the convolutions. Needless to say, we can use any efficient clustering method and the fast algorithm would still go through.
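The clustering step can be sketched as follows. This is an illustrative sketch, not the authors' routine: we use plain Lloyd iterations for each two-way split, approximate the "largest separation" initialization by the extreme points along the first coordinate, and split the cluster with the largest total variance at each round.

```python
import numpy as np

def two_means(X, iters=20):
    """Split a point set X (N, d) into two clusters via Lloyd iterations."""
    # initialize with a well-separated pair (approximated here by the
    # min/max points along the first coordinate)
    c = np.array([X[np.argmin(X[:, 0])], X[np.argmax(X[:, 0])]], dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - c[None, :, :], axis=2)
        lab = np.argmin(d, axis=1)
        for k in range(2):
            if np.any(lab == k):
                c[k] = X[lab == k].mean(axis=0)
    return lab, c

def bisecting_kmeans(X, K):
    """Return K cluster centroids obtained by repeated 2-way splits."""
    clusters = [X]
    while len(clusters) < K:
        # split the cluster with the largest total variance
        i = int(np.argmax([np.var(C, axis=0).sum() * len(C) for C in clusters]))
        C = clusters.pop(i)
        lab, _ = two_means(C)
        clusters += [C[lab == 0], C[lab == 1]]
    return [C.mean(axis=0) for C in clusters]
```

Each split touches only the points of one cluster, which is why the overall cost stays linear in the number of points.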
We have used the Matlab routine pinv to compute the pseudo-inverse. As is well known, when $\omega$ is a box or Gaussian, the convolution can be computed using $O(1)$ operations per pixel, i.e., the cost does not depend on the size of the spatial filter [39, 40]. The Gaussian convolutions are implemented using Young’s algorithm. For box convolutions, we used a standard recursive procedure which requires just four operations per pixel for any box size. Since the leading computation is the convolutions, and there are $O(K)$ of them, the overall complexity is linear in $K$ and in the number of pixels; the additional cost of computing (12) is small in comparison. As a result, we obtain a substantial speedup over the brute-force computation of (1) and (2). The storage complexity of Algorithm 1 is clearly linear in $K$, as well as in the image size. The MATLAB implementation of our algorithm is available here: https://github.com/pravin1390/FastHDFilter.
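For a grayscale image, the whole pipeline (cluster centers, one pseudo-inverse, per-cluster convolutions, pointwise aggregation) can be sketched as follows. The names, the box spatial kernel, and the integral-image box filter are our own illustrative choices; the box filter shown here makes the per-pixel cost independent of the window size, in the spirit of the recursive procedure mentioned above.

```python
import numpy as np

def box_sum(u, r):
    # window sums over a (2r+1) x (2r+1) box via integral images:
    # O(1) per pixel, independent of the box size
    up = np.pad(u, r, mode='edge')
    ii = np.pad(np.cumsum(np.cumsum(up, axis=0), axis=1), ((1, 0), (1, 0)))
    H, W = u.shape
    S = 2 * r + 1
    return (ii[S:S + H, S:S + W] - ii[:H, S:S + W]
            - ii[S:S + H, :W] + ii[:H, :W])

def fast_bilateral(f, g, mu, sigma_r, r):
    """Shiftable-approximation filter (sketch in the spirit of Algorithm 1).

    f, g: input and guide (H, W); mu: cluster centers (K,);
    r: radius of the box spatial kernel.
    """
    phi = lambda t: np.exp(-t**2 / (2 * sigma_r**2))
    A = phi(mu[:, None] - mu[None, :])          # A[l, k] = phi(mu_l - mu_k)
    A_pinv = np.linalg.pinv(A)                  # computed once per image
    B = phi(g[None, :, :] - mu[:, None, None])  # B[k] = phi(g - mu_k)
    C = np.einsum('lk,khw->lhw', A_pinv, B)     # per-pixel coefficients
    num = np.zeros_like(f, dtype=float)
    den = np.zeros_like(f, dtype=float)
    for k in range(len(mu)):
        wk = B[k]                           # intermediate image phi(g - mu_k)
        num += C[k] * box_sum(wk * f, r)    # one convolution per cluster
        den += C[k] * box_sum(wk, r)
    return num / den
```

Only `2K` window sums are performed regardless of the window radius, which is the source of the speedup over the brute-force computation.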
II-D Approximation Accuracy
A simpler variant of the algorithm is obtained by dispensing with the least-squares weighting: the kernel at pixel $i$ is approximated using just the center of its own cluster, assuming that $g(i)$ is in cluster $C_k$. This hard-assignment variant corresponds to the approximation in (18).
In (15) and (16), we weight the results from all the clusters, where the weights are obtained using the optimization in (11). We will demonstrate in Section III that this weighting helps in reducing the approximation error. Unfortunately, it also makes the analysis of the algorithm difficult. Intuitively though, one would expect the error from (15) to be smaller than that from (18). For the latter approximation, we have the following result.
Theorem 1 (Error bound)
Suppose that $\omega$ and $\phi$ are non-negative, and that the latter is Lipschitz continuous, i.e., for some $L > 0$, $|\phi(x) - \phi(y)| \le L \|x - y\|$ for any $x$ and $y$ in the range space. Then, for some constant $C$,
\[ \big\| h(i) - \hat{h}(i) \big\| \le C\, \varepsilon \qquad (i \in \Omega), \tag{19} \]
where $\hat{h}$ is given by (18), and $\varepsilon$ is the clustering error:
\[ \varepsilon = \max_{1 \le k \le K} \ \max_{x \in C_k} \| x - \mu_k \|. \tag{20} \]
We note that the assumptions in Theorem 1 are valid when $\omega$ is a box or Gaussian, and $\phi$ is a multidimensional Gaussian. For completeness, we provide the derivation of Theorem 1 in Appendix V-B. For several clustering methods, the clustering error vanishes as $K \to \infty$. For any of these methods, we can therefore approximate (1) with arbitrary accuracy provided $K$ is sufficiently large. We note that such a guarantee is not available for existing fast approximation algorithms for high-dimensional filtering [14, 17, 18].
III Numerical Results
We now report some numerical results for high-dimensional filtering. We have used a Matlab implementation of Algorithm 1. For fair timing comparisons with Adaptive Manifolds (AM) and Global Color Sparseness (GCS), we have used the Matlab codes provided by the respective authors. The timing analysis was performed on an Intel 4-core 3.4 GHz machine with 32 GB memory. The grayscale and color images used for our experiments were obtained from standard databases (https://goo.gl/821N2G, https://goo.gl/2fcNmu, https://goo.gl/MvxCMX). The infrared and hyperspectral images are the ones used in [37, 41]. To compare the filtering accuracy with existing methods, we have matched the timings by adjusting the respective approximation orders. The objective of this section is to demonstrate that, in spite of its simple formulation, our algorithm is competitive with existing fast approximations of bilateral and nonlocal means filtering.
We have used an isotropic Gaussian range kernel for bilateral and nonlocal means filtering:
\[ \phi(t) = \exp\!\Big( -\frac{\|t\|^2}{2\sigma_r^2} \Big), \]
where the standard deviation $\sigma_r$ is used to control the effective width (influence) of the kernel. We recall from (1) and (2) that the input to $\phi$ is the difference $g(i - j) - g(i)$. In one of the experiments, $\phi$ is an anisotropic Gaussian; this is mentioned explicitly at the appropriate place. The spatial kernel for bilateral filtering is also Gaussian:
\[ \omega(j) = \exp\!\Big( -\frac{\|j\|^2}{2\sigma_s^2} \Big), \]
where $\sigma_s$ is again the standard deviation. For nonlocal means (NLM), $\omega$ is a box filter, namely, no spatial weighting is used. The window length for bilateral filtering is set in proportion to $\sigma_s$, while that for NLM is mentioned explicitly.
The filtering accuracy is measured using the peak signal-to-noise ratio (PSNR),
\[ \mathrm{PSNR} = 10 \log_{10}\!\Big( \frac{255^2}{\mathrm{MSE}} \Big), \qquad \mathrm{MSE} = \frac{1}{N} \sum_{i \in \Omega} \big\| h(i) - \hat{h}(i) \big\|^2, \tag{21} \]
where $h$ and $\hat{h}$ are the exact and approximate outputs, and $N$ is the number of pixels. Note that PSNR is also used to measure denoising performance; in this case, $h$ and $\hat{h}$ in (21) are the denoised and clean images. It should be evident from the context as to what is being measured using PSNR. We also use SSIM for measuring visual quality.
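For concreteness, the metric in (21) can be computed as follows (assuming 8-bit images with peak value 255; the function name is our own):

```python
import numpy as np

def psnr(x, y, peak=255.0):
    """PSNR between two images, with MSE averaged over all pixel entries."""
    mse = np.mean((np.asarray(x, dtype=float) - np.asarray(y, dtype=float))**2)
    return float('inf') if mse == 0 else 10 * np.log10(peak**2 / mse)
```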
III-A Grayscale Bilateral Filtering
As mentioned earlier, although the proposed method is targeted at high-dimensional filtering, it can outperform the method of Yang et al., which is considered the state-of-the-art for bilateral filtering of grayscale images. We demonstrate this in Table I and Figure 5 using a couple of examples. In particular, notice in Figure 5 that artifacts are visible in Yang’s result when $\sigma_r$ is small. Our PSNR is noticeably higher, and our output does not show any artifacts. A detailed PSNR comparison is provided in Table I. We notice that, for the same timing, our PSNR is consistently better. Note that we have used the same number for the clusters in our method and the quantization bins in Yang’s method.
For completeness, we have reported the timings for different $\sigma_s$ in Table II, where we have used Young’s fast algorithm for performing the Gaussian convolutions. As expected, the timing essentially does not change with $\sigma_s$.
In Figure 6, we have compared the bilateral filtering results obtained using (10) and (11). Since (11) is an approximation of (10), we see a small drop in PSNR when using (11), but importantly there is a significant speedup. In fact, the time required when using (10) is more than that of the brute-force implementation.
III-B Color Bilateral Filtering
We next perform fast bilateral filtering of color images, for which the state-of-the-art methods are AM and GCS. We have also compared with our previous work, which we simply refer to as “Clustering”. In Figure 7, we have studied the effect of the number of clusters on the filtering accuracy for our method, GCS, AM, and Clustering. While GCS performs better for a small number of clusters (coarse approximation), our method outperforms GCS as the number of clusters grows. Note that the number of manifolds in AM can only take dyadic (discrete) values.
In Table 8, we report the filtering accuracy for full HD and ultra HD images, obtained by varying the number of clusters $K$. For a given resolution and $K$, we have averaged the PSNR over six high-resolution images (https://hdqwalls.com) of that resolution. Notice that, irrespective of the image resolution, high PSNR values are attained with just a few clusters. In Figure 8, the timings for filtering color images are plotted for varying image sizes. This plot verifies the linear dependence of the timing on the image size, irrespective of the number of clusters used.
In Figure 9, we have compared the PSNRs and timings for bilateral filtering of a color image at different $\sigma_s$ and $\sigma_r$ values. We have used the same number of clusters for the proposed method, GCS, and Clustering, and the default setting for the number of manifolds in AM. Notice that our PSNR is higher, while the timings are roughly the same. In particular, notice the PSNR improvement that we obtain over Clustering by optimizing the coefficients, i.e., by using (15) in place of (18). Interestingly, Clustering can outperform GCS and AM at large $\sigma_r$ values.
A visual comparison for color bilateral filtering was already shown in Figure 1, along with the error images, timings, and PSNRs. In particular, notice that the error from our method is smaller than that of AM and GCS. This is interesting since our method is conceptually simpler than these algorithms: AM has a complex formulation and GCS is based on two-pass filtering.
In Figure 10, a visual result is provided to highlight how the approximation accuracy scales with the number of clusters $K$. Notice that the accuracy improves significantly as $K$ increases. In fact, the approximation already resembles the result of brute-force filtering for a moderate number of clusters.
III-C Color Nonlocal Means
In NLM, $f$ is the noisy image (corrupted with additive Gaussian noise) and the guide at pixel $i$ is a square patch of pixels around $i$. In particular, if the patch is $k \times k$, then the range dimension is $3k^2$ for a color image. A popular trick to accelerate NLM (called PCA-NLM) is to project the patches onto a lower-dimensional space using PCA. We note that PCA was used for reducing the patch dimension (as explained earlier) for both AM and our algorithm.
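The PCA projection of the patch collection can be sketched as follows. This is a generic PCA via the SVD of the centered patch matrix, not the authors' exact routine; the function name and dimensions are illustrative.

```python
import numpy as np

def pca_reduce(patches, d):
    """Project vectorized patches (N, D) onto their top d principal directions.

    Returns an (N, d) matrix of projections, which serves as the
    lower-dimensional guide for PCA-NLM.
    """
    X = patches - patches.mean(axis=0)
    # right-singular vectors = principal directions of the patch collection
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:d].T
```

Distances between the projected patches approximate the original patch distances, which is what the NLM range kernel actually needs.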
A visual comparison is provided in Figure 11 for low and high noise, where the superior performance of our method over AM is evident. We have also compared with a state-of-the-art Gaussian denoiser based on deep neural nets, FFDNet (CPU implementation: https://github.com/cszn/FFDNet). It is not surprising that the result from FFDNet is better both in terms of PSNR and visual quality. However, our method is somewhat faster. Figure 11 also suggests that our result is visually closer to PCA-NLM at both low and high noise levels, albeit with a significant speedup (minutes down to seconds).
In Figure 12, we have reported Gaussian denoising results on the Kodak dataset for different noise levels $\sigma$. While the proposed method and AM perform similarly for small $\sigma$, the former is more robust when $\sigma$ is large. In fact, the denoising performance of AM degrades rapidly as $\sigma$ increases. Another important point is that our result is close to PCA-NLM in terms of PSNR and SSIM at all noise levels.
III-D Hyperspectral Denoising
We next perform denoising of hyperspectral images using the bilateral filter. For such images, the range dimension (i.e., the number of spectral bands) is high. In Figure 13, denoising results are shown exclusively to highlight the speedup of our approximation over the brute-force implementation for a multiband image. Notice the dramatic reduction in timing compared to the brute-force implementation. Moreover, notice that the denoising quality of our method is in fact quite good (compare the text in the boxed regions).
We also compare with recent optimization-based denoising methods [44, 45], whose parameters were tuned accordingly. Visual and quantitative comparisons are shown in Figure 15 (Pavia dataset). For quantitative comparison, we have used MPSNR and MSSIM, which are simply the PSNR and SSIM values averaged over the spectral bands. We notice that the restoration obtained using our method is better than [44, 45] (source codes made public by the authors), which is supported by the metrics shown in the figures. The same is visually evident from a comparison of the boxed regions in Figure 15; in particular, the color is not restored properly and grains are visible in the results of [44, 45]. As expected, we are much faster than these iterative methods, since we perform the filtering in one shot.
III-E Low-light Denoising
We next use our fast algorithm for NLM denoising of low-light images using additional infrared data. In Figure 14, we have shown a visual result which compares our method, both with and without the infrared data, against an optimization-based method. We used an anisotropic Gaussian range kernel, with different $\sigma_r$ for the low-light and infrared components. The patch dimension was reduced to six using PCA, and the infrared data was appended as the seventh dimension. In Figure 14, notice that some salient features are lost if we solely use the low-light input as the guide. Instead, if we add the infrared data to the guide, then the restored image is sharper and the features are more apparent, compared to both the optimization-based method and NLM without infrared data.
III-F Flow-field Denoising
Finally, we apply the proposed method to flow-field denoising. In particular, sharp directional changes in the flow can be preserved much better using NLM, while simultaneously removing the noise (with Gaussian filtering as the baseline). An instance of flow-field denoising is shown in Figure 16 and compared with Gaussian smoothing.
IV Conclusion
We proposed a framework for fast high-dimensional filtering by approximating both the data and the kernel. In particular, we derived an algorithm that fuses the scalability of the former with the approximation capability of the latter. At the core of our algorithm is the concept of shiftable approximation, which allows us to interpret the coefficients in the framework of Durand et al. from an approximation-theoretic point of view. We proposed an efficient method for determining the shifts (centers) and the coefficients using k-means clustering and data-driven optimization. Though the proposed algorithm is conceptually simple and easy to implement, it was shown to yield promising results for diverse applications. In particular, our algorithm was shown to be competitive with state-of-the-art methods for fast bilateral and nonlocal means filtering of color images.
Exchanging the sums, we can write (22) as
V-B Proof of Theorem 1
For notational convenience, let , where
Note that we can write
By the triangle inequality, we can bound it using
Note that we can assume that each weight lies in $[0, 1]$. This is simply because the weights appear in both the numerator and denominator in (1) and (24). Moreover, since the range of $f$ is bounded, using the triangle inequality again, we obtain
-  P. Perona and J. Malik, “Scale-space and edge detection using anisotropic diffusion,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 7, pp. 629–639, 1990.
-  V. Aurich and J. Weule, “Non-linear Gaussian filters performing edge preserving diffusion,” Mustererkennung, pp. 538–545, 1995.
-  S. M. Smith and J. M. Brady, “SUSAN - A new approach to low level image processing,” International Journal of Computer Vision, vol. 23, no. 1, pp. 45–78, 1997.
-  C. Tomasi and R. Manduchi, “Bilateral filtering for gray and color images,” Proc. IEEE International Conference on Computer Vision, pp. 839–846, 1998.
-  D. Comaniciu and P. Meer, “Mean shift analysis and applications,” Proc. IEEE International Conference on Computer Vision, vol. 2, pp. 1197–1203, 1999.
-  Z. Farbman, R. Fattal, D. Lischinski, and R. Szeliski, “Edge-preserving decompositions for multi-scale tone and detail manipulation,” ACM Transactions on Graphics, vol. 27, no. 3, p. 67, 2008.
-  E. S. Gastal and M. M. Oliveira, “Domain transform for edge-aware image and video processing,” ACM Transactions on Graphics, vol. 30, no. 4, pp. 69:1–69:12, 2011.
-  K. He, J. Sun, and X. Tang, “Guided image filtering,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 6, pp. 1397–1409, 2013.
-  A. Buades, B. Coll, and J.-M. Morel, “A non-local algorithm for image denoising,” Proc. IEEE Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 60–65, 2005.
-  S. Paris, P. Kornprobst, J. Tumblin, and F. Durand, “Bilateral filtering: Theory and Applications,” Foundations and Trends® in Computer Graphics and Vision, vol. 4, no. 1, pp. 1–73, 2009.
-  K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising with block-matching and 3D filtering,” Proceedings of SPIE, vol. 6064, p. 606414, 2006.
-  K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising,” IEEE Transactions on Image Processing, vol. 26, no. 7, pp. 3142–3155, 2017.
-  J. Darbon, A. Cunha, T. F. Chan, S. Osher, and G. J. Jensen, “Fast nonlocal filtering applied to electron cryomicroscopy,” Proc. IEEE International Symposium on Biomedical Imaging: From Nano to Macro, pp. 1331–1334, 2008.
-  E. S. Gastal and M. M. Oliveira, “Adaptive manifolds for real-time high-dimensional filtering,” ACM Transactions on Graphics, vol. 31, no. 4, pp. 33:1–33:13, 2012.
-  J. Wang, Y. Guo, Y. Ying, Y. Liu, and Q. Peng, “Fast non-local algorithm for image denoising,” Proc. IEEE International Conference on Image Processing, pp. 1429–1432, 2006.
-  J. Chen, S. Paris, and F. Durand, “Real-time edge-aware image processing with the bilateral grid,” ACM Transactions on Graphics, vol. 26, no. 3, p. 103, 2007.
-  A. Adams, J. Baek, and M. A. Davis, “Fast high-dimensional filtering using the permutohedral lattice,” Computer Graphics Forum, vol. 29, no. 2, pp. 753–762, 2010.
-  A. Adams, N. Gelfand, J. Dolson, and M. Levoy, “Gaussian kd-trees for fast high-dimensional filtering,” ACM Transactions on Graphics, vol. 28, no. 3, p. 21, 2009.
-  M. G. Mozerov and J. Van De Weijer, “Global color sparseness and a local statistics prior for fast bilateral filtering,” IEEE Transactions on Image Processing, vol. 24, no. 12, pp. 5842–5853, 2015.
-  S. Paris and F. Durand, “A fast approximation of the bilateral filter using a signal processing approach,” Proc. European Conference on Computer Vision, pp. 568–580, 2006.
-  T. Tasdizen, “Principal neighborhood dictionaries for nonlocal means image denoising,” IEEE Transactions on Image Processing, vol. 18, no. 12, pp. 2649–2660, 2009.
-  K. N. Chaudhury, D. Sage, and M. Unser, “Fast O(1) bilateral filtering using trigonometric range kernels,” IEEE Transactions on Image Processing, vol. 20, no. 12, pp. 3376–3382, 2011.
-  F. Durand and J. Dorsey, “Fast bilateral filtering for the display of high-dynamic-range images,” ACM Transactions on Graphics, vol. 21, no. 3, pp. 257–266, 2002.
-  F. Porikli, “Constant time O(1) bilateral filtering,” Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–8, 2008.
-  K. Sugimoto and S. I. Kamata, “Compressive bilateral filtering,” IEEE Transactions on Image Processing, vol. 24, no. 11, pp. 3357–3369, 2015.
-  Q. Yang, K. H. Tan, and N. Ahuja, “Real-time O(1) bilateral filtering,” Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 557–564, 2009.
-  Q. Yang, N. Ahuja, and K.-H. Tan, “Constant time median and bilateral filtering,” International Journal of Computer Vision, vol. 112, no. 3, pp. 307–318, 2015.
-  K. N. Chaudhury and S. D. Dabhade, “Fast and provably accurate bilateral filtering,” IEEE Transactions on Image Processing, vol. 25, no. 6, pp. 2519–2528, 2016.
-  S. Ghosh and K. N. Chaudhury, “Fast bilateral filtering of vector-valued images,” Proc. IEEE International Conference on Image Processing, pp. 1823–1827, 2016.
-  ——, “Color bilateral filtering using stratified Fourier sampling,” Proc. IEEE Global Conference on Signal and Information Processing, 2018.
-  K. N. Chaudhury, “Acceleration of the shiftable O(1) algorithm for bilateral filtering and nonlocal means,” IEEE Transactions on Image Processing, vol. 22, no. 4, pp. 1291–1300, 2013.
-  P. Nair and K. N. Chaudhury, “Fast high-dimensional filtering using clustering,” Proc. IEEE International Conference on Image Processing, pp. 240–244, 2017.
-  S. Zhuo, X. Zhang, X. Miao, and T. Sim, “Enhancing low light images using near infrared flash images,” Proc. IEEE International Conference on Image Processing, pp. 2537–2540, 2010.
-  Q. Yuan, L. Zhang, and H. Shen, “Hyperspectral image denoising employing a spectral–spatial adaptive total variation model,” IEEE Transactions on Geoscience and Remote Sensing, vol. 50, no. 10, pp. 3660–3677, 2012.
-  M. A. Westenberg and T. Ertl, “Denoising 2-D vector fields by vector wavelet thresholding,” Journal of WSCG, pp. 33–40, 2005.
-  Q. Yang, “Hardware-efficient bilateral filtering for stereo matching,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 5, pp. 1026–1032, 2014.
-  D. H. Foster, K. Amano, S. M. C. Nascimento, and M. J. Foster, “Frequency of metamerism in natural scenes,” Journal of the Optical Society of America A, vol. 23, no. 10, pp. 2359–2372, 2006.
-  P.-N. Tan, M. Steinbach, and V. Kumar, Introduction to Data Mining. Addison-Wesley Longman Publishing Co., Inc., 2005.
-  R. Deriche, “Recursively implementating the Gaussian and its derivatives,” INRIA, Research Report RR-1893, 1993.
-  I. T. Young and L. J. Van Vliet, “Recursive implementation of the Gaussian filter,” Signal Processing, vol. 44, no. 2, pp. 139–151, 1995.
-  “Hyperspectral image database.” [Online]. Available: http://lesun.weebly.com/hyperspectral-data-set.html
-  Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.
-  X. Yang and X. Lin, “Brightening and denoising lowlight images,” Proc. International Conference on Information, Communications and Signal Processing, pp. 1–5, 2015.
-  F. Fan, Y. Ma, C. Li, X. Mei, J. Huang, and J. Ma, “Hyperspectral image denoising with superpixel segmentation and low-rank representation,” Information Sciences, vol. 397, pp. 48–68, 2017.
-  Y.-Q. Zhao and J. Yang, “Hyperspectral image denoising via sparse representation and low-rank constraint,” IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 1, pp. 296–308, 2015.