The bilateral filter is widely used in computer vision and image processing for edge-preserving smoothing. Unlike linear convolutional filters, the bilateral filter uses a range kernel along with a spatial kernel, where both kernels are usually Gaussian . The input to the range kernel is the intensity difference between the pixel of interest and its neighbor. If the difference is large (e.g., the pixels are on different sides of an edge), then the weight assigned to the neighboring pixel is small, and it is essentially excluded from the aggregation. This mechanism, which avoids mixing pixels with large intensity differences, ensures the preservation of sharp edges. However, it also makes the filter non-linear and computation-intensive.
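As a concrete reference point, the brute-force filter just described can be sketched in NumPy. This is an illustrative sketch, not the paper's implementation (which is in Matlab); the names `sigma_s`, `sigma_r`, and `radius` are ours.

```python
import numpy as np

def bilateral_filter(f, sigma_s=2.0, sigma_r=30.0, radius=None):
    """Brute-force bilateral filter for a 2D grayscale image f.

    Each output pixel is a weighted average of its neighbors, where the
    weight is the product of a spatial Gaussian (on distance) and a range
    Gaussian (on intensity difference). Cost is O(radius^2) per pixel.
    """
    if radius is None:
        radius = int(3 * sigma_s)
    f = f.astype(np.float64)
    H, W = f.shape
    pad = np.pad(f, radius, mode="edge")
    out = np.zeros_like(f)
    # Precompute the spatial kernel over the (2r+1) x (2r+1) window.
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(x**2 + y**2) / (2 * sigma_s**2))
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range kernel: pixels across an edge (large intensity
            # difference) receive near-zero weight.
            rng = np.exp(-(patch - f[i, j])**2 / (2 * sigma_r**2))
            w = spatial * rng
            out[i, j] = (w * patch).sum() / w.sum()
    return out
```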
An adaptive variant of the bilateral filter was introduced in , where the center and width of the Gaussian range kernel are allowed to change from pixel to pixel. It was used for image sharpness enhancement along with noise removal. Adaptation of the range kernel was necessary since the standard bilateral filter cannot perform sharpening. The amount of sharpening and noise removal at a particular pixel is controlled by adapting the center and width of the kernel. A variable-width range kernel was also used in [6, 7] for removing compression and registration artifacts.
I-A Adaptive bilateral filter
In this paper, we will use the filter definition in , which is as follows. Let be the input image, where is the image domain. The output image is given by
Here is a window centered at the origin, and is the local Gaussian range kernel:
In the classical bilateral filter, the width of the range kernel is the same at each pixel . On the other hand, the center is simply the intensity of the pixel of interest . The center is set to be in , where is an offset image. Apart from the applications in [5, 6, 7], the adaptive bilateral filter is naturally more versatile than its non-adaptive counterpart. We demonstrate this using a novel texture filtering application in Section IV. The objective in texture filtering is to remove coarse textures (along with fine details) from the image, while preserving the underlying structure . This is illustrated using an example in Figure 6. As evident from the example, by adapting the width at each pixel (which controls the aggregation), we are simultaneously able to preserve sharp edges and smooth coarse textures. This is somewhat difficult to achieve using a fixed kernel (see discussion in Section IV).
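A hedged sketch of this adaptive variant, where the range kernel's center `theta` and width `sigma_r` are per-pixel maps of the same shape as the image (all names are ours; setting `theta = f` and a constant `sigma_r` recovers the classical filter):

```python
import numpy as np

def adaptive_bilateral_filter(f, theta, sigma_r, sigma_s=2.0, radius=None):
    """Adaptive bilateral filter: theta and sigma_r are per-pixel maps
    giving the center and width of the Gaussian range kernel at each pixel.
    """
    if radius is None:
        radius = int(3 * sigma_s)
    f = f.astype(np.float64)
    H, W = f.shape
    pad = np.pad(f, radius, mode="edge")
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(x**2 + y**2) / (2 * sigma_s**2))
    out = np.zeros_like(f)
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Local range kernel, centered at theta[i,j], width sigma_r[i,j].
            rng = np.exp(-(patch - theta[i, j])**2 / (2 * sigma_r[i, j]**2))
            w = spatial * rng
            out[i, j] = (w * patch).sum() / w.sum()
    return out
```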
A well-known limitation of the bilateral filter is that it is computation-intensive . Needless to say, this also applies to the adaptive bilateral filter. Considerable work has been done to accelerate the bilateral filter. However, the task of speeding up its adaptive counterpart has received scant attention, if any. In this paper, we develop a fast algorithm for approximating (1) that works with any spatial kernel. When the spatial kernel is box or Gaussian, the per-pixel complexity of our algorithm is independent of the window size (constant-time algorithm). In practice, our algorithm can accelerate the brute-force computation by at least , without appreciably compromising the filtering quality.
Apart from the bilateral filter, other edge-preserving filters have also been proposed in the literature. These include the content adaptive bilateral filter  and the guided image filters [10, 11]. The former  is a variant of the bilateral filter in which the widths of both the spatial and range kernels are allowed to change from pixel to pixel. However, to the best of our knowledge, there is no known fast (e.g., constant-time) algorithm for implementing this filter. As the name suggests, in the guided filters [10, 11], edge preservation is driven by the edges present in the so-called “guide” image. Similar to our algorithm, both these filters have constant per-pixel complexity w.r.t. the filter size. These filters have been shown to be useful for a large number of applications [9, 10, 11]. In particular, it is in principle possible to use these filters for the deblocking and texture filtering applications considered in Section IV, though it is not obvious how one could use them for image sharpening.
We now review existing fast algorithms for classical bilateral filtering, and explain why most of them cannot be used for the adaptive variant. In particular, we discuss the fast algorithm in  that motivated the present work.
I-B Fast bilateral filter
It follows from (1) and (2) that the brute-force computation of the classical bilateral filter requires operations per pixel, where we recall that is the spatial kernel width. This makes real-time implementation challenging when is large, e.g., in applications that require greater aggregation. In fact, based on scale-space considerations, one would expect to scale with the image resolution. To address this, several fast algorithms have been proposed in the literature whose run-time is independent of . The speedup achieved by most of these algorithms relies on the fact that box and Gaussian convolutions can be computed at a fixed cost for any arbitrary [13, 14]. These algorithms can be broadly classified into four classes, as discussed below.
The first class comprises algorithms based on quantization and interpolation. Here, the intensity range of the input image is quantized into a small set of levels, and exact bilateral filtering is performed for each level using convolutions. The bilateral filtering at an intermediate intensity is approximated by interpolating these exact filterings. The convolutions are performed on images that are defined for a fixed range kernel. Consequently, these algorithms cannot be used for adaptive bilateral filtering.
The algorithms in the next class use the so-called shiftable approximation of the range kernel to approximate the bilateral filtering using convolutions [17, 18, 19, 20, 21, 22, 23, 24]. In particular, Taylor approximation is used in , trigonometric (Fourier) approximation is used in [18, 19, 20, 23], and a Gaussian-weighted polynomial approximation is used in . The shiftable approximation in  is derived using an eigendecomposition. These algorithms are fundamentally based on the decomposition of a fixed range kernel. As a result, they cannot be used to speed up (1).
A third set of algorithms use high-dimensional convolutions for fast bilateral filtering [25, 26]. They are based on the observation that by concatenating the spatial and range dimensions, the bilateral filter can be expressed as a convolution in this higher dimensional space. However, if the range kernel changes at every pixel, the high-dimensional filtering cannot be expressed as a convolution. Therefore, these methods also fail for our case of interest.
The fourth class comprises histogram-based algorithms. Here, the bilateral filtering is expressed as an operator acting on the local histogram. It was observed in [27, 28, 29] that the bilateral filter is in fact related to other non-linear filters that operate on local intensity histograms. For box (uniform) spatial kernels, an algorithm based on integral histograms was first proposed in . Later, it was shown in  how this can be extended to Gaussian spatial kernels using a combination of box kernels. A similar approach was advocated in  for arbitrary spatial kernels. More recently, a novel approach based on imposing a prior on the local histogram was proposed by Mozerov and van de Weijer . We will focus on this approach and show how it can be extended for adaptive bilateral filtering. First, we review its salient features.
I-C Speedup using histogram approximation
The spatial kernel is included in the specification of the histogram in , along with the frequency of local intensity values. As a result, the idea of using local histograms can be generalized to arbitrary spatial kernels, beyond just box kernels. By approximating the histogram using a uniform distribution, the authors were able to express the filtering in terms of the standard error function. This resulted in a fast algorithm, which was eventually used for bilateral filtering of color images. An interesting aspect of this method is that, unlike other histogram-based methods, it does not involve any form of kernel approximation; the approximation is purely of the local histogram. The approximation process involves just one convolution with the spatial kernel, which does not involve the range kernel parameters. The parameters only appear in the standard error function mentioned earlier, which is evaluated at each pixel. Therefore, the formal structure of the algorithm remains unaffected if we adapt the parameters. In particular, we will show how a fast and accurate algorithm for adaptive bilateral filtering can be developed by refining this approximation.
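To make the uniform-histogram idea concrete: once the histogram is assumed uniform on an interval, the numerator and denominator of the filter reduce to Gaussian integrals with closed forms, and the histogram height cancels. A sketch in our own notation (`alpha`, `beta` play the role of the local intensity bounds):

```python
import numpy as np
from scipy.special import erf

def uniform_histogram_bilateral_value(alpha, beta, theta, sigma):
    """Closed-form bilateral output at one pixel, assuming the local
    (weighted) intensity histogram is uniform on [alpha, beta].

    The numerator and denominator reduce to Gaussian integrals needing
    only two erf and two exp evaluations each.
    """
    s2 = sigma * np.sqrt(2.0)
    # F0 = integral of exp(-(t - theta)^2 / (2 sigma^2)) over [alpha, beta]
    F0 = sigma * np.sqrt(np.pi / 2.0) * (
        erf((beta - theta) / s2) - erf((alpha - theta) / s2))
    E = lambda t: np.exp(-(t - theta)**2 / (2.0 * sigma**2))
    # F1 = integral of t * exp(...), obtained via integration by parts.
    F1 = theta * F0 - sigma**2 * (E(beta) - E(alpha))
    return F1 / F0
```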
Our primary contribution is to develop the core idea in  to formulate a fast algorithm for adaptive bilateral filtering. We propose a computational framework for improving the histogram approximation in , while ensuring that the resulting algorithm is significantly faster than the brute-force computation. Moreover, we also demonstrate the utility of the proposed algorithm for applications such as image sharpening, JPEG deblocking, and texture filtering.
On the technical front, we demonstrate that a uniform prior on the histogram (as used in ) results in a poor approximation. Indeed, one would expect the intensity distribution around an edge, or in a textured region, to be far from uniform. This prompted us to model the local histogram using polynomials. A computational requirement arising from this choice is to approximate the local histogram without computing it explicitly. We propose to do this by matching the moments of the polynomial to those of the histogram. The advantage of this proposal is that the moments of the histogram can be computed using convolutions. The moment matching problem reduces to inverting a linear system , where both and vary from pixel to pixel. This is of course computationally prohibitive. However, we show that it suffices to consider a single by appropriately transforming the local range data. In fact, this particular turns out to be a well-studied matrix, whose inverse has a closed-form expression. Similar to , we are able to approximate (1) and (2) using a series of definite integrals. In fact, the integrals are related to each other, and we devise a fast recursion using integration-by-parts. An important distinction from  is that we are able to efficiently compute the limits of integration using an existing fast algorithm; the limits are set in a somewhat heuristic fashion in . As we will show later, setting the limits correctly helps improve the approximation accuracy.
The theoretical complexity of the proposed algorithm is with respect to . In fact, the speedup over the brute-force implementation is quite significant in practice. For the applications in Section IV, we can obtain at least acceleration (this can go up to ), while ensuring an acceptable approximation accuracy (PSNR dB, [17, 12]). To the best of our knowledge, this is the first fast algorithm for adaptive bilateral filtering reported in the literature.
The rest of the paper is organized as follows. The proposed approximation of the adaptive bilateral filter and the resulting fast algorithm are developed in Section II. In Section III, we compare our method with  and the brute-force implementation in terms of timing and approximation accuracy. In Section IV, we apply our algorithm to image sharpening, compression artifact removal, and texture filtering. We conclude the paper with a discussion in Section V.
II Analytic Approximation
II-A Main idea
It was observed in  that the classical bilateral filter can be expressed as a weighted average of local pixel intensities. We adopt the same idea for the adaptive bilateral filter. More specifically, for , let
For , define the (weighted) histogram
Consider the bounds on ,
Note that if , that is, if the neighborhood of contains exactly one intensity value, then , and hence no processing is required. For the rest of the discussion, we will assume that , unless stated otherwise. Our proposal is to approximate (which has a finite domain) with a function defined on the interval , and to replace the finite sums in (6) and (7) with integrals. In other words, we consider the following approximations of (6) and (7):
then we can write (9) as
When , we can compute (13) using just two evaluations of the standard error function . When , the integrand in (13) is a product of a monomial and a Gaussian. We will show in Section II-C how we can efficiently compute (13) for using recursions.
In summary, when is a polynomial, we can express (9) using integrals that can be computed efficiently. The number of integrals depends on the degree , but does not scale with the width of the spatial filter. We will later demonstrate that reasonably good approximations of (1) and (2) can be obtained using low-degree polynomials.
II-B Polynomial fitting
We now provide technical details as to how the polynomial fitting can be performed efficiently. Assume that a polynomial of degree is used to approximate the histogram at some . In particular, let the polynomial be given by (10); we wish to use it to approximate (5). To estimate the coefficients, we propose to match the first moments of and . That is, we wish to find such that, for ,
Ideally, should be a density function, i.e., a non-negative function with unit integral. However, since would appear in the numerator and denominator of (6), the latter normalization is not required. On the other hand, we simply choose to drop the non-negativity requirement; in fact, the approximation is empirically non-negative in most cases (see Figure 9).
Let and . Then we can write (14) as
where is given by
Needless to say, , and depend on , but we will avoid writing this explicitly to keep the notation simple.
Notice that the brute-force computation of and given by (8), and the moments , requires operations. This is in fact the complexity of the filter that we set out to approximate. Fortunately, it turns out that there are faster ways of computing them.
There exists an algorithm for computing the bounds in (8) that has complexity with respect to .
In particular, the bounds can be computed by adapting the MAX-FILTER algorithm in  that has complexity. This was originally designed to compute the local maximum at each pixel, but it can trivially be modified to compute the local minimum.
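A running minimum with O(1) amortized cost per element can also be obtained with a monotonic deque, which we sketch here in place of the MAX-FILTER algorithm cited in the text (a different but functionally equivalent technique; negating the input turns it into a max filter):

```python
from collections import deque

def sliding_min(a, w):
    """Running minimum over every length-w window of sequence a, in O(n)
    total time (amortized O(1) per element), using a deque whose stored
    indices always point to increasing values.
    """
    dq, out = deque(), []
    for i, v in enumerate(a):
        while dq and a[dq[-1]] >= v:
            dq.pop()             # dominated entries can never be the min
        dq.append(i)
        if dq[0] <= i - w:
            dq.popleft()         # front index fell out of the window
        if i >= w - 1:
            out.append(a[dq[0]])
    return out
```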
For fixed , the moments
can be computed with complexity with respect to .
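For instance, with a Gaussian spatial kernel, each windowed moment is a single convolution of a power of the image, so the per-pixel cost does not depend on the effective window size. A sketch using SciPy (the function name and normalization are ours):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_moments(f, sigma_s, N):
    """Windowed raw moments m_k(i) = sum_j w(i - j) f(j)^k / sum_j w(i - j)
    for k = 0..N, each computed with one Gaussian convolution.

    The per-pixel cost is independent of the effective window size, since
    the recursive Gaussian filter (or FFT) has fixed cost per pixel.
    """
    f = f.astype(np.float64)
    norm = gaussian_filter(np.ones_like(f), sigma_s)  # kernel normalization
    return [gaussian_filter(f**k, sigma_s) / norm for k in range(N + 1)]
```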
As a consequence of the above observations, we can compute and at every with complexity. However, we still need to perform one matrix inversion at each pixel, which in fact would be slower than computing (1) and (2). Fortunately, there is a simple solution to this problem. This is based on the observation that if and , then (16) becomes
This is called the Hilbert matrix. Importantly, the inverse of the Hilbert matrix admits a closed-form expression .
The Hilbert matrix (18) is invertible and its inverse is given by
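The closed-form inverse can be written entirely with binomial coefficients, so no numerical inversion is needed. A sketch that constructs both the Hilbert matrix and its exact inverse (0-indexed matrix construction, 1-indexed classical formula):

```python
import numpy as np
from math import comb

def hilbert(n):
    """n x n Hilbert matrix, H[i, j] = 1 / (i + j + 1) with 0-indexing."""
    i, j = np.indices((n, n))
    return 1.0 / (i + j + 1)

def hilbert_inverse(n):
    """Exact inverse of the n x n Hilbert matrix via the classical
    closed-form expression (1-indexed i, j):
    (H^-1)_{ij} = (-1)^(i+j) (i+j-1) C(n+i-1, n-j) C(n+j-1, n-i) C(i+j-2, i-1)^2
    """
    Hinv = np.empty((n, n))
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            Hinv[i - 1, j - 1] = ((-1)**(i + j) * (i + j - 1)
                                  * comb(n + i - 1, n - j)
                                  * comb(n + j - 1, n - i)
                                  * comb(i + j - 2, i - 1)**2)
    return Hinv
```

Since every entry of the inverse is an integer, the matrix-vector products in the moment matching step can be carried out exactly.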
In summary, if the domain of the local histogram can be stretched from to , then the moment matching problem at every can be solved using a single closed-form inversion. Now, the domain stretching can be done using an affine transformation. In particular, define
Note that is a shifted and scaled copy of . Next, define a copy of the original histogram on the new domain:
Let be the -th moment of :
As discussed previously, we approximate using a polynomial on . In particular, if we set
For fixed , we can compute from . In particular, , and for ,
Moreover, define to be
Define to be
Then, for ,
In short, we first rescale the original histogram, then average using the new histogram, and finally rescale the output to get back the original result. As described earlier, we approximate using (23), and replace the sums with integrals in (26). More precisely, we approximate the numerator and denominator of (26) using
where we recall that is given by (25). In other words, for fixed , we are required to compute integrals of the form
for , where and .
As before, it is understood that , and depend on .
In Figure 9, we explain the domain stretching and polynomial fitting procedures using an example.
Ii-C Recursive computation of integrals
We now explain how can be efficiently computed using recursions. Notice that the ratio of the integrands of and is , which (up to an affine scaling) is the derivative of the argument of the exponential. More specifically, consider the following equation:
By integrating this over , we arrive at the relation
This gives us the following recursive formula:
We can use this to compute starting with and . However,
which can be computed using two evaluations of the standard error function :
On the other hand,
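Putting the recursion together: starting from the erf-based value of the zeroth integral, every higher-order integral follows at the cost of a few multiplications and the two exponential values already computed at the endpoints. A sketch in our own notation, where `F[k]` denotes the integral of t^k times the Gaussian over `[a, b]`:

```python
import numpy as np
from scipy.special import erf

def gaussian_moment_integrals(N, theta, sigma, a=0.0, b=1.0):
    """Compute F_k = int_a^b t^k exp(-(t - theta)^2 / (2 sigma^2)) dt for
    k = 0..N via the integration-by-parts recursion
        F_k = theta*F_{k-1} + (k-1)*sigma^2*F_{k-2}
              - sigma^2 * (b^{k-1} E(b) - a^{k-1} E(a)),
    where E(t) = exp(-(t - theta)^2 / (2 sigma^2)).  Only two erf and two
    exp evaluations are needed, regardless of N.
    """
    E = lambda t: np.exp(-(t - theta)**2 / (2.0 * sigma**2))
    Ea, Eb = E(a), E(b)
    s2 = sigma * np.sqrt(2.0)
    # Base case: F_0 via the standard error function.
    F = [sigma * np.sqrt(np.pi / 2.0) * (erf((b - theta) / s2)
                                         - erf((a - theta) / s2))]
    if N >= 1:
        F.append(theta * F[0] - sigma**2 * (Eb - Ea))
    for k in range(2, N + 1):
        F.append(theta * F[k - 1] + (k - 1) * sigma**2 * F[k - 2]
                 - sigma**2 * (b**(k - 1) * Eb - a**(k - 1) * Ea))
    return F
```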
II-D Algorithm and implementation
The steps in the proposed approximation are summarized in Algorithm 1. The bounds are computed using the algorithm in , which is very fast. The main computations are the Gaussian convolutions, the moment matching, and the evaluation of the integrals. Note that for all , hence we need just convolutions to compute the moments. As we shall see later, it is sufficient to set for most applications. The convolutions can be performed using a recursive Gaussian filter [13, 14]. However, we used the Matlab built-in “imfilter”, which is faster than our implementation of  for practical values of (though its run-time scales with ). In fact, the Gaussian convolutions dominate the total runtime of the algorithm.
Recall that we do not need to actually invert , since a closed-form formula is available; only a matrix-vector multiplication is required. In fact, since is common to all pixels, we can compute for all in a single step by multiplying with the matrix containing the values of for all . This is precisely the advantage of shifting and scaling the histogram domain, as discussed in Section II-B.
To compute we need just two evaluations of the error function, whereas for we need two evaluations of the exponential function. Further, note that the exponential in the third term in (30) is already computed in (32) for , and hence need not be computed again. Overall, we need just two evaluations each of the error and exponential functions, i.e., four transcendental function evaluations per pixel (this should be compared with the brute-force implementation, which requires transcendental function evaluations per pixel). Thus, as claimed earlier, our algorithm has complexity w.r.t. .
III Comparisons and Discussion
We recall that there are no existing algorithms in the literature for fast adaptive bilateral filtering. A relevant comparison is that between Algorithm 1 and the brute-force implementation of (1). A practical problem in this regard is that we need rules to set the center and width at every pixel. Of course, these rules will depend on the application at hand. In Section IV, we describe three applications of adaptive bilateral filtering, where these rules will be provided. We will compare the approximation accuracy of our fast algorithm with the brute-force implementation at that point.
On the other hand, to demonstrate the acceleration achieved by the proposed algorithm, we can use any values for and . In particular, we will demonstrate the speedup obtained using Algorithm 1 for the classical bilateral filter, where we set and to a fixed value for all .
As mentioned earlier, the present fast algorithm was inspired by the histogram-based approximation in . For completeness, we compare Algorithm 1 with  for classical bilateral filtering. This also gives us a particularly simple way to assess the accuracy of the proposed algorithm. In particular, we will show that our algorithm is more accurate (in terms of PSNR) compared to .
To objectively compare two images, we use the peak signal-to-noise ratio (PSNR). Following [15, 16, 30, 12], we measure the approximation accuracy of Algorithm 1 by observing the PSNR between the exact (brute-force) and approximate filterings (i.e. and in Section II). The approximation is considered satisfactory if the PSNR is at least dB [17, 30, 12].
To generate all the results reported in this paper, we have used Matlab 9.1 on a Linux system running on a 3.4 GHz CPU with 32 GB memory. The values of used for the experiments are on a scale of , the dynamic range of an -bit image. Before applying Algorithm 1, we rescaled the dynamic range to and to (recall that the input image is assumed to take values in ). The output of the algorithm is finally rescaled back to .
In Figure 18, we compare the results of Algorithm 1 and  for the parameter settings and . The results of Algorithm 1 are shown for two different approximation orders (cf. caption for details). For the method in , we set , which results in the narrowest valid uniform distribution (cf.  for the definition of ). Note that we actually fit a polynomial of degree (a constant function) to the local histogram when ; this is done by matching the zeroth moments of the constant function and the histogram. On the other hand, the first moment is used for matching in . As a result, the two approaches result in different heights of the constant function. However, the height appears in both the numerator and denominator of (11) and is thus canceled out. Therefore, the height of the constant function may be set to any non-zero value. Thus, the only difference between our method (when ) and  is that the intervals on which the constant functions are fitted are different. These are and for our method and  respectively, where is the smoothed version of . Notice in Figures 18(e) and (g) that using the exact values of and alone results in a few dB of improvement. The result using is shown to confirm our intuition that a higher-degree polynomial should result in a better approximation. We have reported some additional PSNR values for different and in Table I. As expected, the PSNR progressively increases with .
We now compare the timing of Algorithm 1 with the brute-force implementation of (1). The timings in Table II are for the image in Figure 18. Notice that, for a fixed , there is very little variation in timing for different . The run-time of our algorithm is thus independent of the window size, as claimed. In contrast, the timing for the brute-force implementation increases with , as expected. Notice in Table II that, even when is as high as , the speedup is at least and often as much as .
To demonstrate the potential of the proposed fast algorithm, we consider three applications of the adaptive bilateral filter: (1) image sharpening and noise removal, (2) JPEG deblocking, and (3) texture filtering. The first application is directly based on the method in , whereas the second is a minor modification of the method in . Thus, our essential contribution is the speedup obtained using our algorithm. The third application is novel in the sense that we propose a new method for texture removal, and we show that our results are competitive with state-of-the-art methods.
IV-A Image sharpening and noise removal
The adaptive bilateral filter was originally used in  for sharpening along with denoising of fine grains. We apply our fast algorithm for this task.
Following , we set where is the input image. The offset is set to be
where is the average intensity in the -neighborhood of . To determine , we high-pass filter the input image using the Laplacian-of-Gaussian filter, and apply an affine mapping on the absolute value of the resulting image to get . The affine function maps low values of its input to high values of and vice-versa.
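A heavily hedged sketch of this parameter rule: the constants, the offset gain, and the normalization below are illustrative placeholders of our own choosing, not the values prescribed in the cited work.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, uniform_filter

def sharpening_parameters(f, sigma_log=1.0, sig_hi=40.0, sig_lo=5.0, win=5):
    """Illustrative parameter maps for sharpening + denoising.

    theta pushes intensities away from the local mean (sharpening offset),
    and sigma is an affine map of the normalized |LoG| response: a high
    response (near an edge) gets a small range width (little smoothing).
    All constants here are hypothetical, for illustration only.
    """
    f = f.astype(np.float64)
    local_mean = uniform_filter(f, size=win)
    # Offset image: move intensities away from the local mean.
    theta = f + 1.0 * (f - local_mean)
    g = np.abs(gaussian_laplace(f, sigma_log))
    g = g / (g.max() + 1e-12)               # normalize response to [0, 1]
    sigma = sig_hi - (sig_hi - sig_lo) * g  # high response -> small width
    return theta, sigma
```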
A typical result of sharpening and noise removal is shown in Figure 25. The input image and the values of are shown in Figures 25(a) and 25(f). The image in Figure 25(c) is obtained by applying Algorithm 1 on the input image, using the prescribed values of and . Notice that the output image contains visibly sharper boundaries and less noise grains than the input. Importantly, it is visually indistinguishable from the output of the brute-force implementation shown in Figure 25(d). Indeed, the PSNR between the images in (c) and (d) is quite high ( dB). However, note that our method is about times faster. The image in Figure 25(b) is obtained by applying the classical bilateral filter on the input image. Note that bilateral filtering removes noise as expected, but it cannot sharpen the edges. In , the adaptive bilateral filter was used for sharpening a text image. We show the result for the same image in Figure 29, where the speedup using Algorithm 1 is about .
IV-B JPEG deblocking
It was shown in  that adaptive bilateral filtering can be used to remove blocking artifacts from JPEG images, where the width of the spatial kernel was also adapted. However, adapting the spatial kernel width is beyond the scope of the proposed algorithm. We will rather show that changing the range kernel width alone can satisfactorily remove blocking. The motivation in  for using a locally adaptive range width is as follows. In JPEG, the image is tiled using blocks, and the blocks are compressed using DCT. Blocking artifacts occur due to discontinuities at the boundary of adjacent blocks. To effectively smooth out the discontinuity at a boundary pixel , the authors in  proposed to set in proportion with the intensity difference across the boundary.
For completeness, we briefly describe how is computed in . The image is first divided into non-overlapping blocks corresponding to the blocks used in JPEG compression. Then a discontinuity map is calculated at each pixel as follows. At a vertical or horizontal edge of a given block, is set to be the intensity difference across the respective edge (corner pixels are handled appropriately). The corresponding values for the four center pixels in the block are set to zero. For the remaining pixels in the block, is set using linear interpolation. Finally, is set as
The deblocking outputs of this method are shown in Figures 41 and 50.
For completeness, we compare the results with , , , and .
(Codes: http://yu-li.github.io/ and https://github.com/coolbay/Image-deblocking-SSRQC.)
We note that  was originally intended for use in a contrast-enhancement pipeline, but its results are acceptable when used for deblocking. The methods in  and  use an optimization model, hence their performance is the best among all the methods considered. However, both of them are quite slow. Adaptive bilateral filtering (Algorithm 1), shown in (c), gives acceptable performance at a much higher speed, while its PSNR is lower than that of  and  by only about dB. We note that Algorithm 1 is a few orders of magnitude faster than [36, 37]. The method in  is faster than ours, but the deblocking using Algorithm 1 is superior. Compared to , Algorithm 1 is superior in terms of both runtime and performance. Additionally, it is about faster than brute-force adaptive bilateral filtering (shown in (d)). We found that adaptive bilateral filtering performs well at low compression rates. Accordingly, we have used a JPEG quality factor of for compressing the images in Figures 41 and 50.
IV-C Texture filtering
We have already seen in Figure 6 that adaptive bilateral filtering offers more flexibility for texture filtering. The problem of separating textures from the underlying structure is a well-studied one; see e.g. [44, 43, 45, 8, 46, 42, 39, 47, 40]. While the classical bilateral filter can remove fine textures from an image, it cannot directly be used for removing coarse textures. However, it was shown in  that a variant of the bilateral filter, the joint bilateral filter, can be used for texture filtering.
We propose to use adaptive bilateral filtering for this task, by setting and adjusting so that textured regions are aggressively smoothed out (along with fine details and homogeneous regions), while sharp edges are not. In this regard, we require a metric which can discriminate between texture and strong edges in an image. One such metric is the modified relative total variation (mRTV) . This is given by
where is a local neighborhood of , is the spatial gradient at ,
and is a small number used to avoid division by zero. The mRTV value is high near sharp edges and low in textured regions; see  for further details about mRTV.
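A sketch of an mRTV-style metric along these lines: the local intensity range times the ratio of the maximum to the summed gradient magnitude in a window. The exact windowing and constants in the cited work may differ; this follows the verbal description above and is illustrative only.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter, uniform_filter

def mrtv(f, win=5, eps=1e-3):
    """Modified-relative-total-variation-style metric (sketch).

    High near isolated sharp edges (one dominant gradient in the window),
    low in densely textured regions (many comparable gradients), so the
    summed gradient dwarfs the maximum.
    """
    f = f.astype(np.float64)
    gy, gx = np.gradient(f)
    gmag = np.hypot(gx, gy)
    delta = maximum_filter(f, win) - minimum_filter(f, win)  # local range
    gmax = maximum_filter(gmag, win)                          # max gradient
    gsum = uniform_filter(gmag, win) * win * win              # windowed sum
    return delta * gmax / (gsum + eps)
```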
As before, we obtain via an affine transformation of , so that is high whenever is low and vice versa. This ensures that aggressive smoothing is performed in textured regions, but without excessively blurring the strong edges. An example of a map computed using this rule is presented in Figure 53. These values are used in Algorithm 1. The computation of and were performed on a grayscale version of the color image.
To completely remove coarse textures, we apply Algorithm 1 for a second time, after scaling down by a factor of (to reduce the smoothing). This smooths out the textures completely, but introduces some blurring near sharp edges. We therefore use the method in Section IV-A to sharpen the filtered image. In summary, we apply Algorithm 1 thrice — twice for smoothing and once for sharpening. Examples are shown in Figure 78 (second column), where the parameters of Algorithm 1 have been chosen to obtain the best visual output. The color images are filtered on a channel-by-channel basis.
We note that the rolling guidance filter  uses a similar strategy, namely smoothing followed by sharpening, to remove coarse textures in an image. In this method, a Gaussian smoothing is first applied to the input image to smooth out the textures. This is followed by a number of iterations of the joint bilateral filter to reconstruct the blurred edges.
We compare our method with six different methods, namely [8, 39, 40, 41, 42], and .
(Codes: https://github.com/bsham/SDFilter and https://www.researchgate.net/publication/261438330_CartoonTexture_decomposition_-_source_and_test_sample.)
The filtering results on three different images are shown in Figure 78. For all methods, we have used the default parameters as far as possible; when default parameters were not available, we adjusted the parameters to obtain the best visual output.
For the image in the first row of Figure 78, the best filtering results are obtained using  and . Our method comes close, but is faster. The method in  removes textures very well, but causes loss of shading. On the other hand, the shading is well preserved by , but textured regions are not completely smoothed out. The output of  looks somewhat blurry, while in  some residual textures still remain near edges (compare the highlighted patches).
A similar observation applies to the results in the other two rows of Figure 78. An interesting observation is that the structure of the eye in the second row is preserved well by our method and , but other methods introduce smearing along the edges. This is evident from the highlighted patches. For the image in the third row, we note that our method preserves small-scale structures quite well (see the highlighted regions), a property which is also shown by  and .
Comparing the timings for each of the three images, we note that the rolling guidance filter  is the only method which is faster than our method. However, as noted above, the proposed method exhibits a better texture suppressing property than . The proposed method is about faster than , about faster than , and about faster than . It is more than faster than both  and .
We developed a fast algorithm for adaptive bilateral filtering by approximating the local histograms using polynomials. To the best of our knowledge, this is the first such algorithm. In particular, we showed that our proposal significantly improves on the idea from  of approximating histograms using uniform priors. Moreover, we demonstrated that our algorithm can produce acceptable results using just a few convolutions; this enables us to achieve significant accelerations. In particular, the proposed algorithm is at least faster than the brute-force implementation. It should be noted that the algorithm can work with any spatial filter, and not just Gaussians. Moreover, if an implementation is available for the spatial filter, then the overall filtering can be computed at cost with respect to the spatial filter size.
We also demonstrated the effectiveness of the proposed algorithm for sharpening, deblocking, and texture filtering. The proposed algorithm is accurate enough to perform image sharpening as per the proposal in . By adapting the method in , we showed that our algorithm can also satisfactorily deblock JPEG-compressed images; in particular, it is much faster than iterative optimization methods for deblocking, while the outputs are visually similar. Finally, we proposed a novel method for texture filtering using our algorithm that is competitive with state-of-the-art methods in terms of runtime and performance.
Due to the flexibility offered by the adaptive bilateral filter, it can in principle be used to improve upon any application that has traditionally been performed using the bilateral filter, provided one can come up with the right rule for adapting the local values. Having offered a fast algorithm, we hope to draw the interest of the community towards harnessing this versatility. An interesting open question is whether the present ideas can be adapted for processing color images in the joint color space.
VII-A Proof of Proposition II.2
VII-B Proof of Proposition II.4
VII-C Proof of Proposition II.5
-  V. Aurich and J. Weule, “Non-linear Gaussian filters performing edge preserving diffusion,” Mustererkennung, pp. 538–545, 1995.
-  S. M. Smith and J. M. Brady, “SUSAN—A new approach to low level image processing,” International Journal of Computer Vision, vol. 23, no. 1, pp. 45–78, 1997.
-  C. Tomasi and R. Manduchi, “Bilateral filtering for gray and color images,” Proc. IEEE International Conference on Computer Vision, pp. 839–846, 1998.
-  S. Paris, P. Kornprobst, J. Tumblin, and F. Durand, Bilateral Filtering: Theory and Applications. Now Publishers Inc., 2009.
-  B. Zhang and J. P. Allebach, “Adaptive bilateral filter for sharpness enhancement and noise removal,” IEEE Transactions on Image Processing, vol. 17, no. 5, pp. 664–678, 2008.
-  M. Zhang and B. K. Gunturk, “Compression artifact reduction with adaptive bilateral filtering,” Proc. SPIE Visual Communications and Image Processing, vol. 7257, 2009.
-  S. Mangiat and J. Gibson, “Spatially adaptive filtering for registration artifact removal in hdr video,” Proc. IEEE International Conference on Image Processing, pp. 1317–1320, 2011.
-  H. Cho, H. Lee, H. Kang, and S. Lee, “Bilateral texture filtering,” ACM Transactions on Graphics, vol. 33, no. 4, p. 128, 2014.
-  Z. Li, J. Zheng, Z. Zhu, W. Yao, S. Wu, and S. Rahardja, “Content adaptive bilateral filtering,” Proc. IEEE International Conference on Multimedia and Expo Workshops, pp. 1–6, 2013.
-  K. He, J. Sun, and X. Tang, “Guided image filtering,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 6, pp. 1397–1409, 2013.
-  Z. Li, J. Zheng, Z. Zhu, W. Yao, and S. Wu, “Weighted guided image filtering,” IEEE Transactions on Image Processing, vol. 24, no. 1, pp. 120–129, 2015.
-  M. G. Mozerov and J. van de Weijer, “Global color sparseness and a local statistics prior for fast bilateral filtering,” IEEE Transactions on Image Processing, vol. 24, no. 12, pp. 5842–5853, 2015.
-  R. Deriche, “Recursively implementing the Gaussian and its derivatives,” INRIA, Research Report RR-1893, 1993.
-  I. Young and L. van Vliet, “Recursive implementation of the Gaussian filter,” Signal Processing, vol. 44, pp. 139–151, 1995.
-  F. Durand and J. Dorsey, “Fast bilateral filtering for the display of high-dynamic-range images,” ACM Transactions on Graphics, vol. 21, no. 3, pp. 257–266, 2002.
-  Q. Yang, K. H. Tan, and N. Ahuja, “Real-time bilateral filtering,” Proc. IEEE Computer Vision and Pattern Recognition, pp. 557–564, 2009.
-  F. Porikli, “Constant time bilateral filtering,” Proc. IEEE Computer Vision and Pattern Recognition, pp. 1–8, 2008.
-  K. N. Chaudhury, D. Sage, and M. Unser, “Fast bilateral filtering using trigonometric range kernels,” IEEE Transactions on Image Processing, vol. 20, no. 12, pp. 3376–3382, 2011.
-  K. N. Chaudhury, “Acceleration of the shiftable algorithm for bilateral filtering and nonlocal means,” IEEE Transactions on Image Processing, vol. 22, no. 4, pp. 1291–1300, 2013.
-  K. Sugimoto and S. I. Kamata, “Compressive bilateral filtering,” IEEE Transactions on Image Processing, vol. 24, no. 11, pp. 3357–3369, 2015.
-  K. N. Chaudhury and S. D. Dabhade, “Fast and provably accurate bilateral filtering,” IEEE Transactions on Image Processing, vol. 25, no. 6, pp. 2519–2528, 2016.
-  S. Ghosh and K. N. Chaudhury, “On fast bilateral filtering using Fourier kernels,” IEEE Signal Processing Letters, vol. 23, no. 5, pp. 570–573, 2016.
-  P. Nair, A. Popli, and K. N. Chaudhury, “A fast approximation of the bilateral filter using the discrete Fourier transform,” Image Processing On Line, vol. 7, pp. 115–130, 2017.
-  G. Papari, N. Idowu, and T. Varslot, “Fast bilateral filtering for denoising large 3D images,” IEEE Transactions on Image Processing, vol. 26, no. 1, pp. 251–261, 2017.
-  S. Paris and F. Durand, “A fast approximation of the bilateral filter using a signal processing approach,” Proc. European Conference on Computer Vision, pp. 568–580, 2006.
-  L. Dai, M. Yuan, and X. Zhang, “Speeding up the bilateral filter: A joint acceleration way,” IEEE Transactions on Image Processing, vol. 25, no. 6, pp. 2657–2672, 2016.
-  R. van den Boomgaard and J. van de Weijer, “On the equivalence of local-mode finding, robust estimation and mean-shift analysis as used in early vision tasks,” Proc. International Conference on Pattern Recognition, vol. 3, pp. 927–930, 2002.
-  J. J. Koenderink and A. J. Van Doorn, “The structure of locally orderless images,” International Journal of Computer Vision, vol. 31, no. 2-3, pp. 159–168, 1999.
-  J. van de Weijer and R. van den Boomgaard, “Local mode filtering,” Proc. IEEE Computer Vision and Pattern Recognition, vol. 2, pp. 428–433, 2001.
-  K. Zhang, G. Lafruit, R. Lauwereins, and L. V. Gool, “Constant time joint bilateral filtering using joint integral histograms,” IEEE Transactions on Image Processing, vol. 21, no. 9, pp. 4309–4314, 2012.
-  S. Pan, X. An, and H. He, “Optimal bilateral filter with arbitrary spatial and range kernels using sparse approximation,” Mathematical Problems in Engineering, vol. 2014, 2014, Article ID 289517.
-  W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, Numerical Recipes 3rd Edition: The Art of Scientific Computing. New York, NY, USA: Cambridge University Press, 2007.
-  J. Munkhammar, L. Mattsson, and J. Rydén, “Polynomial probability distribution estimation using the method of moments,” PLOS ONE, vol. 12, no. 4, pp. 1–14, 2017.
-  M.-D. Choi, “Tricks or treats with the Hilbert matrix,” The American Mathematical Monthly, vol. 90, no. 5, pp. 301–312, 1983.
-  Y. Li, F. Guo, R. T. Tan, and M. S. Brown, “A contrast enhancement framework with JPEG artifacts suppression,” Proc. European Conference on Computer Vision, pp. 174–188, 2014.
-  J. Zhang, R. Xiong, C. Zhao, Y. Zhang, S. Ma, and W. Gao, “CONCOLOR: Constrained non-convex low-rank model for image deblocking,” IEEE Transactions on Image Processing, vol. 25, no. 3, pp. 1246–1259, 2016.
-  C. Zhao, J. Zhang, S. Ma, X. Fan, Y. Zhang, and W. Gao, “Reducing image compression artifacts by structural sparse representation and quantization constraint prior,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 27, no. 10, pp. 2057–2071, 2017.
-  S. A. Golestaneh and D. M. Chandler, “Algorithm for JPEG artifact reduction via local edge regeneration,” Journal of Electronic Imaging, vol. 23, no. 1, pp. 013018-1–013018-13, 2014.
-  B. Ham, M. Cho, and J. Ponce, “Robust guided image filtering using nonconvex potentials,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 1, pp. 192–207, 2018.
-  A. Belyaev and P. A. Fayolle, “Adaptive curvature-guided image filtering for structure + texture image decomposition,” IEEE Transactions on Image Processing, vol. 27, no. 10, pp. 5192–5203, 2018.
-  S. Ono, “L0 gradient projection,” IEEE Transactions on Image Processing, vol. 26, no. 4, pp. 1554–1564, 2017.
-  Q. Zhang, X. Shen, L. Xu, and J. Jia, “Rolling guidance filter,” Proc. European Conference on Computer Vision, pp. 815–830, 2014.
-  D. Szolgay and T. Sziranyi, “Adaptive image decomposition into cartoon and texture parts optimized by the orthogonality criterion,” IEEE Transactions on Image Processing, vol. 21, no. 8, pp. 3405–3415, 2012.
-  A. Buades, T. M. Le, J. M. Morel, and L. A. Vese, “Fast cartoon + texture image filters,” IEEE Transactions on Image Processing, vol. 19, no. 8, pp. 1978–1986, 2010.
-  L. Xu, Q. Yan, Y. Xia, and J. Jia, “Structure extraction from texture via relative total variation,” ACM Transactions on Graphics, vol. 31, no. 6, pp. 139:1–139:10, 2012.
-  S. Ono, T. Miyata, and I. Yamada, “Cartoon-texture image decomposition using blockwise low-rank texture characterization,” IEEE Transactions on Image Processing, vol. 23, no. 3, pp. 1128–1142, 2014.
-  P. Xu and W. Wang, “Improved bilateral texture filtering with edge-aware measurement,” IEEE Transactions on Image Processing, vol. 27, no. 7, pp. 3621–3630, 2018.