From the very beginning there has been a rapidly increasing interest in models and methods for discontinuous phenomena, both in the computer graphics and the computer vision communities. The main focus lies in the identification of discontinuities in data perturbed by noise; more specifically, in image data, where noise needs to be removed from the image while preserving significant features such as gradients, jumps, spikes, edges and boundaries.
Methods dealing with region boundary preservation have a long tradition in imaging. Each has made promising contributions, drawn from various branches of mathematics, in its respective application domain.
An evolutionary review study has been performed with a detailed analysis of each method, starting from the diffusion process perona , nonlinear Gaussian filters nonlinear , and the bilateral filter tomasi ; porikli ; chen ; kunalda , up to its extension, the trilateral filter prasun ; vaudrey .
Edge preserving filtering is a technique to smooth images while preserving edges. It all started in 1985 with the work of L. P. Yaroslavsky yaroslavsky , whose neighborhood filter averages pixels with a similar grey level value belonging to the spatial neighborhood. In 1990 Perona and Malik perona worked on realizing semantically meaningful edges using a diffusion process. To overcome some of its limitations, Aurich and Weule made a breakthrough in 1995 with an edge preserving method using non-linear Gaussian filters. Their concept laid the foundation stone for the bilateral filter. It was later rediscovered by Smith and Brady smith , and by Tomasi and Manduchi tomasi , who in 1998 gave it its current name, 'the bilateral filter'. Ever since, it has found widespread use in image processing applications such as video abstraction, demosaicing, optical-flow estimation, stereo matching, tone mapping, stylization, relighting and denoising. Its advantages lie, first, in its non-iterative and simple formulation, where each pixel is replaced by a weighted average of its neighboring pixels, making it adaptable to application-specific settings. Second, it depends on only a few parameters (the spatial domain and intensity range kernels). Finally, thanks to the contributions of various efficient numerical schemes, the filter can be computed very fast, even on large images, and in real time with dedicated graphics hardware.
A great deal of implementation papers tomasi ; porikli ; chen ; pham ; dynamic ; kunalda ; cviu have been studied. They explore the various features of the edge preserving filters (mainly the bilateral filter) that constitute its behaviour, give a detailed understanding of its strengths and weaknesses in various application domains, and present optimization techniques that eventually led to its extension, the next-generation trilateral filter prasun ; vaudrey . The trilateral filter provides stronger noise reduction and better outlier rejection in high-gradient regions, and it mimics the edge-limited smoothing behavior of shock-forming PDEs by region finding with a fast min-max stack. This study is intended to give designers a good insight into the selection of design methods, helping them decide upon the numerical schemes compatible with the applications at hand.
This paper is organized as follows. Section 2 presents the diffusion technique employed to detect edges, namely anisotropic diffusion; it then realizes the same through nonlinear Gaussian filters, linear Gaussian filtering, and the nonlinear extension to the bilateral filter, determining the computational cost and the number of iterations, followed by the effects of changing its parameters. Section 3 discusses the various ways to efficiently implement the bilateral filter, together with the variations of each method, and the links of the bilateral filter to other frameworks that provide different ways to interpret it. Section 4 describes various challenging applications of edge-preserving filters. Section 5 describes the interrelationships among the edge preserving filters with an in-depth mathematical analysis of each transition. Section 6 exhibits the modifications, extensions and substitutes of the bilateral filter up to the next-generation trilateral filters.
2 Edge preserving through nonlinear diffusion to trilateral filtering
To introduce the concept of edge preserving filtering we begin our discussion with the nonlinear diffusion technique of Perona and Malik perona . They proposed a nonlinear diffusion method to avoid the blurring and localization problems caused by linear diffusion filtering. It aims at reducing image noise without removing significant parts of the image content, typically lines, edges or other details that are important for the interpretation of the image.
2.1 Motivation leading to realizing the anisotropic diffusion filter
Suppose we have an edge/not-edge estimation function $E(x,y,t)$. If $(x,y)$ is part of an edge then naturally little smoothing should be applied, unlike otherwise. Fortunately, the gradient of the brightness function acts as a detector that tells whether $(x,y)$ is part of an edge or not, denoted by $\nabla I(x,y,t)$. Let $\Omega \subset \mathbb{R}^2$ denote a subset of the plane and $I(\cdot,t):\Omega \to \mathbb{R}$ be a family of gray scale images; anisotropic diffusion is then defined as

$$\frac{\partial I}{\partial t} = \mathrm{div}\big(c(x,y,t)\,\nabla I\big) = c(x,y,t)\,\Delta I + \nabla c \cdot \nabla I$$

where $\Delta$ and $\nabla$ denote the Laplacian and gradient respectively, $\mathrm{div}(\ldots)$ is the divergence operator, and $c(x,y,t)$ is the diffusion coefficient: it controls the rate of diffusion, i.e. the blurring intensity, and is a function of the image gradient so as to preserve the edges in the image. When $c$ is large, $(x,y)$ is not part of an edge, and conversely. The edge estimation function is $E(x,y,t) = \nabla I(x,y,t)$; the corresponding two functions for the diffusion coefficient are

$$c\big(\|\nabla I\|\big) = e^{-\left(\|\nabla I\|/K\right)^{2}} \qquad \text{and} \qquad c\big(\|\nabla I\|\big) = \frac{1}{1+\left(\|\nabla I\|/K\right)^{2}}$$

where the constant $K$ controls the sensitivity to the edges and is either chosen experimentally or as a function of the image noise.
If the gradient is $(a,b)$, then a vector perpendicular to it can be $(-b,a)$. A dynamic kernel is contracted along the direction of the normal, resulting in an elliptical kernel.
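As an illustration of the scheme above, the following is a minimal NumPy sketch of the explicit Perona-Malik iteration with the exponential diffusion coefficient. The parameter values and the periodic boundary handling are illustrative assumptions, not the hardware design described below.

```python
import numpy as np

def perona_malik(img, n_iter=20, K=0.1, lam=0.2):
    """Explicit Perona-Malik diffusion with c(g) = exp(-(g/K)^2).

    lam <= 0.25 keeps the 4-neighbour explicit scheme stable;
    K controls the edge sensitivity. Periodic borders for brevity.
    """
    I = img.astype(float).copy()
    for _ in range(n_iter):
        # gradients toward the four neighbours
        dN = np.roll(I, 1, axis=0) - I
        dS = np.roll(I, -1, axis=0) - I
        dE = np.roll(I, -1, axis=1) - I
        dW = np.roll(I, 1, axis=1) - I
        # per-direction diffusion coefficients: tiny where |gradient| >> K
        cN, cS = np.exp(-(dN / K) ** 2), np.exp(-(dS / K) ** 2)
        cE, cW = np.exp(-(dE / K) ** 2), np.exp(-(dW / K) ** 2)
        I += lam * (cN * dN + cS * dS + cE * dE + cW * dW)
    return I
```

On a sharp step whose height is much larger than K, the diffusion coefficient collapses to nearly zero, so the edge survives the iterations while flat regions are smoothed.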
We have successfully implemented this algorithm, after identifying the parallelism inherent in it, on reconfigurable hardware and have attained a prominent acceleration elsevier . We presented the FPGA implementation of an edge preserving anisotropic diffusion filter for digital images. The designed architecture completely replaced the standard convolution operation gonzaleg required for evaluating the intensity gradient within the mask, realizing it with simple arithmetic subtraction of the neighboring intensities within a kernel, performed as multiple operations in parallel. Using a single arithmetic operation for each gradient direction (as per equation 5) saves 9 multiplications and 8 additions per convolution, reducing a huge amount of computational time and operations to a linear computational complexity. The hardware implementation of the proposed design shows better performance in terms of throughput (increase of 10%), total power (reduction of 24%) and reduced resource utilization in comparison with related research works. In the later sections we will show the relationship of this algorithm with its counterparts.
2.2 Edge preserving diffusion through Non-Linear Gaussian filters
Aurich and Weule nonlinear proposed in 1995 a new diffusion method for edge preserving smoothing of images. Unlike the anisotropic diffusion methodology, it is based on a modification of the way the solution of the heat conductance equation is obtained by convolving the Gaussian kernel with the initial data. Their method employs a simple non-linear modification of the Gaussian filters, thereby overcoming the lengthy iteration steps and convergence problems. A sequential chain of non-linear filters is applied, combining the significant and advantageous aspects of both anisotropic diffusion and robust filtering while overcoming their drawbacks.
2.2.1 The non-linear Gaussian filtering approach
A simple linear Gaussian filtering convolution is defined by $Gf = g * f$, where $f$ is the signal and $g$ is the Gaussian function. Image signals are defined on a discrete bounded set of pixels, and as a result region boundary effects need to be taken care of. The linear Gaussian filter using a position-dependent renormalization of the weights is defined by

$$(Gf)(p) = \frac{1}{N(p)} \sum_{q} g(p-q)\, f(q) \qquad \text{with} \qquad N(p) = \sum_{q} g(p-q)$$
In order to preserve edges one should avoid averaging across them. Thus, if $q$ and $p$ lie on different sides of an edge, $f(q)$ should be disregarded while calculating the average over a neighbourhood of $p$. Usually, however, the location of the edges is not yet known; even worse, edge-preserving smoothing is done precisely in order to detect edges more easily. Therefore the value $f(q)$ is not completely omitted from the averaging process, but its weight is lowered when it is likely that there is an edge between $q$ and $p$. How can this likelihood be estimated? The difference $|f(p)-f(q)|$ needs to be looked at: the bigger it is, the more probable it seems that there is an edge between $p$ and $q$. This is realised by multiplying $g(p-q)$ by an additional weight $s\big(f(p)-f(q)\big)$. Because the noise distribution in real images is bell-shaped, i.e. Gaussian-like, it is suggestive to choose $s$ also as a Gaussian, thereby generating the non-linear Gaussian filter given by

$$(G'f)(p) = \frac{1}{N(p)} \sum_{q} g(p-q)\, s\big(f(p)-f(q)\big)\, f(q) \qquad \text{with} \qquad N(p) = \sum_{q} g(p-q)\, s\big(f(p)-f(q)\big)$$
It is well known that the Gaussian filter is the fundamental solution of the heat conductance equation; therefore the non-linear Gaussian filter can be utilized to calculate some form of anisotropic diffusion. With the aim of further decreasing the noise, an extra parameter has been added to the nonlinear Gaussian filter of equation 7, resulting in equation 8.
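A minimal 1-D sketch of the nonlinear Gaussian filter of equation 7 (spatial Gaussian times photometric Gaussian) follows; the parameter names sigma (spatial) and tau (range) are our own labels, and the extra parameter of equation 8 is omitted for brevity.

```python
import numpy as np

def nonlinear_gauss_1d(f, sigma, tau):
    """Nonlinear Gaussian filter on a 1-D signal: each sample becomes a
    normalized average of its neighbours, weighted by a spatial Gaussian
    (width sigma) and a photometric Gaussian (width tau)."""
    f = np.asarray(f, dtype=float)
    r = int(3 * sigma) + 1
    out = np.empty_like(f)
    for p in range(len(f)):
        q = np.arange(max(0, p - r), min(len(f), p + r + 1))
        w = (np.exp(-(q - p) ** 2 / (2 * sigma ** 2))
             * np.exp(-(f[q] - f[p]) ** 2 / (2 * tau ** 2)))
        out[p] = np.sum(w * f[q]) / np.sum(w)  # position-dependent renormalization
    return out
```

With tau well below the edge contrast, samples across the edge receive negligible weight and a step survives; letting tau grow toward the full dynamic range recovers the ordinary, edge-blurring Gaussian filter.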
Experimental observations have revealed that the larger the value of this extra parameter, the narrower the noise distribution of the filter, as shown in fig 4 below.
The values of the parameters, namely the spatial and range widths, play an important role in determining the shape of the output signal when the filter is applied to step- and ramp-like edges.
Case 1: When the range parameter is larger than the maximum grey value, region boundaries tend to be smoothed, as the non-linear Gaussian filter tends to work like a linear Gaussian filter.
Case 2: When the extra parameter is large and the range parameter is small, the filter can sharpen ramp edges, as shown in fig 5 below.
It has been noticed that the extra parameter should be chosen greater than 1 to sharpen the ramp edge f.
According to the authors nonlinear , a sequence of filtering operations is applied with varying parameters in order to achieve the desired smoothing effects. The first filtering step aims to reduce the contrast of the fine details in the image; the subsequent steps do the same but also try to sharpen the edges of the coarser structures that were blurred by the first filtering step.
2.2.2 Effect of varying parameters
In the first step the spatial width should be very small, so as to reduce unwanted blurring of the coarser structures, but at least larger than the size of the fine structures. The range parameter, on the other hand, determines approximately the maximal contrast of the fine details that are to be eliminated. In the subsequent filtering steps the spatial width needs to be increased and the range parameter decreased to sharpen the edges of the coarser structures. Experiments have shown that the best results are almost always achieved if the spatial width is doubled and the range parameter is halved before performing the next filtering step.
Fig 6 below shows the result of filtering with a filter chain of five stages, with initial parameter values 0.5 (spatial) and 128 (range) respectively.
This design introduces some flexibility with respect to its anisotropic counterpart, as this Gaussian filtering can be considered an anisotropic diffusion process that can be stopped at certain times and restarted later with new parameters.
This operator smooths f while retaining the edges present in it.
2.3 Nonlinear Gaussian filtering extension to bilateral filtering
2.3.1 Gaussian smoothing to Bilateral filtering
A Gaussian blur (a.k.a. Gaussian smoothing) is the result of blurring an image by a Gaussian function, typically used to reduce image noise and detail. Each output image pixel value is a weighted sum of its neighbors in the input image. Mathematically, applying a Gaussian blur to an image is nothing but convolving the image with a Gaussian function.
An image filtered by Gaussian convolution is given by:

$$GC[I]_p = \sum_{q \in S} G_{\sigma}(\|p-q\|)\, I_q$$

where $G_{\sigma}$ denotes the 2D Gaussian kernel. The result is a weighted average of the intensity of the adjacent positions, with a weight decreasing with the spatial distance to the center position $p$. The weight for pixel $q$ is defined by the Gaussian

$$G_{\sigma}(x) = \frac{1}{2\pi\sigma^{2}} \exp\left(-\frac{x^{2}}{2\sigma^{2}}\right)$$

where $\sigma$ is a parameter defining the neighborhood size.
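A direct sketch of this Gaussian convolution, using a truncated, normalized kernel (the 3-sigma truncation and edge padding are common assumptions of ours):

```python
import numpy as np

def gaussian_blur(img, sigma=1.0):
    """Plain 2-D Gaussian convolution: every pixel becomes a distance-
    weighted average of its neighbours, regardless of intensity."""
    r = int(3 * sigma)                          # truncate the kernel at 3 sigma
    ax = np.arange(-r, r + 1)
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    k /= k.sum()                                # normalized: constants pass through
    H, W = img.shape
    p = np.pad(img.astype(float), r, mode='edge')
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(k * p[i:i + 2 * r + 1, j:j + 2 * r + 1])
    return out
```

A constant image passes through unchanged, while a 0/100 step is pulled toward intermediate values at the boundary, which is precisely the drawback examined next.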
2.3.2 Drawbacks of Gaussian smoothing
A bright pixel has a strong influence over an adjacent dark pixel even though the two intensity values are quite different. As a result, after convolution, image edges are blurred because pixels across discontinuities are averaged together. This shows that the action of the Gaussian convolution is independent of the image content: it depends only on the distance between pixels.
What is a bilateral filter?
Before defining it, the concepts of domain and range filtering should first be discussed, whose origin is linked to the nonlinear Gaussian filter discussed above. Traditional filtering is domain filtering: it focuses on closeness by weighing pixel values with coefficients that fall off with distance. Similarly, range filtering averages image values with weights that decrease with intensity dissimilarity. It is this component that preserves the edges while averaging homogeneous intensity regions.
It is worth mentioning that range filtering is nonlinear, because its weights depend on the neighborhood image intensity or color. Spatial locality on its own remains essential, and range filtering by itself sometimes distorts an image's color map. When domain and range filtering are combined, retaining the advantages of both, the combination is termed bilateral filtering and is defined by

$$BF[I]_p = \frac{1}{W_p} \sum_{q \in S} G_{\sigma_s}(\|p-q\|)\, G_{\sigma_r}(|I_p - I_q|)\, I_q$$

where the normalization factor is

$$W_p = \sum_{q \in S} G_{\sigma_s}(\|p-q\|)\, G_{\sigma_r}(|I_p - I_q|)$$

Here $\sigma_s$ and $\sigma_r$ denote the standard deviations of the domain and range kernels respectively; $G_{\sigma_s}$ is the spatial Gaussian weighting, whose value decreases with increasing pixel distance, and $G_{\sigma_r}$ decreases with increasing intensity difference $|I_p - I_q|$ between pixels $p$ and $q$. We can now identify the original framework from which the bilateral filter equation derives. Looking back at equations 6 and 7, the additional weight introduced in equation 8 reduces the influence of $f(q)$ as the difference $|f(p)-f(q)|$ increases. That weight is the basis of the similarity function, whose value is high in homogeneous regions within a boundary and drops drastically across a boundary, on opposite sides of an edge; it is analogous to $G_{\sigma_r}$ in equation 13, which is nothing but the range kernel responsible for the sensitivity to edges.
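The definition above translates directly into a brute-force implementation. A windowed NumPy sketch for grayscale images follows; the edge padding and the 3-sigma window are our assumptions.

```python
import numpy as np

def bilateral(img, sigma_s=2.0, sigma_r=10.0):
    """Direct bilateral filter: spatial Gaussian (sigma_s) times range
    Gaussian (sigma_r), normalized by the per-pixel factor W_p."""
    img = img.astype(float)
    r = int(3 * sigma_s)
    ax = np.arange(-r, r + 1)
    xx, yy = np.meshgrid(ax, ax)
    spatial = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma_s ** 2))  # domain kernel
    p = np.pad(img, r, mode='edge')
    H, W = img.shape
    out = np.empty_like(img)
    for i in range(H):
        for j in range(W):
            win = p[i:i + 2 * r + 1, j:j + 2 * r + 1]
            rng = np.exp(-(win - img[i, j]) ** 2 / (2 * sigma_r ** 2))  # range kernel
            w = spatial * rng
            out[i, j] = np.sum(w * win) / np.sum(w)  # division by W_p
    return out
```

With an edge contrast of 100 and sigma_r = 10, the range weight across the edge is about exp(-50), so the step is preserved essentially exactly, unlike under plain Gaussian convolution.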
2.3.3 Filter controlling parameters
Unlike anisotropic diffusion filtering, the bilateral filter is controlled by only two parameters, namely the domain ($\sigma_s$) and range ($\sigma_r$) kernel standard deviations.
Range kernel std. dev. ($\sigma_r$). As $\sigma_r$ increases, the bilateral filter gradually approaches a plain Gaussian convolution: the range component flattens and widens over the dynamic range of the image intensity, so the filter becomes less able to identify the intensity gradients present at the edges.
Domain kernel std. dev. ($\sigma_s$). The effect of varying the domain kernel std. dev. has already been shown in fig. 7. Increasing it smooths larger features and blurs detailed textures. The response of the bilateral filter is the product of its constituent domain and range kernels: a large spatial Gaussian multiplied by a narrow range Gaussian achieves only limited smoothing in spite of the large spatial extent, because the range weight enforces a strict preservation of the gradient contours.
2.3.4 Iteration effect
The bilateral filter is inherently non-iterative. However, it can be iterated, which leads to an almost piecewise-constant result as shown in fig. 12. Likewise we can relate the effect of iteration to the filter parameters $\sigma_s$ and $\sigma_r$. It has been observed that, since $\sigma_s$ and $\sigma_r$ are interlinked, the effect of varying one does not show much without also varying the other, as shown in fig. 9.
2.3.5 Computational complexity
The bilateral filter is, without doubt, a costly algorithm to compute, especially when $\sigma_s$ is large: each output pixel requires visiting a large neighborhood of pixels, computing the weights of the two kernels and their product, followed by an expensive normalization step.
2.3.6 Decomposition into multiple components
As shown in fig. 11 the bilateral filter can decompose an image into two parts, namely the filtered image and the subtracted/residual component. The filtered image contains the large-scale features, sacrificing the minute textures while respecting the strong edges. The residual image, left after subtracting the filtered image from the original, contains only those portions the filter removed, which can be treated either as fine texture or as noise, as the requirement demands. The bilateral filter thus provides an efficient way to smooth an image while preserving the edges, has several efficient numerical schemes to realize it, and has a wide variety of application areas, discussed in detail in the next sections.
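The decomposition property is easy to state in code. The sketch below uses a simple box smoother purely as a stand-in for the bilateral filter, to show that base plus residual reconstructs the input exactly:

```python
import numpy as np

def box_smooth(img, r=1):
    """Stand-in smoother (a bilateral filter would be used in practice)."""
    p = np.pad(img.astype(float), r, mode='edge')
    H, W = img.shape
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = p[i:i + 2 * r + 1, j:j + 2 * r + 1].mean()
    return out

img = np.arange(36, dtype=float).reshape(6, 6)
base = box_smooth(img)        # large-scale component (filtered image)
detail = img - base           # residual: fine texture and/or noise
```

Applications then process base and detail separately, e.g. attenuating detail for denoising or boosting it for enhancement, and recombine them.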
3 Efficient ways to implement bilateral filtering
The conventional bilateral filter as proposed by C. Tomasi and R. Manduchi tomasi is computationally very intensive, especially when the spatial kernel is large. Referring to equation 16, for each pixel the contribution of every other pixel must be evaluated in a nested loop. For a 10 megapixel photo the quadratic number of iterations involved makes the system very slow, taking more than 10 minutes to process a single image.
To accelerate the filtering there are several efficient approaches that rely on mathematical approximations, yielding similar results with good accuracy; they are discussed in the following subsections.
3.1 Brute Force Approach
The brute force approach follows the conventional way (direct implementation of the bilateral filter) as discussed with equations 16 to 18. Its computational complexity is $O(|S|^2)$, where $|S|$ is the size of the spatial domain (the cardinality of $S$), i.e. the number of pixels. Clearly, the quadratic complexity makes the computational cost prohibitive for large spatial domains. A slight modification reduces this cost: considering equation 16, only the pixels in the neighborhood of the central pixel of interest are processed, since the contribution of pixels beyond a distance of $2\sigma_s$ is negligible because of the spatial kernel. The complexity then decreases to $O(|S|\,\sigma_s^2)$, which is effective only for small kernels due to its dependency on $\sigma_s$; for large $\sigma_s$ the computational cost again grows rapidly. Even for an 8 megapixel image the number of iterations required is so large that the classical approach is not a feasible solution for implementation in hardware, as the cost of processing nested loops over pixels becomes enormous.
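The blow-up can be made concrete with a rough operation count; the window width 4*sigma_s + 1 reflects the 2-sigma truncation mentioned above, and the counts are purely illustrative:

```python
def brute_force_ops(n_pixels):
    """O(|S|^2): every pixel visits every other pixel."""
    return n_pixels ** 2

def windowed_ops(n_pixels, sigma_s):
    """O(|S| * sigma_s^2): each pixel visits a (4*sigma_s + 1)^2 window."""
    w = 4 * sigma_s + 1
    return n_pixels * w * w

ten_megapixels = 10_000_000
full = brute_force_ops(ten_megapixels)     # 10^14 pairwise visits: infeasible
small = windowed_ops(ten_megapixels, 5)    # tractable for a small kernel
large = windowed_ops(ten_megapixels, 50)   # grows quadratically in sigma_s
```

The quadratic dependence on sigma_s is what motivates the constant-time schemes of sections 3.7 and 3.8.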
3.2 Jacobi algorithm approach
In this section we discuss ways to speed up the bilateral filter and increase its smoothing effect, together with its implementation for piecewise linear signals. The author elad has shown that the bilateral filter can be derived from the Bayesian approach using a penalty function, and that a single iteration of the Jacobi algorithm (also known as the diagonal normalized steepest descent, DNSD) on it yields the bilateral filter. The corresponding penalty function is defined as
where the unknown signal is sought to be as close as possible to the measured signal. The shift matrix stands for a one-sample shift to the right operation elad , and the weighting matrix is the normalized weight matrix. The bilateral filter can be accelerated in the following way:
Consider a quadratic penalty function, say:
the SD (steepest descent) reads
The author has shown a close resemblance of equation 19 to 20 by a suitable choice of the matrices and weights in 20.
One effective way to accelerate the SD convergence is the Gauss-Seidel (GS) approach, where the scalar samples of the output vector are computed sequentially from the first to the last, and each calculation uses the already updated values instead of the old ones. This technique is known to be stable and to converge to the same global minimum of the penalty function given in 20. The GS intuition is to compute the output sample by sample and to utilize the already computed values whenever possible.
Others describe this process more systematically as a decomposition of the Hessian into its upper-triangular, lower-triangular and diagonal parts.
Alternatively, the filter can be accelerated by slicing the gradient into several parts. From equation 21 it can be written for several iterations in the following form:
Proceeding in this way, the final solution tends to be closer to the global minimum of the penalty function in 20. Good de-noising is achieved with reduced computational complexity, although the complexity analysis has not been shown in the paper elad . As for a hardware implementation, the iterative steps can be designed as a recursive procedure through loop unrolling.
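As a toy illustration of penalty minimization by steepest descent, the sketch below denoises a 1-D signal with a plain quadratic smoothness prior. The robust, intensity-dependent weights of elad are replaced here by the unweighted term, so this shows only the SD mechanics, not the full method:

```python
import numpy as np

def sd_denoise(g, lam=1.0, n_iter=50, mu=0.1):
    """Steepest descent on f(x) = 0.5*||x - g||^2 + 0.5*lam*||Dx||^2,
    with D the forward-difference operator. mu < 2/(1 + 4*lam)
    guarantees convergence of the iteration."""
    x = np.asarray(g, dtype=float).copy()
    for _ in range(n_iter):
        Dx = np.diff(x)                                                # D x
        DtDx = np.concatenate(([-Dx[0]], Dx[:-1] - Dx[1:], [Dx[-1]]))  # D^T D x
        grad = (x - g) + lam * DtDx                                    # gradient of f
        x -= mu * grad                                                 # one SD step
    return x
```

Per the discussion above, replacing the fixed quadratic smoothness term by intensity-dependent weights turns one such iteration into the bilateral filter, which is the core observation of elad .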
3.3 Layered Approach to approximate the filter
In the layered approach, the intensity range is discretized into a small set of values; for each value the image is downsampled and convolved with the spatial kernel. The final layers are obtained by upsampling the convolution outputs and then linearly interpolating between the two nearest layers, as shown in step 3 of the pseudocode of Algorithm 1, which dramatically accelerates the computation.
As per Algorithm 1, the downsampling step can be done in software. However, as in step 1, the convolution of each layer with the spatial kernel can be performed in parallel with specialized customized hardware filters meant for convolution, as discussed in sections 3.5 and 3.8 respectively, accelerating the execution since the layers are independent of each other. For the subsequent normalization, where pixel values need to be divided, a CORDIC divider can be applied, whose inherent parallelism provides the desired acceleration.
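A software sketch of the layered idea follows (without the downsampling step, which only adds speed): for each of a few intensity levels a purely linear filtering is performed, and the result is sliced by interpolating between the two layers nearest to each pixel's intensity. The parameter values are illustrative.

```python
import numpy as np

def blur(img, sigma):
    """Small separable Gaussian blur with edge padding."""
    r = int(3 * sigma)
    k = np.exp(-np.arange(-r, r + 1) ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    p = np.pad(img, ((0, 0), (r, r)), mode='edge')
    img = np.array([np.convolve(row, k, 'valid') for row in p])
    p = np.pad(img, ((r, r), (0, 0)), mode='edge')
    return np.array([np.convolve(col, k, 'valid') for col in p.T]).T

def layered_bilateral(img, sigma_s=2.0, sigma_r=20.0, n_layers=8):
    img = img.astype(float)
    levels = np.linspace(img.min(), img.max(), n_layers)
    stack = []
    for l in levels:
        w = np.exp(-(img - l) ** 2 / (2 * sigma_r ** 2))  # range weight vs. level l
        stack.append(blur(w * img, sigma_s) / blur(w, sigma_s))  # linear filtering
    stack = np.array(stack)
    # slice: linear interpolation between the two nearest layers per pixel
    t = (img - levels[0]) / (levels[-1] - levels[0]) * (n_layers - 1)
    lo = np.clip(np.floor(t).astype(int), 0, n_layers - 2)
    frac = t - lo
    ii, jj = np.indices(img.shape)
    return (1 - frac) * stack[lo, ii, jj] + frac * stack[lo + 1, ii, jj]
```

Because each layer is a plain convolution, the per-layer work parallelizes trivially, which is exactly the property the hardware mapping above exploits.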
3.4 Bilateral Grid (3D kernel) approach
J. Chen et al. chen evolved a new idea of a data structure to represent image data such that the kernel coefficient weights depend only on the distance between points. The new data structure is the bilateral grid (a volumetric data structure), enabling fast edge preserving image processing.
Here a 2D image I is represented as a 3D grid, where the first two dimensions of the grid correspond to the pixel position $(x, y)$ and the third dimension to the gray level pixel intensity $I(x,y)$. The total process is illustrated in Fig. 13 and 14, with the corresponding algorithm in Algorithm 2. The complexity of the algorithm is $O\!\left(|S| + \frac{|S|}{\sigma_s^2}\frac{|R|}{\sigma_r}\right)$, where $|S|$ and $|R|$ are the sizes of the spatial and range domains respectively. There is a problem for color images, as the grid becomes 5D and requires a large amount of memory.
Chen et al. chen have already implemented it at real-time video rates using a dedicated graphics processor. Although it is computationally very fast, we do not intend to use this filtering procedure, as it does not yield good denoised image quality in terms of signal-to-noise ratio.
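The grid idea can be sketched compactly: splat each pixel's intensity and a unit weight into a coarse (x, y, intensity) volume, blur the volume with a small kernel, then slice by reading back the normalized value at each pixel's cell. The [1,2,1]/4 blur, the cell sizes and the wrap-around borders of np.roll are simplifications of ours, not the exact scheme of chen .

```python
import numpy as np

def bilateral_grid(img, s_s=4, s_r=16):
    """Bilateral grid sketch: splat -> blur -> slice, grayscale input."""
    img = img.astype(float)
    H, W = img.shape
    gh, gw = H // s_s + 2, W // s_s + 2
    gd = int(img.max() // s_r) + 2
    data = np.zeros((gh, gw, gd))       # accumulated intensities
    wsum = np.zeros((gh, gw, gd))       # accumulated weights (counts)
    gi, gj = np.indices((H, W)) // s_s
    gk = (img // s_r).astype(int)
    np.add.at(data, (gi, gj, gk), img)  # splat intensity
    np.add.at(wsum, (gi, gj, gk), 1.0)  # splat unit weight
    for axis in range(3):               # blur the volume, one axis at a time
        data = (np.roll(data, 1, axis) + 2 * data + np.roll(data, -1, axis)) / 4
        wsum = (np.roll(wsum, 1, axis) + 2 * wsum + np.roll(wsum, -1, axis)) / 4
    # slice: normalized read-back at each pixel's grid cell
    return data[gi, gj, gk] / np.maximum(wsum[gi, gj, gk], 1e-12)
```

Because distant intensity bins never mix, pixels on opposite sides of a strong edge stay separate even when their spatial cells touch; the heavy lifting is three cheap blurs over a volume much smaller than the image.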
3.5 Separable Filter Kernel Approach
Pham et al. pham presented a separable implementation of the bilateral filter. The crux of the idea is that a multidimensional kernel implementation is computationally expensive, so in order to approximate the 2D bilateral filter it is decomposed into two 1D filters, filtering rows followed by columns or vice versa. Of course, not all kernels are separable. Say the row filtering is applied first; column filtering is then applied to the intermediate result to obtain the 2D filtered output. The computational complexity decreases from the quadratic cost of the brute force approach to $O(|S|\,\sigma_s)$, since the neighborhood computation is one-dimensional; here $|S|$ is the size of the spatial domain (the cardinality of $S$), i.e. the number of pixels, and $\sigma_s$ defines the neighborhood size of the spatial domain. The separable kernel computes an axis-aligned separable approximation of the bilateral filter, which is suitable for homogeneous areas and symmetrical edges but performs poorly in regions of complex texture. There are methods to overcome such problems, but they come with additional computational overhead. We have successfully implemented the separable kernel concept of pham applied to the trigonometric kernel of kunalda on an FPGA and obtained a good acceleration, as shown in indicon .
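A sketch of the separable approximation: the same 1-D bilateral kernel is run over the rows and then over the columns of the intermediate result. The brute-force 1-D filter inside is written for clarity, not speed.

```python
import numpy as np

def bilateral_1d(f, sigma_s, sigma_r):
    """Brute-force 1-D bilateral filter."""
    f = np.asarray(f, dtype=float)
    r = int(3 * sigma_s)
    out = np.empty_like(f)
    for p in range(len(f)):
        q = np.arange(max(0, p - r), min(len(f), p + r + 1))
        w = (np.exp(-(q - p) ** 2 / (2 * sigma_s ** 2))
             * np.exp(-(f[q] - f[p]) ** 2 / (2 * sigma_r ** 2)))
        out[p] = np.sum(w * f[q]) / np.sum(w)
    return out

def separable_bilateral(img, sigma_s=2.0, sigma_r=10.0):
    """Rows first, then columns on the intermediate result (pham)."""
    tmp = np.array([bilateral_1d(row, sigma_s, sigma_r) for row in img])
    return np.array([bilateral_1d(col, sigma_s, sigma_r) for col in tmp.T]).T
```

Each pass costs a 1-D neighborhood per pixel instead of a 2-D one; axis-aligned edges survive both passes, which is exactly the regime where the approximation is reliable.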
3.6 Local Histogram Approach
Ben Weiss weiss computed the bilateral filter from the histogram of the image. Taking the spatial weight to be a square box function, the bilateral filter reduces to its range kernel component, with the domain kernel a constant. The idea is as follows: consider two adjacent pixels and their corresponding neighborhoods; the histogram of one neighborhood is computed while exploiting its similarity with the histogram of the other. Once the local histogram $H_p$ is known, the bilateral filter can be computed as

$$BF[I]_p = \frac{\sum_{i} H_p(i)\, G_{\sigma_r}(|i - I_p|)\, i}{\sum_{i} H_p(i)\, G_{\sigma_r}(|i - I_p|)}$$

where $i$ ranges over the gray levels.
In this implementation the filter is iterated three times to remove the band artifacts that remain after filtering near edges with strong intensity gradients. The complexity of the algorithm is of the order of $O(|S| \log \sigma_s)$. It can handle filter kernels of any size in a short time, but has the drawback of processing color images independently for each channel. It can exploit the vector instructions of the CPU, so the independent operations can be executed concurrently with full acceleration. Interested readers should refer to the paper weiss for a detailed explanation of the algorithm.
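The histogram form is easy to sketch for integer gray levels: with a box spatial kernel, the output at a pixel depends only on its local histogram and center intensity. The sketch below recomputes each histogram from scratch; Weiss's contribution weiss is the fast sliding update of these histograms, which this sketch does not reproduce.

```python
import numpy as np

def histogram_bilateral(img, radius=3, sigma_r=10.0, n_bins=256):
    """Bilateral filter with a box spatial kernel, evaluated from the
    local histogram of integer gray levels 0..n_bins-1."""
    img = np.asarray(img, dtype=int)
    H, W = img.shape
    levels = np.arange(n_bins)
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            patch = img[max(0, i - radius):i + radius + 1,
                        max(0, j - radius):j + radius + 1]
            hist = np.bincount(patch.ravel(), minlength=n_bins)  # local H_p
            w = hist * np.exp(-(levels - img[i, j]) ** 2 / (2 * sigma_r ** 2))
            out[i, j] = np.sum(w * levels) / np.sum(w)
    return out
```

Note that the inner sum runs over gray levels rather than pixels, which is why the cost becomes independent of the kernel area once the histograms are maintained incrementally.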
3.7 Constant time bilateral filter implementation using polynomial range kernels.
Later, in 2008, Porikli porikli showed that bilateral filters with Gaussian range and arbitrary spatial kernels can be realized by Taylor series as linear filter decompositions, with a noticeably good filter response. Considering arbitrary spatial filters, a polynomial range kernel is defined as:
where $n$ is the order of the polynomial. Considering a bilateral filter of the form
where the two kernels denote the range and domain components respectively, together with the normalising term. Now, substituting the polynomial kernel of Eqn 25 into Eqn 26, the polynomial range arbitrary spatial filter is obtained as
The corresponding terms are linear filter responses of the image and its powers. Substituting these values into Eqn. 27, it can be rewritten as
where the coefficients collect the filtered powers of the image. The normalization term has the same structure, and the spatial filter is not constrained here, so any desired filter can be chosen. Eqn. 25 can then be written from Eqn. 26 as
So it is clear from equations 28 and 29 that the bilateral filter has been achieved with polynomial range terms, expressed through the spatial filter, without approximation. The author also presented one more variant, namely the Gaussian range, realizing the bilateral filter by exploiting the property that Gaussian filters are easily differentiable and can be expressed in terms of linear transforms. The range component is given by
The equation can be rewritten as
Thereby, applying a Taylor expansion to Eqn. 31, the bilateral filter expansions up to second and third order derivatives can be obtained as shown:
Additionally, the normalizing terms are similar to some extent, containing the same terms. So the bilateral filter has been realized as a weighted sum of the spatially filtered responses of the powers of the original image. The complexity does not scale with the shape or size of the filter, and such methods are therefore referred to as constant-time, $O(1)$, algorithms.
It is not a good idea to implement this algorithm in hardware, as developing the power series would increase the complexity: exponential operations are not feasible to execute in hardware.
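The mechanics of the decomposition can be checked numerically in 1-D. With the order-2 polynomial range kernel 1 - (a-b)^2 (and the signal scaled into [0, 0.5] so the kernel stays positive), expanding the square turns both the numerator and the denominator into four linear convolutions of powers of the signal; the code below verifies this against a direct nested-loop evaluation. The box-like kernel w is an arbitrary choice of ours.

```python
import numpy as np

def poly_bilateral_1d(f, w):
    """Polynomial-range filter via linear convolutions of powers of f.

    Range kernel: 1 - (f[q]-f[p])^2 = 1 - f[q]^2 + 2 f[p] f[q] - f[p]^2.
    """
    conv = lambda x: np.convolve(x, w, mode='same')
    c0, c1 = conv(np.ones_like(f)), conv(f)
    c2, c3 = conv(f ** 2), conv(f ** 3)
    num = c1 - c3 + 2 * f * c2 - f ** 2 * c1
    den = c0 - c2 + 2 * f * c1 - f ** 2 * c0
    return num / den

def direct_1d(f, w):
    """Direct nested-loop evaluation of the same filter, for checking."""
    r = len(w) // 2
    out = np.empty(len(f))
    for p in range(len(f)):
        n = d = 0.0
        for k in range(-r, r + 1):
            q = p + k
            if 0 <= q < len(f):
                wt = w[r + k] * (1.0 - (f[q] - f[p]) ** 2)
                n += wt * f[q]
                d += wt
        out[p] = n / d
    return out
```

This is the essence of the constant-time property: however large the spatial kernel, only a fixed number of linear convolutions is required.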
3.8 Constant time bilateral filter implementation using trigonometric range kernels.
Here the authors Kunal et al. kunalda have used a class of trigonometric kernels which turn out to be sufficiently rich to approximate the standard Gaussian bilateral filter. This is done by generalizing the idea of using polynomial kernels porikli . It has been observed that, for a fixed number of terms, the quality of approximation achieved using trigonometric kernels is much superior to that obtained using polynomials. This was done by approximating the Gaussian range kernel of the bilateral filter using raised cosines. Trigonometric functions share a common property with polynomials which allows one to linearize the otherwise non-linear bilateral filter. Recalling the original equation of the bilateral filter in Eqn 16, the nonlinearity of the bilateral filter arises mainly from the range kernel component: if the range part were constant, replaced by a unity value, the filter could be implemented in constant time, irrespective of the shape or size of the filter. This is possible if the range component can be written in the form of a cosine term, say

$$\cos\big(\gamma (I_q - I_p)\big)$$

where $\gamma = \pi/(2T)$ and $T$ is the dynamic range of a grayscale image, equal to 255. The authors have also shown that this idea extends to more general trigonometric kernels; interested readers can view the details. Now, in order to penalize large intensity differences more than small ones, the family of cosines should be symmetric, non-negative and monotonic. These properties are all enjoyed by the family of raised cosines of the form

$$\left[\cos\left(\frac{\gamma s}{N}\right)\right]^{N}$$
As the raised power $N$ in Eqn. 35 gets large, the raised cosine quickly gains a Gaussian-like behavioral curve over half the period, and by suitably scaling the Gaussian kernel it approximates the Gaussian range kernel with a faster rate of convergence than the Taylor polynomials of porikli . This approach greatly reduces the computational complexity, making it independent of the kernel size, and can also be implemented in parallel, further accelerating its speed while producing artifact-free results. We refer to the article kunalda for the details of the algorithm. We have further improved its speed of execution by realizing the filter on an FPGA indicon : an FPGA-based implementation of an edge preserving fast bilateral filter in a hardware-software co-design environment, preserving boundaries, spikes and canyons in the presence of noise. The trigonometric terms (the sines and cosines of equations 34 and 35) required by the trigonometric range kernel are implemented with a customized CORDIC processor for robust angle computation, designed to compute sine and cosine values even for angles larger than its inherent range by mapping them into it. A four-stage parallel pipelined architecture further improves the speed of operation. Moreover, our separable kernel implementation of the filtering hardware increases the speed of execution by almost five times compared with traditional convolution filtering, while utilizing fewer hardware resources indicon . The authors of kunalda have shown that exploiting the central property of trigonometric functions, namely shiftability, helps in reducing the redundant computations inherent in the filtering operations.
It has been shown that, using shiftable kernels, certain complex filtering operations reduce to computing the moving sum of a stack of images, where each image in the stack is obtained through an elementary pointwise transform of the input image. This brings two significant advantages: first, fast recursive algorithms can be used for computing the moving sum and, second, parallel computation can be used to further speed up the computation kunalda ; indicon .
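The shiftable-kernel idea can be sketched in a few lines of NumPy. The binomial expansion of a raised-cosine range kernel $[\cos(\gamma s)]^{N}$ turns the bilateral filter into $N+1$ pairs of Gaussian convolutions of pointwise-transformed images, so the cost is independent of the range parameter. The parameterization $\gamma = 1/(\sigma_r\sqrt{N})$ and all names below are our choices for illustration, not the paper's exact implementation.

```python
import numpy as np
from math import comb
from scipy.ndimage import gaussian_filter

def shiftable_bilateral(img, sigma_s, sigma_r, N=16):
    # Raised-cosine approximation of the Gaussian range kernel:
    #   exp(-s^2 / (2 sigma_r^2)) ~ cos(gamma * s)^N,  gamma = 1/(sigma_r*sqrt(N)),
    # valid while gamma * T <= pi/2 (T = dynamic range), so the kernel stays
    # non-negative and monotonic over the half period.
    f = np.asarray(img, dtype=np.float64)
    T = np.abs(f).max()
    gamma = 1.0 / (sigma_r * np.sqrt(N))
    assert gamma * T <= np.pi / 2, "increase N for this dynamic range"
    num = np.zeros_like(f)
    den = np.zeros_like(f)
    for n in range(N + 1):
        c = comb(N, n) / 2.0 ** N
        omega = gamma * (2 * n - N)
        cn, sn = np.cos(omega * f), np.sin(omega * f)
        # each expansion term needs only Gaussian blurs of pointwise images
        num += c * (cn * gaussian_filter(cn * f, sigma_s) +
                    sn * gaussian_filter(sn * f, sigma_s))
        den += c * (cn * gaussian_filter(cn, sigma_s) +
                    sn * gaussian_filter(sn, sigma_s))
    return num / np.maximum(den, 1e-12)
```

The spatial blurs can be replaced by recursive moving sums (box filters) for constant-time operation, which is what makes the hardware pipeline above attractive.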
Some benchmark implementation outcomes are presented in Figures 15 and 16 respectively. From Fig. 15 we see that Tomasi's method produces a good denoised output but consumes a large amount of time. The outcome of Chen's method in (d) takes the least time of all, but at the cost of slightly degraded image quality. (e) shows better results with respect to both quality and time of execution. Also, the quality of approximation achieved using trigonometric kernels is much better than that obtained using polynomials for a fixed number of terms. Depending on the availability of resources, the bilateral grid approach of Chen chen performs well for high-resolution real-time videos on graphics hardware. The local histogram approach of Weiss weiss is better suited for filter kernels of arbitrary size. Finally, as discussed above, the complexity was first reduced using Taylor-series polynomials and then further with the more advanced trigonometric range kernel. Table 1 summarizes the complexity order of each of the previously discussed techniques.
|Approach|Section|
|Brute force approach|3.1|
|Layered approach|3.3|
|Bilateral grid (3D kernel) approach|3.4|
|Separable filter kernel approach|3.5|
|Local histogram approach|3.6|
|Constant-time polynomial range approach|3.7|
|Constant-time trigonometric range approach|3.8|
4 Applications of edge preserving filters
This section discusses the uses of the edge preserving filters (anisotropic and bilateral) for a wide range of applications.
4.1 Contrast enhancement
Nonlinear isotropic diffusion filters have problems eliminating noise at edges. Anisotropic diffusion solves this problem, as it considers both the modulus of the edge detector and its direction. The authors stuttgart constructed an orthonormal system of eigenvectors $v_1 \parallel \nabla u_\sigma$, $v_2 \perp \nabla u_\sigma$ of the diffusion tensor $D$. Since the image should be smoothed along the edge rather than across it, Weickert Weickert chose the two eigenvalues $\lambda_1$ and $\lambda_2$ such that
$$\lambda_1 = g\!\left(|\nabla u_\sigma|^2\right), \qquad \lambda_2 = 1,$$
where $g$ is a decreasing diffusivity function. The goal of the filter decides the choice of the eigenvalues $\lambda_1$ and $\lambda_2$: smoothing within each region with inhibition of diffusion across edges is obtained by reducing the diffusivity perpendicular to edges, while increasing it enhances the contrast. A Gaussian-like edge profile illustrates this behavior under temporal evolution: the edge location remains stable for some time until the process of contrast enhancement concludes, after which the steepness of the edge decreases and the image converges towards a constant image.
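As a minimal illustration of how a diffusivity that decays with gradient magnitude inhibits smoothing across edges, the sketch below implements one explicit step of the isotropic nonlinear (Perona–Malik) scheme; Weickert's tensor-valued filter additionally steers the smoothing direction, which this simplified version does not. Step size and contrast parameter are arbitrary choices of ours.

```python
import numpy as np

def perona_malik_step(u, dt=0.2, lam=10.0):
    """One explicit Perona-Malik step with diffusivity g(s) = 1/(1 + (s/lam)^2)."""
    u = u.astype(np.float64)
    # one-sided differences towards the four neighbours
    # (np.roll gives periodic borders, adequate for illustration)
    dn = np.roll(u, -1, axis=0) - u
    ds = np.roll(u, 1, axis=0) - u
    de = np.roll(u, -1, axis=1) - u
    dw = np.roll(u, 1, axis=1) - u
    g = lambda d: 1.0 / (1.0 + (d / lam) ** 2)
    # flux is large in flat regions (g ~ 1) and suppressed across strong edges
    return u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
```

With dt below 0.25 the explicit scheme is stable; iterating the step smooths homogeneous regions while leaving strong edges largely untouched.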
4.2 Coherence enhancement
It begins with the desire to process one-dimensional, line-like features stuttgart . Cottet et al. cottet proposed a diffusion tensor with the eigenvectors of Equation 36 and eigenvalues chosen such that the diffusion process diffuses only perpendicular to $\nabla u$. As $\nabla u \to 0$, $v_1$ becomes an eigenvector of the tensor with its corresponding eigenvalue diminishing to 0, halting the process completely. By substituting $\nabla u$ with a more robust local descriptor one can enhance parallel, line-like textures, leading to coherence-enhancing anisotropic diffusion. Figure 18 shows the results of the AD filtering applied on a fingerprint image.
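The robust local descriptor commonly used in coherence-enhancing diffusion is the smoothed structure tensor. The sketch below (our naming and parameter choices, offered as an assumption-laden illustration) shows how its eigen-structure yields a local orientation and a coherence measure:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def structure_tensor_orientation(img, sigma=2.0):
    # Smoothed structure tensor J = G_sigma * (grad u grad u^T); its dominant
    # eigenvector gives the local orientation used to steer the diffusion.
    ix = sobel(img.astype(np.float64), axis=1)
    iy = sobel(img.astype(np.float64), axis=0)
    jxx = gaussian_filter(ix * ix, sigma)
    jxy = gaussian_filter(ix * iy, sigma)
    jyy = gaussian_filter(iy * iy, sigma)
    # closed-form eigen-analysis of the 2x2 symmetric tensor
    theta = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)   # gradient orientation
    coherence = (np.sqrt((jxx - jyy) ** 2 + 4.0 * jxy ** 2) /
                 np.maximum(jxx + jyy, 1e-12))       # 1 = strongly oriented
    return theta, coherence
```

On a fingerprint-like striped pattern the coherence measure is close to 1, which is exactly where the anisotropic smoothing along the ridges should be strongest.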
4.3 Denoising
Both anisotropic and bilateral filtering are used for denoising purposes. In this section we focus on the use of the bilateral filter to remove noise from digital images.
There are several sources of noise corrupting a digital image link : dark-current noise due to thermally generated electrons at the sensor site, shot noise arising from quantum uncertainty in photo-electron generation, and amplifier and quantization noise occurring during the conversion of the number of generated electrons to pixel intensities link . The basic purpose of the bilateral filter is image denoising, with a wide range of applications such as image restoration, medical imaging and object tracking, to name a few. Bilateral filtering is very simple, easy to understand, and its recent implementations are very fast. It preserves object contours (edge preserving) and generates sharp results. It smooths homogeneous intensity regions while respecting edges, thereby reducing the noise injected into images; however, the edge preservation is sometimes inaccurate and introduces an undesirable staircase effect and cartoon-like features. It is undoubtedly a good solution, but not for all kinds of applications. The application areas where it has been applied are:
4.3.1 Normal denoising
Bilateral filtering is widely used for denoising purposes. We validate this via experimental results, as shown in Figures 19 and 20, with the parameters as indicated. Figure 19 shows the original image together with its noisy version after adding white Gaussian noise; the right-hand side contains the filtered version with zoomed insets around it to provide a clearer view. Figure 20 shows the filtered versions produced by various benchmark methods: the classical implementation, the 3D grid implementation and the trigonometric kernel implementation kunalda . The corresponding denoised images are also shown and are used as demanded by the application.
The experiment has been run 10 times in software and the resulting mean squared errors (MSE) averaged over the 10 runs; this averaged MSE is used to calculate the PSNR of the software implementation.
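The averaging protocol above can be reproduced as follows (peak value 255 assumed for 8-bit images; function names are ours):

```python
import numpy as np

def mse(ref, test):
    """Mean squared error between a reference and a test image."""
    ref = ref.astype(np.float64)
    test = test.astype(np.float64)
    return np.mean((ref - test) ** 2)

def psnr_from_mse(avg_mse, peak=255.0):
    # PSNR (dB) = 10 * log10(peak^2 / MSE)
    return 10.0 * np.log10(peak ** 2 / avg_mse)

def averaged_psnr(ref, denoised_runs, peak=255.0):
    # average the MSE over the repeated runs, then convert once to PSNR
    avg = np.mean([mse(ref, r) for r in denoised_runs])
    return psnr_from_mse(avg, peak)
```

Averaging the MSE before the logarithm (rather than averaging per-run PSNRs) keeps the statistic linear in the squared error, which is the convention implied by the text.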
4.3.2 Video implementation
T. Pham pham , J. Chen chen and Bennett et al. bennett show that the bilateral filter can be applied to real-time video. T. Pham has shown that better image quality and compression efficiency can be achieved if the original video is preprocessed with the separable filter kernel, computed at a fraction of the execution time of the traditional filter. Chen et al. have shown the use of a new data structure, the bilateral grid, already discussed in section 3.4. Using it, edge-aware image manipulations such as local tone mapping on high-resolution images can be done at real-time frame rates by parallelising the algorithm on modern GPUs.
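The separable approximation replaces the 2-D kernel with two 1-D bilateral passes (rows, then columns), cutting the per-pixel cost from quadratic to linear in the window radius. A compact sketch under our own naming (np.roll gives periodic borders, adequate for illustration):

```python
import numpy as np

def bilateral_1d(img, sigma_s, sigma_r, axis):
    """One-dimensional bilateral pass along the given axis."""
    r = int(2 * sigma_s)
    num = np.zeros_like(img, dtype=np.float64)
    den = np.zeros_like(img, dtype=np.float64)
    for k in range(-r, r + 1):
        shifted = np.roll(img, k, axis=axis)
        w = (np.exp(-k ** 2 / (2.0 * sigma_s ** 2)) *
             np.exp(-(shifted - img) ** 2 / (2.0 * sigma_r ** 2)))
        num += w * shifted
        den += w
    return num / den

def separable_bilateral(img, sigma_s, sigma_r):
    # row pass followed by column pass approximates the full 2-D filter
    tmp = bilateral_1d(img.astype(np.float64), sigma_s, sigma_r, axis=1)
    return bilateral_1d(tmp, sigma_s, sigma_r, axis=0)
```

The approximation is not exact near diagonal edges, but for video preprocessing the quality loss is usually acceptable given the large speedup.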
4.3.3 Medical Image Processing
In medical image processing Wong et al. wong and D. Bhonsle et al. ijigsp remove the noise corrupting images, mainly white Gaussian noise, although the approach does not work well with salt-and-pepper noise. Wong improved the structure preservation abilities of the bilateral filter with an additional weight component depending on the local shape and orientation, known as trilateral filtering.
4.3.4 Orientation smoothing
D. Chen et al. lenkaji first introduced the concept of the orientation bilateral filter, which combines the concept of the average (arithmetic mean) orientation and the bilateral filter to maintain microstructural features and to smooth orientation noise. This requires a fast computation of the disorientation using the quaternion (Hamiltonian) representation:
where the quaternion encodes the rotation axis and the angle of rotation about that axis, which together complete the orientation specification. The average orientation based on the orientation filter is given by the weighted function:
where the weight is the product of two components, the domain part and the range part, obtained by the methods described in section 2.3.1 of this paper. The intensity difference of the range component here denotes the misorientation between the sampling point and the center point. The smoothing process is continued as per Equation 42 and the output is shown in Figure 23.
Through this the authors have tried to improve the angular precision of orientation maps for deposited and deformed structures of pure copper obtained from electron backscattered diffraction (EBSD) measurements used in the quantitative characterization of crystallographic microstructures.
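Ignoring the crystal-symmetry operators that EBSD processing would additionally apply, the disorientation angle between two orientations stored as unit quaternions reduces to a dot product. The helper below is a hypothetical sketch of that computation:

```python
import numpy as np

def misorientation_deg(q1, q2):
    """Rotation angle (degrees) between unit quaternions q = (w, x, y, z)."""
    d = abs(float(np.dot(q1, q2)))                 # |cos(theta / 2)|
    return np.degrees(2.0 * np.arccos(np.clip(d, 0.0, 1.0)))
```

The absolute value accounts for the double cover of rotations by quaternions (q and -q represent the same orientation); this angle plays the role of the "intensity difference" in the range kernel of the orientation bilateral filter.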
Paris et al. also applied the same filter to smooth the 2D orientation field obtained from optical measurements for hairstyle modeling parishair , as shown in Figure 24.
4.4 Contrast management
Bilateral filtering can be applied to contrast management applications such as grouping large- and small-scale components of an image (detail enhancement or reduction), texture and illumination separation, tone mapping and management, and generation of high dynamic range imagery elad .
4.4.1 Decomposition for detail enhancement and Image fusion
For detail enhancement Fattal et al. fattal used a new image-based technique for enhancing the shape and surface details of an object. They computed a multiscale decomposition based on the bilateral filter for a set of photographs taken from a fixed viewpoint but under varying lighting conditions. The purpose of a basic multiscale bilateral decomposition is to build a series of filtered images $I^{j}$ that preserve the strongest edges of the input $I$ while smoothing small changes in intensity. At the finest scale $j = 0$, $I^{0} = I$ is set, and then the bilateral filter is applied iteratively to compute:
where $p$ is the pixel coordinate, $\sigma_s$ and $\sigma_r$ are the widths of the spatial and range Gaussians respectively, and $\omega$ is an offset relative to $p$ that runs across the support of the spatial Gaussian. Finally, an enhanced image combining detail information at each scale across all the input images is reconstructed. The method is simple to implement, fast and accurate; results are shown in Figure 25. It is worth noting that the bilateral filter is a good choice for multiscale decomposition as it avoids the halo artifacts commonly associated with the classical Laplacian image pyramid.
Image fusion is again categorized into two applications namely (a) flash photography and (b) fusion of multispectral images.
Flash photography: Visually compelling images can be generated in low-light conditions by combining flash and no-flash photographs. Elmar et al. elmar used the bilateral filter to decompose each image into detail and large-scale layers: the finer textures (detail features) extracted from the flash photograph are combined with the large-scale components of the no-flash image.
Multispectral fusion: Here both data from the infrared spectrum and standard RGB data are used to denoise video frames in low-light conditions, as per Bennett et al. bennett . A modified bilateral filter (the dual bilateral filter) is applied in this setting, where the range weight is constituted by a combination of both the visible spectrum and the infrared spectrum, given by:
where the visible term is a 3-vector representing the RGB components at a pixel, and the infrared term is the intensity of the infrared spectrum at the same pixel.
In the flash/no-flash expression, the two images denote the flash and no-flash pictures, together with the corresponding domain and range kernels. Here the edge information, although corrupted in the no-flash image, is preserved.
4.4.2 Large and small scale component separation
It has been observed that for a large standard deviation of the range kernel, the bilateral filter eliminates the fine texture variations of reflectance while maintaining the large discontinuities of illumination changes oh . The crux of the idea is that illumination changes occur at a larger scale than texture patterns.
4.4.3 Tone Mapping and management
Durand et al. dynamic demonstrated that the bilateral filter's ability to isolate small-scale signal variations can be utilized to design a tone mapping process that maps the intensities of a high dynamic range image to a low dynamic range display. They overcame the limitations of classical approaches like gamma reduction and uniform scaling, which wipe out the finer details and texture depth of displays. Here the bilateral filter is applied to the log intensities of the high dynamic range image, the result is scaled down uniformly, and the filtered residual is added back, resulting in a more visually pleasing display dynamic . The texture of the image is defined by:
where the texture can be varied by adjusting the small-scale component visible in the image. For contrast management purposes the bilateral filter is usually applied to the log of the source image, as the response of the human visual system is multiplicative; the logarithm also helps the standard deviation of the range component act uniformly across different intensity levels. Alternatively, a cube root, as used in the luminance channel of the CIE-Lab color space, also handles the multiplicative nature of the process while remaining computationally simple.
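The base/detail compression described above can be sketched as follows. The Gaussian blur stands in for the bilateral filter (which the original method uses precisely to avoid halos at strong edges), and the two-decade contrast target is an arbitrary choice of ours:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def tone_map(lum, target_range_log10=2.0, sigma=8.0, eps=1e-6):
    """Compress HDR luminance: scale the large-scale (base) layer in log10
    domain down to `target_range_log10` decades, keep the detail layer."""
    log_l = np.log10(lum + eps)
    base = gaussian_filter(log_l, sigma)     # large-scale component (stand-in
                                             # for an edge-preserving filter)
    detail = log_l - base                    # small-scale texture, kept as-is
    span = max(base.max() - base.min(), eps)
    compressed = (base - base.max()) * (target_range_log10 / span)
    return 10.0 ** (compressed + detail)     # max of the base maps to 1.0
```

Because only the base layer is rescaled, local contrast (the detail layer) survives the compression, which is what distinguishes this approach from plain gamma reduction.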
4.5 Depth Reconstruction
The bilateral filter also helps in depth reconstruction, i.e. recovery of per-pixel depth values from corresponding pixel pairs of different views. Q. Yang et al. yang75 constructed a metric called the cost volume, a 3D volume of depth probabilities, provided by the probabilistic distribution of depth of the input range map. They then applied the bilateral filter iteratively on the cost volume to generate visually compelling high-resolution range images with accurate depth estimates, a very effective yet simple formulation. Yang yang76 again in 2009 showed the usage of the bilateral filter in stereo reconstruction. Among several strategies for depth reconstruction, bilateral aggregation turned out to be the most successful one.
4.6 Feature preserving mesh smoothing
Thouis et al. thouis showed a non-iterative and robust mesh smoothing technique to remove noise while preserving features, using a robust statistical approach and local first-order predictors on the surface. The predictors are based on the tangent planes naturally defined by the triangles of the mesh; the reader should refer to Figure 28 for a clear understanding. A new predicted point is generated by the predictor from a nearby spatial triangle. The spatial weight depends on the distance between the point and the centroid of the triangle, while the range component depends on the distance between the prediction and the original source point. The prediction on the surface is therefore given by:
where the normalizing factor is :
and the area of the triangles is used as a weight to account for variations in the sampling rate of the surface.
In Figure 28 the normals are more prone to noise than the vertices, as they are first-order properties of the mesh, as shown in Figure 28b. However, the estimate can be improved with mollification huber ; murio , achieved by smoothing the normals without changing the positions of the vertices, so that features such as corners, which might otherwise be improperly smoothed, are preserved. The results are shown in Figure 29.
5 Connection among various edge preserving filters
A number of edge preserving smoothing filters have contributed to graphics and image processing; among them we find the bilateral filter and anisotropic diffusion to be the most dominant. Below we show their relationships with one another and with other methods, as they give similar results.
The bilateral filter acts as a bridge between anisotropic diffusion filtering and other filtering methods, as shown pictorially in Figure 30.
5.1 Yaroslavsky filter with bilateral filter
Let us refer to the general degradation model
$$v(i) = u(i) + n(i),$$
where $v(i)$ is the noisy observed image, $u(i)$ is the pure ideal image and $n(i)$ is the additive noise at pixel $i$. The Yaroslavsky filter averages pixels with a similar grey level value belonging to a fixed spatial neighborhood $B_\rho(x)$:
$$YF_{h}u(x) = \frac{1}{C(x)}\int_{B_\rho(x)} u(y)\, e^{-\frac{|u(y)-u(x)|^{2}}{h^{2}}}\, dy,$$
where $C(x)$ is a normalizing factor and $h$ is a filtering parameter. Later, in 1995 the SUSAN filter smith and in 1998 the bilateral filter tomasi were introduced, which, instead of considering a fixed spatial neighborhood $B_\rho(x)$, weigh the spatial distance to the reference pixel $x$:
$$BF_{h}u(x) = \frac{1}{C(x)}\int u(y)\, e^{-\frac{|y-x|^{2}}{\rho^{2}}}\, e^{-\frac{|u(y)-u(x)|^{2}}{h^{2}}}\, dy.$$
As per Buades in 2005 baudes there is essentially no difference between the bilateral filter (BF) and the Yaroslavsky filter (YF). The bilateral filter with a square box window is simply an approximation of the Yaroslavsky filter, restricting the integral to a spatial neighborhood independent of the position $x$. In other words, the Yaroslavsky filter is a special case of the bilateral filter with a step function as the spatial weight; as a result the spatial Gaussian term of Equation 50 does not appear in the Yaroslavsky filter of Equation 49. If the grey level intensity difference is larger than $h$, both filters compute averages of pixels belonging to the same region as the reference pixel, and thereby behave as edge preserving smoothing filters.
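The relation is easy to see in code: both filters share the same range weight and differ only in the spatial weight (box versus Gaussian). A small sketch under our own naming:

```python
import numpy as np

def neighborhood_filter(img, h, radius, spatial='box', sigma_s=None):
    # spatial='box'   -> Yaroslavsky filter (step-function spatial weight)
    # spatial='gauss' -> bilateral filter   (Gaussian spatial weight)
    pad = np.pad(img, radius, mode='edge')
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    if spatial == 'box':
        sw = np.ones(ys.shape)
    else:
        sw = np.exp(-(ys ** 2 + xs ** 2) / (2.0 * sigma_s ** 2))
    out = np.empty_like(img, dtype=np.float64)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            w = sw * np.exp(-(patch - img[i, j]) ** 2 / h ** 2)
            out[i, j] = (w * patch).sum() / w.sum()
    return out
```

Setting `spatial='box'` reproduces the step-function spatial weight of the Yaroslavsky filter; swapping in the Gaussian recovers the bilateral filter, making the "special case" relationship concrete.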
5.2 Relation of bilateral filter with anisotropic diffusion filter
There exists a close link between anisotropic diffusion filtering and the bilateral filter; here we focus on the discrete domain computation.
It has been observed by D. Barash barash that adaptive smoothing serves as a link between anisotropic diffusion and bilateral filtering.
Anisotropic diffusion and adaptive filtering:
Given an image $u^{(t)}(x,y)$, where $(x,y)$ are the spatial coordinates, iteration $t+1$ of adaptive smoothing yields a weighted average of $u^{(t)}$ over a small neighborhood, where the convolution mask $w^{(t)}(x,y)$ is defined as
$$w^{(t)}(x,y) = \exp\!\left(-\frac{d^{(t)}(x,y)^{2}}{2\sigma^{2}}\right),$$
with $\sigma^{2}$ the variance of the Gaussian kernel and $d^{(t)}(x,y)$ given by the magnitude of the gradient, computed as
$$d^{(t)}(x,y) = \sqrt{\left(\frac{\partial u^{(t)}}{\partial x}\right)^{2} + \left(\frac{\partial u^{(t)}}{\partial y}\right)^{2}},$$
which bears similarity to the diffusion coefficient of anisotropic diffusion.
Considering a one-dimensional signal $u^{(t)}(x)$ and writing the averaging process (in the simplest, uniform-weight case) as
$$u^{(t+1)}(x) = \frac{u^{(t)}(x-1) + u^{(t)}(x) + u^{(t)}(x+1)}{3},$$
which can also be written as
$$u^{(t+1)}(x) = u^{(t)}(x) + \frac{1}{3}\left(u^{(t)}(x+1) - 2u^{(t)}(x) + u^{(t)}(x-1)\right),$$
and replacing the bracketed term by the discrete Laplacian $\Delta u^{(t)}(x)$, the above equation reduces to
$$u^{(t+1)}(x) = u^{(t)}(x) + \lambda\, \Delta u^{(t)}(x), \qquad \lambda = \tfrac{1}{3},$$
which is a discrete approximation of the linear diffusion equation $u_t = \Delta u$.
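The equivalence of a 3-point moving average and one explicit step of linear diffusion with $\lambda = 1/3$ can be checked numerically (uniform weights assumed, the simplest case of the averaging process above):

```python
import numpy as np

def average_step(u):
    # 3-point moving average of a 1-D signal (replicated end samples)
    up = np.pad(u, 1, mode='edge')
    return (up[:-2] + up[1:-1] + up[2:]) / 3.0

def diffusion_step(u, lam=1.0 / 3.0):
    # explicit discrete linear diffusion: u + lam * (u_{i+1} - 2 u_i + u_{i-1})
    up = np.pad(u, 1, mode='edge')
    return u + lam * (up[:-2] - 2.0 * u + up[2:])
```

Both functions compute the same signal, since $(a + b + c)/3 = b + (a - 2b + c)/3$; making the weights depend on the local gradient, as in adaptive smoothing, is exactly what turns this linear diffusion into its anisotropic counterpart.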