Performing edge detection by difference of Gaussians using q-Gaussian kernels

11/11/2013 ∙ by Lucas Assirati, et al. ∙ Universidade de São Paulo

In image processing, edge detection is a valuable tool for extracting features from an image. This detection reduces the amount of information to be processed, since redundant information (considered less relevant) can be discarded. The technique of edge detection consists of determining the points of a digital image whose intensity changes sharply. These changes are due, for example, to discontinuities in the orientation of a surface. A well-known method of edge detection is the Difference of Gaussians (DoG). The method consists of subtracting two Gaussian kernels, where one kernel has a standard deviation smaller than the other. The convolution between the subtraction of kernels and the input image results in the edge detection of this image. This paper introduces a method of extracting edges using DoG with kernels based on the q-Gaussian probability distribution, derived from the q-statistics proposed by Constantino Tsallis. To demonstrate the method's potential, we compare the introduced method with the traditional DoG using Gaussian kernels. The results show that the proposed method can extract edges with more accurate detail.




I Introduction

Image processing designates any type of signal processing where the input is an image and the output can be another image or a set of features extracted from the input image. Since computer vision involves the identification and subsequent classification of objects in a given image, edge detection is an essential tool in image analysis. When performing edge detection on an image, the amount of information to be processed is reduced, because redundant information (considered less relevant) can be discarded.

Segmentation by edge detection is based on two important concepts: similarity and discontinuity. Thus, the algorithms look for points (or curves and contours) of the digital image where the intensity changes abruptly. This sudden change in intensity may occur for various reasons, for example, orientation discontinuities on a surface and changes in brightness and illumination in a scene. Applications of edge detection are found in various fields of science: medicine Gudmundsson et al. (1998), engineering and satellite imagery Augusto et al. (1984), robotics and machine vision Jain et al. (1995).

There are several methods for edge detection, such as Canny, Sobel and Prewitt, as well as methods based on Gaussian masks (kernels), such as the Laplacian of Gaussian (LoG) and the Difference of Gaussians (DoG) Gonzalez and Woods (2011). The DoG method generally uses classical Gaussians in its approach, but this work suggests the use of the q-Gaussian for the composition of the mask that will be applied to the image to extract its edges. The q-Gaussian probability distribution comes from the q-algebra introduced by Tsallis.

The q-algebra is derived from Tsallis's definition of nonextensive entropy. There are works in the literature that successfully applied the Tsallis q-entropy to image processing de Albuquerque et al. (2004); Sathya and Kayalvizhi (2010); Kilic and Kayacan (2012) and image analysis Fabbri et al. (2013, 2012). q-Gaussian kernels were previously used for noise reduction Soares and Murta (2013). In this work, we propose composing the DoG filter using q-Gaussian kernels. The potential of the proposed method is demonstrated by comparison with the traditional DoG. One can notice that it is able to perform edge extraction with one more parameter (q), which can make the DoG approach more flexible.

Figure 1: (a) LoG function; (b) Difference of Gaussians vs. Laplacian of Gaussian in 1D.

II Laplacian of Gaussian vs Difference of Gaussians

Consider the one-dimensional Gaussian distribution:

f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}},

with x \in (-\infty, \infty), where \mu is the mean, \sigma is the standard deviation and \sigma^2 is the variance.

If we take the second derivative of the one-dimensional Gaussian function considering \mu = 0, we obtain (up to sign and normalization) the Ricker wavelet: \psi(x) = \frac{2}{\sqrt{3\sigma}\,\pi^{1/4}}\left(1 - \frac{x^2}{\sigma^2}\right) e^{-\frac{x^2}{2\sigma^2}}.
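
As a quick numerical sanity check (an illustration, not part of the paper's pipeline), the negative second derivative of a unit-width Gaussian can be compared against the analytic Ricker shape:

```python
import numpy as np

# Unnormalized Gaussian with sigma = 1 on a fine grid.
sigma = 1.0
x = np.linspace(-5.0, 5.0, 2001)
h = x[1] - x[0]
g = np.exp(-x**2 / (2.0 * sigma**2))

# Numeric second derivative via two applications of np.gradient.
d2g = np.gradient(np.gradient(g, h), h)

# Analytic shape of -g'': the (unnormalized) Ricker / "Mexican hat" wavelet.
ricker_shape = (1.0 - x**2 / sigma**2) * g / sigma**2
```

The two curves agree to within finite-difference error, confirming that the Ricker wavelet is, up to a constant factor, the negative second derivative of the Gaussian.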

In two dimensions, the Gaussian distribution becomes:

G(x, y) = \frac{1}{2\pi\sigma^2}\, e^{-\frac{x^2 + y^2}{2\sigma^2}}.

The Laplacian of Gaussian (LoG) is a multidimensional generalization of the Ricker wavelet. To obtain it, we take the two-dimensional Laplacian of the Gaussian distribution:

\nabla^2 G(x, y) = -\frac{1}{\pi\sigma^4}\left(1 - \frac{x^2 + y^2}{2\sigma^2}\right) e^{-\frac{x^2 + y^2}{2\sigma^2}}.

However, in practice the Laplacian of Gaussian (LoG), Figure 1, is approximated by the Difference of Gaussians (DoG) function, since this reduces the computational cost in two or more dimensions. The DoG is obtained by subtracting two Gaussian kernels, where one kernel has a standard deviation slightly smaller than the other:

DoG(x, y) = G_{\sigma_1}(x, y) - G_{\sigma_2}(x, y), \qquad \sigma_1 < \sigma_2.

Figure 1 compares the LoG function with the DoG function obtained from kernels with slightly different standard deviations. The DoG method has a lower computational cost, which justifies its use in this study. The convolution of the DoG filter with the input image generates the edge detection of this image.
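
A minimal sketch of this classical DoG filtering in Python, using SciPy's `gaussian_filter` (the default σ values and the 1.6 ratio below are illustrative choices, not the paper's):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_edges(image, sigma1=1.0, sigma2=1.6):
    """Difference of Gaussians response: blur with two sigmas and subtract.

    sigma1 < sigma2; a ratio near 1.6 is a common choice that makes the
    DoG a close approximation of the LoG. Edges lie on the zero crossings
    of the returned response.
    """
    image = image.astype(float)
    return gaussian_filter(image, sigma1) - gaussian_filter(image, sigma2)
```

On a vertical step edge the response is negative on the dark side and positive on the bright side, so the edge sits on the zero crossing between them.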

III q-Gaussian

Figure 2: q-Gaussian function for different values of q.

In 1988, Tsallis proposed a nonadditive statistical mechanics, known as the q-statistics Tsallis (1988). This theory suggests that different systems require different tools of analysis, appropriate to the particularities of each system. The entropy, applied to information theory by Shannon Shannon (1948), is defined as:

S = -\sum_{i=1}^{N} p_i \log p_i,

where p_i is the occurrence probability of state i and N is the total number of states.

The generalization proposed by Tsallis gives the definition of the q-entropy:

S_q = \frac{1 - \sum_{i=1}^{N} p_i^q}{q - 1},

where p_i is the occurrence probability, N is the total number of states, and q is an adjustable, freely variable parameter. The correct choice of this parameter can evidence important characteristics of the system. When q \to 1, one retrieves the standard entropy.
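
A small sketch of both entropies in Python (natural logarithms; the q → 1 branch is an assumption about how to handle the limit numerically), showing that S_q approaches the Shannon entropy as q → 1:

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy S = -sum p_i log p_i (natural log, zero terms dropped)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def tsallis_entropy(p, q):
    """Tsallis q-entropy S_q = (1 - sum p_i^q) / (q - 1)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isclose(q, 1.0):          # q -> 1 limit recovers Shannon entropy
        return shannon_entropy(p)
    return (1.0 - np.sum(p ** q)) / (q - 1.0)
```

For q = 2 the q-entropy reduces to 1 − Σ p_i², the linearized (Gini-Simpson) form.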

The q-Gaussian probability distribution comes from the maximization of the Tsallis entropy under appropriate constraints Tsallis (2011). Again, when q \to 1, one retrieves the Gaussian distribution. The q-Gaussian is defined as:

f(x) = \frac{\sqrt{\beta}}{C_q}\, e_q(-\beta x^2),

with e_q(x) = \left[1 + (1-q)x\right]_{+}^{1/(1-q)} the q-exponential function and C_q a normalization constant.

Figure 2 shows some of the curves generated by the q-Gaussian equation, compared with the classical Gaussian (q = 1).

It is important to note that all the curves have the same width parameter, but through the generalization proposed by Tsallis we gain a second adjustable parameter, q. Changes in this parameter promote changes in the traditional Gaussian shape, adapting it to the peculiarities of the problem at hand. When q \to -\infty, the one-dimensional q-Gaussian function tends to the Dirac delta function; moreover, its shape tends to a straight line as q approaches the value 3.
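
A sketch of the one-dimensional q-Gaussian via the q-exponential (unnormalized; the constant C_q is omitted here, and the q → 1 branch is handled explicitly):

```python
import numpy as np

def q_exp(x, q):
    """q-exponential e_q(x) = [1 + (1-q)x]_+^(1/(1-q)); plain exp for q -> 1."""
    x = np.asarray(x, dtype=float)
    if np.isclose(q, 1.0):
        return np.exp(x)
    base = np.maximum(1.0 + (1.0 - q) * x, 0.0)
    return base ** (1.0 / (1.0 - q))

def q_gaussian(x, q, beta=1.0):
    """Unnormalized 1D q-Gaussian e_q(-beta * x^2)."""
    return q_exp(-beta * np.asarray(x, dtype=float) ** 2, q)
```

For q < 1 the support is compact (the curve hits zero at finite x), while for q > 1 the tails are heavier than Gaussian; q = 2 gives a Cauchy-like profile 1/(1 + βx²).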

In the same way as the classical Gaussian has a two-dimensional version, we can derive a multidimensional generalization of the q-Gaussian. The bi-dimensional q-Gaussian is defined by f(x, y) \propto e_q\left(-\beta (x^2 + y^2)\right). It is important to note that curves with the same width parameter can have their shape changed by adjusting the q parameter, adapting them to the peculiarities of the problem in which they are employed. Figures 3(a), 3(b) and 3(c) show some representatives of the family of 2D q-Gaussians.

Figure 3: 2D q-Gaussian function for three different parameter settings, panels (a), (b) and (c).

IV Method

This work introduces the use of the DoG method with q-Gaussian kernels as an alternative to the traditional use of Gaussian kernels in edge detection. Following the scheme of the DoG filter, two standard deviations \sigma_1 and \sigma_2 are set, with \sigma_1 smaller than \sigma_2.

Once the filter has the appropriate size, we convolve it with the input image in grayscale. After the convolution, we identify the edges by using the “zero-cross” detector. Figure 4 summarizes the described process.
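
The pipeline just described (build two q-Gaussian kernels, subtract them, convolve with the grayscale image, detect zero crossings) might be sketched as follows; the kernel size, the β = 1/(2σ²) parameterization and the unit-sum normalization are our illustrative assumptions, not prescriptions from the paper:

```python
import numpy as np
from scipy.ndimage import convolve

def q_gaussian_kernel(size, sigma, q):
    """Square 2D q-Gaussian kernel, normalized to unit sum.

    Uses beta = 1/(2*sigma**2) so that q -> 1 recovers the usual Gaussian.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    r2 = (x ** 2 + y ** 2) / (2.0 * sigma ** 2)
    if np.isclose(q, 1.0):
        k = np.exp(-r2)
    else:
        k = np.maximum(1.0 - (1.0 - q) * r2, 0.0) ** (1.0 / (1.0 - q))
    return k / k.sum()

def q_dog_edges(image, sigma1=1.0, sigma2=1.6, q=1.5, size=9):
    """Convolve with the difference of two q-Gaussian kernels and
    mark the zero crossings of the response as edges."""
    dog = q_gaussian_kernel(size, sigma1, q) - q_gaussian_kernel(size, sigma2, q)
    response = convolve(image.astype(float), dog, mode='reflect')
    # Treat near-zero responses as exactly zero to avoid spurious crossings.
    s = np.sign(np.where(np.abs(response) < 1e-9, 0.0, response))
    zc = np.zeros(s.shape, dtype=bool)
    zc[:, :-1] |= s[:, :-1] != s[:, 1:]   # horizontal sign changes
    zc[:-1, :] |= s[:-1, :] != s[1:, :]   # vertical sign changes
    return zc
```

On a simple step image the detector fires at the step and stays silent in the flat regions, since both kernels have unit sum and their difference integrates to zero.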

Figure 4: Algorithm for edge detection using the DoG method with q-Gaussian kernels.

V Results: Gaussian vs q-Gaussian Edge Detection

The results for edge detection using the DoG method with q-Gaussian kernels are richer in detail than those obtained with the DoG method using classical Gaussian kernels, because the q-Gaussian probability distribution has the adjustable parameter q. This parameter allows us to define the degree of detail that we seek in our detection. Figure 5 shows results obtained with q-Gaussian kernels for different values of q.

Figure 5: DoG with different q-Gaussian kernels. Notice that for q = 1, the q-Gaussian is exactly the standard Gaussian.

VI Conclusions

The results presented in this work show that the DoG filter with q-Gaussian kernels is an excellent alternative to the LoG filter and to the DoG filter with classical Gaussian kernels. Compared to the LoG filter, the proposed method has a lower computational cost: constructing the convolution mask from the subtraction of two kernels with slightly different standard deviations \sigma_1 and \sigma_2 is much less costly than taking the Laplacian of the q-Gaussian function (which involves calculating derivatives).

Compared to the DoG filter with kernels based on the normal probability distribution, we note a gain in the detail of the edge detection. That is because, in addition to the parameter \sigma, responsible for more or less blurring (Gaussian blur), we also have the variable entropic index q, responsible for the shape of the q-Gaussian, which is able to capture more details than the traditional approach when both have the same blur.

Whether the entropy is extensive or not depends on the characteristics of the system; thus, it can be extensive for certain values of q. At this point we can apply this concept to our work: by using the q-Gaussian method, the entropic index q allows us to adjust the function used in the filter so as to obtain the details and results that are most relevant.


Lucas Assirati acknowledges a grant from the Coordination for the Improvement of Higher Education Personnel (CAPES). Núbia R. Silva, Lilian Berton and Odemir M. Bruno are grateful to the São Paulo Research Foundation (FAPESP), grant Nos. 2011/21467-9, 2011/21880-3 and 2011/23112-3. Bruno also acknowledges the National Council for Scientific and Technological Development (CNPq), grant Nos. 308449/2010-0 and 473893/2010-0.



  • Gudmundsson et al. (1998) M. Gudmundsson, E. El-Kwae, and M. Kabuka, “Edge detection in medical images using a genetic algorithm,” IEEE Transactions on Medical Imaging, 17, 469–474 (1998).
  • Augusto et al. (1984) G. Augusto, M. Goltz,  and J. Demísio, “Detecção de bordas em imagens aéreas e de satélite com uso de redes neurais artificiais,” , 1044–1045 (1984).
  • Jain et al. (1995) R. Jain, R. Kasturi,  and B. Schunck, Machine vision (McGraw-Hill, 1995).
  • Gonzalez and Woods (2011) R. Gonzalez and R. Woods, Digital Image Processing (Pearson Education, 2011).
  • de Albuquerque et al. (2004) M. P. de Albuquerque, I. A. Esquef, and A. R. G. Mello, “Image thresholding using Tsallis entropy,” Pattern Recognition Letters, 25, 1059–1065 (2004).
  • Sathya and Kayalvizhi (2010) P. D. Sathya and R. Kayalvizhi, “PSO-based Tsallis thresholding selection procedure for image segmentation,” Pattern Recognition Letters, 5, 39–46 (2010).
  • Kilic and Kayacan (2012) I. Kilic and O. Kayacan, “Generalized ICM for image segmentation based on Tsallis statistics,” Physica A: Statistical Mechanics and its Applications, 391, 4899–4908 (2012).
  • Fabbri et al. (2013) R. Fabbri, I. Bastos, F. Neto, F. Lopes, and O. Bruno, “Multi-q pattern classification of polarization curves,” (2013), arXiv:1305.2876 [cs.CE].
  • Fabbri et al. (2012) R. Fabbri, W. N. Gonçalves, F. J. Lopes, and O. M. Bruno, “Multi-q pattern analysis: A case study in image classification,” Physica A: Statistical Mechanics and its Applications, 391, 4487–4496 (2012).
  • Soares and Murta (2013) I. J. A. Soares and L. O. Murta, “Noise reduction using nonadditive q-Gaussian filters in magnetic resonance images,” in Proc. SPIE 8669, Medical Imaging 2013: Image Processing, 86692J (2013).
  • Tsallis (1988) C. Tsallis, “Possible generalization of Boltzmann-Gibbs statistics,” Journal of Statistical Physics, 52, 479–487 (1988).
  • Shannon (1948) C. E. Shannon, “A mathematical theory of communication,” The Bell System Technical Journal, 27, 379–423, 623–656 (1948).
  • Tsallis (2011) C. Tsallis, “The nonadditive entropy sq and its applications in physics and elsewhere: Some remarks,” Entropy, 13, 1765–1804 (2011).