1 Introduction
Extracting edge features from images is a problem that has been tackled in many ways by researchers. The proposed solutions vary from classical operators such as Sobel [1] or Prewitt [2] to more complex algorithms like Canny [3] or Edge Drawing [4]. Edges remain a basic yet important feature in the Computer Vision domain.
Edge Drawing (ED) proposes an edge segment detection algorithm that applies the high-level cognitive reasoning employed in dot-to-dot boundary completion puzzles to edge segment detection. First, we analyze the effect on the original ED algorithm when we use other first-order derivative operators. Afterwards, we propose a scheme for selecting the threshold parameters that is dependent on the image.
For an algorithm such as ED, in which several parameters can be configured (gradient threshold, anchor threshold, scan interval and Gaussian kernel size), we can easily reach over a thousand variants to fine-tune it on a dataset. The fine-tuning phase is dependent on the image set and use case. From experience, the setting of parameters can become much easier, but it still remains a costly process.
Extensions of the ED algorithm exist in the literature where additional steps are included, such as validating edge segments by an "a contrario" step based on the Helmholtz principle [5], or linking edge segments based on predictions generated from past movements [6]. For this research, however, we attempt to maintain the real-time benefit of the original algorithm and adopt a less costly additional step. In this direction, we propose a scheme for choosing the ED thresholds that is not user dependent but contextual to the image. The Otsu threshold is, in our opinion, a good starting point for adapting the necessary thresholds of the algorithm. Automated threshold selection phases have been added to other edge detection algorithms, as we can see in [7, 8, 9].
We consider an automated threshold choosing algorithm an important improvement, considering the dependencies between the data we process and the gradients obtained in the processing phase.
2 Research background
2.1 First-order derivative operators
For our experiments, we use the following discrete differentiation operators: the Sobel operator [1], the Prewitt operator [10], the Kirsch operator [11], the Kitchen operator [12], the Kayyali operator [13], the Scharr operator [14], the Kroon operator [15], and the Orhei operator [16]. All the used kernels are presented in Figure 1.
Figure 1: Kernels of the Sobel, Prewitt, Kirsch, Kitchen, Kayyali, Scharr, Kroon and Orhei operators.
The gradient is a measure of change in a function, and an image can be considered an array of samples of some continuous function of image intensity; the gradient is the two-dimensional equivalent of the first derivative. The gradient magnitude is calculated using Formula 1, where G_x and G_y are the derivative components of the image I on the x and y axes [17].

|G| = √(G_x² + G_y²)  (1)
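As a minimal sketch, the gradient magnitude of Formula 1 can be computed by filtering the image with a pair of derivative kernels (Sobel here; any kernel pair from Figure 1 can be substituted). This is an illustrative NumPy implementation, not the code used in our experiments:

```python
import numpy as np

# Sobel kernels for the horizontal (G_x) and vertical (G_y) derivatives.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def filter2d(image, kernel):
    """Correlate a 2-D image with a 3x3 kernel (edge-replicated borders)."""
    padded = np.pad(image.astype(float), 1, mode="edge")
    out = np.zeros(image.shape, dtype=float)
    for di in range(3):
        for dj in range(3):
            out += kernel[di, dj] * padded[di:di + image.shape[0],
                                           dj:dj + image.shape[1]]
    return out

def gradient_magnitude(image, kx=SOBEL_X, ky=SOBEL_Y):
    """|G| = sqrt(G_x^2 + G_y^2), as in Formula 1."""
    gx = filter2d(image, kx)
    gy = filter2d(image, ky)
    return np.sqrt(gx ** 2 + gy ** 2)
```

The sign convention of the kernels does not matter here, since the magnitude squares both components.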
2.2 Otsu thresholding
Otsu’s thresholding method corresponds to the linear discriminant criterion, which assumes that the image consists of only two objects: foreground and background [18]. Otsu’s method determines the threshold value based on the statistical information of the image, from which the variances of the two clusters, σ₁² and σ₂², can be computed. Otsu’s algorithm tries to find a threshold value t which minimizes the weighted within-class variance given by the relation observed in Equation 2, or, equivalently, maximizes the between-class variance in Equation 3.

σ_w²(t) = ω₁(t)·σ₁²(t) + ω₂(t)·σ₂²(t)  (2)

σ_b²(t) = σ² − σ_w²(t) = ω₁(t)·ω₂(t)·[μ₁(t) − μ₂(t)]²  (3)
Weights ω₁ and ω₂ are the probabilities of the two classes separated by the threshold t, and σ₁² and σ₂² are the variances of these two classes. The class probabilities are computed from the bins of the histogram. For two classes, minimizing the intra-class variance is equivalent to maximizing the inter-class variance of Equation 3, which is expressed in terms of the class probabilities ω_i and class means μ_i, where the class means μ₁(t), μ₂(t) and μ_T are presented in Equations 4 to 6.

μ₁(t) = Σ_{i=1..t} i·p(i) / ω₁(t)  (4)

μ₂(t) = Σ_{i=t+1..L} i·p(i) / ω₂(t)  (5)

μ_T = Σ_{i=1..L} i·p(i)  (6)
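The threshold search described by Equations 2 to 6 can be sketched as follows; this is a straightforward illustrative implementation of Otsu's method, not the one used in our experiments:

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Find the t that maximizes the between-class variance (Eq. 3),
    which is equivalent to minimizing the within-class variance (Eq. 2)."""
    hist, _ = np.histogram(image, bins=bins, range=(0, bins))
    p = hist / hist.sum()          # class probabilities per histogram bin
    levels = np.arange(bins)
    best_t, best_between = 0, -1.0
    for t in range(1, bins):
        w1, w2 = p[:t].sum(), p[t:].sum()
        if w1 == 0 or w2 == 0:     # one class is empty, skip this t
            continue
        mu1 = (levels[:t] * p[:t]).sum() / w1   # Eq. 4
        mu2 = (levels[t:] * p[t:]).sum() / w2   # Eq. 5
        between = w1 * w2 * (mu1 - mu2) ** 2    # Eq. 3, between-class variance
        if between > best_between:
            best_t, best_between = t, between
    return best_t
```

An exhaustive scan over all bins is fast enough for a 256-level histogram; production implementations typically use cumulative sums instead of the inner reductions.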
2.3 Benchmarking the edge operators
For highlighting the results obtained, we use BSDS500 [19], a dataset of natural images that have been manually segmented. The human annotations serve as ground truth for comparing different segmentation and boundary detection algorithms. For evaluating the images generated by the algorithms against the ground truth images, the Corresponding Pixel Metric (CPM) algorithm [20] is used.
For each image, two quantities, Precision (P) and Recall (R), will be computed, as defined in [21]. Precision (Formula 7) represents the probability that a resulting edge/boundary pixel is a true edge/boundary pixel. Recall (Formula 8) represents the probability that a true edge/boundary pixel is detected. In these formulas, TP (True Positive) represents the number of matched edge pixels, FP (False Positive) the number of edge pixels which are incorrectly highlighted, and FN (False Negative) the number of pixels that have not been detected. These two quantities are used to compute the F-measure (F1 score) by applying Formula 9.

P = TP / (TP + FP)  (7)

R = TP / (TP + FN)  (8)

F1 = 2·P·R / (P + R)  (9)
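Formulas 7 to 9 reduce to a few lines once the pixel counts TP, FP and FN are available (the CPM matching step that produces these counts is omitted here):

```python
def boundary_scores(tp, fp, fn):
    """Precision (Eq. 7), Recall (Eq. 8) and F1 (Eq. 9) from pixel counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

The zero guards reproduce the convention visible in Table 1, where an operator that detects nothing (e.g. Kayyali) scores 0.000 on all three metrics.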
3 Proposed Edge Drawing algorithm
The ED edge detection algorithm presents good results for a traditional edge detection concept. But we are concerned with the number of variants we need to test for fine-tuning the parameters for a given dataset or scenario.
As mentioned, ED proposes an edge segment detection algorithm that applies the high-level cognitive reasoning employed in dot-to-dot boundary completion puzzles to edge segment detection [4].
The original ED algorithm consists of the following steps: first, the image is smoothed using a Gaussian filter [17]; afterwards, the horizontal and vertical gradients are calculated using Formula 1; the edge direction map is computed by comparing the gradient components (if |G_x| ≥ |G_y| the edge passes vertically, otherwise horizontally); finally, the so-called "weak" pixels are suppressed and the anchor points are extracted [4].
Anchors are located at the peaks of the gradient map. To connect consecutive anchors, we simply go from one anchor to the next by proceeding over the cordillera peak of the gradient map mountain. This process is guided by the computed gradient magnitude and edge direction maps [4].
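The anchor extraction step can be sketched as below. This is an illustrative simplification of the ED procedure: the function name and the exact neighbour test are our own, not taken from [4], but the idea is the same, i.e. a pixel is an anchor if its gradient exceeds both neighbours perpendicular to the edge direction by at least the anchor threshold.

```python
import numpy as np

def extract_anchors(grad, direction_vertical, grad_thresh, anchor_thresh,
                    scan_interval=1):
    """Mark anchor pixels on a gradient magnitude map.

    direction_vertical[i, j] is True where |G_x| >= |G_y|, i.e. the edge
    passes vertically through the pixel.
    """
    h, w = grad.shape
    anchors = np.zeros((h, w), dtype=bool)
    for i in range(1, h - 1, scan_interval):    # scan every k-th row
        for j in range(1, w - 1):
            if grad[i, j] < grad_thresh:        # "weak" pixel, suppressed
                continue
            if direction_vertical[i, j]:        # vertical edge: left/right
                n1, n2 = grad[i, j - 1], grad[i, j + 1]
            else:                               # horizontal edge: up/down
                n1, n2 = grad[i - 1, j], grad[i + 1, j]
            if (grad[i, j] - n1 >= anchor_thresh and
                    grad[i, j] - n2 >= anchor_thresh):
                anchors[i, j] = True
    return anchors
```

The subsequent smart-routing step, which links anchors along the gradient ridge, is omitted for brevity.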
Otsu is an approach that separates the image into background and foreground, which makes it a natural candidate to consider for choosing our thresholds. In this scenario, we propose that for an image we first find the Otsu threshold and afterwards choose the gradient threshold and anchor threshold according to Equation 10 and Equation 11.
Th_gradient = w_g · Th_Otsu  (10)

Th_anchor = w_a · Th_Otsu  (11)
We consider that the gradient values of the background pixels, which are separated by the Otsu method, follow a normal probability distribution [22]. So we can choose the mean value of the distribution as our gradient threshold. It seems appropriate to choose this value as the gradient threshold because it will eliminate noise caused by small background changes or small features in the image. We consider that anchors should be at the margins of the distribution, as peaks of the gradient map mountains, so we choose the anchor threshold as a percentage of the Otsu threshold. Another modification we propose, which resulted from experiments, is to fix the scan interval to a constant value. From our observations, varying this parameter does not bring actual benefits in this new concept.
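The proposed selection scheme can be sketched as follows. The weights w_gradient and w_anchor below are placeholder values for illustration only, not the weights used in our experiments, and the Otsu search is the same exhaustive scan described in Section 2.2:

```python
import numpy as np

def otsu(image, bins=256):
    """Otsu threshold: maximize the between-class variance (Eq. 3)."""
    hist, _ = np.histogram(image, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    levels = np.arange(bins)
    best_t, best = 0, -1.0
    for t in range(1, bins):
        w1, w2 = p[:t].sum(), p[t:].sum()
        if w1 == 0 or w2 == 0:
            continue
        mu1 = (levels[:t] * p[:t]).sum() / w1
        mu2 = (levels[t:] * p[t:]).sum() / w2
        v = w1 * w2 * (mu1 - mu2) ** 2
        if v > best:
            best_t, best = t, v
    return best_t

def ed_thresholds(image, w_gradient=1.0, w_anchor=0.5):
    """Equations 10 and 11 in symbolic form: both ED thresholds are
    weighted Otsu values. The weights here are illustrative placeholders."""
    t = otsu(image)
    return w_gradient * t, w_anchor * t
```

With this scheme, the thresholds adapt to each input image instead of being fixed for the whole dataset.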
In Algorithm 1, the proposed version of the ED algorithm is described. If we look at the necessary input parameters, we can see that with this version we only have to set the smoothing kernel size.
We have chosen the weights for deriving the gradient and anchor thresholds from the Otsu value using the assumptions detailed in this section, but other values can be experimented with. With our proposed modification to the ED algorithm, we consider that the tuning phase of the algorithm is significantly reduced.
4 ED simulation results
All the simulations are done using EECVF (End-to-End Computer Vision Framework) [23, 24], an open-source solution based on the Python programming language, by running the corresponding module. To find the optimal parameters for the best results, we vary the Gaussian kernel size, the gradient threshold, the anchor threshold and the scan interval, each over its own range and step.
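To illustrate how quickly the number of variants grows in such a sweep, consider a hypothetical grid (the ranges and steps below are placeholders for illustration, not the ones used in our experiments):

```python
from itertools import product

# Hypothetical sweep ranges -- placeholders only, chosen to show how the
# variant count multiplies across four independent parameters.
kernel_sizes = range(3, 10, 2)           # 4 values: 3, 5, 7, 9
gradient_thresholds = range(10, 60, 10)  # 5 values
anchor_thresholds = range(5, 30, 5)      # 5 values
scan_intervals = range(1, 6)             # 5 values

# Every combination is one ED variant that must be run and scored.
variants = list(product(kernel_sizes, gradient_thresholds,
                        anchor_thresholds, scan_intervals))
print(len(variants))  # 4 * 5 * 5 * 5 = 500 combinations
```

Even this modest grid requires hundreds of full benchmark runs, which is the cost the proposed Otsu-based scheme eliminates.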
We observe, in Figure 4, that we obtain the best results using a specific combination of gradient threshold, anchor threshold, Gaussian kernel size and scan interval. As stated in the introduction, to be able to find this set of parameters we had to run more than a thousand variants.
Using the found parameters, we changed the operator used in the ED algorithm to observe the differences we obtain, as we can see in Figure 4. The ED algorithm using operators like Sobel [1], Prewitt [10], Kroon [15] and Orhei [16] produces similar results. At the other end, we can clearly see that using Kayyali [13], Kirsch [11] or Kitchen [12] produces worse results. We have chosen to vary the parameters with a small step so we can obtain more accurate results.
Changing the operator can make an important contribution to the resulting edge map, but it cannot overcome a wrong selection of parameters. If we look at Figure 4, changing the operator does not usually produce a big change, but every change we made to one of the parameters produced a relevant change in the metrics.
In Figure 4 we present the statistical results and in Figure 5 the visual results of the proposed ED algorithm. We can see a slight decrease in the metrics, but we consider it an acceptable trade-off given the amount of computation we save by eliminating the fine-tuning phase.
Actually, in some cases we see a decrease in the artifacts that appear (see Figure 5). To some extent, this is an expected outcome when switching from a threshold chosen by a global analysis of the image set to one that is image dependent.
Operator   ED original            ED proposed
           R      P      F1       R      P      F1
Sobel      0.721  0.523  0.606    0.680  0.515  0.586
Prewitt    0.720  0.523  0.606    0.680  0.515  0.586
Kirsch     0.139  0.478  0.216    0.114  0.474  0.266
Kitchen    0.724  0.420  0.531    0.665  0.414  0.510
Kayyali    0.000  0.000  0.000    0.000  0.000  0.000
Scharr     0.723  0.522  0.606    0.681  0.514  0.586
Kroon      0.723  0.521  0.606    0.682  0.513  0.586
Orhei      0.723  0.522  0.606    0.681  0.513  0.586
We present a comparison of the ED results, original versus proposed, for each operator in Table 1. The table reveals another important aspect: the proposed threshold finding method does not produce random results. Even though the results are lower, with differences of around 0.02 in F1 score, the ordering of the operators remains the same.
5 Conclusion
In this paper we proposed a new variant of the Edge Drawing algorithm by adding an automated threshold choosing scheme. This is an important aspect because it significantly reduces the amount of preparation needed to use the algorithm. The proposed version can be used on different datasets or scenarios without the need for fine-tuning first.
The results we obtain with the proposed variant of ED are lower than the best variant found in our experiments. But to obtain the best result for one operator we needed more than a thousand tries, so in retrospect we consider it an acceptable loss in metrics.
References
 [1] I. Sobel and G. Feldman, “A 3×3 isotropic gradient operator for image processing,” Pattern Classification and Scene Analysis, pp. 271–272, 01 1973.
 [2] J. M. Prewitt, “Object enhancement and extraction,” Picture processing and Psychopictorics, vol. 10, no. 1, pp. 15–19, 1970.
 [3] J. Canny, “A computational approach to edge detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-8, pp. 679–698, Nov 1986.
 [4] C. Topal and C. Akinlar, “Edge drawing: a combined real-time edge and segment detector,” Journal of Visual Communication and Image Representation, vol. 23, no. 6, pp. 862–872, 2012.

 [5] C. Akinlar and C. Topal, “Edpf: a real-time parameter-free edge segment detector with a false detection control,” International Journal of Pattern Recognition and Artificial Intelligence, vol. 26, no. 01, p. 1255002, 2012.
 [6] C. Akinlar and E. Chome, “Pel: a predictive edge linking algorithm,” Journal of Visual Communication and Image Representation, vol. 36, pp. 159–171, 2016.
 [7] Y.K. Huo, G. Wei, Y.D. Zhang, and L.N. Wu, “An adaptive threshold for the canny operator of edge detection,” in 2010 International Conference on Image Analysis and Signal Processing, pp. 371–374, IEEE, 2010.
 [8] A. Azeroual and K. Afdel, “Fast image edge detection based on faber-schauder wavelet and otsu threshold,” Heliyon, vol. 3, no. 12, p. e00485, 2017.
 [9] J. Cao, L. Chen, M. Wang, and Y. Tian, “Implementing a parallel image edge detection algorithm based on the otsu-canny operator on the hadoop platform,” Computational intelligence and neuroscience, vol. 2018, 2018.
 [10] J. M. Prewitt, “Object enhancement and extraction,” Picture processing and Psychopictorics, vol. 10, no. 1, pp. 15–19, 1970.
 [11] R. A. Kirsch, “Computer determination of the constituent structure of biological images,” Computers and biomedical research, vol. 4, no. 3, pp. 315–328, 1971.
 [12] L. Kitchen and J. Malin, “The effect of spatial discretization on the magnitude and direction response of simple differential edge operators on a step edge,” Computer vision, graphics, and image processing, vol. 47, no. 2, pp. 243–258, 1989.
 [13] E. Kawalec-Latała, “Edge detection on images of pseudo-impedance section supported by context and adaptive transformation model images,” Studia Geotechnica et Mechanica, vol. 36, no. 1, pp. 29–36, 2014.
 [14] H. Scharr, Optimal operators in digital image processing. PhD thesis, 2000.
 [15] D. Kroon, “Numerical optimization of kernel based image derivatives,” Short Paper University Twente, 2009.
 [16] C. Orhei, S. Vert, and R. Vasiu, “A novel edge detection operator for identifying buildings in augmented reality applications,” in International Conference on Information and Software Technologies, pp. 208–219, Springer, 2020.
 [17] R. M. Haralick and L. G. Shapiro, Computer and robot vision, vol. 1. Addison-Wesley, Reading, 1992.
 [18] N. Otsu, “A threshold selection method from gray-level histograms,” IEEE transactions on systems, man, and cybernetics, vol. 9, no. 1, pp. 62–66, 1979.
 [19] P. Arbelaez, M. Maire, C. Fowlkes, and J. Malik, “Contour detection and hierarchical image segmentation,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, pp. 898–916, May 2011.
 [20] M. Prieto and A. Allen, “A similarity metric for edge images,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 25, pp. 1265– 1273, 11 2003.
 [21] Y. Sasaki, “The truth of the fmeasure.,” tech. rep., School of Computer Science, University of Manchester, 2007.

 [22] J. K. Patel and C. B. Read, Handbook of the normal distribution, vol. 150. CRC Press, 1996.
 [23] C. Orhei, M. Mocofan, S. Vert, and R. Vasiu, “End-to-end computer vision framework,” in 2020 International Symposium on Electronics and Telecommunications (ISETC), pp. 1–4, IEEE, 2020.
 [24] C. Orhei, S. Vert, M. Mocofan, and R. Vasiu, “End-to-end computer vision framework: An open-source platform for research and education,” Sensors, vol. 21, no. 11, p. 3691, 2021.
 [25] P.S. Liao, T.S. Chen, P.C. Chung, et al., “A fast algorithm for multilevel thresholding,” J. Inf. Sci. Eng., vol. 17, no. 5, pp. 713–727, 2001.
 [26] V. Bogdan, C. Bonchis, and C. Orhei, “Custom dilated edge detection filters,” in International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision, WSCG, Václav Skala  UNION Agency, May 2020.
 [27] C. Orhei, V. Bogdan, and C. Bonchiş, “Edge map response of dilated and reconstructed classical filters,” in 2020 22nd International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC), pp. 187–194, IEEE, 2020.