TGGLines: A Robust Topological Graph Guided Line Segment Detector for Low Quality Binary Images

02/27/2020, by Ming Gong, et al.

Line segment detection is an essential task in computer vision and image analysis, as it is the critical foundation for advanced tasks such as shape modeling and road lane line detection for autonomous driving. We present a robust topological graph guided approach for line segment detection in low quality binary images (hence, we call it TGGLines). Due to the graph-guided approach, TGGLines not only detects line segments, but also organizes the segments with a line segment connectivity graph, which means the topological relationships (e.g., intersection, an isolated line segment) of the detected line segments are captured and stored; whereas other line detectors only retain a collection of loose line segments. Our empirical results show that the TGGLines detector visually and quantitatively outperforms state-of-the-art line segment detection methods. In addition, our TGGLines approach has the following two competitive advantages: (1) our method only requires one parameter and it is adaptive, whereas almost all other line segment detection methods require multiple (non-adaptive) parameters, and (2) the line segments detected by TGGLines are organized by a line segment connectivity graph.




1 Introduction

Line segment detection has been studied for decades in computer vision and plays a fundamental role in advanced vision problems, such as indoor mapping [2], vanishing point estimation [14, 23], and road lane line detection for autonomous driving [4]. Many line segment detection methods have been developed for natural images [3, 15, 22, 1, 6]. However, those methods do not work well for technical diagram images, such as patent images; see Figure 1 for an example. Clearly, the line segment detection challenge remains for technical images, especially low quality images affected by the local zigzag “noise” introduced by the scanning process.

(a) Input diagram image
(b) What humans see
(c) What machines see
Figure 1: An example of how challenging it is for machines to see line segments present in diagram images (e.g., patent images), especially those with low quality introduced by the scanning process. (a) The input image is a binary pixel-raster image. (b) Line segments perceived and annotated in green by humans. (c) Line segments machines see in the input image (using the state-of-the-art line segment detection method linelet [6]).

We present a robust topological graph guided approach for line segment detection in low quality binary (diagram) images. Specifically, our approach combines the power of topological graphs and skeletons to generate a skeleton graph representation to tackle the line segment detection challenges. A skeleton is a central line (1 pixel wide) representation of an object present in an image obtained via thinning [12, 24]. The skeleton emphasizes topological and geometrical properties of shapes [24]. In our approach, the skeleton serves as the essential bridge from pixel image representation to topological graph representation.
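As a concrete illustration (our sketch, not the authors' released code), the thinning step can be performed with scikit-image, which the paper lists among its tools; `skeletonize` reduces a binary object to a one-pixel-wide central line while preserving its topology:

```python
import numpy as np
from skimage.morphology import skeletonize

# A small binary image containing a thick horizontal bar (3 px tall).
img = np.zeros((7, 11), dtype=bool)
img[2:5, 1:10] = True

# Thin the bar to its 1-pixel-wide central line. For 2-D inputs,
# scikit-image performs topology-preserving thinning; the paper uses
# the Zhang-Suen algorithm [27] for the same purpose.
skel = skeletonize(img)

# The skeleton has far fewer pixels than the input object.
```

The resulting one-pixel-wide skeleton is what TGGLines converts into a topological graph in the next step.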

We compare our TGGLines approach with four mainstream and state-of-the-art line segment detection methods. The empirical results demonstrate that our TGGLines approach visually and quantitatively outperforms the other methods. In addition, our method has two advantages. (1) While the parameters of most other methods are not adaptive, our method is robust: it requires only one parameter, and that parameter is adaptive. (2) As TGGLines detects line segments, it organizes them into a topological graph, which we call the line segment connectivity graph (LSCG); it stores the topological relations (e.g., turning and junction) among detected line segments. The LSCG can be used by advanced computer vision tasks, such as shape analysis and line segment-based image retrieval.

Line detection performance is often evaluated only by visual inspection, which is a non-quantitative evaluation. In this paper, we evaluate our results both qualitatively (Section 5.3) and quantitatively (Section 5.4). One of the most difficult problems for quantitative evaluation is annotating line segments in an image accurately, because even humans will sometimes annotate lines differently due to the zigzag “noise” introduced by the scanning process (see the annotation examples in Figure 5). A typical scientific diagram image can contain several hundred lines that must be annotated (see images #6 and #10 in Table 2 for examples). We provide a simple interface for line segment annotation, as well as a quantifiable metric for line detection performance measured with respect to these inherently inexact annotations.

Here, we provide a road map to the rest of the paper. Section 2 covers related work, including existing line segment detection methods, and the topological-based image representations that our method is built on. The core of our paper is Section 3 focusing on our TGGLines approach and Section 4 focusing on algorithms. In Section 5, we present our experiments with qualitative and quantitative evaluations. The paper concludes in Section 6 with a mention of potential applications.

For readability, we provide a list of abbreviations in Appendix A.1. Background on graph theory and computational geometry is provided in Appendix A.2. Both appendices are provided in the supplementary materials.

2 Related work

Existing line segment detection methods are either edge-based or local gradient-based, whereas our TGGLines method relies on neither edge detection nor local gradients. Edge-based line detectors include the Hough transform (HT) [3], which often uses the Canny edge detector [5] for pre-processing. The main drawbacks of HT are that it is computationally expensive and that it outputs only the parameters of a line equation, not the endpoints of line segments. The progressive probabilistic Hough transform (PPHT) [15] is an optimization of the standard HT: it does not consider all points, but only a random subset, which suffices for line detection. PPHT also detects the two endpoints of each line, so it can be used to detect line segments present in images.

Local gradient-based line detectors are successful on natural images, but not on diagrams (see Section 5.2). The line segment detector (LSD) [22] is a local gradient-based method that has been tested on a wide set of natural images. EDLines [1] speeds up LSD, but according to our experiments and evaluation (Section 5.2), its performance is much worse than LSD's on the diagram image dataset. The linelet method [6] represents intrinsic properties of line segments using an undirected graph; it performs well on an urban scene dataset but does not work well for diagrams. Recent reviews of line segment detection methods can be found in [8, 19].

Our TGGLines method builds on topology-based methods. TGGLines uses the skeleton graph image representation proposed in [25]. The topological graph is generated automatically from an image skeleton, which captures the topological and geometrical properties of the shapes of objects present in an image [11]. We use the well-known and robust Zhang-Suen thinning algorithm [27] to extract skeletons from images, and we simplify the geometries in the graph representation using the Douglas-Peucker algorithm [7] because of its simplicity and robustness. We also tested Visvalingam's algorithm [21] on our dataset; however, it does not work as well as the Douglas-Peucker algorithm.

3 TGGLines approach

In this section, we elaborate on the proposed TGGLines method, including its image representation (Section 3.1), key concepts (Section 3.2), and workflow, illustrated using a simple example (Section 3.3).

3.1 TGGLines image representation

The image representation used in TGGLines is the skeleton graph, a topological graph generated from an image skeleton [25]. We use the Zhang-Suen thinning algorithm [27] for image skeleton extraction, as it is well known and robust.

See Figure 2 below for an illustration of the image representation used in our TGGLines approach (the handwritten digit image is taken from the MNIST dataset [13]). In the skeleton graph, each node represents a pixel in the image skeleton, and each edge indicates that the two pixels it connects are neighbors.

(a) Input image
(b) Image skeleton
(c) Skeleton graph
Figure 2: An example of the skeleton graph image representation. Figure 2 (a) shows the input image. Figure 2 (b) shows the image skeleton extracted from the input image. Figure 2 (c) provides the skeleton graph corresponding to the skeleton in (b).
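The skeleton-to-graph conversion described above can be sketched with NetworkX, which the paper's implementation uses. This is our illustration (the function name is ours): one node per skeleton pixel, one edge per pair of 8-neighbouring pixels.

```python
import numpy as np
import networkx as nx

def skeleton_to_graph(skel):
    """Build a skeleton graph: one node per skeleton pixel,
    one edge between every pair of 8-neighbouring skeleton pixels."""
    G = nx.Graph()
    rows, cols = np.nonzero(skel)
    pixels = set(zip(rows.tolist(), cols.tolist()))
    G.add_nodes_from(pixels)
    for (r, c) in pixels:
        # Connect to any of the 8 neighbours that is also a skeleton
        # pixel; nx.Graph deduplicates the undirected edges.
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if (dr, dc) != (0, 0) and (r + dr, c + dc) in pixels:
                    G.add_edge((r, c), (r + dr, c + dc))
    return G

# A tiny L-shaped skeleton.
skel = np.array([[1, 0, 0],
                 [1, 0, 0],
                 [1, 1, 1]], dtype=bool)
G = skeleton_to_graph(skel)
```

Note that 8-connectivity can create short diagonal edges near corners, which is one reason the skeleton graph is later segmented at salient nodes rather than used directly.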

3.2 TGGLines concepts


Skeleton graph:

A skeleton graph is an embedded graph generated from an image skeleton, where each node represents a pixel in the image skeleton, and an edge between two pixel nodes indicates that the two pixels are neighbors.


Path:

A path is an embedded graph that is a subgraph of the skeleton graph, consisting of non-salient nodes delimited by salient nodes (e.g., junction nodes and end nodes, defined in Section 3.2.1), together with the edges connecting them.

Line segment connectivity graph (LSCG):

A line segment connectivity graph is an embedded graph generated from the skeleton graph, where each node represents a path and each edge represents a salient node (e.g., a junction node or an end node). Each node has an attribute pointing to the path it represents. The LSCG is used to organize the segmented paths.

Simplified LSCG:

The simplified LSCG is the LSCG whose organized paths have been simplified.

3.2.1 Node type in skeleton graph

There are three types of salient nodes in TGGLines. We use them to segment a skeleton graph into multiple paths for simplification.

  • End node: A (pixel) node that has only 1 neighbor.

  • Junction node: A (pixel) node that has n neighbors, where n > 2.

  • Turning node: A (pixel) node that has two neighbors.
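These three definitions amount to classifying each pixel node by its degree in the skeleton graph. A minimal sketch (the function name is ours, not the paper's):

```python
import networkx as nx

def classify_nodes(G):
    """Classify skeleton-graph pixel nodes by degree (Section 3.2.1):
    1 neighbor -> end node, 2 -> turning node, >2 -> junction node."""
    ends, turns, junctions = [], [], []
    for node, deg in G.degree():
        if deg == 1:
            ends.append(node)
        elif deg == 2:
            turns.append(node)
        elif deg > 2:
            junctions.append(node)
        # isolated pixels (degree 0) are ignored in this sketch
    return ends, turns, junctions

# A small T-shaped skeleton graph: three branches meet at (1, 1).
G = nx.Graph([((0, 1), (1, 1)), ((1, 0), (1, 1)),
              ((1, 1), (1, 2)), ((1, 2), (1, 3))])
ends, turns, junctions = classify_nodes(G)
```

Counting incident edges is all that is needed, which is why salient-node detection is cheap once the skeleton graph exists.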

3.3 TGGLines workflow

The TGGLines workflow is presented in Figure 3 below, visually illustrated by a simple and straightforward example.

(a) Input diagram image
(b) Image skeleton extraction
(c) Skeleton graph: a topological graph generated from skeleton [25]
(d) Skeleton graph (zoomed-in detail)
(e) Skeleton graph (zoomed-in detail)
(f) Detecting salient nodes (zoom-in detail)
(g) Segmenting skeleton graph to paths and generating LSCG
(h) Simplifying paths organized by LSCG (adaptive parameter)
(i) Detected line segments (organized by LSCG)
Figure 3: TGGLines workflow illustrated with a simple and straightforward example. Note that in (b) the image is inverted for skeleton extraction. In (c), the red node indicates a junction node and the blue nodes indicate turning nodes (definitions of the salient node types can be found in Section 3.2.1). In (g), LSCG stands for line segment connectivity graph; each node in the LSCG represents a path, and in (g) there are three different paths.

4 TGGLines algorithms

In this section, we provide the algorithms we developed for TGGLines, introduced in Section 3 above.

The overview pseudocode for the TGGLines algorithm is provided in Algorithm 1.

We first extract the skeleton from an input diagram image; a skeleton graph is then generated from the skeleton. After that, salient nodes (defined in Section 3.2.1) are detected by counting the incident edges of each pixel node in the skeleton graph. The skeleton graph is then segmented into multiple paths using the detected salient nodes, and a line segment connectivity graph (LSCG) is generated during segmentation. Finally, we simplify the paths organized in the LSCG using the Douglas-Peucker algorithm [7], as detailed in Algorithm 2.
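One plausible way to realize the segmentation-plus-LSCG step is sketched below. This is our illustration, not the paper's released code: the helper names and the removal-based segmentation strategy (delete the salient nodes, take each remaining connected run plus its adjacent salient nodes as a path) are our assumptions.

```python
import networkx as nx

def segment_paths(G, salient):
    """Split a skeleton graph into paths at salient nodes (a sketch of
    one plausible strategy). Removing the salient nodes leaves the
    non-salient runs; each connected run, together with its adjacent
    salient nodes, forms one path."""
    H = G.copy()
    H.remove_nodes_from(salient)
    paths = []
    for comp in nx.connected_components(H):
        # salient nodes directly adjacent to this run belong to its path
        bounds = {s for s in salient for n in comp if G.has_edge(s, n)}
        paths.append(set(comp) | bounds)
    return paths

def build_lscg(paths, salient):
    """Build the line segment connectivity graph: one node per path,
    an edge between two paths that share a salient node."""
    lscg = nx.Graph()
    lscg.add_nodes_from(range(len(paths)))
    for i in range(len(paths)):
        for j in range(i + 1, len(paths)):
            if paths[i] & paths[j] & set(salient):
                lscg.add_edge(i, j)
    return lscg

# A T-shaped skeleton graph: junction node "J" connects three branches.
G = nx.Graph([("a1", "a2"), ("a2", "J"), ("J", "b1"), ("b1", "b2"),
              ("J", "c1"), ("c1", "c2")])
salient = ["J", "a1", "b2", "c2"]  # the junction node plus the end nodes
paths = segment_paths(G, salient)
lscg = build_lscg(paths, salient)
```

For this T-shaped example the three branches become three path nodes, all pairwise connected through the shared junction, mirroring the three-path LSCG shown in Figure 3 (g).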

Input: A diagram image I
Output: An embedded graph simplifiedLSCG representing the simplified line segment connectivity graph   // the nodes of an embedded graph contain coordinates, therefore the graph can be drawn uniquely on a plane
S ← skeleton(I)
G ← skeletonGraph(S)
/* salient node detection for segmenting line segments */
N ← salientNodes(G)   // detecting salient nodes
P ← paths(G, N)   // segmenting paths
LSCG ← lineSegConnectivityGraph(G, P)
simplifiedLSCG ← simplifyLineSegConnectivityGraph(LSCG)
return simplifiedLSCG
Algorithm 1 TGGLines algorithm

The LSCG for the illustrative example shown in the workflow Figure 3 is given in Figure 4.

Figure 4: Line segment connectivity graph (LSCG) for the example shown in the workflow Figure 3 above. Each light-blue node represents a path; if two paths are connected by a junction node or a turning node, there is an edge between the corresponding nodes.
Input: Segmented paths P organized by the LSCG
Output: An updated LSCG pointing to the simplified paths P′
1  P′ ← ∅   // Initialization
2  foreach path p ∈ P do
3        V ← nodes(p)   // the nodes of p
4        n ← num(V)   // the number of nodes
5        if n > 2 then
              /* calculate the adaptive Douglas-Peucker parameter */
6              C ← convexHull(V)
7              ε ← area of C / perimeter of C
8              p′ ← simplify p using the Douglas-Peucker algorithm with parameter ε
9              append p′ to P′
10 return P′, and update the LSCG
Algorithm 2 Simplifying paths organized by LSCG
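A runnable sketch of the core of Algorithm 2 (our implementation; the exact reading of the adaptive parameter as convex-hull area divided by perimeter follows the pseudocode, but the code itself is an assumption, not the authors' release). Note that for 2-D point sets SciPy's `ConvexHull` reports the enclosed area as `.volume` and the perimeter as `.area`.

```python
from scipy.spatial import ConvexHull

def perp_dist(pt, a, b):
    """Perpendicular distance from point pt to the line through a and b."""
    (ax, ay), (bx, by), (px, py) = a, b, pt
    dx, dy = bx - ax, by - ay
    norm = (dx * dx + dy * dy) ** 0.5
    if norm == 0:  # degenerate segment: fall back to point distance
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    return abs(dx * (py - ay) - dy * (px - ax)) / norm

def douglas_peucker(points, eps):
    """Classic recursive Douglas-Peucker line simplification [7]."""
    if len(points) < 3:
        return list(points)
    dists = [perp_dist(p, points[0], points[-1]) for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] > eps:
        left = douglas_peucker(points[:i + 1], eps)
        right = douglas_peucker(points[i:], eps)
        return left[:-1] + right  # drop the duplicated split point
    return [points[0], points[-1]]

def adaptive_eps(points):
    """Adaptive tolerance: convex-hull area divided by perimeter
    (line 7 of Algorithm 2). In 2-D, scipy's ConvexHull stores the
    area in .volume and the perimeter in .area."""
    hull = ConvexHull(points)
    return hull.volume / hull.area

# An L-shaped path: only the two endpoints and the corner should remain.
path = [(0, 0), (0, 1), (0, 2), (0, 3), (1, 3), (2, 3), (3, 3)]
simplified = douglas_peucker(path, adaptive_eps(path))
```

Because the tolerance is derived from each path's own convex hull, no global threshold has to be tuned per image, which is the sense in which the single parameter is adaptive.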

5 Experiments and evaluation

In this section, we provide details about the dataset (Section 5.1), experiments and results (Section 5.2), and qualitative (Section 5.3) and quantitative (Section 5.4) evaluations.

5.1 Dataset

To evaluate TGGLines and compare it with state-of-the-art methods, we developed a simple interface to manually annotate 10 diagram images taken from the 2000 Binary Patent Image Database developed by the Multimedia Knowledge and Social Media Analytics Laboratory (MKLab) [17]. The database contains images from patents maintained by the European Patent Office.

For readability, we take partial images from the 10 selected samples (cropped without changing the resolution of the original images); the file sizes range from 53 KB (212×171, #3 in Table 1) to 307 KB (524×566, #7 in Table 2). Our dataset is small yet representative. Samples were carefully selected from the 2000 patent images according to the following criteria: (1) complexity of line directions (e.g., vertical, horizontal, other arbitrary angles), (2) spacing between line segments (roughly equal spacing, sparse, dense), and (3) topological relations (e.g., single line segment, intersection, turning, circular). Figure 5 shows two annotated examples. The images in the experiment, #01 to #10, are partial images taken from the following images in the patent image dataset [17]: 01779, 01126, 00501, 00780, 00971, 00575, 00429, 01267, 00811, and 01140.

Figure 5: Annotation examples: line segments of (a) varying angles and spacing intervals and (b) even intervals. The examples have different resolutions.

5.2 Experiments and results

We implement TGGLines in Python using OpenCV, Scikit-image [20], SymPy [16], and NetworkX [9]. We compare TGGLines with state-of-the-art methods: PPHT [15], LSD [22], EDLines [1] and Linelet [6]. Results can be found in the sets of figures organized in Tables 1 and 2. Parameter settings (Table 3) for each method are provided in Appendix A.3, and the computing environment details can be found in Appendix A.4.

Img # Input images Ground truth PPHT LSD EDLines Linelet TGGLines
Table 1: Line segment detection results on images #01 — #05 of the dataset.
Img # Input images Ground truth PPHT LSD EDLines Linelet TGGLines
Table 2: Line segment detection results on images #06 — #10 of the dataset.

5.3 Qualitative comparison

We qualitatively compare the line segment detection methods as presented in the results in Tables 1 and 2. Clearly, PPHT detects many tiny line segments and double lines for each true line segment, as it is an edge-based method. LSD and EDLines detect double lines for each line segment in the ground truth; this is not surprising, as they are local gradient-based methods.

Visually, EDLines performs the worst on our experimental dataset; many line segments cannot be detected by EDLines (especially in images #3 and #6 in Tables 1 and 2, respectively). LSD detects many tiny lines, similar to PPHT, especially in images #7 and #8 in Table 2.

The performance of Linelet is not consistent. For some images, double lines are detected for each line in the ground truth (see images #7 and #9 in Table 2), whereas for other images, single lines are detected (e.g., images #6 and #8 in Table 2).

Among all methods, visually, our TGGLines performs the best. Another advantage of TGGLines is seen in image #10 of Table 2, where multiple crossed lines form a circle intersection. As most methods detect lines based on edge maps, the circles in the original images leave an open circle shape in their results. TGGLines avoids the open circle and is invariant to line width, because it detects lines based on image skeletons.

5.4 Quantitative comparison

We quantitatively compare the line segments detected by our TGGLines method and the four other methods, benchmarked against the ground truth. Automatically quantifying the performance of detected line segments is difficult, because the errors of the methods vary in nature. For example, some methods (like PPHT and LSD) detect most lines as double lines, but sometimes as single lines.

We use line detection accuracy as a simple evaluation metric, manually counting the true positive line segments to calculate the accuracy. True positives are defined using the following criteria (visually illustrated in Appendix A.5): (1) for double-line cases (e.g., PPHT, LSD), (a) if both line segments are correctly detected compared with the corresponding line segment in the ground truth, we assign weight 0.5 to each line segment; (b) if more than half of one segment is detected, we assign it weight 0.5 × 0.5 = 0.25; (2) for single-line cases (e.g., TGGLines), (a) if a line segment is fully and correctly detected, we assign weight 1; (b) if more than half is correctly detected, we assign weight 0.5; (3) if many tiny line segments are detected for a single line segment in the ground truth, we count it as incorrect and assign weight 0.

The accuracy calculation is based on the manually annotated line segments in the ground truth. Specifically, accuracy = N_tp / N_gt, where N_tp is the (weighted) number of correctly detected line segments (true positives) and N_gt is the total number of line segments in the ground truth. For methods that detect double lines for each line in the ground truth (e.g., LSD and EDLines), we only count a line as correctly detected if both of its segments are detected. The accuracy of the methods on line segment detection is provided in Figure 6. The accuracy results match the visual comparison: EDLines performs the worst across the images in our dataset, TGGLines performs the best, and Linelet is inconsistent.
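The metric reduces to a weighted count over ground-truth segments. The helper below is our illustration (not the authors' code), using the weighted counts reported for TGGLines on image #01 in Appendix A.5: 17 fully correct segments (weight 1) and 8 half-correct segments (weight 0.5) out of 25 ground-truth segments.

```python
def detection_accuracy(weights, num_ground_truth):
    """Accuracy = (weighted count of correctly detected segments)
    / (number of ground-truth segments). `weights` holds one score
    per detected segment: 1.0, 0.5, 0.25, or 0 per the criteria."""
    return sum(weights) / num_ground_truth

# TGGLines on image #01: 17 segments at weight 1, 8 at weight 0.5,
# against 25 ground-truth segments.
acc = detection_accuracy([1.0] * 17 + [0.5] * 8, 25)
```

This reproduces the 84% figure reported for TGGLines on image #01.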

While manually counting the line detection results, we noted that TGGLines is the only method that avoids detecting double lines, and it detects lines with the fewest break points.

Figure 6: Line segment detection accuracy for different methods.

6 Conclusion

We have introduced a robust topological graph guided line segment detection method, TGGLines, for low quality binary diagram images, using a skeleton graph image representation. We compared TGGLines with state-of-the-art line segment detection methods (specifically, PPHT, LSD, EDLines, and Linelet) on diagram images taken from a patent image database; empirical results show that our TGGLines approach outperforms the other methods, both visually and quantitatively.

Beyond its accuracy relative to other methods, TGGLines has two competitive advantages: (1) it is robust, as TGGLines requires only one parameter and, most importantly, this parameter is adaptive; and (2) the line segments detected by TGGLines are organized in a topological graph, which we name the line segment connectivity graph (LSCG). Thus the topological relations among the line segments are captured and stored during detection, and can be used for further topology-based image analysis.

The effectiveness and robustness of our method, especially the topological relations among detected line segments, provide an important foundation for many applications, such as text recognition in historical documents and OCR-based applications, converting rasterized line drawings to vectorized images, and road lane line detection for autonomous driving, since real-world road scene images often contain zigzag “noise” similar to that of scanned documents, caused by worn road markings, fallen leaves, or dirt on roads.


  • [1] C. Akinlar and C. Topal (2011) EDLines: a real-time line segment detector with a false detection control. Pattern Recognition Letters 32 (13), pp. 1633–1642. Cited by: §1, §2, §5.2.
  • [2] S. An, J. Kang, L. Lee, and S. Oh (2012) Line segment-based indoor mapping with salient line feature extraction. Advanced Robotics 26 (5-6), pp. 437–460. Cited by: §1.
  • [3] D. H. Ballard (1981) Generalizing the hough transform to detect arbitrary shapes. Pattern Recognition 13 (2), pp. 111–122. Cited by: §1, §2.
  • [4] M. Beyeler, F. Mirus, and A. Verl (2014) Vision-based robust road lane detection in urban environments. In 2014 IEEE International Conference on Robotics and Automation (ICRA), pp. 4920–4925. Cited by: §1.
  • [5] J. Canny (1986) A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence (6), pp. 679–698. Cited by: §2.
  • [6] N. Cho, A. Yuille, and S. Lee (2017) A novel linelet-based representation for line segment detection. IEEE Transactions on Pattern Analysis and Machine Intelligence 40 (5), pp. 1195–1208. Cited by: Table 3, Figure 1, §1, §2, §5.2.
  • [7] D. H. Douglas and T. K. Peucker (1973) Algorithms for the reduction of the number of points required to represent a digitized line or its caricature. Cartographica: the International Journal for Geographic Information and Geovisualization 10 (2), pp. 112–122. Cited by: §2, §4.
  • [8] R. F. Guerreiro and P. M. Aguiar (2012) Connectivity-enforcing hough transform for the robust extraction of line segments. IEEE Transactions on Image Processing 21 (12), pp. 4819–4829. Cited by: §2.
  • [9] A. A. Hagberg, D. A. Schult, and P. J. Swart (2008) Exploring network structure, dynamics, and function using NetworkX. In In Proceedings of the 7th Python in Science Conference (SciPy), pp. 11–15. Cited by: §5.2.
  • [10] J. M. Harris, J. L. Hirst, and M. J. Mossinghoff (2008) Combinatorics and graph theory. Vol. 2, Springer. Cited by: §A.2.
  • [11] J. K. Lakshmi and M. Punithavalli (2009) A survey on skeletons in digital image processing. In International Conference on Digital Image Processing, pp. 260–269. Cited by: §2.
  • [12] L. Lam, S. Lee, and C. Y. Suen (1992) Thinning methodologies-a comprehensive survey. IEEE Transactions on Pattern Analysis and Machine Intelligence 14 (9), pp. 869–885. Cited by: §1.
  • [13] Y. LeCun, L. Bottou, Y. Bengio, P. Haffner, et al. (1998) Gradient-based learning applied to document recognition. Proceedings of the IEEE 86 (11), pp. 2278–2324. Cited by: §3.1.
  • [14] J. Lezama, R. Grompone von Gioi, G. Randall, and J. Morel (2014) Finding vanishing points via point alignments in image primal and dual domains. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 509–515. Cited by: §1.
  • [15] J. Matas, C. Galambos, and J. Kittler (2000) Robust detection of lines using the progressive probabilistic hough transform. Computer Vision and Image Understanding 78 (1), pp. 119–137. Cited by: §1, §2, §5.2.
  • [16] A. Meurer, C. P. Smith, M. Paprocki, O. Čertík, S. B. Kirpichev, M. Rocklin, A. Kumar, S. Ivanov, J. K. Moore, S. Singh, et al. (2017) SymPy: symbolic computing in Python. PeerJ Computer Science 3, pp. e103. Cited by: §5.2.
  • [17] (2010) Multimedia knowledge and social media analytics laboratory (MKLab). 2000 binary patent images database. Note: Available online: (accessed on October 10, 2019) Cited by: §5.1, §5.1.
  • [18] M. Needham and A. E. Hodler (2019) Graph algorithms: practical examples in apache spark and neo4j. O’Reilly Media. Cited by: §A.2.
  • [19] P. S. Rahmdel, R. Comley, D. Shi, and S. McElduff (2015) A review of hough transform and line segment detection approaches.. In VISAPP (1), pp. 411–418. Cited by: §2.
  • [20] S. Van der Walt, J. L. Schönberger, J. Nunez-Iglesias, F. Boulogne, J. D. Warner, N. Yager, E. Gouillart, and T. Yu (2014) Scikit-image: image processing in python. PeerJ 2, pp. e453. Cited by: §5.2.
  • [21] M. Visvalingam and J. D. Whyatt (1993) Line generalisation by repeated elimination of points. The Cartographic Journal 30 (1), pp. 46–51. Cited by: §2.
  • [22] R. G. Von Gioi, J. Jakubowicz, J. Morel, and G. Randall (2010) LSD: a fast line segment detector with a false detection control. IEEE Transactions on Pattern Analysis and Machine Intelligence 32 (4), pp. 722–732. Cited by: §1, §2, §5.2.
  • [23] Y. Xu, S. Oh, and A. Hoogs (2013) A minimum error vanishing point detection approach for uncalibrated monocular images of man-made environments. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1376–1383. Cited by: §1.
  • [24] L. Yang, D. Oyen, and B. Wohlberg (2019-06) A novel algorithm for skeleton extraction from images using topological graph analysis. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Cited by: §A.2, §1.
  • [25] L. Yang, D. Oyen, and B. Wohlberg (2019) Image classification using topological features automatically extracted from graph representation of images. In Proceedings of the 15th International Workshop on Mining and Learning with Graphs (MLG), Cited by: §A.2, §2, 2(c), §3.1.
  • [26] L. Yang and M. Worboys (2015) Generation of navigation graphs for indoor space. International Journal of Geographical Information Science 29 (10), pp. 1737–1756. Cited by: §A.2.
  • [27] T. Zhang and C. Y. Suen (1984) A fast parallel algorithm for thinning digital patterns. Communications of the ACM 27 (3), pp. 236–239. Cited by: §2, §3.1.

Appendix A Appendix

A.1 Abbreviations

In this appendix, we provide the abbreviations (ordered alphabetically) of terms we used in the paper.

EDLines Edge drawing lines
HT Hough transform
LSCG Line segment connectivity graph
LSD Line segment detector
OCR Optical character recognition
PPHT Progressive probabilistic Hough transform
TGGLines Topological graph guided lines

A.2 Definition of terms used

In this appendix, we provide definitions of some concepts (ordered alphabetically; referenced from [24, 26, 25, 10, 18]) in graph theory and computational geometry used in TGGLines.


Convex hull

The convex hull of a finite set of points S is the intersection of all half-spaces that contain S. A half-space in two dimensions is the set of points on or to one side of a line. Computing the convex hull of S is one of the fundamental problems of computational geometry.

Embedded graph

An embedded graph is a graph in which each node has (planar) coordinates, so the graph can be drawn uniquely on a plane without any edge intersections or crossings.


Graph

A graph consists of a collection of nodes (also called vertices or points) and a collection of edges that connect the nodes.


Skeleton

The skeleton of a binary image is a central-line extraction (1 pixel wide) of the objects present in the image, obtained through thinning.

Skeleton graph

The skeleton graph G of a binary image I is an embedded graph generated from the skeleton S of I, where each node in G represents a white (i.e., non-zero) pixel in S, and each edge e in G indicates that the two pixels represented by the end nodes of e are neighbors.


Path

In graph theory, a path is a sequence of distinct nodes connected by edges. The length of a path is the number of edges traversed. A node u is reachable from a node v if there is a path from v to u. A graph is connected if there is a path between any two nodes.

A.3 Parameter settings and computational time

This appendix provides the parameter settings for each method compared in Tables 1 and 2.

Table 3 lists the parameters used for each method. The computing environment for the experiments is described in Appendix A.4 below.

Line segment detection methods Parameters Tools and methods used
PPHT threshold: 10; line_length: 5; line_gap: 3 probabilistic_hough_line function (Scikit-image)
LSD _scale: 0.8; _sigma_scale: 0.6, _quant: 2.0, _ang_th: 22.5, _density_th: 0.7 createLineSegmentDetector function (OpenCV)
EDLines Internal parameters: {ratio: 50, angle_turn: 67.5*np.pi/180, step: 3}; Parameters for Edge Drawing: {ksize: 3, sigma: 1, gradientThreshold: 25, anchorThreshold: 10, scanIntervals: 4 }; Parameters for EDLine: {minLineLen: 40, lineFitErrThreshold: 1.0} EDLines Python implementation (Github code)
Linelet param.thres_angle_diff: pi/8; param.thres_log_eps: 0.0; param.est_aggregation:Kurtosis Matlab code by the linelet authors [6]
TGGLines TGGLines requires only one parameter, and it is adaptive (see line 7 in Algorithm 2). Implemented by the authors of the paper
Table 3: Parameter settings for different line segment detection methods in Table 1 and Table 2. For other parameters not listed specifically here, default parameters provided in the tools are used.

A.4 Computing environment

In this appendix, we describe the computing environment in which we ran our experiments. All experiments were run on a Windows 10 desktop machine with an Intel(R) Core(TM) i7-8700 CPU @ 3.20 GHz (6 cores, 12 logical processors) and 32.0 GB RAM.

A.5 Visually illustrated criteria for quantitative evaluation

In this appendix, we provide the criteria for judging the correctness of line segments (Table 4) along with a visual example showing the criteria applied to detected line segments for each method that we compare during our quantitative evaluation (Section 5.4).

Using the criteria provided in Table 4, we judge how many lines are correctly detected against the ground truth, and then calculate the accuracy. See Figure 7 for an example of the criteria applied to the lines detected by each method, and of how the accuracy is calculated, for image #01 from Table 1.

Criteria Weights (marked with color)
Fully correct (same slope as GT) 1.0
A long line in GT detected as multiple split lines in results with all topological relationships well-preserved 1.0
Not fully correct but 50% is correct 0.5
Different slope as GT but most of the line(s) are correct 0.5
One line of a double line case is correct 0.5
Multiple tiny line segments (including connected or not connected): shifted a little bit 0.5
Only half of one line of a double line case is correct 0.25
One line of a double line case is correct, but shifted a little bit among the multiple tiny line segments 0.25
Multiple tiny line segments (including connected or not connected): shifted a lot 0
Table 4: Judging criteria for evaluation. GT refers to ground truth. The line segments in the example in Figure 7 are color-coded by weight as shown here. For double-line cases (see Figure 7 (c) for an example), we only count a segment as correct if both lines are correctly detected against the GT. Thus, our scoring process is as follows: if both of the double lines are correctly detected, each line is assigned weight 0.5, so the score for the “whole line” in GT is 0.5 × 2 = 1.0. Likewise, if only half of one line in a double-line case is detected correctly, the score is 0.5 × 0.5 = 0.25.
(a) Ground truth: # of lines = 25
(b) PPHT: # of lines detected correctly = 27 * 0.5 + 23 * 0.25 = 19.25. Accuracy = 19.25 / 25 * 100% = 77%
(c) LSD: # of lines detected correctly = 27 * 0.5 + 23 * 0.25 = 19.25. Accuracy = 19.25 / 25 * 100% = 77%
(d) EDLines: # of lines detected correctly = 1 * 0.5 + 16 * 0.25 = 4.5. Accuracy = 4.5 / 25 * 100% = 18%
(e) Linelet: # of lines detected correctly = 5 * 0.5 + 25 * 0.25 = 8.75. Accuracy = 8.75 / 25 * 100% = 35%
(f) TGGLines: # of lines detected correctly = 17 * 1 + 8 * 0.5 = 21. Accuracy = 21 / 25 * 100% = 84%
Figure 7: Judging criteria illustrated visually for the evaluation of different line segment detection methods for image #01 in Table 1.