Combining contextual and local edges for line segment extraction in cluttered images

11/16/2014 · by Rui F. C. Guerreiro, et al.

Automatic extraction methods typically assume that line segments are pronounced, thin, few and far between, do not cross each other, and are free of noise and clutter. Since these assumptions often fail in realistic scenarios, many line segments are not detected or are fragmented. In more severe cases, e.g., in many methods that use the Hough Transform, extraction can fail entirely. In this paper, we propose a method that tackles these issues. Its key aspect is the combination of thresholded image derivatives obtained with filters of large and small footprints, which we denote as contextual and local edges, respectively. Contextual edges are robust to noise and we use them to select valid local edges, i.e., local edges that are of the same type as contextual ones: dark-to-bright transitions or vice-versa. If the distance between valid local edges does not exceed a maximum distance threshold, we enforce connectivity by marking them and the pixels in between as edge points. This yields connected edge maps that are robust and well localized. We compute contextual edges with a powerful two-sample statistical test, which we introduce briefly, as such tests are unfamiliar to the image processing community. Finally, we present experiments that illustrate, with synthetic and real images, how our method efficiently extracts complete segments of all lengths and widths in several situations where current methods fail.


1 Introduction

Line segments provide important information about the geometric content of real-life images. Since most man-made objects consist of flat surfaces, the contours needed to interpret 2D images of such objects, as well as 3D world scenes, often consist of line segments. Also, many complex shapes admit an economical and simple description in terms of straight lines. This has been used, e.g., for localizing vanishing points Kosecka and Zhang (2002) or for matching line segments across distinct views Schmid and Zisserman (1997). Other applications include rectangle detection Micusík et al. (2008), the inference of shape from lines Slater and Healey (1996), map-to-image registration Krüger (2001), 3D reconstruction Liu et al. (2011), and even image compression Fränti et al. (1998).

Although automatic line segment extraction has been researched actively in the past decades, current solutions rely on strong implicit assumptions that limit their applicability to simple and mostly unrealistic scenarios. Typical assumptions are that line segments are pronounced, thin, occur in small numbers, do not cross each other, and are located away from noise or clutter. Some methods also assume that images contain little data apart from line segments, such as textures or contours. Since these assumptions often fail in realistic scenarios, many line segments are not detected, or are fragmented to various extents. In more severe cases, extraction can fail altogether. For this reason, which we detail in the sequel, the robust detection of line segments in realistic scenarios remains an open problem (see Du et al. (2010); von Gioi et al. (2010); Borkar et al. (2011); Ji et al. (2011) for examples of recent advances).

1.1 Overview of methods for line segment extraction

The Hough transform (HT) Hough (1962); Duda and Hart (1972) is the most popular method to detect lines in images. It is a likelihood-based parameter extraction technique based on the observation that the largest accumulations of edge points correspond to image lines. It uses a Hough space, a two-dimensional space where each point represents a line in the image, and each edge point in the image votes on the region of the Hough space that represents the pencil of all the image lines that go through that edge point. By processing all edge points, the votes for each location in the Hough space are accumulated, and the locations with the largest numbers of votes correspond to the most likely parameterizations of the lines in the image. Later, the HT was extended to extract line segments. After obtaining the line parameterizations with the usual HT, the start and end points of the segments are obtained using a gap-and-length method Duprat et al. (2005), the shape of the spread of votes in the Hough space Kamat-Sadekar and Ganesan (1998); Ji et al. (2011), or extra accumulators Teutsch and Schamm (2011).
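
To make the voting procedure concrete, the following sketch accumulates votes in a discretized (ρ, θ) Hough space for a binary edge map. This is a minimal illustration of the standard HT, in Python, under our own choices of function name and discretization; it is not the implementation used by any of the cited methods.

    import numpy as np

    def hough_lines(edge_map, n_theta=180):
        """Accumulate Hough votes for a binary edge map (illustrative sketch)."""
        h, w = edge_map.shape
        rho_max = int(np.ceil(np.hypot(h, w)))      # largest possible |rho|
        thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
        acc = np.zeros((2 * rho_max + 1, n_theta), dtype=np.int64)
        ys, xs = np.nonzero(edge_map)
        for x, y in zip(xs, ys):
            # each edge point votes for the pencil of lines through it:
            # rho = x*cos(theta) + y*sin(theta), one vote per theta
            rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
            acc[rhos + rho_max, np.arange(n_theta)] += 1
        return acc, thetas, rho_max

The local maxima of acc then indicate the most voted, i.e., most likely, (ρ, θ) line parameterizations.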

The success of the HT comes from its global nature, since all points in a line contribute to its detection — in fact, it was proven that it implements a statistically robust estimator for finding lines Goldenshluger and Zeevi (2004). It has, however, three major issues when used to extract line segments in complex images. Firstly, it requires an edge detection scheme such as, e.g., the Canny edge detector Canny (1986), to generate its input edge map. Edge detection is by itself a hard problem, recognized as ill-posed in general Bertero et al. (1988), where delicate balances occur between edge localization and noise reduction, and between detecting spurious edge points in noisy or textured areas and missing faint edges. As a consequence, edge detectors typically make use of small (local) filters and high thresholds for accepting an edge point, resulting in partial or complete segment mis-detections. This counteracts the global nature of the HT.

Secondly, since all votes originated by an edge point are wrong except for the one corresponding to the actual segment, the amount of noise in the Hough space is significant for images with many edge points Kim and Krishnapuram (1998). This makes it difficult to identify the most likely parameterization of actual lines in the Hough space. This is particularly critical for short segments, since they originate small peaks that are hard to identify Guil et al. (1995). Accidental alignments of unrelated edge points can also originate false peak detections Kim and Krishnapuram (1998); Guerreiro and Aguiar (2012). An example where Hough space contamination due to poor edge detection and erroneous vote accumulations leads to complete extraction breakdown is shown in Guerreiro and Aguiar (2012). Although Duda and Hart Duda and Hart (1972) argued as early as 1972 that the noise in the Hough space can be reduced by taking connectivity between collinear points into account, this topic received little attention in the past (exceptions are Yuen et al. (1993); Yang et al. (1997); Kim and Krishnapuram (1998); Guerreiro and Aguiar (2012)).

Thirdly, the way in which the HT was adapted to extract line segments, i.e., by taking the output of the HT and obtaining the start and end points of the segments, originates issues of its own. While initially each point in the Hough space accumulated votes supporting the existence of only one image line, now the votes can refer to multiple collinear line segments of various lengths and different start and end points. Since only the number of collinear edge points is stored in a conventional Hough space, the votes of individual line segments cannot be distinguished. This means that a local maximum in the Hough space no longer implies a maximum likelihood that a line or line segments actually exist in the image with such a parameterization — therefore, this adaptation of the HT is not a statistically robust line segment extractor. Few papers deal with this topic as well; exceptions include Kim and Krishnapuram (1998); Guerreiro and Aguiar (2012).

Another issue of the HT is that it cannot deal properly with wide line segments Yang et al. (1997), i.e., segments made up of blurred edges, since the highest number of votes corresponds to the diagonal of the segment rather than the segment itself. The limitations of the HT in handling complex images have been pointed out by several authors, e.g., Guil et al. (1995); von Gioi et al. (2010); Guerreiro and Aguiar (2012), and many efforts have been made to alleviate its problems. They include, e.g., the use of the edge direction to reduce the accumulation of spurious votes in the Hough space Illingworth and Kittler (1987a); Zhou et al. (2006), the sequential processing and removal of the strongest peaks in the Hough space Guil et al. (1995); Teutsch and Schamm (2011), and sub-sampling of the edge map (randomized HT) Kälviäinen et al. (1995). Other authors addressed storage and computational issues of the HT by proposing a hierarchical scheme Li et al. (1986), multiple accumulator resolutions Illingworth and Kittler (1987b), and a probabilistic formulation Stephens (1991). The thickness of line segments and edge point connectivity are used in Yang et al. (1997) to change the value of each vote in a standard HT. However, none of these methods tackles the fundamental issues of the HT in extracting line segments from complex images.

The other set of popular methods for line segment extraction can be categorized as local methods, due to their reliance on local decisions rather than global ones (see Nevatia and Babu (1980); Burns et al. (1986); Guru (2004); Nguyen et al. (2007); von Gioi et al. (2010) for examples). The majority of local methods use three steps: first, they detect edge points and chain them (using, e.g., the method in Etemadi (1992)); then, a rough estimate of the segment direction is computed using, e.g., total least-squares regression Nguyen et al. (2007); and finally, they refine and extend the segment by including new edge points that approximately fit the line. The final step usually involves alternating between two stages until convergence Nguyen et al. (2007), as sketched after this paragraph: inclusion of new edge points that are close to the candidate line, according to a distance measure; and re-estimation of the line segment parameters from the new set of edge points. In realistic scenarios, the initial step of detecting edge points and large connected regions belonging to a single segment is hard due to texture, low-contrast regions, crossing segments, and noise. The resulting smaller edge point chains hurt the reliability of the regression step and, finally, as is typical with this type of alternating method, a poor initial model for the line segment may compromise the final refinement and extension results. Variations of the basic method include, e.g., Faugeras et al. (1992), which takes the chained edge points and cuts them into line segments using a straightness criterion. References Guru (2004); Arras and Siegwart (1997) and Xiao and Shah (2003) bypass the chaining of edge points and fit a line directly to all edge points inside a sliding window. The segment direction is estimated roughly in Wilson et al. (1979) using a so-called local HT and taking the peaks of local orientation histograms, computed at each edge point. Two popular local methods for line segment detection are Burns et al. (1986) and the LSD (Line Segment Detector) of von Gioi et al. (2010). The method in Burns et al. (1986) coarsely quantizes the local orientation angles, chains adjacent pixels with identical orientation labels, and fits a line segment to the grouped pixels. LSD von Gioi et al. (2010) extends this idea by using continuous angles and eliminates false line segment detections by using the Helmholtz principle of Desolneux et al. (2006). These methods are computationally simple but lack the robustness to deal with the imperfections that occur in realistic scenarios.
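
As an illustration of the alternating refine-and-extend stage described above, the following sketch (a generic version under our own simplifying assumptions, not the procedure of any specific reference) alternates between selecting edge points close to the current line and re-estimating the line by total least squares:

    import numpy as np

    def refine_segment(points, p0, d, dist_thresh=1.5, max_iter=20):
        """points: (N, 2) edge-point coordinates; p0, d: point on line, unit direction."""
        inliers = points
        for _ in range(max_iter):
            # stage 1: include edge points whose distance to the line is small
            r = points - p0
            dist = np.abs(r[:, 0] * d[1] - r[:, 1] * d[0])  # perpendicular distance
            inliers = points[dist < dist_thresh]
            if len(inliers) < 2:
                break
            # stage 2: re-estimate the line by total least squares (PCA of inliers)
            new_p0 = inliers.mean(axis=0)
            _, _, vt = np.linalg.svd(inliers - new_p0)
            new_d = vt[0]                                   # principal direction
            if np.allclose(new_p0, p0) and np.isclose(abs(new_d @ d), 1.0):
                break                                       # converged
            p0, d = new_p0, new_d
        return p0, d, inliers

A poor initial (p0, d), as noted above, can trap this loop in a bad local solution.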

In Guerreiro and Aguiar (2012), we propose a better adaptation of the HT to the extraction of line segments, which retains its global aspect and solves its main issues in dealing with complex images. It starts by computing image derivatives using small filters and obtaining the directions for which there is a predominance of positive or negative derivatives (corresponding to a dark-to-bright transition in the image or vice-versa). Such predominance gives a rough indication that a line segment might be present at that location (it implements the sign test, as we discuss later) and only edge points within areas of strong predominance are allowed to contribute. Then, instead of using a single Hough space, where collinear line segments are indiscernible, we use a local HT for each line segment. Connectivity is incorporated in the voting process by only accounting for the contributions of edge points whose position and directional content agree with potential line segments. As a consequence, the vast majority of spurious votes are eliminated and the peaks in the local Hough spaces correspond to line segments of maximum length. Unfortunately, the computational complexity of the proposed method is prohibitive for many applications (computation times of 100 to 1000 seconds are reported in Guerreiro and Aguiar (2012)) and thick transitions are not handled correctly, just as in the standard HT. We now believe that the requirements of global methods, of storing comprehensive data in order to eliminate early decisions that compromise robustness, make such methods computationally too complex for most applications.

1.2 Proposed approach

The novel and key aspect of our approach is the combination of what we denote contextual and local edges. Contextual edges are thresholded image derivatives obtained with filters of large footprint, which reduce the influence of noise when identifying image transitions. Although large footprint filters are robust to noise, edge localization is imprecise, since every transition is smeared by their large point spread function. On the other hand, local edges are thresholded image derivatives obtained with filters of very small footprint, e.g., the Sobel, Prewitt, and Roberts operators. Since these filters are small, edges are localized precisely, but noise may originate erroneous detections of multiple directions and amplitudes. Our proposal combines contextual and local edges obtained at the same pixels by taking the sign of the contextual edge and using it to identify so-called valid local edges with the same sign. The edge sign indicates whether it is a dark-to-bright transition or the opposite.

In complex images, valid local edges are disconnected from each other, due to noise and clutter. To obtain connected edge maps, we handle connectivity explicitly in the combination process by checking if the valid local edges, along a given direction, are at a distance not greater than a maximum distance threshold from each other. If so, the pixels corresponding to valid local edges and those in between are marked as edge points (see the sketch below). The resulting edge detector has the robustness of contextual edges in dealing with noise and the localization of local edges, as idealized by Canny Canny (1986). Since the combination of contextual and local edges originates pixel-thin (connected) regions of the length of the line segment, a simple region growing and rectangle fitting methodology (which we detail later) suffices for the individual thin segments to be combined into line segments of all lengths and widths.
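
The following one-dimensional sketch shows the core of this combination on a single scanline; it is our own illustrative simplification, with assumed names and thresholds, not the full two-dimensional method:

    import numpy as np

    def combine_edges_1d(contextual, local, c_thresh, l_thresh, max_gap):
        """contextual, local: signed per-pixel responses; returns a signed edge map."""
        contextual = np.asarray(contextual, dtype=float)
        local = np.asarray(local, dtype=float)
        # a valid local edge exceeds the local threshold and matches the sign
        # of a sufficiently strong contextual edge at the same pixel
        valid = (np.abs(contextual) > c_thresh) & (np.abs(local) > l_thresh) \
                & (np.sign(contextual) == np.sign(local))
        edges = np.zeros(len(local), dtype=int)
        idx = np.flatnonzero(valid)
        for a, b in zip(idx[:-1], idx[1:]):
            # consecutive valid local edges of the same sign that are closer
            # than the maximum distance are connected: mark them and the
            # pixels in between as edge points
            if b - a <= max_gap and np.sign(contextual[a]) == np.sign(contextual[b]):
                edges[a:b + 1] = int(np.sign(contextual[a]))
        return edges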

Typical contextual edge detectors handle noise by applying a low-pass filter of large footprint such as, e.g., Gabor or steerable filters Freeman and Adelson (1991), followed by a derivation step and binarization with a fixed threshold. However, since other unwanted high-frequency variations due to, e.g., textures and clutter from interfering line segments and non-rectilinear image data, are common in complex images and have unpredictable amplitudes — unlike typical image white noise, which is constant throughout the image — an adaptive threshold is beneficial. For this purpose, we take both the mean and the variance of the two sets of pixels and use a two-sample statistical test to determine if a contextual edge exists. We consider that each set of pixels follows a Normal distribution with the sampled parameters, compute the Total Variation distance between them, and threshold the confidence interval for the null hypothesis that both distributions are actually the same.

By using large (yet limited) contextual filters, our semi-global method remains simple. Also, by doing away with the issues of local edges in a more effective way than typical global methods, through a combination with contextual edges, it is typically also more robust in obtaining line segments of all lengths and transition widths. This is demonstrated in the experimental section, where we present a complexity analysis and illustrative results using synthetic and real images to compare our method with other methods: the standard HT Duda and Hart (1972), the state of the art of local methods, LSD von Gioi et al. (2010), and our previous HT-based method Guerreiro and Aguiar (2012).

1.3 Paper organization

The remainder of the paper is organized as follows. In Section 2, we provide a brief introduction to statistical edge detectors and detail our implementation. Section 3 details the combination of contextual and local edges, including the explicit handling of connectivity. Rectangle fitting is described in Section 4 and the experimental results are reported in Section 5. Section 6 concludes the paper.

2 Statistical edge detection

Low-pass filtering prior to derivation is used extensively in edge detection schemes to reduce the effect of noise, albeit with small footprints for small localization error. This can be seen roughly as computing the means of two sets of pixels, comparing them, and binarizing the result with a threshold $\tau$. In this scheme, the value of $\tau$ is critical, since a small $\tau$ originates erroneous edges and a large $\tau$ misses them. In simpler scenarios where noise has a constant variance throughout the image, the threshold between two sets of pixels can be defined optimally for a given confidence interval (e.g., using the standard error of the mean, $\sigma/\sqrt{N}$). However, since complex images contain unwanted high-frequency variations of unpredictable variance such as, e.g., textures and clutter from other image data, a fixed threshold is far from optimal. This suggests that a more comprehensive statistical analysis is able to improve edge detection. Such studies were initiated by Bovik et al. Bovik et al. (1986), who used two-sample statistical tests in the context of edge detection. Two-sample tests take two sets of pixel values, illustrated in Fig. 1, and consider that they are samples of two underlying distributions, $f_1$ and $f_2$. The test then checks the null hypothesis that the underlying probability distributions are in fact the same Sheskin (2007). A contextual edge exists between the samples of $f_1$ and $f_2$ if the distributions are deemed different.

Figure 1: Illustration of two-sample tests. The pixels (left) are samples of underlying distributions $f_1$ and $f_2$ (right). The two-sample test determines if they are the same.

2.1 Typical approaches

The parameters of each distribution can be estimated from the samples by computing the sample means $\hat{\mu}_i$ and variances $\hat{\sigma}_i^2$ (with $i \in \{1, 2\}$). If the two sets of samples are assumed to be Normally distributed, $f_1 = \mathcal{N}(\mu_1, \sigma_1^2)$ and $f_2 = \mathcal{N}(\mu_2, \sigma_2^2)$, various criteria can then be used.

For the $t$-test, the distributions are the same if their means coincide. Since the sample mean varies with the sample variance and the number of samples through the standard error $\hat{\sigma}/\sqrt{N}$, the $t$-test is given by

$$t = \frac{\hat{\mu}_1 - \hat{\mu}_2}{\sqrt{\hat{\sigma}_1^2/N_1 + \hat{\sigma}_2^2/N_2}}. \qquad (1)$$

Once a $t$-value is determined, the probability that the test statistic would take a value at least as extreme as the one observed, denoted the $p$-value, is computed for the appropriate number of degrees of freedom. If the calculated $p$-value is below the threshold chosen for statistical significance (usually the 0.10, 0.05, or 0.01 level), the distributions are deemed different (two-tailed test) or one larger than the other (one-tailed test) Sheskin (2007).

If no assumption is made about the distributions of $f_1$ and $f_2$, nonparametric tests can be used instead. These tests are more general and robust to outliers Sheskin (2007) but also computationally more intensive, since they require expensive sorting operations. Some tests compute empirical cumulative distributions (ecds) from the samples of $f_1$ and $f_2$ and then compare them using some distance measure. The Kolmogorov–Smirnov test obtains the maximum distance between the ecds and the Fisz–Cramér–Von Mises test integrates the squared difference between the ecds. The Wilcoxon Mann–Whitney test is a popular rank order test that mixes the samples of both distributions, sorts them, and ranks them. The difference between the distributions is assessed by adding the ranks of one distribution and comparing the result with the added ranks of the other Sheskin (2007).
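
For concreteness, these tests are available in standard statistical libraries. The snippet below (an illustration only, not part of the proposed method) applies Welch's t-test, the Kolmogorov–Smirnov test, and the Wilcoxon Mann–Whitney test to two sets of synthetic pixel values:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    s1 = rng.normal(100, 10, 50)   # pixel samples from one side of a candidate edge
    s2 = rng.normal(110, 10, 50)   # pixel samples from the other side

    t_stat, t_p = stats.ttest_ind(s1, s2, equal_var=False)  # Welch's t-test
    ks_stat, ks_p = stats.ks_2samp(s1, s2)                  # Kolmogorov-Smirnov
    u_stat, u_p = stats.mannwhitneyu(s1, s2)                # Wilcoxon Mann-Whitney

    # a small p-value rejects the null hypothesis that both sides come
    # from the same distribution, i.e., an edge is likely present
    print(t_p, ks_p, u_p)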

The success of two-sample statistical tests in contextual edge detection in noisy images is shown in Lim and Jang (2002). Reference Fesharaki and Hellestrand (1994) uses a $t$-test for detecting edges, and a mixture of Normal distributions models noisy data in the edge detector of Thune et al. (1997). Various statistical tests are used in the neural network approach of Williams et al. (2006). In our previous line segment extraction method Guerreiro and Aguiar (2012), the number of local positive image derivatives minus the number of negative ones is used to assess the predominance of dark-to-bright transitions or vice-versa, indicating a high chance of having found a line segment. This procedure implements the sign test, a nonparametric paired test of the null hypothesis that there is "no difference in medians" between $f_1$ and $f_2$. Despite lacking the statistical power of other tests, it has very general applicability, as it makes very few assumptions about the nature of the distributions under test Sheskin (2007).

2.2 Our approach

Since nonparametric tests are computationally expensive, we use a parametric test instead. Although the $t$-test is arguably the most frequently used parametric test, it assumes that coinciding mean values imply identical distributions (this is clear in equation (1)), which is not coherent with the intuition that pixel distributions with the same sample mean but different sample variances can be deemed visually different. To enable this feature, we use the Total Variation (TV) distance between two Normal distributions DasGupta (2010),

$$\delta(f_1, f_2) = \frac{1}{2}\int_{-\infty}^{\infty} \left| f_1(x) - f_2(x) \right| dx, \qquad (2)$$

where $f_i = \mathcal{N}(\hat{\mu}_i, \hat{\sigma}_i^2)$ represents the probability density function (pdf) of the corresponding Normal distribution. The TV distance outputs the integral of the (linear) distance between the pdfs of $f_1$ and $f_2$. As we show below, the linear aspect assures that only the ratio between the sample standard deviations is taken into account, instead of their actual values. Although more comprehensive tests are needed, the higher sensitivity and robustness of the TV distance in detecting the perceived line segments, in contrast with non-linear distances such as the Kullback–Leibler and Hellinger divergences, lead us to believe that this feature of the TV distance best emulates the human visual system in estimating the boundary strength between different areas.

To simplify the calculation of (2), let $x_1$ and $x_2$ be the points at which the pdfs of distributions $f_1$ and $f_2$ are equal, as illustrated in Fig. 2. Points $x_1$ and $x_2$ (with $x_1 \leq x_2$) determine where one pdf becomes larger than the other and, thus, help in dealing with the magnitude operator and enable the use of the cumulative density function (cdf) of the Normal distribution, $\Phi_i$,

$$\delta(f_1, f_2) = \left| \Phi_1(x_1) - \Phi_2(x_1) \right| + \left| \Phi_1(x_2) - \Phi_2(x_2) \right|. \qquad (3)$$

To determine $x_1$ and $x_2$, we make $f_1(x) = f_2(x)$, which results in $a x^2 + b x + c = 0$, where $a = \sigma_2^2 - \sigma_1^2$, $b = 2(\sigma_1^2 \mu_2 - \sigma_2^2 \mu_1)$, and $c = \sigma_2^2 \mu_1^2 - \sigma_1^2 \mu_2^2 - 2\sigma_1^2 \sigma_2^2 \ln(\sigma_2/\sigma_1)$. If all parameters are identical, the distributions coincide and $\delta(f_1, f_2) = 0$. If only the sample variances are identical, the pdfs of both distributions cross at only one point, $x_1 = (\mu_1 + \mu_2)/2$. For different sample variances, two crossing points are obtained.

Figure 2: Illustration of the pdf intersection points of the two distributions and the area used for the TV distance.

To eliminate redundant calculations in the remainder of our method, we compute a look-up table with relevant TV distances. Since the TV distance requires four parameters, $(\mu_1, \sigma_1, \mu_2, \sigma_2)$, it seems that a four-dimensional look-up table is needed, which is expensive to compute and store. However, since we are computing the linear distance between two distributions, it is not the actual values of $\mu_1$ and $\mu_2$ that are needed but how they relate to each other. Similarly, the actual values of $\sigma_1$ and $\sigma_2$ are not needed but only the relation between them. Let variables $\mu'$ and $\sigma'$ be

$$\mu' = \frac{\mu_2 - \mu_1}{\sigma_1}, \qquad \sigma' = \frac{\sigma_2}{\sigma_1}. \qquad (4)$$

Assuming that $\mu_1 = 0$ and $\sigma_1 = 1$, equations (4) become

$$\mu' = \mu_2, \qquad \sigma' = \sigma_2. \qquad (5)$$

Replacing (5) in (2) and substituting the variable $y = (x - \mu_1)/\sigma_1$, we have

$$\delta(f_1, f_2) = \delta\big(\mathcal{N}(0, 1), \mathcal{N}(\mu', \sigma'^2)\big), \qquad (6)$$

which shows that the TV distance depends only on two parameters (analogous derivations show the same result in any scenario). A two-dimensional look-up table is then filled in for various $(\mu', \sigma')$, using (3). During normal operation, the parameters are normalized using equations (4) and the corresponding TV distance is accessed.
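
A direct numerical sketch of this construction follows; it computes the TV distance between N(0, 1) and N(μ', σ'²) via the pdf crossing points and the Normal cdf, as in (3), and fills the two-dimensional look-up table. The quantization ranges and step counts are our own assumptions, not the paper's exact choices.

    import numpy as np
    from scipy.stats import norm

    def tv_distance(mu, sigma):
        """TV distance between N(0,1) and N(mu, sigma^2), via pdf crossing points."""
        if np.isclose(sigma, 1.0):
            if np.isclose(mu, 0.0):
                return 0.0
            xs = [mu / 2.0]                  # equal variances: a single crossing
        else:
            # f1(x) = f2(x) reduces to the quadratic a*x^2 + b*x + c = 0
            a = sigma**2 - 1.0
            b = 2.0 * mu
            c = -mu**2 - 2.0 * sigma**2 * np.log(sigma)
            xs = sorted(np.roots([a, b, c]).real)
        # integrate |f1 - f2| piecewise; its sign is constant between crossings
        grid = [-np.inf] + list(xs) + [np.inf]
        total = 0.0
        for lo, hi in zip(grid[:-1], grid[1:]):
            d1 = norm.cdf(hi) - norm.cdf(lo)
            d2 = norm.cdf(hi, mu, sigma) - norm.cdf(lo, mu, sigma)
            total += abs(d1 - d2)
        return 0.5 * total

    # two-dimensional look-up table over the normalized parameters of (4)
    mus = np.linspace(0.0, 5.0, 101)     # assumed quantization ranges
    sigmas = np.linspace(1.0, 5.0, 81)
    lut = np.array([[tv_distance(m, s) for s in sigmas] for m in mus])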

3 Combining contextual and local edges using connectivity

In this section, we compute contextual edges and combine them with local ones to form connected edge maps. Since the samples of $f_1$ and $f_2$, the inputs of the two-sample test at the heart of contextual edge extraction, are taken from long and thin footprints (as illustrated in Fig. 1), each test is effective only in a small angular range. Therefore, contextual edges must be computed along multiple directions to span the entire range. For every direction, we use a running average approach to take the samples at each point and, assuming that they follow Normal distributions, obtain their parameters. Then, we take the parameters referring to $f_1$ and $f_2$, compute the Total Variation distance detailed in the previous section, and describe each pixel (and direction) as containing a negative, a positive, or no contextual edge, using quantization thresholds defined later. If there is a contextual edge, it is used to select valid local edges. Local edges consist of image derivatives obtained by convolving the image with kernels of very small footprint, central difference kernels, whose results are also described as being a negative, a positive, or no local edge. Valid local edges then consist of local edges that have the same sign as the contextual one. If the distance between valid local edges does not exceed a maximum distance threshold, those edges and the pixels between them are marked as edge points.

3.1 Computing Normal distribution parameters

The parameters of the Normal distribution are computed for each set of samples starting at a given pixel, as illustrated in Fig. 3. We use $N$ pixels in each set of samples, and we verified theoretically and experimentally that a modest number of uniformly spaced directions leads simultaneously to negligible angular coverage errors and good computational performance (since every pixel on the perimeter of a semicircular window of a given radius is used only when the number of directions is comparable to the perimeter length, the number of directions would have to grow with the radius for perfect angular coverage).

Figure 3: Illustration of a set of $N$ pixels starting at a given pixel along a given direction.

Each direction corresponds to an angle, which lies in the horizontal (H) or vertical (V) half of the semicircle, and in one of the quadrants of the semicircle, both illustrated in Fig. 4,

(7)

Figure 4: Illustration of the halves of the semicircle, H and V (left), and its quadrants (right).

Our method works by selecting a direction and dividing the image into lines along that direction, as illustrated in Fig. 3. For each point in a line, the parameters of the Normal distribution are computed by taking the $N$ consecutive pixels that start there and computing their sample average and variance, $\hat{\mu}$ and $\hat{\sigma}^2$. Since the set of consecutive pixels needed for the next point in the line is the same as the current one, except for the pixels at the start and end of the set, we use a recursion that simplifies calculations and is used often: the running average. To formalize this idea, for image $I$, we define a mapping that addresses each line along the chosen direction,

(8)

where $[\cdot]$ refers to the rounding operation and the integer parameter $k$ is chosen so that the addressed pixels lie inside the image limits. For directions in the horizontal half H, the $k$-th line is made up of the addressed pixels taken in increasing horizontal order; analogously, for directions in the vertical half V, the $k$-th line is addressed in increasing vertical order. To compute the average and standard deviation in a recursive way, we define linear and quadratic accumulators, $S_\ell$ and $S_q$, respectively, which are updated using

$$S_\ell \leftarrow S_\ell + I(p_{\mathrm{in}}) - I(p_{\mathrm{out}}), \qquad S_q \leftarrow S_q + I^2(p_{\mathrm{in}}) - I^2(p_{\mathrm{out}}), \qquad (9)$$

where $p$ locates the current pixel, $p_{\mathrm{out}}$ locates the pixel that is exiting the set of pixels, and $p_{\mathrm{in}}$ locates the pixel that is entering it,

(10)

The parameters of the Normal distribution at each point in the image, for the chosen direction, are then

$$\hat{\mu} = \frac{S_\ell}{N}, \qquad \hat{\sigma}^2 = \frac{S_q}{N} - \hat{\mu}^2. \qquad (11)$$
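
A minimal sketch of this running-average recursion on a single line of pixels follows (our own illustrative version; names are assumptions). It keeps the linear and quadratic accumulators of (9) so that each mean and variance in (11) costs O(1) per pixel:

    import numpy as np

    def running_mean_var(line, n):
        """Mean and variance of every window of n consecutive pixels in a 1-D line."""
        line = np.asarray(line, dtype=float)
        s_lin = line[:n].sum()            # linear accumulator
        s_quad = (line[:n] ** 2).sum()    # quadratic accumulator
        means = [s_lin / n]
        variances = [s_quad / n - (s_lin / n) ** 2]
        for i in range(n, len(line)):
            # the pixel entering the window is added and the pixel
            # exiting the window is subtracted, as in (9)
            s_lin += line[i] - line[i - n]
            s_quad += line[i] ** 2 - line[i - n] ** 2
            m = s_lin / n
            means.append(m)
            variances.append(s_quad / n - m ** 2)
        return np.array(means), np.array(variances)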

3.2 Combining contextual and local edges using connectivity

To compute local edges, we first capture the local directional content of image $I$ by computing its derivatives, through convolution with four oriented kernels,

(12)

Local orientation has been exploited and captured before using several types of kernels, see, e.g., Burns et al. (1986); Illingworth and Kittler (1987a); von Gioi et al. (2010). Although kernels with a large support would smooth the noise, we use the same approach as in our previous work Guerreiro and Aguiar (2012) and employ simple central difference kernels, since they enable more precise edge localization by minimizing the influence of surrounding pixels. Thus, we set

(13)

The combination of contextual and local edges occurs in three steps, described in detail in the sequel. To summarize: in the first step, the parameters of the Normal distributions, for all pixels and directions (computed in Section 3.1), and the local image derivatives (computed above) are used to compute contextual and local edges at each pixel. This occurs for each direction and progressively along each line, and the goal is to find a contextual and a local edge with matching signs. Once both edge types are matched at a pixel, a second step checks if the pixels between the top and bottom sets of samples contain valid local edges that are sufficiently connected with each other, as should occur in a line segment. We consider that valid local edges are connected if they are not separated by more than a maximum distance threshold. If step 2 is successful, the beginning of a line segment was found. A third step marks those pixels as connected edge points and progressively checks and marks the subsequent pixels along the current direction which preserve the contextual edge sign and whose valid local edges continue to be sufficiently connected. The output of this process is a set of connected edge points for each direction, which is used in Section 4 to fit rectangles.

3.2.1 Search for initial edge

In this step, the pixels along the current direction are scanned to find a contextual and a local edge whose spatial locations and signs coincide. For directions in the horizontal half H (the adaptation to the vertical half V is analogous), each line is scanned at a time and, within each line, the pixels are scanned in increasing order (to simplify the explanation of our method, we refer to the pixels of the set currently being tested by their index within the set).

To determine if there is a contextual edge at a point along the current direction, we start by computing the coordinates of the sets of samples above and below it (see Fig. 5 for an illustration) and obtain the parameters of the two distributions. The parameters are normalized using equation (4) and the two-dimensional look-up table containing the TV distances is accessed. Since the type of transition is important for line segment extraction, i.e., whether it is a dark-to-light transition or vice-versa, contextual and local edges should include this information. For this effect, the contextual edge is given by the TV distance signed with the sign of the difference between the sample means. A contextual edge exists if its magnitude exceeds the contextual threshold.

There is a local edge at a pixel if the magnitude of its local derivative exceeds the local threshold. If it has the same sign as the contextual edge, it is denoted a valid local edge. If a valid local edge was found, the method progresses to step 2. The local threshold is defined in terms of the contextual one. The overall combination of contextual and local edges using connectivity is summarized in Alg. 1, for directions in the horizontal half H.

Figure 5: Search for initial contextual (left) and valid local edge (right).
1:  input: directional derivative data, Normal distribution parameters for the current direction, maximum distance threshold, contextual and local thresholds, TV distance look-up table
2:  
3:  % for every pixel in the image (assuming a direction in the horizontal half H)
4:  for every line k along the current direction do
5:     step = 'search for start'
6:     for every pixel in line k do
7:        % get contextual edge data
8:        locate the sets of samples above and below the current pixel
9:        fetch the Normal distribution parameters of both sets
10:        normalize the parameters using equation (4)
11:        access the TV distance look-up table
12:        attach the sign of the difference between the sample means
13:        compute the local edge from the image derivatives
14:        % combine contextual and local edges
15:        if step = 'search for start' then Alg. 2 else Alg. 3 end if
16:     end for
17:  end for
18:  output: edge map for the current direction
Algorithm 1 combination of contextual and local edges using connectivity

3.2.2 Marking of initial area

The contextual and valid local edge found in step 1 indicates that a set of connected edge points may start at that location. That occurs if the pixels between the top and bottom set of samples contain valid local edges that are sufficiently connected to each other. Step 2 verifies this and, if true, marks the pixels as connected edge points.

Pixels are checked sequentially, as illustrated in Fig. 6. Whenever a valid local edge is found, the gap counter is reset to zero — otherwise, it is incremented. Note that, while in approaches such as Yuen et al. (1993) the distance between binary local edge points is the only criterion used to judge whether pixels are connected with each other, our approach (as also illustrated in Fig. 6) requires that the sign of the edge points matches the sign of the connected edges. Local edges with opposite signs — even strong ones — are discarded in the same way as non-edges.

If the gap counter exceeds the maximum distance threshold, the gap between two valid edge points is too large and step 2 ends unsuccessfully, returning to step 1 for further searching. If the gap counter never exceeds the threshold, the pixels are marked as edge points with the sign of the contextual edge and the method proceeds to step 3. In this paper, we allow gaps of up to a small fixed number of pixels. Steps 1 and 2 of the combination of contextual and local edges are summarized in Alg. 2, for directions in the horizontal half H.

1:  input: signed TV distance, directional derivative data, maximum distance threshold, contextual and local thresholds
2:  
3:  if there is a contextual edge and a valid local edge at the current pixel then
4:     % found initial edge, implement step 2
5:     while the initial area has not been fully traversed and the gap counter does not exceed the maximum distance threshold do
6:        if the current pixel holds a valid local edge then
7:           % found valid local edge
8:           reset the gap counter
9:        else
10:           increment the gap counter
11:        end if
12:     end while
13:     if the gap counter never exceeded the maximum distance threshold then
14:        % mark edge points
15:        for every pixel in the initial area do
16:           mark it as an edge point with the sign of the contextual edge
17:        end for
18:        step = 'mark area'
19:     end if
20:  end if
Algorithm 2 find initial set of connected edge points (assumes a direction in the horizontal half H)

Figure 6: Illustration of the sign of local edges and the gap counter (using the contextual edge illustrated in Fig. 5). With a small maximum distance threshold (for illustration), the four consecutive local edges of contrary sign make step 2 end abruptly.

3.2.3 Marking of connected area

As the set of connected edge points grows, the contextual and local edges to be checked move progressively along the line. For directions in the horizontal half H, this occurs by advancing to the next pixel of the line and re-computing both types of edges for contextual and local compliance. The contextual edge is computed as before, by accessing the parameters of the sets of samples above and below the new position and computing the new value of the signed TV distance. If its magnitude exceeds the contextual threshold and it has the sign found in step 1, the contextual edge is valid.

To determine if there is a valid local edge, the new pixel is checked. If its local derivative exceeds the local threshold with the correct sign, it is a valid local edge and the gap counter is reset — otherwise, the counter is incremented. If the counter does not exceed the maximum distance threshold and the contextual edge is valid, the pixel is marked as a connected edge point with the sign of the contextual edge. Otherwise, the set of connected edge points has ended. The last non-connected edge points are unmarked and the method returns to step 1, to search for a new set. This is illustrated in Fig. 7 and summarized in Alg. 3.

1:  input: signed TV distance, directional derivative data, maximum distance threshold, contextual and local edge thresholds
2:  if the contextual edge is valid then
3:     if the current pixel holds a valid local edge then
4:        % found valid local edge
5:        reset the gap counter
6:     else
7:        increment the gap counter
8:     end if
9:     if the gap counter does not exceed the maximum distance threshold then
10:        % mark as a connected edge point
11:        mark the current pixel with the sign of the contextual edge
12:     else
13:        % unmark non-connected edge points
14:        for every pixel marked since the last valid local edge do
15:           unmark it
16:        end for
17:        step = 'search for start'
18:     end if
19:  else
20:     step = 'search for start'
21:  end if
Algorithm 3 mark connected edge points (assumes a direction in the horizontal half H)

Figure 7: Illustration of the marking of connected edge points. The connected edge points are marked in black. The contextual and local edges at the current pixel are being checked.

4 Rectangle fitting

The edge map that is computed for every direction contains positive and negative edge points. Due to the handling of connectivity, the edge points of a line segment do not exhibit gaps between them and, consequently, a simple region growing scheme is sufficient to obtain the areas that form a rectangle. This avoids expensive search mechanisms for overcoming gaps, such as the costly scheme used in our previous HT-based method Guerreiro and Aguiar (2012). This section works as follows:

- 1. Region growing and labeling - A unique identification number is given to each set of connected edge points with the same sign, as illustrated in Fig. 8.

- 2. Fitting a rectangle to each area - Rectangle fitting can occur in various ways (see von Gioi et al. (2010) for a brief summary). In our method, we start by fitting a line to the upper and lower limits of each area, as illustrated in Fig. 8 (a sketch of this step follows Fig. 9). Then, using the average angle of both fitted lines, we obtain the start and end of each line segment.

Figure 8: The multiple connected edges along lines (left), obtained in Section 3.2, are joined into a single area and lines are fit to the upper and lower limits (right).

- 3. Validate rectangles - A validation step is needed to eliminate non-rectilinear structures in the image. Although various criteria can be used, we require only that the upper and lower fitted lines (see above) have similar angles and that the average angle lies inside the range permitted for the current direction.

- 4. Store rectangles - Valid rectangles are then stored as the average angle and the parameters corresponding to the four limiting lines, represented in Fig. 9.

Figure 9: Rectangle limits.
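
The line-fitting and validation steps (items 2 and 3 above) can be sketched as follows; this is a minimal illustration under our own assumptions about how a labeled area is represented (one upper and one lower boundary ordinate per column), with an assumed angle tolerance:

    import numpy as np

    def fit_area_angle(xs, ys_upper, ys_lower, tol_deg=5.0):
        """Fit lines to the upper/lower limits of a labeled area; return its angle."""
        # least-squares line fit y = m*x + b for each boundary
        m_up, _ = np.polyfit(xs, ys_upper, 1)
        m_lo, _ = np.polyfit(xs, ys_lower, 1)
        ang_up, ang_lo = np.arctan(m_up), np.arctan(m_lo)
        # validation: the two fitted lines must have similar angles
        if abs(ang_up - ang_lo) > np.deg2rad(tol_deg):
            return None                       # not a rectilinear structure
        return 0.5 * (ang_up + ang_lo)        # average angle of the rectangle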

5 Experiments

We single out demonstrative results of our method, which we contrast with those obtained with the standard HT Duda and Hart (1972), the state of the art of local methods, LSD von Gioi et al. (2010) (the superiority of LSD over several other local methods is thoroughly demonstrated in von Gioi et al. (2010)), and our previous HT-based method, denoted STRAIGHT Guerreiro and Aguiar (2012). We describe experiments with synthetic images, which help characterize the general behavior of the new method. Then, we present results obtained with several real-world images, which demonstrate its performance in practical applications. Finally, we discuss the computational complexity of these methods.

5.1 Synthetic images

We start by illustrating the behavior of the algorithms when dealing with an image made up of intersecting line segments of multiple lengths and widths, shown on the top left of Fig. 10. By comparing the edges computed by the Canny edge detector Canny (1986) with the line segments that the HT extracts from them, on the top middle and right of Fig. 10, respectively, we conclude that the HT succeeds in correctly extracting the lines from this image. This occurs because the line segments are long, not too numerous, and the HT does not require connectivity, therefore being able to overcome the multiple line crossings. On the other hand, the results of the LSD method, shown in the bottom left image of Fig. 10, illustrate that local methods fail to overcome line crossings and split the segments. This occurs because local methods require absolute connectivity, i.e., that edge points are perfectly chained together. In the particular case of LSD, the state of the art of local methods, edge points must have an approximately constant direction as well. Although the results of STRAIGHT show that it is able to extract thin line segments regardless of the intersections, it is unable to deal with thick ones, originating multiple erroneous detections. Our semi-global method succeeds in extracting line segments of all lengths and widths, with few errors. A pair of twin segments is extracted for each segment in the original image because both light-to-dark and dark-to-light transitions are detected.

Figure 10: Image with prominent lines. Top left to right: original image, result of Canny edge detector Canny (1986), and standard HT Duda and Hart (1972). Bottom left to right: LSD von Gioi et al. (2010), STRAIGHT Guerreiro and Aguiar (2012), and the proposed method.

We now illustrate the behavior of the algorithms in capturing transitions between differently textured regions. This simulates the low signal-to-noise scenarios that occur when using very low thresholds in edge detection, for increased sensitivity, where real transitions should be extracted successfully while avoiding false ones. We use the synthetic images in the left column of Fig. 11, which were generated by adding noise to a piecewise constant map (the top and bottom images were first used in Guerreiro and Aguiar (2012) and are used here to enable a simple comparison between the methods). The top image represents a simpler scenario, where one of the areas involved in the transition is perfectly smooth. The central image represents the same scenario, except that the mean value of both regions is now equal, making the variance the single discriminating factor. The bottom image simulates two smooth objects in a low signal-to-noise image. Due to the effect of noise, the local edge detection in LSD cannot produce edges with constant direction along real segments and LSD fails to extract line segments, except for parts of the top image. The results of STRAIGHT and the proposed method, in the two rightmost columns of Fig. 11, show that both overcome noise and succeed in extracting the line segments for the top and bottom images (the few short segments correspond to accidental connected alignments in the random texture). By allowing samples with the same mean but different variances to be classified as edges, through two-sample tests and, in particular, the TV distance, the proposed method is the only one that succeeds in obtaining most of the real segments of the figure in the middle row.

Figure 11: Textured images. From left to right: original image, result of LSD von Gioi et al. (2010), STRAIGHT Guerreiro and Aguiar (2012) and the proposed method.

5.2 Real images

We start by showing a challenging image that was first used in Guerreiro and Aguiar (2012) to demonstrate the ability of STRAIGHT to deal with a dense packing of line segments of multiple lengths that cross each other. In the top right image of Fig. 12, we display the results of the HT Duda and Hart (1972), showing that extraction fails altogether because the method cannot cope with the large number of edge points (this is explained in detail in Guerreiro and Aguiar (2012)). In the middle left image of Fig. 12, we display the results of LSD von Gioi et al. (2010), showing that a subset of the line segments is in fact detected. A closer look reveals that those are only the line segments that do not cross other structures, and also that several longer segments are detected as fragmented ones. The results of STRAIGHT are shown in the middle right image of Fig. 12, and we see that it succeeds in extracting the vast majority of the line segments in the image (exceptions are those which exhibit low contrast). Equally good results are obtained by the method we propose, shown in the bottom images of Fig. 12, with the difference that STRAIGHT needed 52 seconds to process this image while our semi-global method needed only about 4 seconds on the same machine. The bottom right image displays only the line segments with length greater than 50 pixels, illustrating that line segments are not fragmented.

Figure 12: Top left: image. Top right: HT Duda and Hart (1972). Middle left: LSD von Gioi et al. (2010). Middle right: STRAIGHT Guerreiro and Aguiar (2012). Bottom left: our method. Bottom right: our method (longer segments).

Fig. 13 presents another illustrative case. It was obtained by processing an image containing a complex scene of line segments (many of which have low contrast) occluded by a net that is large and out of focus. The result of LSD von Gioi et al. (2010) shows that most low contrast segments were not extracted and that others are fragmented into multiple pieces. The fragmentation of line segments is reduced in STRAIGHT Guerreiro and Aguiar (2012), but low contrast segments are likewise not extracted and the thick lines of the net originate a multitude of erroneous line segments. On the other hand, our proposed method extracts most line segments, including low contrast and thick ones, with little fragmentation. The extraction of low contrast line segments is enabled by the better handling of high-frequency variations of variable amplitude by two-sample statistical tests. Low contrast and blurred line segments occur often in realistic scenarios and are tackled by our method.

Figure 13: Top left: image. Top right: LSD von Gioi et al. (2010). Bottom left: STRAIGHT Guerreiro and Aguiar (2012). Bottom right: our method.

Finally, Fig. 14 presents results of using our method with real images of various kinds. As desired, the vast majority of long line segments are extracted without artificial fragmentation, despite the multiple segment crossings. Also note that, although some of these images have edges that form curves, our method succeeds in approximating these sections in a piecewise linear way, i.e., by a sequence of rectilinear line segments.

Figure 14: Results of our method for several kinds of real images.

5.3 Computational complexity

The most computationally intensive portion of our method is the calculation of contextual and local edges and their combination into continuous edge maps. By using a running average framework, the calculation of contextual edges depends only linearly on the pixel count and on the number of directions. Such linear dependency also occurs in the calculation of local edges and in their combination with contextual ones. This is confirmed on the left of Fig. 15, which shows the computation time (all experiments were performed on a 2.67 GHz machine, with all methods implemented in optimized C# code, except where noted) needed by the proposed method to extract line segments from multiple images, as a function of the pixel count and of the number of directions used. The trend line for the number of directions used in all experiments indicates that about 97K pixels are processed per second (e.g., the line segments in the Lenna image are extracted in a few seconds).

The complexity of STRAIGHT Guerreiro and Aguiar (2012) increases linearly with the number of edge points, as the dominating factor in the calculations is the updating of the Hough space of each local HT. In our experiments, the computation time of STRAIGHT was always considerably higher than that of the proposed method, which is confirmed on the right of Fig. 15. This figure shows the computation time needed by both methods to extract line segments from multiple images as a function of the pixel count, for a fixed number of directions. Although the performance of STRAIGHT varies non-linearly with the pixel count, the rough trend line that is fitted helps in illustrating the significant computational advantage, of about one order of magnitude, of the method we propose.

Figure 15: Extraction times (in seconds) as a function of pixel count. Left: proposed method, for three different numbers of directions (about 71K, 82K, and 97K pixels per second are processed, depending on the number of directions). Right: proposed method versus STRAIGHT; STRAIGHT processes roughly 8K pixels/s.

The standard HT requires filling an accumulator array, which depends linearly on the total number of edge points. In our experiments, the standard HT required approximately 1 to 10 seconds to process each image in this paper.

Reference von Gioi et al. (2010) states that the complexity of the LSD method depends linearly on the image pixel count, as illustrated by a plot therein showing the computation time needed to extract line segments in various images. In the worst-case scenario, i.e., images made up of noise, LSD is able to process about 240K pixels per second. Although the main advantage of local methods such as LSD is their low computational complexity, with the drawback of only dealing successfully with simple scenarios, the number of pixels processed per second by LSD is only about 2–3 times greater than that of our proposed method.

Although only STRAIGHT and our proposed method can deal with the complex images that arise in practice, the results above show that the computational complexity of our method is significantly below that of STRAIGHT. Furthermore, the complexity of our method is comparable with that of local methods, i.e., it is only about 2–3 times more complex than the LSD method, despite its ability to handle complex scenarios. This indicates that our method is efficient in extracting segments of all lengths and widths in complex scenarios.

6 Conclusion

We have presented a new semi-global method for line segment extraction. Our method combines contextual and local edges, with explicit handling of connectivity. Our experiments show that it outperforms current methods for line segment extraction in challenging situations, e.g., when dealing with complex images containing several crossing segments of multiple widths, and that its computational efficiency is comparable with simple local methods. We use a contextual edge detection scheme based on two-sample statistical tests, which is a robust way to handle noise.

References

  • Kosecka and Zhang (2002) J. Kosecka, W. Zhang, Video Compass, in: Proceedings of European Conference on Computer Vision, Springer-Verlag, Copenhagen, Denmark, 657–673, 2002.

  • Schmid and Zisserman (1997) C. Schmid, A. Zisserman, Automatic Line Matching across Views, in: Proceedings of IEEE International Conference on Computer Vision and Pattern Recognition, San Juan, Puerto Rico, 666–671, 1997.

  • Micusík et al. (2008) B. Micusík, H. Wildenauer, J. Kosecka, Detection and matching of rectilinear structures, in: Proceedings of IEEE International Conference on Computer Vision and Pattern Recognition, Anchorage, Alaska, USA, 1–7, 2008.
  • Slater and Healey (1996) D. Slater, G. Healey, 3D Shape Reconstruction by Using Vanishing Points, IEEE Transactions on Pattern Analysis and Machine Intelligence 18 (1996) 211–217.
  • Krüger (2001) W. Krüger, Robust and efficient map-to-image registration with line segments, Machine Vision and Applications 13 (2001) 38–50.
  • Liu et al. (2011) J. Liu, Y. Chen, X. Tang, Decomposition of Complex Line Drawings with Hidden Lines for 3D Planar-Faced Manifold Object Reconstruction, IEEE Transactions on Pattern Analysis and Machine Intelligence 33 (2011) 3–15.
  • Fränti et al. (1998) P. Fränti, E. Ageenko, H. Kälviäinen, S. Kukkonen, Compression of line drawing images using Hough transform for exploiting global dependencies, in: Proceedings of the Fourth Joint Conference on Information Sciences, vol. 4, Research Triangle Park, North Carolina, USA, 433–436, 1998.
  • Du et al. (2010) S. Du, B. J. van Wyk, C. Tu, X. Zhang, An improved Hough transform neighborhood map for straight line segments, IEEE Transactions on Image Processing 19 (2010) 573–585.
  • von Gioi et al. (2010) R. von Gioi, J. Jakubowicz, J. Morel, G. Randall, LSD: A Fast Line Segment Detector with a False Detection Control, IEEE Transactions on Pattern Analysis and Machine Intelligence 32 (4) (2010) 722–732.
  • Borkar et al. (2011) A. Borkar, M. Hayes, M. Smith, Polar randomized Hough Transform for lane detection using loose constraints of parallel lines, in: Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing, Prague, Czech Republic, 1037–1040, 2011.
  • Ji et al. (2011) J. Ji, G. Chen, L. Sun, A novel Hough Transform method for line detection by enhancing accumulator array, Pattern Recognition Letters 32 (11) (2011) 1503–1510.
  • Hough (1962) P. Hough, Method and means for recognizing complex patterns, U.S. Patent 3.069.654, 1962.
  • Duda and Hart (1972) R. O. Duda, P. E. Hart, Use of the Hough Transformation to detect lines and curves in pictures., Communications of the ACM 15 (1) (1972) 11–15.
  • Duprat et al. (2005) O. Duprat, B. Keck, C. Ruwwe, U. Zölzer, Hough Transform with weighting Edge-maps, in: Fifth IASTED International Conference on Visualization, Imaging, & Image Processing, Benidorm, Spain, 2005.
  • Kamat-Sadekar and Ganesan (1998) V. Kamat-Sadekar, S. Ganesan, Complete description of multiple line segments using the Hough Transform, Image and Vision Computing 16 (9-10) (1998) 597 – 613.
  • Teutsch and Schamm (2011) M. Teutsch, T. Schamm, Fast Line and Object Segmentation in Noisy and Cluttered Environments using Relative Connectivity, in: Proceedings of the Conference on Image Processing, Computer Vision, and Pattern Recognition, Las Vegas, USA, 2011.
  • Goldenshluger and Zeevi (2004) A. Goldenshluger, A. Zeevi, The Hough Transform estimator, Annals of Statistics 32 (2004) 1908.
  • Canny (1986) J. Canny, A Computational Approach To Edge Detection, IEEE Transactions on Pattern Analysis and Machine Intelligence 8 (1986) 679–698.
  • Bertero et al. (1988) M. Bertero, T. Poggio, V. Torre, Ill-posed problems in early vision, in: Proceedings of the IEEE, 869–889, 1988.
  • Kim and Krishnapuram (1998) J. Kim, R. Krishnapuram, A robust Hough Transform based on validity, in: IEEE International Conference on Fuzzy Systems, vol. 2, Anchorage, USA, 1530 – 1535, 1998.
  • Guil et al. (1995) N. Guil, J. Villalba, E. Zapata, A fast Hough Transform for segment detection., IEEE Transactions on Image Processing 4 (11) (1995) 1541–1548.
  • Guerreiro and Aguiar (2012) R. Guerreiro, P. Aguiar, Connectivity-Enforcing Hough Transform for the Robust Extraction of Line Segments, IEEE Transactions on Image Processing (to appear).
  • Yuen et al. (1993) S. Yuen, T. Lam, N. Leung, Connective Hough transform, Image and Vision Computing 11 (5) (1993) 295–301.
  • Yang et al. (1997) M. Yang, J. Lee, C. Lien, C. Huang, Hough Transform Modified by Line Connectivity and Line Thickness, IEEE Transactions on Pattern Analysis and Machine Intelligence 19 (8) (1997) 905–910.
  • Illingworth and Kittler (1987a) J. Illingworth, J. Kittler, A survey of efficient Hough Transform methods, in: Proceedings of Alvey Vision Club Conference, Cambridge, 319–326, 1987a.
  • Zhou et al. (2006) J. Zhou, W. Bischof, A. Sanchez-Azofeifa, Extracting Lines in Noisy Image Using Directional Information, in: Proceedings of the 18th International Conference on Pattern Recognition, vol. 2, IEEE Computer Society, Washington, DC, USA, 215–218, 2006.
  • Kälviäinen et al. (1995) H. Kälviäinen, P. Hirvonen, L. Xu, E. Oja, Probabilistic and non-probabilistic Hough Transforms: overview and comparisons, Image and Vision Computing 13 (1995) 239–252.
  • Li et al. (1986) H. Li, M. Lavin, R. Le Master, Fast Hough Transform: A hierarchical approach, Computer Vision, Graphics, and Image Processing 36 (1986) 139–161.
  • Illingworth and Kittler (1987b) J. Illingworth, J. Kittler, The adaptive Hough Transform, IEEE Transactions on Pattern Analysis and Machine Intelligence 9 (1987b) 690–698.
  • Stephens (1991) R. Stephens, Probabilistic approach to the Hough transform, Image and Vision Computing 9 (1) (1991) 66 – 71.
  • Nevatia and Babu (1980) R. Nevatia, K. Babu, Linear feature extraction and description, Computer Graphics and Image Processing 13 (3) (1980) 257–269.

  • Burns et al. (1986) J. Burns, A. Hanson, E. Riseman, Extracting straight lines, IEEE Transactions on Pattern Analysis and Machine Intelligence 8 (1986) 425–455.
  • Guru (2004) D. Guru, A simple and robust line detection algorithm based on small eigenvalue analysis, Pattern Recognition Letters 25 (1) (2004) 1–13.

  • Nguyen et al. (2007) V. Nguyen, S. Gächter, A. Martinelli, N. Tomatis, R. Siegwart, A comparison of line extraction algorithms using 2D range data for indoor mobile robotics, Autonomous Robots 23 (2007) 97–111.
  • Etemadi (1992) A. Etemadi, Robust Segmentation of Edge Data, in: Proceedings of IEEE International Conference on Image Processing and its Applications, Maastricht, The Netherlands, 311–314, 1992.
  • Faugeras et al. (1992) O. Faugeras, R. Deriche, H. Mathieu, N. Ayache, G. Randall, Parallel image processing, chap. The depth and motion analysis machine, World Scientific Publishing Co., Inc., River Edge, NJ, USA, 143–175, 1992.
  • Arras and Siegwart (1997) K. Arras, R. Siegwart, Feature Extraction and Scene Interpretation for Map-Based Navigation and Map Building, in: Proceedings of SPIE, Mobile Robotics XII, vol. 3210, 42–53, 1997.
  • Xiao and Shah (2003) J. Xiao, M. Shah, Two-Frame Wide Baseline Matching, in: Proceedings of IEEE International Conference on Computer Vision, Nice, France, 603–609, 2003.
  • Wilson et al. (1979) L. Wilson, N. Chen, R. Kelley, J. Birk, Image Feature Extraction Using Diameter Limited Gradient Direction Histograms, IEEE Transactions on Pattern Analysis and Machine Intelligence 2 (1) (1979) 228–235.
  • Desolneux et al. (2006) A. Desolneux, L. Moisan, J. Morel, Gestalt theory and image analysis, a probabilistic approach, 2006.
  • Freeman and Adelson (1991) W. Freeman, E. Adelson, The Design and Use of Steerable Filters, IEEE Transactions on Pattern Analysis and Machine Intelligence 13 (1991) 891–906.
  • Bovik et al. (1986) A. Bovik, T. Huang, D. Munson, Nonparametric tests for edge detection in noise, Pattern Recognition 19 (3) (1986) 209–219.
  • Sheskin (2007) D. Sheskin, Handbook of Parametric and Nonparametric Statistical Procedures, Chapman & Hall/CRC, 4 edn., 2007.
  • Lim and Jang (2002) D. Lim, S. Jang, Comparison of two-sample tests for edge detection in noisy images, Journal of the Royal Statistical Society: Series D (The Statistician) 51 (1) (2002) 21–30.
  • Fesharaki and Hellestrand (1994) M. Fesharaki, G. Hellestrand, A new edge detection algorithm based on a statistical approach, in: Proceedings of International Symposium on Speech, Image Processing and Neural Networks, vol. 1, Hong Kong, China, 21 – 24, 1994.
  • Thune et al. (1997) M. Thune, B. Olstad, N. Thune, Edge detection in noisy data using finite mixture distribution analysis, Pattern Recognition 30 (5) (1997) 685 – 699.
  • Williams et al. (2006) I. Williams, D. Svoboda, N. Bowring, E. Guest, Improved statistical edge detection through neural networks, in: 10th Conference on Medical Image Understanding and Analysis, Manchester, UK, 56–60, 2006.
  • DasGupta (2010) A. DasGupta, Asymptotic theory of statistics and probability, Springer, 2010.