Traffic Sign Timely Visual Recognizability Evaluation Based on 3D Measurable Point Clouds

10/10/2017 ∙ by Shanxin Zhang, et al. ∙ Xiamen University; NetEase, Inc.

The timely provision of traffic sign information to drivers is essential for safe driving and for avoiding traffic accidents. We propose a quantitative method for evaluating the timely visual recognizability of traffic signs in large-scale transportation environments. To achieve this goal, we first introduce the concept of a visibility field to reflect the visibility distribution in three-dimensional (3D) space and construct a traffic sign Visibility Evaluation Model (VEM) to measure traffic sign visibility from a given viewpoint. Based on the VEM, we then introduce the concept of the Visual Recognizability Field (VRF) to reflect the visual recognizability distribution in 3D space and establish a Visual Recognizability Evaluation Model (VREM) to measure the visual recognizability of a traffic sign from a given viewpoint. Next, we propose a Traffic Sign Timely Visual Recognizability Evaluation Model (TSTVREM) that combines the VREM, the actual maximum continuous visual recognizable distance, and traffic big data to measure traffic sign visual recognizability in different lanes. Finally, we present an automatic algorithm that implements the TSTVREM model through traffic sign and road marking detection and classification, traffic sign environment point cloud segmentation, viewpoint calculation, and TSTVREM model realization. The performance of our method is tested on three road point clouds acquired by a mobile laser scanning system (RIEGL VMX-450) according to Road Traffic Signs and Markings (GB 5768-1999, China), showing that the method is feasible and efficient.


I Introduction

Traffic signs convey important traffic information, such as speed restrictions, driving behavior restrictions, and changes in road conditions ahead; the timely provision of this information to drivers increases the likelihood that drivers will respond in time, ensuring safe driving and avoiding traffic accidents [1, 2, 3]. Detecting and reading a roadside on-premise sign involves a complex series of sequentially occurring mental and physical events. They include message detection and processing, intervals of eye and/or head movement alternating between the sign and the roadway environment, and finally active maneuvering of the vehicle (such as lane changes, deceleration, and turning toward a destination) as required in response to the stimulus provided by the sign [4]. In these complex procedures, it is of paramount importance that traffic signs be clearly visible to the driver. However, some signs are damaged by humans or nature, and some are occluded by other objects in the traffic environment. This may make a sign poorly visible or invisible, thereby decreasing its visual recognizability and increasing the probability of a traffic accident. An efficient method for evaluating the timely visual recognizability of traffic signs is needed to judge whether a traffic sign can be recognized during driving.

Many factors affect a traffic sign’s visual recognizability in a given traffic environment. We group them into two categories: objective factors and subjective factors. Objective factors include a traffic sign’s size, placement, mounting height, panel aiming, depression angle, degree of shape damage, degree of occlusion, road curvature, road gradient, and the visual continuity of the sign in its surrounding environment; all of these affect the driver’s ability to recognize the sign visually. A reasonably and legitimately installed traffic sign provides the driver with good retinal imaging, which aids visual recognition. Subjective factors include the vehicle speed, the driver’s sight direction, and the Viewer Reaction Time (VRT) [4]. The driver’s Geometric Field Of View (GFOV) decreases progressively with increasing vehicle speed [5]. The direction of the line of sight determines whether the traffic sign falls within the GFOV. Because of occlusion, the frequency with which visual continuity is interrupted is also one of the factors that affects whether drivers recognize a traffic sign [6]. A sign cannot be recognized when the actual maximum recognizable distance is less than the Viewer Reaction Distance (VRD) [4].

Of course, factors beyond those listed above, both objective and subjective, also affect a traffic sign’s visual recognizability: for example, weather conditions [7], lighting effects caused by the solar elevation angle, the driver’s age and eyesight [8], and the cognitive burden of traffic density [9], among others. We group these together as other factors. An interface for the influence of these other factors is left in the TSTVREM model so that they can be studied in the future.

Although promising results have been achieved in traffic sign detection and classification [10, 11, 12], only a few works so far have focused on computing the visibility of traffic signs from a given viewpoint for smart driver assistance systems or for transportation facility maintenance. Most of these works are based mainly on sign images and videos and use computer vision methods [13, 14, 15]; some works study the occlusion of traffic signs based on point clouds acquired by mobile laser scanning (MLS). Because of the limitations of lighting conditions and the fixed viewpoint position and view angle of camera images, images alone cannot provide all of the information needed to compute a traffic sign’s visual recognizability from an arbitrary position above the road surface. Therefore, it is not feasible to use images or video to calculate visual recognizability at arbitrary positions around a traffic sign. The development of MLS technology makes it possible to evaluate a traffic sign’s visual recognizability from 3D measurable point clouds. Unlike optical imaging, and in addition to taking pictures, MLS can provide a complete point cloud of the entire roadway scene without the limitations of lighting conditions. One can extract from the point cloud all of the information needed to compute a traffic sign’s visual recognizability. This provides a new way to study visibility and recognizability in a measurable, real traffic environment. Fig. 1 illustrates examples of traffic signs with low visual recognizability caused by object occlusion, plant occlusion, tilt, wrong depression angle, and wrong height, respectively. Figs. 1(a) and 1(b) show the same place from different viewpoints above the road. The influence of mounting height on traffic sign visibility is not obvious, but it does exist.

(a) Object occlusion
(b) Object occlusion
(c) Plants occlusion
(d) Tilted signs
(e) Wrong angle
(f) Wrong height
Fig. 1: Traffic signs with low visual recognizability.

In this paper, we present an automatic traffic sign timely visual recognizability evaluation model based on human visual cognition theory using 3D measurable point clouds acquired by an MLS system. We summarize our main contributions as follows:

  1. We introduce the concept of a visibility field to reflect the visibility distribution in 3D space. For a viewpoint in 3D space, we present a VEM based on human visual cognition theory to compute a traffic sign’s visibility. The VEM combines the principle of retinal imaging with the actual human driving situation to measure the clarity of traffic signs from a given viewpoint.

  2. By analogy with the VEM, we introduce the concept of a visual recognizability field to reflect the visual recognizability distribution in 3D space. For a viewpoint in 3D space, we establish a VREM to measure a traffic sign’s visual recognizability from that viewpoint.

  3. To evaluate a traffic sign’s visual recognizability, we propose the TSTVREM model by combining the VREM, the actual maximum continuous visual recognizable distance, and traffic big data.

  4. We present an automatic algorithm to realize the TSTVREM model. It includes extracting traffic sign point clouds, segmenting the surrounding point clouds in front of traffic signs on the right of the roadway, and computing viewpoints for the different lanes based on extracted road marking point clouds.

The pipeline of traffic sign timely visual recognizability evaluation is illustrated in Fig. 2. Firstly, we segment MLS point clouds into ground point clouds and nonground point clouds and extract the traffic sign point clouds from nonground point clouds. Secondly, we split surrounding point clouds in front of the traffic sign from nonground point clouds according to the designed visual cognition distance of the roadway. Thirdly, we extract the road marking point clouds from the ground point clouds and then get the viewpoints in different lanes. Finally, we use our algorithm to realize the TSTVREM model based on the extracted point clouds and viewpoints to get the visual usability of a traffic sign.

Fig. 2: Pipeline of traffic sign timely visual recognizability evaluation.

This paper is organized as follows. A review of the previous work is given in Section II. Sections III and IV describe the definition of the TSTVREM model and its implementation, respectively. Section V shows experiments and Section VI concludes this paper.

II Related Work

With the rapid development of laser radar, and especially of MLS systems able to collect accurate and reliable 3D point clouds, these point clouds now provide geometric and radiometric information for infrastructure facilities, making it simpler and more efficient to survey an urban or roadway environment. Wen et al. [16] describe the attributes of the MLS system and its data acquisition process. According to the main focus of this paper, we divide the related work into three categories: traffic sign detection and classification, road marking detection and classification, and visibility research.

II-A Traffic Sign Detection and Classification

The goal of traffic sign detection and classification is to find the locations and types of traffic signs. Most existing methods are based on extracting color and shape information from images or videos. Color-based methods mainly use color space design to segment candidate sign regions and then use shape or edge features to extract the traffic sign [17, 18]. Shape-based detection methods include shape matching [19], the Hough transform [20], HOG features with SVM classification [21], and HOG features with Convolutional Neural Networks (CNNs) [22], among others. However, the detection performance of these methods is heavily affected by weather conditions, illumination, viewing distance, and occlusion.

Recently, researchers have developed various methods to detect traffic signs in point clouds. Yang et al. [23] proposed a method to extract urban objects (including traffic signs) from MLS over-segmentations of urban scenes based on supervoxels and semantic knowledge. Lehtomäki et al. [24] used spin images and LDHs to recognize objects (including traffic signs) in a roadway environment with a machine learning method. Wen et al. [16] presented a spatial-related traffic sign detection process for inventory purposes. Some researchers have also proposed detection methods that combine 3D point clouds and 2D images to achieve good detection performance [12, 11, 25, 26].

II-B Road Marking Detection and Classification

Road markings on paved roadways, as critical features in traffic management systems, play an important role in providing guidance and information to drivers and pedestrians. Guan et al. [27] proposed an algorithm to extract road markings using point-density-dependent multi-threshold segmentation and a morphological closing operation. The methods in [28, 29] rapidly extract road markings by generating 2D georeferenced images from 3D point clouds. However, converting 3D point clouds into 2D georeferenced feature images can introduce incompleteness and errors into the feature extraction process; Yu et al. [30] therefore extracted road markings directly from the 3D point clouds based on their reflective properties and classified them using deep learning.

II-C Visibility Research

Doman et al. [31] proposed a visibility estimation method for traffic signs as part of nuisance-free driving safety support systems, aiming to prevent the provision of too much information to a driver. They improved their method by considering temporal environmental changes [32] and by integrating both local and global features of the driving environment [14]. They use contrast ratios and distances, counted in pixels over different regions of an image, to compute the visibility of a traffic sign. This approach is limited by the viewpoint position and weather conditions, and it does not consider traffic sign placement, occlusion, road curvature, or the subjective factors mentioned in Section I.

Katz et al. proposed a Hidden Point Removal (HPR) operator to obtain the visible points from a given viewpoint [33], applied it to improving the visual comprehension of point sets [34], and investigated which properties the transformation function of an HPR operator should satisfy [35]. Based on the HPR operator, traffic sign occlusion detection from a point cloud was studied in [36], which measured occlusion with an occlusion distribution index and an occlusion gradient index. However, other factors, including the occluded area proportion, the influence of driving speed on vision, road curvature, and the number of lanes, were not considered, even though they influence the visibility and recognizability of a traffic sign. Besides this, the HPR operator can only detect the surface points that occlude the traffic sign; it cannot detect all of the occluding points when the occlusion is composed of many objects or planes.

III Definition of TSTVREM Model

The framework of the TSTVREM model is shown in Fig. 3. In this paper, the phrase “viewpoint visibility” means the visibility of a traffic sign from a given viewpoint, and the phrase “viewpoint recognizability” means the degree to which a traffic sign is recognizable from a given viewpoint. The framework can be divided into three parts: the visibility field definition and VEM (Section III-A), the visual recognizability field definition and VREM (Section III-B), and traffic sign timely visual recognizability evaluation (Section III-C).

Fig. 3: The framework of TSTVREM model.

III-A Definition of Visibility Field and VEM Model

Visibility field definition: for a given 3D environment around a target object, the visibility of each viewpoint in the 3D space around the object constitutes a visibility field. It reflects the visibility distribution of the target object in 3D space. The visibility field can be divided into the actual visibility field and the ideal visibility field, defined for an actual traffic sign installed in a real road environment and for its corresponding ideal traffic sign in an ideal road environment, respectively. Taking a traffic sign as an example, its hemispherical visibility field is shown in Fig. 4; the environments with and without occlusion yield the actual and ideal visibility fields, respectively. The traffic sign is shown in yellow in Fig. 4(a); in Figs. 4(b) and 4(c), white indicates a visibility of 1 and black a visibility of 0.

(a) viewpoints
(b) Actual visibility field
(c) Ideal visibility field
Fig. 4: Hemispherical visibility field of a traffic sign.

III-A1 Actual Traffic Sign Visibility Field and VEM Model

For a traffic sign in an actual traffic environment, the visibility of each viewpoint at a fixed height above the road surface of the driving-direction lanes in front of the traffic sign constitutes an actual traffic sign visibility field. It is related to the orientation of the traffic sign panel, the sign’s mounting height, the observation distance, road curvature, road gradient, the viewpoint position, the degree of occlusion, and the sight line deviation, among others. We call the combined influence of the panel orientation, mounting height, observation distance, road curvature, road gradient, and viewpoint position the geometric factor. The VEM is constructed from the geometric factor, the occlusion factor, and the sight line deviation factor. The viewpoint visibility of a traffic sign can be defined as follows:

(1)

In the following part, we describe how to evaluate the geometric factor, occlusion factor, and sight line deviation factor, respectively.

  • Geometric factor evaluation

    In order to make the visibility calculation consistent with human visual recognition theory, we use the principle of retinal imaging to account for the impact of the geometric factor. The evaluation of the geometric factor is given below.

    (2)
    • The retinal imaging area of the traffic sign observed from the given viewpoint.

    • The standard retinal imaging area of the corresponding ideal traffic sign, observed from a viewpoint that lies on the normal of the ideal traffic sign panel passing through the panel center, at a fixed standard distance from the panel. To keep the geometric factor no larger than 1, the standard distance is taken to be smaller than the shortest observation distance of interest; the traffic sign has already disappeared from the driver’s view field at very short range, so there is no point in computing visibility when the observation distance is below that value. Different types of traffic signs have different standard retinal imaging areas.

    Obviously, the geometric factor is inversely related to the angle between the line connecting the viewpoint to the traffic sign panel center and the normal passing through the panel center (the orientation factor), to the observation distance, and to the height difference between the viewpoint and the traffic sign panel center.

  • Occlusion factor evaluation

    We also use the principle of retinal imaging to account for the impact of the occluded area ratio. Apart from that, we introduce an occlusion distribution factor as [36] did. The evaluation of the occlusion degree is given below.

    (3)
    • The retinal imaging occluded area ratio, i.e., the retinal imaging area of the occluded region of the traffic sign divided by the retinal imaging area of the whole panel, for the given viewpoint.

    • The occlusion distribution, i.e., the distance between the center point of the occluded region and the traffic sign panel center, normalized by the maximum distance from the panel center to the vertices of the boundary polygon of the traffic sign panel.

    • The weights of these two terms, which should sum to 1.

    After adding a punishment item, the evaluation of the occlusion factor is given below.

    (4)

    To make the occlusion factor drop nearly to zero when the occlusion degree is nearly one, the punishment parameter should satisfy the corresponding condition. Obviously, when the degree of occlusion is constant, the occlusion factor decreases as the punishment parameter increases; when the punishment parameter is constant, the occlusion factor decreases as the degree of occlusion increases. Therefore, Eq. (4) meets our expectation that the contribution to visibility decreases as the degree of occlusion increases.

  • Sight line deviation factor evaluation

    For an object seen from a fixed viewpoint, it appears clearer when viewed head-on than when viewed obliquely. This factor reflects how different imaging positions on the retina lead to different visibilities. Furthermore, a driver travelling at different speeds has different GFOVs [37]. Combining the above, the sight line deviation factor evaluation is established below.

    (5)
    • The sight line deviation angle between the line of sight and the line connecting the viewpoint to the traffic sign panel center.

    • The actual GFOV. It depends on the actual percentile speed [38], which comes from traffic big data, and can be written as a function of that speed.

    • A punishment item applied when the sight line deviation angle is larger than half of the actual GFOV.

To sum up all the factors discussed above, for a given traffic sign and a viewpoint in a lane, its visibility equals (a reconstruction of the factor formulas is sketched below):

(6)
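
The equation bodies of Eqs. (1)-(6) are not reproduced above, so the following LaTeX block is only a plausible reconstruction based on the surrounding definitions; the symbols $E_g$, $E_o$, $E_s$, $S_{view}$, $S_{std}$, $r_o$, $d_o$, $w_1$, $w_2$, $\gamma$, $\theta$, $\alpha_{GFOV}$, and $p(\cdot)$ are illustrative notation introduced here, not the paper's own.

```latex
% Hedged reconstruction of Eqs. (2)-(6); notation introduced here, not the original symbols.
E_g = \frac{S_{view}}{S_{std}}                            % Eq. (2): actual over standard retinal imaging area

D   = w_1\, r_o + w_2\, d_o, \quad w_1 + w_2 = 1          % Eq. (3): occlusion degree from area ratio r_o and distribution d_o

E_o = \left(1 - D\right)^{\gamma}, \quad \gamma \ge 1     % Eq. (4): occlusion factor with punishment exponent \gamma

E_s =
\begin{cases}
1, & \theta \le \alpha_{GFOV}/2 \\[2pt]
p(\theta), & \theta > \alpha_{GFOV}/2                     % Eq. (5): punishment p(\theta) outside the GFOV
\end{cases}

V = E_g \cdot E_o \cdot E_s                               % Eq. (6): viewpoint visibility as the product of the three factors
```

The multiplicative form of Eq. (6) is consistent with Section III-A2, which states that the ideal-environment model degenerates into the product of the geometric and sight line deviation factors; the exponent form of Eq. (4) is only one function satisfying the behavior described above.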

III-A2 Traffic Sign Ideal Visibility Field and VEM Model

For a traffic sign in an ideal traffic environment, the visibility of each viewpoint at a fixed height above the surface of the driving-direction lanes in front of the traffic sign constitutes an ideal traffic sign visibility field. An ideal traffic environment is one in which an ideal traffic sign is installed at a suitable placement beside a straight, horizontal roadway according to the traffic sign design and installation rules, and in which there is no other object around the roadway apart from the traffic sign. Therefore, the VEM in an ideal environment degenerates into the product of the geometric factor and the sight line deviation factor. The formula for viewpoint visibility in an ideal traffic environment is shown below (see also the sketch after Eq. (9)):

(7)
  • The evaluation of the geometric factor in an ideal traffic environment.

  • The evaluation of the sight line deviation factor in an ideal traffic environment.

(8)
  • The retinal imaging area of the traffic sign observed from a given viewpoint in an ideal traffic environment.

(9)
  • The sight line deviation angle between the line of sight and the line connecting the viewpoint to the traffic sign panel center in an ideal traffic environment.

  • The ideal GFOV. It depends on the design speed of the road.

  • A punishment item applied when the sight line deviation angle is larger than half of the ideal GFOV.
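
Eq. (7) is likewise not reproduced here; as stated in the paragraph above, the ideal-environment VEM is the product of the ideal geometric factor and the ideal sight line deviation factor. A hedged reconstruction, reusing the illustrative notation introduced after Eq. (6), is:

```latex
% Ideal-environment visibility (Eqs. (7)-(9)), reconstructed with illustrative notation.
V^{ideal} = E_g^{ideal} \cdot E_s^{ideal}                  % Eq. (7): no occlusion term in the ideal environment

E_g^{ideal} = \frac{S_{view}^{ideal}}{S_{std}}             % Eq. (8): ideal retinal imaging area over the standard area

E_s^{ideal} =
\begin{cases}
1, & \theta^{ideal} \le \alpha_{GFOV}^{ideal}/2 \\[2pt]
p(\theta^{ideal}), & \text{otherwise}                      % Eq. (9): GFOV determined by the road design speed
\end{cases}
```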

III-B Definition of Visual Recognizability Field and VREM Model

Although the VEM uses retinal imaging area to estimate viewpoint visibility in accordance with the natural recognition process, it is still difficult to determine viewpoint recognizability from viewpoint visibility alone. For example, consider two viewpoints with the same visibility of the same traffic sign: a near viewpoint with occlusion and a far viewpoint without occlusion. The sign cannot be recognized from the near viewpoint, because too much effective information has been lost to occlusion, while it may still be recognized from the far viewpoint from its blurred silhouette. To overcome this problem, we introduce the corresponding viewpoint visibility in the ideal traffic environment as the standard against which viewpoint recognizability is evaluated. Besides this, the advantage of introducing ideal viewpoint visibility is that it can be used as a standard to judge whether a traffic sign meets the design and installation specifications, and then to evaluate whether the traffic sign is recognizable from a given viewpoint.

Visual recognizability field definition: for a given 3D environment around a target object, the visual recognizability of each viewpoint in the 3D space around the object constitutes a visual recognizability field. It reflects the visual recognizability distribution of the object in 3D space.

In the VREM, the viewpoint recognizability is related to the actual viewpoint visibility, the corresponding ideal viewpoint visibility, and the other factors mentioned in Section I. We take the intersection of the viewpoint polyline with the line that is perpendicular to the driving direction and passes through the traffic sign center as one reference point, and the intersection of that same perpendicular line with the right road marking outline as a second reference point. For each viewpoint, we record its distance to the first reference point along the viewpoint polyline and the distance from the first to the second reference point. The corresponding viewpoint in an ideal traffic environment has the same two distances as the viewpoint in the actual traffic environment.

The visual recognizability is given as follows (a reconstruction is sketched after the list below):

(10)
  • The weights of the visibility-ratio term and the other-factors term; they must sum to 1. In this paper, we leave the study of how other factors influence cognition for the future, i.e., the other-factors weight is set to zero.

  • The threshold used to judge whether the traffic sign can be recognized from a viewpoint.
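
Eq. (10) is not reproduced above. Based on the weight and threshold definitions and on the "CognitiveDouble" ratio described in Section V-C, a plausible reconstruction is the following, where $w_1$, $w_2$, $F_{other}$, and $\varepsilon$ are illustrative notation ($w_2 = 0$ in this paper):

```latex
% Hedged reconstruction of Eq. (10): viewpoint recognizability as a thresholded
% weighted combination of the actual-to-ideal visibility ratio and other factors.
R(v) = \mathbf{1}\!\left\{ w_1\,\frac{V_{actual}(v)}{V_{ideal}(v)} + w_2\, F_{other}(v) \;\ge\; \varepsilon \right\},
\qquad w_1 + w_2 = 1
```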

III-C Traffic Sign Timely Visual Recognizability Evaluation

According to the Manual on Uniform Traffic Control Devices (MUTCD, United States) [38] and Road Traffic Signs and Markings (GB 5768-1999, China) [39], the Sight Distance (SD) is the length of road surface from the point at which a driver can see a traffic sign with an acceptable level of clarity to the traffic sign itself. The SD is specified based on a driver’s ability for visual recognition at the design vehicle speed. We use the visual recognizability field composed of the viewpoints within the SD length of the forward-direction area of the road surface to evaluate the visual recognizability of a traffic sign.

The traffic sign timely visual recognizability is evaluated per lane in the forward-direction area of the road surface. It is related not only to the actual maximum continuous visual recognizable distance, i.e., the maximum continuous length along a lane over which the viewpoints can recognize the sign, but also to the actual vehicle speed on the roadway and to the VRT. The VRT is simply the time necessary for a driver to detect, read, and react to the message displayed on an approaching on-premise sign that lies within his or her cone of vision [4]. Once the VRT is ascertained, the VRD for a given sign location, i.e., the distance a vehicle travels during the VRT interval, can be calculated as the product of the vehicle speed and the VRT. The relationship between the actual maximum continuous visual recognizable distance and the VRD determines whether the driver has enough time to recognize the traffic sign. The evaluation of traffic sign timely visual recognizability is given below, where 1{condition} returns 1 when the condition is true and 0 otherwise.

(11)
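
Eq. (11) itself is not reproduced here, but Section III-C states its ingredients explicitly: the VRD is the distance travelled during the VRT, and 1{condition} is an indicator. The following minimal Python sketch shows the lane-wise test under that reading; all names are illustrative, not the paper's implementation.

```python
# Minimal sketch of the lane-wise timely recognizability test of Eq. (11),
# assuming (as stated in Section III-C) that a sign is timely recognizable in a lane
# when the actual maximum continuous recognizable distance is at least the VRD.

def viewer_reaction_distance(speed_mps: float, vrt_s: float) -> float:
    """VRD: distance travelled during the viewer reaction time."""
    return speed_mps * vrt_s

def timely_recognizability(max_continuous_recog_dist_m: float,
                           speed_mps: float, vrt_s: float) -> int:
    """Return 1 if the sign can be recognized in time in this lane, else 0."""
    vrd = viewer_reaction_distance(speed_mps, vrt_s)
    return 1 if max_continuous_recog_dist_m >= vrd else 0

# Example: 40 mph ≈ 17.9 m/s, VRT = 5 s  ->  VRD ≈ 89.5 m
print(timely_recognizability(95.0, 17.9, 5.0))  # 1: recognizable in time
print(timely_recognizability(60.0, 17.9, 5.0))  # 0: not recognizable in time
```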

IV TSTVREM Model Implementation

We construct an automatic algorithm to implement the TSTVREM model on point clouds acquired by an MLS system. The input is a road’s point clouds and its trajectory; the output is a visibility field, a recognizability field, and the traffic sign timely visual recognizability in each lane. First, we detect and classify the traffic signs and extract the road markings from the input point clouds; we call this step the preliminary work. Second, we abbreviate the surrounding point clouds on the right side above the roadway in front of a traffic sign as the traffic sign surrounding point clouds, and we segment the traffic sign surrounding point clouds and road marking point clouds along the roadway according to the SD of each traffic sign; we call this step the segment process. Then, the viewpoint positions above the forward-direction region of the road surface are computed from the segmented road marking point clouds; we call this step viewpoints computing. In the end, we use the traffic sign panel point cloud, the traffic sign surrounding point clouds, and the viewpoints together to extract the information needed to compute the traffic sign timely visual recognizability with the TSTVREM model; we call this step traffic sign timely recognizability computing. These processes are introduced separately below. All symbols used to describe the TSTVREM model implementation are illustrated in Fig. 5.

(a) Actual environment
(b) Ideal environment
Fig. 5: Illustration of TSTVREM model implementation.

IV-A Preliminary Works

For the input point clouds of a road, we adopt the method proposed by Wen et al. [16] for traffic sign detection and classification. The output is each traffic sign’s panel point cloud and its type. Using the sign type and the speed limit of the roadway, we obtain the SD of each traffic sign according to the country’s traffic sign design specifications. The extracted traffic sign panel point clouds, combined with the SD, are used to segment the traffic sign surrounding point clouds.

We adopt the algorithm presented by Yu et al. [30] for road marking detection and classification. The extracted road markings are segmented according to the extracted traffic sign positions and the SD of every traffic sign. The type of road markings is used to distinguish the forward-direction region of the road surface and the different lanes within each segmented road marking point cloud.

IV-B Segment Process

Using the traffic sign panel point cloud, we compute the traffic sign center and find the trajectory point nearest to it; the vector from this trajectory point to the traffic sign center is recorded for later use.

From the road marking point clouds we extracted, we use the different lengths of the clusters along the roadway direction to distinguish solid from dashed lines; the length threshold is the smallest SD listed in [39]. If a solid line is not continuous or is partially missing because of low reflectivity, we use the fact that it is approximately parallel to the trajectory line to complete it. Among the solid lines on the right side, sorted by their distance along the direction perpendicular to the driving direction, the solid line with the maximum distance is taken as the right roadside outline. Among the solid lines on the left side, sorted in the same way, the solid line with the nearest distance is taken as the left outline of the forward-direction region of the road surface. The vector from the nearest trajectory point to the traffic sign center intersects these two outlines at two points, and we record the distance between the two intersection points as the width of the forward-direction region.

If the forward-direction region of the road surface cannot be detected because the road markings are worn or absent, we obtain the left and right outlines by shifting the trajectory to the left and right and subtracting the mounting height of the MLS device from every trajectory point.

The method for obtaining the two starting points on the left and right outlines from the solid lines is as follows. In the horizontal plane, select the trajectory point nearest to the traffic sign, together with its two neighboring trajectory points. Slice the solid lines along the local trajectory direction and compute the center of the sliced point cloud for every cluster. The two starting points are selected from these centers according to their distance and side with respect to the local trajectory line.

Using the two starting points, we repeatedly cut slices of the road marking clusters along the trajectory at a fixed interval [27], record the intersections with the left and right outlines, and compute the center of each pair of intersections, until the distance accumulated along the outline exceeds the traffic sign’s SD. The last recorded point is then moved back along the segment between the last two points so that the accumulated length equals the SD, and the corresponding last points on the left and right outlines are adjusted by the same ratio along their final segments.

We construct two rectangles in a vertical plane to segment out the traffic sign surrounding point clouds and the solid road marking point clouds. Each rectangle has a horizontal edge that is always perpendicular to the driving direction, and the horizontal length and vertical height of its edges can be chosen freely. An octree segmentation method is then used to cut out the traffic sign environment point cloud and the solid road marking point cloud along the outline. The corners of each rectangle are obtained from a center point on the outline, the horizontal unit vector perpendicular to the driving direction, and the chosen half-width and height; the same construction is used for the rectangle that segments the solid road markings.
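
The corner formulas of the cutting rectangles are not reproduced above. The sketch below illustrates the construction just described under the assumption that each rectangle is built from a center point on the outline, the horizontal unit vector perpendicular to the driving direction, and a chosen half-width and height; the function name and corner layout are assumptions, not the paper's implementation.

```python
# Illustrative sketch of a vertical cutting rectangle perpendicular to the local driving direction.
import numpy as np

def cutting_rectangle(center: np.ndarray, driving_dir: np.ndarray,
                      half_width: float, height: float) -> np.ndarray:
    """Return the 4 corners (4x3) of a vertical rectangle perpendicular to driving_dir."""
    d = np.array(driving_dir, dtype=float)
    d[2] = 0.0
    d /= np.linalg.norm(d)                       # horizontal unit driving direction
    n = np.array([-d[1], d[0], 0.0])             # horizontal unit vector perpendicular to it
    up = np.array([0.0, 0.0, 1.0])
    return np.array([center - half_width * n,                 # bottom-left
                     center + half_width * n,                 # bottom-right
                     center + half_width * n + height * up,   # top-right
                     center - half_width * n + height * up])  # top-left

corners = cutting_rectangle(np.array([10.0, 5.0, 0.0]), np.array([1.0, 0.2, 0.0]), 4.0, 6.0)
print(corners.round(2))
```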

IV-C Viewpoints Computing

In order to reduce the effect of reflectivity on the extraction of road markings, we use the solid road marking lines to calculate the number of lanes. Given a standard lane width, the number of lanes is obtained by dividing the width of the forward-direction region by the standard lane width and rounding, and the actual lane width is that width divided by the number of lanes.

The next step is to obtain all of the lane dividing lines between the right and left outlines; we number them from right to left, with the right and left outlines already known and expressed as point arrays. We first compute the unit vector pointing from the right outline toward the left outline and then obtain the split points on each lane dividing line by offsetting the points of the right outline by multiples of the actual lane width along this vector. Once every lane dividing line is expressed as an array, we can use interpolation or the same sampling method to obtain any point within a lane; with the interpolation or sampling method fixed, we obtain a column of points (possibly more than one column) along each lane. A viewpoint is obtained by adding the observation height to the coordinate of each such point; the observation height is set to the typical driver eye height above the road surface [40]. The result of the calculated viewpoints is shown in Fig. 6.
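
A minimal sketch of this viewpoint computation is given below, assuming a standard lane width of 3.5 m and a driver eye height of 1.2 m (the paper's own values are listed in Table II, but their labels are not reproduced here); the outline points, function names, and defaults are illustrative.

```python
# Illustrative sketch: estimate the lane count from the road width and place one viewpoint
# per lane center at the assumed driver eye height, for one cross-section of the road.
import numpy as np

def lane_count(road_width: float, standard_lane_width: float = 3.5) -> int:
    """Number of lanes, rounded to the nearest integer (at least 1)."""
    return max(1, int(round(road_width / standard_lane_width)))

def lane_viewpoints(left_pt: np.ndarray, right_pt: np.ndarray,
                    eye_height: float = 1.2) -> np.ndarray:
    """Viewpoints over the lane centers between the left and right outlines."""
    width = np.linalg.norm(left_pt - right_pt)
    n = lane_count(width)
    centers = [right_pt + (left_pt - right_pt) * (i + 0.5) / n for i in range(n)]
    return np.array(centers) + np.array([0.0, 0.0, eye_height])

vps = lane_viewpoints(np.array([0.0, 7.0, 0.0]), np.array([0.0, 0.0, 0.0]))
print(vps)  # one viewpoint per lane, 1.2 m above the road surface
```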

(a) Top view
(b) Side view
Fig. 6: Viewpoints computation result.

IV-D Traffic Sign Timely Visual Recognizability Computing

Through the three steps above, for each traffic sign we obtain a traffic sign panel point cloud, a traffic sign environment point cloud, and viewpoints along the lanes. The remaining work is to use them to extract the information needed as input to the TSTVREM model to compute the traffic sign timely visual recognizability. The details include: computing the retinal imaging area of a point cloud for a given viewpoint, obtaining the occlusion point cloud projected onto the traffic sign panel, obtaining the sight line deviation angle, and setting up the ideal traffic environment.

IV-D1 Retinal Imaging Area Computing

The first step in computing the retinal imaging area is to rotate the point cloud coordinates into the human view. The line from the traffic sign panel point cloud center to the viewpoint is taken as the new viewing axis. For a traffic sign panel point cloud, a traffic sign surrounding point cloud, and a viewpoint, we rotate the z-axis of their coordinate system onto this line using the quaternion rotation method [41] and move the origin of the coordinate system to the rotated traffic sign panel center, obtaining a coordinate-transformed traffic sign panel point cloud and a coordinate-transformed traffic sign environment point cloud. Next, we project the transformed panel point cloud onto its xOy plane, and the edges of the projection are computed by the alpha shape algorithm [42] with a chosen alpha parameter; these edges form a boundary polygon. Finally, we use the polygon area formula to compute the projected area and map it to the retinal imaging area using the human retinal imaging principle [43], with a fixed distance from the center of the eye’s entrance pupil to the retina.
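
The following Python sketch illustrates this computation. It simplifies the alpha-shape boundary to a 2D convex hull and uses a pinhole-eye similar-triangles approximation; the 17 mm pupil-to-retina distance is an assumed placeholder, since the exact value used in the paper is not reproduced above.

```python
# Illustrative sketch: view-align a point cloud, project it onto the image plane,
# take the area of its 2D hull, and scale it to an approximate retinal imaging area.
import numpy as np
from scipy.spatial import ConvexHull

def retinal_imaging_area(points: np.ndarray, viewpoint: np.ndarray,
                         eye_depth_mm: float = 17.0) -> float:
    """Approximate retinal imaging area (mm^2) of a point cloud seen from a viewpoint."""
    center = points.mean(axis=0)
    view_dir = center - viewpoint
    dist = np.linalg.norm(view_dir)
    z = view_dir / dist                                     # new z-axis: viewpoint -> sign center
    tmp = np.array([0.0, 0.0, 1.0]) if abs(z[2]) < 0.9 else np.array([1.0, 0.0, 0.0])
    x = np.cross(tmp, z); x /= np.linalg.norm(x)
    y = np.cross(z, x)
    local = (points - viewpoint) @ np.column_stack([x, y, z])  # view-aligned coordinates
    proj = local[:, :2]                                     # projection onto the image plane
    plane_area = ConvexHull(proj).volume                    # 2D hull "volume" is its area
    scale = (eye_depth_mm / 1000.0) / dist                  # similar triangles: retina / object
    return plane_area * scale ** 2 * 1e6                    # convert m^2 to mm^2

sign = np.array([[10.0, 0.0, 2.0], [10.0, 0.6, 2.0], [10.0, 0.6, 2.6], [10.0, 0.0, 2.6]])
print(retinal_imaging_area(sign, np.array([0.0, 0.3, 1.2])))
```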

IV-D2 Occlusion Point Cloud Obtaining

We compute the distances from the transformed panel center to every vertex of the boundary polygon and select the vertex with the largest distance, which defines the angular extent of the sign as seen from the viewpoint. For every point in the transformed environment point cloud whose line of sight lies within this angular extent and which lies in front of the sign, we compute the intersection of its line of sight with the xOy plane, add the intersection point to a projected point cloud, and add the point to the occlusion point cloud. For every projected point, if it lies inside the boundary polygon, it is added to the occluded point cloud; otherwise its corresponding point is removed from the occlusion point cloud. The retinal imaging area of the occluded region is then computed with the alpha shape algorithm and the human retinal imaging principle.
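
A simplified sketch of this occlusion test is given below, in the view-aligned frame where the sign plane is z = 0 and the viewpoint lies on the +z axis. The point-in-polygon helper is a standard even-odd test rather than the paper's alpha-shape edge polygon; all names are illustrative.

```python
# Illustrative sketch: project each environment point along its line of sight onto the sign
# plane (z = 0) and keep it as an occluder if the projection falls inside the sign polygon.
import numpy as np

def point_in_polygon(p, polygon):
    """Even-odd (ray-crossing) point-in-polygon test for a 2D point and an (N,2) polygon."""
    x, y = p
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

def occluders(env_points, viewpoint_z, sign_polygon_2d):
    """Return environment points (view-aligned frame) whose sight line hits the sign polygon."""
    occ = []
    vp = np.array([0.0, 0.0, viewpoint_z])
    for p in env_points:
        if 0.0 < p[2] < viewpoint_z:                  # between the sign plane and the viewpoint
            t = viewpoint_z / (viewpoint_z - p[2])    # ray from viewpoint through p down to z = 0
            hit = vp + t * (p - vp)
            if point_in_polygon(hit[:2], sign_polygon_2d):
                occ.append(p)
    return np.array(occ)
```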

IV-D3 Sight Line Deviation Computing

The sight line of a driver at a viewpoint is approximated by the line defined by its neighboring viewpoints along the lane. The sight line deviation angle is the angle between this sight line and the line from the viewpoint to the traffic sign panel center.
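
A minimal sketch of this angle, assuming the sight (driving) direction is approximated by the segment between the neighboring viewpoints; names are illustrative.

```python
# Illustrative sketch: angle between the driving direction and the viewpoint-to-sign line.
import numpy as np

def sight_line_deviation(prev_vp, next_vp, viewpoint, sign_center) -> float:
    """Sight line deviation angle in degrees."""
    sight = next_vp - prev_vp                       # approximate sight/driving direction
    to_sign = sign_center - viewpoint
    cos_a = sight @ to_sign / (np.linalg.norm(sight) * np.linalg.norm(to_sign))
    return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))

print(sight_line_deviation(np.array([0.0, 0.0, 1.2]), np.array([2.0, 0.0, 1.2]),
                           np.array([1.0, 0.0, 1.2]), np.array([40.0, 5.0, 2.5])))
```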

IV-D4 Ideal Traffic Environment Setting

For all kinds of traffic signs included in a national traffic system, we build a traffic sign panel point cloud library containing one panel for each traffic sign class. We choose one traffic sign of each class from the actual traffic environment point clouds to constitute the library. The coordinates of every traffic sign in the library are transformed such that its normal vector is parallel to the y-axis and its center is at the origin O.

According to the traffic sign design manual of a roadway, the mounting height above the road surface, the depression angle, the angle toward the direction in which road users pass, and the road shoulder width are specified. The coordinate system of the ideal traffic environment is then set up as follows, in two cases according to whether the road marking is detected.

If the road marking is detected, we use the origin O as the reference point and the y-axis as the driving direction, and the normal vector of the traffic sign panel is set according to the specified orientation. If the traffic sign is on the right of the roadway, the traffic sign panel center is placed at the specified shoulder offset and mounting height. If the traffic sign hangs above the roadway, its panel center is placed from the known lateral position and overhead mounting height, with the horizontal offset computed in the xOy plane. The corresponding traffic sign panel point cloud from the library is then rotated according to the specified depression and orientation angles using the quaternion rotation method and translated to this position. The ideal viewpoint corresponding to an actual viewpoint is placed using the same lateral distance in the xOy plane and the same accumulated distance along the lane as in the actual environment.

If the road marking is not detected, we place the traffic sign at its ideal position; only the lateral reference coordinate changes, and the remaining work is the same as in the case where the road marking is detected.

V Experiments and Discussions

V-A MLS System and Datasets

A RIEGL VMX-450 MLS system was used in this study to acquire the datasets within Xiamen Island. This system integrates two laser scanners, four high-resolution digital cameras, a GNSS, an IMU, and a DMI [44]. The two laser scanners are installed in an “X” configuration and rotate to emit laser beams with a maximum valid range of 800 m at a measurement rate of 550,000 samples/s. The accuracy of the scanned point cloud data is within 8 mm. The four cameras are installed at the four corners to obtain high-resolution pictures of the surroundings.

In order to prove the practicality of our models and algorithm on both urban and mountain roads, one survey covering three roads was performed with the MLS system to obtain the data required for this research: Zengcuoanbei Road (ZCABR), Longhushan Road (LHSR), and Wenping Road (WPR). ZCABR and LHSR are urban roads; WPR is a mountain road. Information on the three roads is presented in Table I. Taxi travel records for the last year were extracted from the traffic big data library of Xiamen, China; we use the taxi driving speeds to estimate the actual driving speed on each road.

Dataset Points Length Speed limit Actual speed
ZCABR 131681009 770.06 m 30 mph 25.0 mph
LHSR 187751087 1571.747 m 40 mph 61.1 mph
WPR 130862580 1956.14 m 40 mph 42.2 mph
TABLE I: Descriptions of the three mobile lidar datasets

V-B Parameter Sensitivity Analysis

For the parameters in the geometric factor evaluation, once the viewpoint, the traffic sign, and the standard distance are ascertained, the viewpoint visibility is inversely proportional to the standard retinal imaging area.

For the parameters in the occlusion factor evaluation, the occlusion factor equals 1 under no occlusion and is nearly 0 under half occlusion. The relationship among the occlusion ratio weight, the occlusion distribution weight, and the punishment parameter in the occlusion factor part of Eq. (6) is shown in Fig. 7. The upper three lines are generated under an occlusion ratio of 0.01 and an occlusion distribution of 0.2. From the figure, we can see that the occlusion value decreases gradually with increasing occlusion ratio weight; this is because the occlusion distribution weight increases as the occlusion ratio weight decreases. The occlusion value also decreases as the punishment parameter increases when the other parameters are fixed, and it remains nearly equal to 1. All of this shows that the parameter settings meet the demands of our model. Similarly to the upper three lines, the lower three lines are generated under an occlusion ratio of 0.5 and an occlusion distribution of 0.8, and the occlusion value is nearly 0; this meets our model’s demands too. Fig. 8 shows that the occlusion value decreases when the occlusion ratio increases and when the occlusion distribution moves toward the center of the traffic sign, with the weights and the punishment parameter fixed.

Fig. 7: Occlusion value with different weights of occlusion ratio and punishment item.
Fig. 8: Occlusion value with different occlusion ratio and occlusion distribution.

The parameters used for the three datasets are listed in Table II. The viewpoint interval along the road is given in numbers of trajectory points; it is about 2 m when the interval is 40 trajectory points at a vehicle speed of 40 mph. Some research [37, 5] shows that traffic signs falling within the GFOV can be recognized accurately, with the GFOV being narrower at a 60 mph velocity than at a 30 mph velocity; linear interpolation was used to calculate the GFOV at other velocities. The value of the sight line deviation factor changes with the sight line deviation angle as shown in Fig. 9: when the angle is less than the GFOV angle, the factor equals 1; otherwise the driver needs to turn his head to see the traffic sign, so the factor is drastically reduced; and when the angle exceeds the limiting angle, the vehicle has passed the traffic sign, so the factor equals 0. The middle values of the SD for different design speeds listed in [39] are used in the experiments; in the Chinese traffic sign standard, the SD does not change with the sign type. The mounting height of roadside traffic signs and the height of overhead signs are set to the middle values in the state standard [39]. The VRT for vehicles traveling under 35 mph in environments with fewer than three lanes can be estimated as 8 s, and for vehicles traveling over 35 mph in a more complex four- to five-lane environment as 10 s. Considering that the driving maneuver can be made after the sign location, the VRT is set to 4 s and 5 s for vehicle speeds of 30 mph and 40 mph, respectively [4]. From an experiment in which 20 students (16 men and 4 women) observed 100 images with actual and ideal viewpoint visibility within the VRT, we obtained the threshold at which all students could recognize all images.

Parameters ZCABR LHSR WPR
2 m 2 m 2 m
0.8 0.8 0.8
0.2 0.2 0.2
6 6 6
6 6 6
1 1 1
0 0 0
0.71 0.71 0.71
0.1 m 0.1 m 0.1 m
mm mm mm
1.2 m 1.2 m 1.2 m
2 m 2 m 2 m
40 40 40
0.5 m 0.5 m 0.5 m
2/4.75 m 2/4.75 m 2/4.75 m
SD 45 m 60 m 60 m
4 s 5 s 5 s
25.0 mph 61.1 mph 42.2 mph
TABLE II: Descriptions of parameters of datasets
Fig. 9: Sight line deviation value with different angle

V-C Calculation Results and Discussion

An example of the calculated viewpoint visibility for a traffic sign is illustrated in Fig. 10. The yellow point cloud is the traffic sign panel. The green point cloud contains the traffic sign’s surrounding objects and the road surface. The common vertex of the blue line cluster is the viewpoint. In Fig. 10(c), the closed yellow line marks the edges of the traffic sign panel point cloud, and the pink closed line on the panel marks the region occluded by a billboard.

(a) Top view
(b) Side view
(c) Occlusion area
Fig. 10: Occluded point cloud obtaining result.

The visibility field results are saved as text, as Fig. 11 shows. A viewpoint line is composed of a column of viewpoints along the road.

Fig. 11: Visibility field results

The visual recognizability field results are saved as text, as Fig. 12 shows. The “CognitiveDouble” in this figure is the ratio of actual viewpoint visibility to ideal viewpoint visibility. Normally, the “CognitiveDouble” value is smaller than 1, but sometimes it is larger than 1, because road curvature or road gradient can make the aiming, distance, and sight line from the viewpoint to the traffic sign better in the actual traffic environment than in the ideal traffic environment. This does not matter for discriminating the recognizability of a viewpoint: if the “CognitiveDouble” value is larger than the threshold, the recognizability equals 1; otherwise it equals 0.

Fig. 12: Visual recognizability field results

The traffic sign timely visual recognizability results are shown in Fig. 13. The “maxCognitiveDistance” and “minCognitiveDistance” in this figure are the maximum and minimum recognizable distances, respectively. From this figure, we can see that the recognizability of viewpoint line 3 of traffic sign 1, whose viewpoint visibilities are shown in Fig. 11 and whose viewpoint recognizability is shown in Fig. 12, is 0.

Fig. 13: Traffic sign timely visual recognizability result

Fig. 14 shows a graphical interface for the evaluation of traffic sign timely visual recognizability generated by our algorithm. It shows a section of Wenping Road and includes the detected traffic sign (yellow), the viewpoint visibility result (mesh planes colored from red to green), the viewpoint recognizability result (mesh polygons colored black), the occlusion point cloud (red), and pictures taken at the same positions as the corresponding meshes in the actual environment. The mesh plane colors changing from red to green indicate viewpoint visibility values changing from large to small. The “X” symbol in a polygon means that the traffic sign cannot be recognized from within that polygon plane.

Fig. 14: Graphical interface of traffic sign timely visual recognizability results

The proposed traffic sign timely visual recognizability model was implemented in C++ running on an Intel(R) Core(TM) i5-4460 computer. The computing time of each processing step for every section is recorded in Table III. As seen from Table III, among the traffic sign detection and classification time (STime), the road marking detection and classification time (MTime), and the traffic sign timely visual recognizability evaluation time (RTime), the recognizability evaluation itself takes the least time in every section, and road marking detection and classification takes the most time in all but one section. The computing time of the TSTVREM model is related to the number of signs. The time complexity of our method is low enough to meet the demands of large-scale application, which benefits from the segmentation along the right road outline, a step that dramatically reduces the quantity of point cloud data to be processed. Taking sec2 of the WPR dataset (about 52 million points over 632.1 m of road, containing 6 traffic signs) as an example, the total computing time of our method was 81.339 s to evaluate traffic sign timely visual recognizability from the raw MLS point clouds. Therefore, our method is efficient and capable of rapid application in a large-scale transportation environment.

Roads Sections Length PointsNumber STime MTime RTime TotalTime
ZCABR sec1 770.06 m 131681009 101.562 s 1331.77 s 23.4443 s 1456.78 s
LHSR sec1 734.051 m 91212406 34.9641 s 462.631 s 11.0206 s 508.615 s
sec2 593.169 m 69915170 24.2266 s 178.086 s 3.50449 s 205.817 s
sec3 244.527 m 26623511 8.99354 s 16.3995 s 1.11341 s 26.5064 s
WPR sec1 453.526 m 44234114 15.2498 s 0.131961 s 0.00027 s 15.3821 s
sec2 632.114 m 51933545 19.9543 s 54.6281 s 6.75658 s 81.339 s
sec3 870.5 m 34694921 28.8413 s 58.7381 s 5.59397 s 93.1734 s
TABLE III: Computing time of different datasets

VI Conclusion

This paper presented a traffic sign timely visual recognizability evaluation model for traffic sign inventory and management purposes, based on human visual cognition theory and traffic big data, using measurable point clouds collected by an MLS system. In building the model, we considered a number of factors, such as a traffic sign’s size, position, placement, mounting height, panel aiming, depression angle, shape damage, occlusion, actual vehicle speed, sight line deviation, GFOV, VRT, road curvature, road gradient, and the different lanes. The concept of a visibility field is introduced to reflect the traffic sign visibility distribution in 3D space, and a visibility evaluation model is presented, based on human visual cognition theory, to compute a traffic sign’s visibility from a given viewpoint. By analogy with the visibility field, we introduced the concept of a visual recognizability field to reflect the visual recognizability distribution in 3D space and proposed the visual recognizability evaluation model to compute a traffic sign’s visual recognizability from a given viewpoint. In order to evaluate a traffic sign’s timely visual recognizability in different lanes, we proposed the TSTVREM model by combining the visual recognizability field with the actual maximum continuous recognizable distance and traffic big data. Finally, we constructed an automatic algorithm to implement the TSTVREM model, which includes traffic sign and road marking point cloud extraction and classification, traffic sign surrounding point cloud segmentation, viewpoint computation in different lanes, and TSTVREM model realization. In addition, the timely visual recognizability not only of traffic signs but also of other traffic devices, such as traffic lights, can be evaluated by our model based on MLS point clouds; the only difference from traffic signs is that the traffic light must be detected in the point cloud and treated as a plane.

Our model is based on three key ideas. First, we evaluate viewpoint visibility from the perspective of human retinal imaging, GFOV changes at different driving speeds and roadway conditions, and the degree of occlusion, as in previous work. Second, we obtain the actual driving speed from traffic big data to evaluate the timely visual recognizability of a traffic sign. Third, we use state-of-the-art equipment to obtain measurable point clouds of the roadway environment and state-of-the-art point cloud processing algorithms to enable the implementation of our model.

In the future, other factors may be incorporated into our model: for example, the lighting influence caused by the solar elevation angle, background influence, and the cognitive burden of traffic density, among others.

Acknowledgment

We would like to thank the anonymous reviewers for their valuable comments. This work is supported by grants from Natural Science Foundation of China (No.61371144 and No.U1605254) and Natural Science Foundation of Xizang Autonomous Region of China (No.2015ZR-14-16).

References

  • [1] B. Liu, Z. Wang, G. Song, and G. Wu, “Cognitive processing of traffic signs in immersive virtual reality environment: An erp study,” Neuroscience letters, vol. 485, no. 1, pp. 43–48, 2010.
  • [2] E. Kirmizioglu and H. Tuydes-Yaman, “Comprehensibility of traffic signs among urban drivers in turkey,” Accident Analysis & Prevention, vol. 45, pp. 131–141, 2012.
  • [3] T. Ben-Bassat and D. Shinar, “The effect of context and drivers’ age on highway traffic signs comprehension,” Transportation research part F: traffic psychology and behaviour, vol. 33, pp. 117–127, 2015.
  • [4] A. Bertucci, “Sign legibility rules of thumb,” United States Sign Council, 2006.
  • [5] R. R. Mourant, N. Ahmad, B. K. Jaeger, and Y. Lin, “Optic flow and geometric field of view in a driving simulator display,” Displays, vol. 28, no. 3, pp. 145–149, 2007.
  • [6] G. Tieri, E. Tidoni, E. Pavone, and S. M. Aglioti, “Mere observation of body discontinuity affects perceived ownership and vicarious agency over a virtual hand,” Experimental brain research, vol. 233, no. 4, pp. 1247–1259, 2015.
  • [7] R. Belaroussi and D. Gruyer, “Impact of reduced visibility from fog on traffic sign detection,” in Intelligent Vehicles Symposium Proceedings, 2014 IEEE.   IEEE, 2014, pp. 1302–1306.
  • [8] J. Rogé, T. Pébayle, E. Lambilliotte, F. Spitzenstetter, D. Giselbrecht, and A. Muzet, “Influence of age, speed and duration of monotonous driving task in traffic on the driver’s useful visual field,” Vision research, vol. 44, no. 23, pp. 2737–2744, 2004.
  • [9] M. Costa, A. Simone, V. Vignali, C. Lantieri, A. Bucchi, and G. Dondi, “Looking behavior for vertical road signs,” Transportation research part F: traffic psychology and behaviour, vol. 23, pp. 147–155, 2014.
  • [10] Z. Zhu, D. Liang, S. Zhang, X. Huang, B. Li, and S. Hu, “Traffic-sign detection and classification in the wild,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2110–2118.
  • [11] M. Soilán, B. Riveiro, J. Martínez-Sánchez, and P. Arias, “Traffic sign detection in mls acquired point clouds for geometric and image-based semantic inventory,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 114, pp. 92–101, 2016.
  • [12] Y. Yu, J. Li, C. Wen, H. Guan, H. Luo, and C. Wang, “Bag-of-visual-phrases and hierarchical deep models for traffic sign detection and recognition in mobile laser scanning data,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 113, pp. 106–123, 2016.
  • [13] Á. González, M. Á. Garrido, D. F. Llorca, M. Gavilán, J. P. Fernández, P. F. Alcantarilla, I. Parra, F. Herranz, L. M. Bergasa, M. Á. Sotelo et al., “Automatic traffic signs and panels inspection system using computer vision,” IEEE Transactions on intelligent transportation systems, vol. 12, no. 2, pp. 485–499, 2011.
  • [14] K. Doman, D. Deguchi, T. Takahashi, Y. Mekada, I. Ide, H. Murase, and U. Sakai, “Estimation of traffic sign visibility considering local and global features in a driving environment,” in Intelligent Vehicles Symposium Proceedings, 2014 IEEE.   IEEE, 2014, pp. 202–207.
  • [15] M. Khalilikhah and K. Heaslip, “Analysis of factors temporarily impacting traffic sign readability,” International Journal of Transportation Science and Technology, vol. 5, no. 2, pp. 60–67, 2016.
  • [16] C. Wen, J. Li, H. Luo, Y. Yu, Z. Cai, H. Wang, and C. Wang, “Spatial-related traffic sign inspection for inventory purposes using mobile laser scanning data,” IEEE Transactions on Intelligent Transportation Systems, vol. 17, no. 1, pp. 27–37, 2016.
  • [17] J. Marinas, L. Salgado, J. Arróspide, and M. Nieto, “Detection and tracking of traffic signs using a recursive bayesian decision framework,” in Intelligent Transportation Systems (ITSC), 2011 14th International IEEE Conference on.   IEEE, 2011, pp. 1942–1947.
  • [18] J. Lillo-Castellano, I. Mora-Jiménez, C. Figuera-Pozuelo, and J. L. Rojo-Álvarez, “Traffic sign segmentation and classification using statistical learning methods,” Neurocomputing, vol. 153, pp. 286–299, 2015.
  • [19] H. Li, F. Sun, L. Liu, and L. Wang, “A novel traffic sign detection method via color segmentation and robust shape matching,” Neurocomputing, vol. 169, pp. 77–88, 2015.
  • [20] K.-h. Qin, H.-y. WANG, and J.-t. ZHENG, “A unified approach based on hough transform for quick detection of circles and rectangles,” Journal of Image and Graphics, vol. 1, p. 020, 2010.
  • [21] J. Greenhalgh and M. Mirmehdi, “Real-time detection and recognition of road traffic signs,” IEEE Transactions on Intelligent Transportation Systems, vol. 13, no. 4, pp. 1498–1506, 2012.
  • [22] Y. Yang, H. Luo, H. Xu, and F. Wu, “Towards real-time traffic sign detection and classification,” IEEE Transactions on Intelligent Transportation Systems, vol. 17, no. 7, pp. 2022–2031, 2016.
  • [23] B. Yang, Z. Dong, G. Zhao, and W. Dai, “Hierarchical extraction of urban objects from mobile laser scanning data,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 99, pp. 45–57, 2015.
  • [24] M. Lehtomäki, A. Jaakkola, J. Hyyppä, J. Lampinen, H. Kaartinen, A. Kukko, E. Puttonen, and H. Hyyppä, “Object classification and recognition from mobile laser scanning point clouds in a road environment,” IEEE Transactions on Geoscience and Remote Sensing, vol. 54, no. 2, pp. 1226–1239, 2016.
  • [25] M. Tan, B. Wang, Z. Wu, J. Wang, and G. Pan, “Weakly supervised metric learning for traffic sign recognition in a lidar-equipped vehicle,” IEEE Transactions on Intelligent Transportation Systems, vol. 17, no. 5, pp. 1415–1427, 2016.
  • [26] C. Ai and Y. J. Tsai, “An automated sign retroreflectivity condition evaluation methodology using mobile lidar and computer vision,” Transportation Research Part C: Emerging Technologies, vol. 63, pp. 96–113, 2016.
  • [27] H. Guan, J. Li, Y. Yu, C. Wang, M. Chapman, and B. Yang, “Using mobile laser scanning data for automated extraction of road markings,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 87, pp. 93–107, 2014.
  • [28] H. Guan, J. Li, Y. Yu, Z. Ji, and C. Wang, “Using mobile lidar data for rapidly updating road markings,” IEEE Transactions on Intelligent Transportation Systems, vol. 16, no. 5, pp. 2457–2466, 2015.
  • [29] M. Soilán, B. Riveiro, J. Martínez-Sánchez, and P. Arias, “Segmentation and classification of road markings using mls data,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 123, pp. 94–103, 2017.
  • [30] Y. Yu, J. Li, H. Guan, F. Jia, and C. Wang, “Learning hierarchical features for automated extraction of road markings from 3-d mobile lidar point clouds,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 8, no. 2, pp. 709–726, 2015.
  • [31] K. Doman, D. Deguchi, T. Takahashi, Y. Mekada, I. Ide, H. Murase, and Y. Tamatsu, “Estimation of traffic sign visibility toward smart driver assistance,” in Intelligent Vehicles Symposium (IV), 2010 IEEE.   IEEE, 2010, pp. 45–50.
  • [32] ——, “Estimation of traffic sign visibility considering temporal environmental changes for smart driver assistance,” in Intelligent Vehicles Symposium (IV), 2011 IEEE.   IEEE, 2011, pp. 667–672.
  • [33] S. Katz, A. Tal, and R. Basri, “Direct visibility of point sets,” in ACM Transactions on Graphics (TOG), vol. 26, no. 3.   ACM, 2007, p. 24.
  • [34] S. Katz and A. Tal, “Improving the visual comprehension of point sets,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2013, pp. 121–128.
  • [35] ——, “On the visibility of point clouds,” in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 1350–1358.
  • [36] P. Huang, M. Cheng, Y. Chen, H. Luo, C. Wang, and J. Li, “Traffic sign occlusion detection using mobile laser scanning point clouds,” IEEE Transactions on Intelligent Transportation Systems, 2017.
  • [37] C. D. A. Parkes, “Geometric field of view manipulations affect perceived speed in driving simulators,” 2010.
  • [38] Federal Highway Administration, “Manual on uniform traffic control devices,” 2009.
  • [39] J. Yang and H. Liu, “GB 5768-1999, road traffic signs and markings,” 1999.
  • [40] J. H. Banks, Introduction to transportation engineering.   McGraw-Hill New York, 2002, vol. 21.
  • [41] J. B. Kuipers et al., Quaternions and rotation sequences.   Princeton university press Princeton, 1999, vol. 66.
  • [42] H. Edelsbrunner, D. Kirkpatrick, and R. Seidel, “On the shape of a set of points in the plane,” IEEE Transactions on information theory, vol. 29, no. 4, pp. 551–559, 1983.
  • [43] P. K. Kaiser, “Calculation of visual angle,” The Joy of Visual Perception: A Web Book, York University. Available from http://www.yorku.ca/eye/visangle.htm, 2007.
  • [44] H. Guan, J. Li, Y. Yu, M. Chapman, and C. Wang, “Automated road information extraction from mobile laser scanning data,” IEEE Transactions on Intelligent Transportation Systems, vol. 16, no. 1, pp. 194–205, 2015.