Pole-like Objects Mapping and Long-Term Robot Localization in Dynamic Urban Scenarios

03/24/2021 ∙ by Zhihao Wang, et al. ∙ The Chinese University of Hong Kong

Localization on 3D data is a challenging task for unmanned vehicles, especially in long-term dynamic urban scenarios. Owing to their ubiquity and long-term stability, pole-like objects are well suited as landmarks for unmanned vehicle localization in time-varying scenarios. In this paper, a long-term LiDAR-only localization algorithm based on a semantic cluster map is proposed. First, a Convolutional Neural Network (CNN) is used to infer the semantics of LiDAR point clouds. Combined with point cloud segmentation, the long-term static objects (poles and trunks) in the scene are extracted and registered into a semantic cluster map. When the unmanned vehicle re-enters the environment, relocalization is completed by matching the clusters of the local map with the clusters of the global map. Furthermore, the continuous matching between the local and global maps stably outputs the global pose at 2 Hz to correct the drift of the 3D LiDAR odometry. The proposed approach realizes localization in long-term scenarios without maintaining a high-precision point cloud map. Experimental results on our campus dataset demonstrate that the proposed approach outperforms current state-of-the-art methods in localization accuracy. The source code is available at: http://www.github.com/HITSZ-NRSL/long-term-localization.


I Introduction

Simultaneous Localization and Mapping (SLAM) can provide location and map information for unmanned vehicles in unknown environments. Many classical SLAM methods [1, 2, 3] have been proposed and applied in static environments, since traditional point feature descriptors such as SIFT, SURF and BRIEF are robust only to uniform changes in light intensity and slight changes in viewpoint. Real urban environments, however, are time-varying: lighting, weather and seasons change, and objects move (e.g., cars parked on both sides of the road may be driven away after a while). Most descriptors do not take such large environmental changes into account. Thus, dynamic environments make it difficult for an unmanned vehicle to localize itself or recognize a place when it re-enters a known, mapped environment. Some works [4] rely on regularly updating high-definition point cloud maps to deal with dynamic environments, which is time- and computation-consuming.

To accomplish long-term place recognition in dynamic environments, many approaches [5] incorporate multiple experiences into the localization framework. These approaches have inherent drawbacks, because they need to capture various conditions of the same place as a prior, increasing the size of the experience database. In contrast to visual appearance, the physical structure of a place rarely changes over time. Hence, utilizing structural information is beneficial for long-term localization [6]. In that sense, a single LiDAR observation can represent a canonical characteristic of a place, eliminating the need for multiple experiences for robust localization. In this line of research, handcrafted descriptor-based [7] and learning-based [8] methods for place recognition over point clouds have been widely proposed. However, these studies hardly capture the long-term localization requirements, including slow but massive structural variance (e.g., construction and demolition) and unexpected viewpoints caused by road topology changes.

Inspired by the work of [9], we observe that pole-like objects, such as street lamps, building pillars and tree trunks, are ubiquitous in urban areas; they are long-term stable, invariant under seasonal and weather changes, and have well-defined geometric shapes. These advantages make pole-like objects suitable as landmarks for accurate and reliable relocalization. Thus, an approach based on semantic clusters is presented for long-term relocalization in dynamic urban environments, relying on pole/trunk landmarks extracted from mobile LiDAR data. We are not the first to exploit such landmarks for the long-term problem. However, to the best of our knowledge, this work is the first that uses only a 3D LiDAR to extract pole-like objects for place recognition and localization in long-term scenarios. To summarize, the main contributions of this paper are three-fold.

  • A method that extracts semantic clusters of pole-like objects from raw 3D LiDAR points and builds a robust semantic cluster map is proposed to address the long-term challenge.

  • A semantic cluster association algorithm based on geometric consistency is proposed to relocalize unmanned vehicles in long-term scenarios.

  • A long-term and real-time localization system is developed based on the robust semantic cluster relocalization module.

II Related Works

Thanks to the rapid development of deep learning, some methods [10] based on image semantic segmentation have been proposed to overcome environmental changes. Compared with low-level point features, high-level semantic features describe environmental changes more robustly. Since the geometric properties of an environment are typically more stable than its photometric characteristics, a geometry matching method [11] was proposed to achieve long-term camera localization by extracting object-level features in the environment. Because the structure of the environment is more stable than its visual appearance, many solutions based on laser sensors have also been investigated. The Scan Context Image (SCI) [12] encodes the top view of LiDAR point clouds into a three-channel color image, converting the place recognition problem into a classification problem.

Although many methods have been proposed for localization, few empirical studies have shown the effectiveness of LiDAR descriptors for long-term localization in urban areas. As high-level semantic information is robust for describing environmental changes, a semantic graph [13] was proposed for point clouds that preserves the semantic and topological information of the raw point clouds; place recognition is then modeled as a graph matching problem. However, this work has not been verified on the long-term place recognition problem. A global localization algorithm [14] using only a single 3D LiDAR scan at a time relies on learning-based descriptors of point cloud segments, but its performance on long-term datasets is poor and unstable.

To address the above problems, a long-term localization algorithm is proposed on a semantic cluster map of pole and trunk objects. The semantic cluster map is built by extracting the pole-like objects from raw 3D LiDAR points with a CNN. Then, a semantic cluster association algorithm based on geometric consistency verification is proposed to obtain the pose transformation between the local semantic cluster map and the global semantic cluster map. Finally, the long-term localization system utilizes this pose transformation to correct the drift of the LiDAR odometry in real time for high-accuracy localization.

III System Overview

The experimental unmanned vehicle system is shown in Fig.1(a). The RS-Ruby-Lite 3D LiDAR with 80 laser beams, providing wide vertical and horizontal fields of view (FOV), and the MTi-100 inertial sensor are mounted on the MR1000 unmanned vehicle. A Global Navigation Satellite System (GNSS)/Inertial Navigation System (INS) is used as the ground truth for comparison. The proposed system is validated on a laptop with an Intel Core i7-7820HK CPU@2.90GHz running the Robot Operating System (ROS).

(a) Research platform
(b) System overview
Fig. 1: Platform and system overview of long-term localization algorithm

The framework of the long-term localization system is shown in Fig.1(b). The semantic clusters of pole-like objects are extracted from the semantic labels of the point clouds, which are obtained directly through a CNN. The semantic clusters are then registered into the global semantic cluster map using the six-degree-of-freedom (6-DOF) pose obtained from the SLAM module. When the unmanned vehicle re-enters an environment for which the global semantic cluster map has been established, the local semantic cluster map is built using the pose from the SLAM module and the semantic clusters of the current frame. Then, a semantic cluster association algorithm based on the semantic labels and the geometric structure of the clusters calculates the pose transformation between the local and the global semantic cluster map. The relocalization module comprises semantic cluster registration and association; it runs continuously and stably outputs the global pose at 2 Hz to correct the drift of the LiDAR odometry. The fine 6-DOF pose is published in real time after drift correction. In this way, high-accuracy localization of the unmanned vehicle is realized in long-term scenarios without maintaining a high-precision point cloud map.

IV Method

IV-A Semantic Cluster Extraction

The convolutional neural network RangeNet++ [15] is used for semantic segmentation of point clouds. Semantic labels of LiDAR point clouds are annotated on a self-made campus dataset, and RangeNet++ is retrained on it to infer the semantic labels of new point clouds. The range-image-based method [16] is then used to segment the labeled point clouds into clusters. For each cluster, the cluster label is determined by a majority vote over the labels of its points, which compensates for the imperfection of point-wise semantic segmentation. In the end, two kinds of semantic clusters, i.e., trunk and pole, are obtained.
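As an illustration of the voting step, the following sketch assigns each segmented cluster the most frequent point label and keeps only pole/trunk clusters; the helper name and the label IDs are assumptions, and per-point labels and cluster indices are taken as given by RangeNet++ and the range-image segmentation.

```python
from collections import Counter

POLE, TRUNK = 80, 71  # assumed label IDs; the real mapping depends on the annotation config

def label_clusters(point_labels, cluster_indices, keep={POLE, TRUNK}):
    """Vote a semantic label per cluster and keep only pole/trunk clusters.

    point_labels    : list[int], per-point semantic label from the CNN
    cluster_indices : list[list[int]], point indices of each segmented cluster
    """
    kept = []
    for idx in cluster_indices:
        votes = Counter(point_labels[i] for i in idx)
        label, _ = votes.most_common(1)[0]   # majority vote over point labels
        if label in keep:
            kept.append((label, idx))
    return kept
```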

Since the semantic cluster map and each semantic cluster need to be accessed frequently, a semantic cluster manager is designed to efficiently manage the various attributes and operations of clusters. The manager stores the 3D centroid points, the 2D centroid points and several attributes, such as label, ID, point cloud and memory address. The memory structure is shown in Fig.2.

Fig. 2: Memory structure of the semantic cluster manager. The three dots in each row indicate contiguous memory blocks. The 3D centroid is the average position of all points of a cluster in 3D, and the 2D centroid is obtained by projecting the 3D centroid onto the XY plane.
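A minimal sketch of such a cluster manager is given below; the field and class names are assumptions, and the released code may organize the attributes differently.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class SemanticCluster:
    cluster_id: int
    label: str                       # "pole" or "trunk"
    points: np.ndarray               # (N, 3) raw points of the cluster

    @property
    def centroid_3d(self) -> np.ndarray:
        return self.points.mean(axis=0)   # average position in 3D

    @property
    def centroid_2d(self) -> np.ndarray:
        return self.centroid_3d[:2]       # projection onto the XY plane

class ClusterManager:
    """Keeps clusters in one container so centroids and attributes can be queried quickly."""
    def __init__(self):
        self.clusters: list[SemanticCluster] = []

    def add(self, cluster: SemanticCluster):
        self.clusters.append(cluster)

    def centroids_2d(self) -> np.ndarray:
        return np.array([c.centroid_2d for c in self.clusters])
```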

IV-B Semantic Cluster Registration

The registration of semantic clusters builds a map, of which there are two types: the global semantic cluster map and the local semantic cluster map. The global map represents the geometric information of pole-like objects in the historical environment, such as the environment of several months ago. The local map represents the geometric information of the current environment. The global map registers the clusters of every frame from time $t_s$ to $t_e$, where $t_s$ and $t_e$ are the timestamps of the first and last frame, whereas the local semantic cluster map only uses the clusters of the current frame at time $t_c$ for association. This section only describes the registration of the global cluster map in detail; the registration of the local map is similar.

Let $c$ denote a semantic cluster, and let $\mathcal{M}_G$ denote the global cluster map built on the frames from $t_s$ to $t_e$. $\mathcal{M}_G$ contains $N$ clusters, defined as

$$\mathcal{M}_G = \{ c_1, c_2, \dots, c_N \}. \tag{1}$$

The global map is initialized by adding all the semantic clusters of the frame at $t_s$. After initialization, the clusters of the current frame at $t_c$ are registered into the global map. Specifically, for each cluster $c_k$ of the current frame, its position is transformed into the coordinate system of the global map using the pose from LIO-SAM [17]. The closest neighbor cluster of $c_k$ in the map is then searched with the nearest-neighbor (KNN) search of [18], and the label of $c_k$ is set to that of the found closest neighbor cluster. If no closest neighbor cluster is found, $c_k$ is directly inserted into the global map as a new cluster.

The registered 3D semantic cluster map is shown in Fig.3(a). The clusters mainly include poles and trunks, which are perpendicular to the horizontal plane of the environment. Therefore, the 2D centroid points are sufficient to represent the geometric information of the clusters and are registered into the cluster map. The registered 2D semantic cluster centroids are the XY-plane projections of the 3D centroids, as shown in Fig.3(b).
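The registration step can be sketched as follows. This is a simplified illustration that uses 2D centroids, a SciPy KD-tree instead of the PCL KNN search [18], and an assumed association radius; the pose is taken from LIO-SAM as in the text.

```python
import numpy as np
from scipy.spatial import cKDTree

def register_frame(global_clusters, frame_clusters, T_world_lidar, radius=1.0):
    """Insert the clusters of one frame into the global cluster map.

    global_clusters : list of dicts with keys "centroid" (2D, map frame) and "label"
    frame_clusters  : list of dicts with keys "centroid" (3D, LiDAR frame) and "label"
    T_world_lidar   : 4x4 pose of the LiDAR in the map frame (e.g., from LIO-SAM)
    radius          : assumed association radius in meters
    """
    tree = cKDTree([g["centroid"] for g in global_clusters]) if global_clusters else None
    for c in frame_clusters:
        p = T_world_lidar @ np.append(c["centroid"], 1.0)   # transform into the map frame
        p2d = p[:2]                                          # project onto the XY plane
        if tree is not None:
            dist, idx = tree.query(p2d)
            if dist < radius:
                c["label"] = global_clusters[idx]["label"]   # keep label consistent with the map
                continue                                     # cluster already exists in the map
        global_clusters.append({"centroid": p2d, "label": c["label"]})
```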

Fig. 3: An illustration of the long-term relocalization approach. The 3D semantic cluster map is shown in (a), where each color represents a semantic cluster. The 2D semantic cluster map in (b) is the top view of the 3D cluster map, where each colored dot is the 3D centroid point of a semantic cluster. In (c), gray dots represent the clusters of the global semantic cluster map and pink dots represent the clusters of the local semantic cluster map. In (d), the green lines show the matched pairs between the local and global map; the upper figure shows the matched pairs of the 2D centroid points, while the lower figure shows the matched pairs of the 3D centroid points.

IV-C Semantic Cluster Association

The geometric distribution of the neighbor clusters within a certain neighborhood around each cluster is utilized for cluster matching. The geometric information of the cluster to be matched is described by the relative positions and angles between the cluster and its neighbors. All geometric positions are simplified onto the XY plane to reduce the computation cost.

Let $\mathcal{M}_L$ denote the local semantic cluster map of the current frame with $N_L$ clusters. The map registered in Sec. IV-B is loaded as the global semantic cluster map $\mathcal{M}_G$, which contains $N_G$ clusters. The definitions are as follows:

$$\mathcal{M}_L = \{ c^L_1, \dots, c^L_{N_L} \}, \qquad \mathcal{M}_G = \{ c^G_1, \dots, c^G_{N_G} \}. \tag{2}$$

In the local map $\mathcal{M}_L$, the neighbor clusters of $c^L_i$ within the search radius $SR$ are denoted as $\mathcal{N}(c^L_i)$. Similarly, $\mathcal{N}(c^G_j)$ denotes the neighbor clusters of $c^G_j$ within the search radius $SR$ in $\mathcal{M}_G$. The numbers of neighbor clusters are $n_i$ and $n_j$, respectively. $\mathcal{N}(c^L_i)$ and $\mathcal{N}(c^G_j)$ are defined as follows:

$$\mathcal{N}(c^L_i) = \{ c \in \mathcal{M}_L \mid \lVert p(c) - p(c^L_i) \rVert \le SR,\ c \ne c^L_i \}, \qquad \mathcal{N}(c^G_j) = \{ c \in \mathcal{M}_G \mid \lVert p(c) - p(c^G_j) \rVert \le SR,\ c \ne c^G_j \}, \tag{3}$$

where $p(\cdot)$ denotes the 2D centroid of a cluster.

The edges are the line segments between $c^L_i$ and its neighbor clusters in $\mathcal{N}(c^L_i)$, and they form the local cluster edge set $E_i$. Similarly, $E_j$ denotes the cluster edge set between $c^G_j$ and its neighbor clusters in $\mathcal{N}(c^G_j)$:

$$E_i = \{ e(c^L_i, c) \mid c \in \mathcal{N}(c^L_i) \}, \qquad E_j = \{ e(c^G_j, c) \mid c \in \mathcal{N}(c^G_j) \}. \tag{4}$$

Two clusters are matched only if they have the same label and enough matched edges. The matching problem of semantic clusters is thereby transformed into the matching of edges in the cluster edge sets: $c^L_i$ and $c^G_j$ are successfully matched if more than $N_{edge}$ pairs of edges can be matched between $E_i$ and $E_j$, where $N_{edge}$ denotes the minimum number of matched edges; otherwise, they are not matched.
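A minimal sketch of this top-level decision rule is given below; `match_edge` stands for the edge association of Sec. IV-C1 and the dict layout of the clusters is an assumption.

```python
def match_clusters(ci, cj, E_i, E_j, match_edge, n_edge_min=5):
    """Decide whether a local cluster ci and a global cluster cj correspond.

    ci, cj     : cluster dicts with a "label" key
    E_i, E_j   : edge sets of ci (local map) and cj (global map)
    match_edge : callable(edge_i, E_j) -> bool, the edge association of Sec. IV-C1
    n_edge_min : minimum number of matched edge pairs (Table II)
    """
    if ci["label"] != cj["label"]:
        return False
    matched = sum(1 for e in E_i if match_edge(e, E_j))
    return matched >= n_edge_min
```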

IV-C1 Edge Association

For the matching of an edge $e_i \in E_i$ against $E_j$, the length of each edge in $E_j$ is calculated, and the top $K$ edges whose lengths are nearest to the length of $e_i$ are selected as candidate edges of $e_i$, denoted as $C(e_i) \subseteq E_j$, where $K$ is a constant (default: 5). $(e_i, e_j)$ with $e_j \in C(e_i)$ denotes a candidate edge pair, and the distance of the candidate edge pair is $d(e_i, e_j)$, which will be defined later. Thus, the edge matching problem is formulated as finding

$$e_j \in C(e_i) \quad \text{such that} \quad d(e_i, e_j) < d_{max}, \tag{5}$$

where $d_{max}$ is the maximum tolerable distance for successful matching of an edge pair.
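The candidate selection can be sketched as follows; the function name and the dict layout of edges (a precomputed "length" field) are assumptions.

```python
def find_edge_pair_candidates(e_i, E_j, k=5):
    """Return the k edges of E_j whose lengths are closest to the length of e_i.

    Edges are assumed to be dicts with a precomputed "length" field in meters.
    """
    ranked = sorted(E_j, key=lambda e_j: abs(e_j["length"] - e_i["length"]))
    return ranked[:k]
```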

To match $(e_i, e_j)$, not only should the length and label of the edges agree, but the geometric relationships between the edge and the neighbors of the central cluster should also be consistent. Therefore, sub-edges are introduced to verify these neighbor geometric relationships. The sub-edges of $e_i$ are the edges in $E_i$ except $e_i$ itself, and they compose the sub-edge set $S_i$; similarly, the edges in $E_j$ except $e_j$ form the sub-edge set $S_j$:

$$S_i = E_i \setminus \{ e_i \}, \qquad S_j = E_j \setminus \{ e_j \}. \tag{6}$$

The distance $d(e_i, e_j)$ of the candidate edge pair is defined as the average distance of the successfully matched sub-edge pairs between $S_i$ and $S_j$. The matching of sub-edge pairs is described in the following subsection.

IV-C2 Sub-Edge Association

Each sub-edge $s_i \in S_i$ of the matching edge $e_i$ is described by the elements $(l_i, \theta_i)$, where $l_i$ is the length of $s_i$ and $\theta_i$ is the clockwise angle between $s_i$ and $e_i$. In the same way, $(l_j, \theta_j)$ denotes the elements of a sub-edge $s_j \in S_j$, where $l_j$ is the length of $s_j$ and $\theta_j$ is the clockwise angle between $s_j$ and $e_j$. A cluster association example is shown in Fig.4.

Fig. 4: Cluster association diagram. (a) shows the neighbor semantic clusters of the cluster to be matched in the local semantic cluster map, and (b) shows the neighbors of its candidate in the global cluster map. Brown dots and green dots represent trunks and poles, respectively. The thick solid line with an arrow in (a) represents the edge to be matched, and the thin solid lines with arrows represent the sub-edges, annotated with their elements (length and clockwise angle). The thick dotted lines in (b) indicate the candidate edges of the edge to be matched in (a).

Two sub-edges $s_i$ and $s_j$ are successfully matched if four conditions are satisfied: (a) the semantic cluster labels of the two sub-edges are the same; (b) the length error between the two sub-edges is below a certain threshold; (c) the angle error between the two sub-edges is below a certain threshold; (d) the distance between the two sub-edges, denoted as $d_s(s_i, s_j)$, is also below a certain threshold. The distance $d_s$ follows the definition in [19]:

(7)

The four conditions of sub-edge matching are mathematically formulated as follows:

$$\operatorname{label}(s_i) = \operatorname{label}(s_j), \quad |l_i - l_j| < \epsilon_l, \quad |\theta_i - \theta_j| < \epsilon_\theta, \quad d_s(s_i, s_j) < \epsilon_d, \tag{8}$$

where $\operatorname{label}(\cdot)$ denotes the label of the neighbor cluster at the end of a sub-edge; $\epsilon_l$ is the threshold of the distance error between sub-edges; $\epsilon_\theta$ is the threshold of the angle error between sub-edges; and $\epsilon_d$ is the threshold of the sub-edge distance.
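The four conditions can be sketched as a predicate. Since the sub-edge distance of Eq. (7) is defined in [19], a placeholder `subedge_distance` callable is assumed here; the length and distance thresholds follow Table II, while the angle threshold value is an assumption.

```python
import math

def subedges_match(s_i, s_j, subedge_distance,
                   eps_len=0.3, eps_angle=math.radians(10), eps_dist=0.2):
    """Check the four sub-edge matching conditions of Eq. (8).

    s_i, s_j         : dicts with "label", "length" (m) and "angle" (rad, clockwise w.r.t. the edge)
    subedge_distance : callable implementing the sub-edge distance of Eq. (7) from [19]
    eps_*            : thresholds; the angle value is assumed
    """
    return (s_i["label"] == s_j["label"]
            and abs(s_i["length"] - s_j["length"]) < eps_len
            and abs(s_i["angle"] - s_j["angle"]) < eps_angle
            and subedge_distance(s_i, s_j) < eps_dist)
```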

IV-C3 Semantic Cluster Association

If more than $N_{sub}$ sub-edge pairs of the candidate edge pair satisfy (8), the candidate edge pair is matched successfully, and its distance is computed as the first line of (9). If the candidate edge pair is not matched, the distance is set to infinity:

$$d(e_i, e_j) = \begin{cases} \dfrac{1}{m} \sum_{k=1}^{m} d_s\!\left(s^i_k, s^j_k\right), & m \ge N_{sub} \ \text{and}\ m / n_i \ge \tau, \\ \infty, & \text{otherwise}, \end{cases} \tag{9}$$

where $m$ is the number of successfully matched sub-edge pairs, $n_i$ is the number of all sub-edges in $S_i$, and $\tau$ is a threshold on the matched proportion. As shown in (9), the distance takes into account both the distances of all the matched sub-edges and the proportion of matched sub-edges among all sub-edges in $S_i$. Considering these two factors avoids false matches that exhibit a small average sub-edge distance but a low proportion of successfully matched sub-edge pairs.
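A sketch of this computation is given below, under the assumption that the matched proportion acts as a gate as in the reconstructed Eq. (9); the greedy sub-edge pairing and the proportion threshold value are assumptions.

```python
def edge_pair_distance(S_i, S_j, subedges_match, subedge_distance,
                       n_sub_min=5, min_ratio=0.5):
    """Distance of a candidate edge pair in the spirit of Eq. (9).

    S_i, S_j   : sub-edge sets of the local and global edge
    n_sub_min  : minimum matched sub-edge pairs (Table II)
    min_ratio  : assumed threshold on the matched proportion
    """
    matched = []
    for s_i in S_i:
        # greedily take the best-matching global sub-edge for each local sub-edge
        candidates = [s_j for s_j in S_j if subedges_match(s_i, s_j, subedge_distance)]
        if candidates:
            matched.append(min(subedge_distance(s_i, s_j) for s_j in candidates))
    if len(matched) >= n_sub_min and len(matched) / max(len(S_i), 1) >= min_ratio:
        return sum(matched) / len(matched)
    return float("inf")
```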

From (5) and (9), the best candidate edge pair is

$$e^*_j = \operatorname*{arg\,min}_{e_j \in C(e_i)} d(e_i, e_j), \qquad d(e_i, e^*_j) < d_{max}, \tag{10}$$

where $d(e_i, e^*_j)$ is the distance of the best candidate edge pair and $d_{max}$ denotes the distance threshold.

After the candidate edge pairs have been matched, the labels of the two semantic clusters and the number of matched edge pairs are also considered. The matching of two clusters is thus expressed as

$$\operatorname{match}(c^L_i, c^G_j) = \big( \operatorname{label}(c^L_i) = \operatorname{label}(c^G_j) \big) \ \wedge \ \big( N_e \ge N_{edge} \big), \tag{11}$$

where $N_e$ is the number of successfully matched edge pairs and $N_{edge}$ is the minimum required number of matched edge pairs. Algorithm 1 summarizes the semantic cluster association. A matching result of semantic cluster association is shown in Fig.3(c) and (d).

Input: Two clusters to be matched, c_i^L and c_j^G, and their respective cluster maps M_L and M_G
Output: Result of the association of the two clusters (True/False)
if label(c_i^L) ≠ label(c_j^G) then
      return False;
end if
Search the nearest neighbor clusters and build the edge sets E_i and E_j of c_i^L and c_j^G by (3) and (4);
N_e ← 0;
for each edge e_i ∈ E_i do
      C(e_i) ← FindNEdgePairCandidates(e_i, E_j);
      for each candidate edge e_j ∈ C(e_i) do
            for each sub-edge s_i ∈ S_i do
                  for each sub-edge s_j ∈ S_j do
                        Compute the sub-edge matching result by (7) and (8);
                        if s_i and s_j are matched then
                              Record the matched sub-edge pair;
                        end if
                  end for
            end for
            Compute the edge pair distance d(e_i, e_j) by (9);
      end for
      Compute the edge matching result by (10);
      if e_i is matched then
            N_e ← N_e + 1;
      end if
end for
return the matching result of (c_i^L, c_j^G) by (11)
Algorithm 1 Semantic Cluster Association

IV-D Long-term Relocalization

Algorithm 1 is used to match the semantic clusters of the local map against the global map and provides a set of matched semantic cluster pairs. Let $(c^L_i, c^G_j)$ denote a matched pair of semantic clusters, where $c^L_i \in \mathcal{M}_L$ and $c^G_j \in \mathcal{M}_G$. The matched pairs obtained from the semantic cluster association algorithm are coarse correspondences. Thus, a geometric consistency method [20] is used to eliminate false positive matched pairs, so that only fine correspondences remain.

Two matched pairs $(c^L_i, c^G_j)$ and $(c^L_k, c^G_l)$ are geometrically consistent if the difference between their pairwise Euclidean distances in the local and global maps is below a certain threshold:

$$\big| \lVert p(c^L_i) - p(c^L_k) \rVert - \lVert p(c^G_j) - p(c^G_l) \rVert \big| < \epsilon_c, \tag{12}$$

where $\epsilon_c$ is the maximum tolerable Euclidean distance error; $\lVert p(c^L_i) - p(c^L_k) \rVert$ is the Euclidean distance between the two local clusters in $\mathcal{M}_L$, and $\lVert p(c^G_j) - p(c^G_l) \rVert$ is the Euclidean distance between the two global clusters in $\mathcal{M}_G$.
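A simple pairwise consistency check in the spirit of [20] can be sketched as follows. This is a simplified variant that keeps a pair if it is consistent with most other pairs; the partitioned approach of [20] is more elaborate, and the threshold and support ratio here are assumptions.

```python
import numpy as np

def filter_consistent_pairs(pairs, eps_c=1.0, min_support_ratio=0.5):
    """Keep matched pairs whose pairwise distances agree between local and global maps.

    pairs : list of (p_local, p_global) centroid coordinates as np.ndarray
    eps_c : assumed consistency threshold in meters, Eq. (12)
    """
    n = len(pairs)
    consistent = np.zeros((n, n), dtype=bool)
    for a in range(n):
        for b in range(a + 1, n):
            d_local = np.linalg.norm(pairs[a][0] - pairs[b][0])
            d_global = np.linalg.norm(pairs[a][1] - pairs[b][1])
            consistent[a, b] = consistent[b, a] = abs(d_local - d_global) < eps_c
    support = consistent.sum(axis=1)
    keep = support >= min_support_ratio * max(n - 1, 1)
    return [p for p, k in zip(pairs, keep) if k]
```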

Then, the 6-DOF pose transformation between the local map and the global map is computed by aligning the corresponding semantic cluster centroid points and semantic cluster point clouds. First, the 3D centroid point pairs are extracted from the matched semantic cluster pairs, and the RANSAC [21] algorithm is used to filter out wrongly matched pairs. Second, the coarse transformation matrix $T_{coarse}$ from the local map to the global map is calculated with the iterative closest point (ICP) algorithm by minimizing

$$T_{coarse} = \operatorname*{arg\,min}_{T} \sum_{k=1}^{n} \big\lVert T\,\bar{p}^L_k - \bar{p}^G_k \big\rVert^2, \tag{13}$$

where $n$ is the number of filtered semantic cluster matched pairs, and $\bar{p}^L_k$ and $\bar{p}^G_k$ are the 3D centroid points of the clusters $c^L_k$ and $c^G_k$.

Third, the local and global point clouds of the matched cluster pairs are respectively extracted from the cluster manager. Using the coarse transformation as the initial value, the fine transformation matrix $T_{GL}$ is obtained by running the ICP algorithm again on the extracted point clouds instead of the 3D centroid points:

$$T_{GL} = \operatorname*{arg\,min}_{T} \sum_{k=1}^{n} \sum_{q^L \in P^L_k} \big\lVert T\,q^L - q^G \big\rVert^2, \tag{14}$$

where $P^L_k$ and $P^G_k$ are the extracted point clouds of the clusters $c^L_k$ and $c^G_k$, $q^G \in P^G_k$ is the closest point to $T\,q^L$ found by ICP, and $T_{GL}$ represents the global pose of the local map in the global map.
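For the coarse step on matched 3D centroids, a closed-form rigid alignment (Kabsch/SVD) yields the least-squares transform of Eq. (13). This is a minimal sketch, not the authors' implementation, which applies RANSAC filtering followed by ICP.

```python
import numpy as np

def rigid_transform_3d(P_local, P_global):
    """Least-squares rigid transform T (4x4) mapping P_local onto P_global.

    P_local, P_global : (n, 3) arrays of corresponding 3D centroids (n >= 3).
    """
    mu_l, mu_g = P_local.mean(axis=0), P_global.mean(axis=0)
    H = (P_local - mu_l).T @ (P_global - mu_g)   # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # avoid reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = mu_g - R @ mu_l
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T
```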

IV-E Long-term Localization

Fig. 5: Long-term localization process. $W$ denotes the world frame and $L$ denotes the LiDAR frame.

The long-term localization algorithm is shown in Fig.5. The global pose, stably output at 2 Hz by the relocalization module, is used to correct the accumulated drift of the LiDAR odometry in real time. The global pose at time $t_i$ is re-expressed as $T^W_L(t_i)$. The LiDAR odometry outputs poses continuously from $t_i$ until the next global pose update at $t_{i+1}$, denoted as $T^{odom}_L(t)$. During $t_i$ to $t_{i+1}$, the global pose estimate of the unmanned vehicle is calculated as follows:

$$\hat{T}^W_L(t) = T^W_L(t_i)\,\big(T^{odom}_L(t_i)\big)^{-1}\,T^{odom}_L(t), \tag{15}$$

where $t \in [t_i, t_{i+1})$. In this way, long-term localization is achieved by correcting the drift of the LiDAR odometry with the real-time global pose from the relocalization module.
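A minimal sketch of this correction is given below, assuming 4x4 homogeneous pose matrices; the class and method names are illustrative.

```python
import numpy as np

class DriftCorrector:
    """Fuse low-rate global relocalization poses with high-rate LiDAR odometry, Eq. (15)."""

    def __init__(self):
        self.correction = np.eye(4)   # maps odometry frame into the world frame, updated at ~2 Hz

    def on_relocalization(self, T_world_lidar, T_odom_lidar):
        # latest global pose and the odometry pose at the same timestamp t_i
        self.correction = T_world_lidar @ np.linalg.inv(T_odom_lidar)

    def corrected_pose(self, T_odom_lidar):
        # real-time global pose between relocalization updates
        return self.correction @ T_odom_lidar
```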

V Experiments

To evaluate the proposed relocalization and localization algorithms in long-term scenarios, several experiments were performed on a self-made campus dataset. It is worth noting that Carlevaris-Bianco et al. [22] proposed the NCLT dataset for long-term scenes, but the Velodyne-32 LiDAR in the NCLT dataset is mounted upside down, which makes the effective point cloud sparsity comparable to that of a Velodyne-16 LiDAR. Since the proposed method relies on semantic segmentation of dense LiDAR point clouds, the NCLT dataset cannot meet our requirements. Therefore, the proposed method is compared on the self-made dataset with the SCI algorithm [12], a state-of-the-art long-term LiDAR-based localization system.

V-A Benchmark Dataset

Fig. 6: Dataset environment. The left image was captured several months earlier than the right image; the differences include removed cars, trees with different shapes, and different lighting conditions.

The challenge of long-term localization lies in environmental changes, so the self-made dataset was recorded in a university-town environment with high traffic volume and dense vegetation. The environment is shown in Fig.6.

Fig. 7: Example of annotated semantic points.

To use RangeNet++ to infer the semantic labels, more than 1500 frames of 3D point clouds from LiDAR scanning were annotated in the self-made dataset, covering a geographic area of 76,000 m² and including 120 million three-dimensional semantic points. The labels cover 10 categories: buildings, highways, sidewalks, bicycles, green plants, poles, tree trunks, vehicles, pedestrians, and other objects. An example of the annotated semantic point clouds produced with the labeling tool PointLabeler [23] is shown in Fig.7. The semantic point clouds are labeled and organized following SemanticKITTI [23]. Four datasets were collected, and Table I lists their details. Although the datasets were collected within a month, some places in the environment changed significantly due to the movement of cars and the growth of trees; furthermore, different numbers of clusters in the global map were randomly removed to simulate significant environmental changes. We will continue to collect more datasets and update the results in the future.

Date Length(km) Time Weather
2020-10-12 1.31 Afternoon Sunny
2020-10-18 1.28 Morning Cloudy
2020-10-25 1.53 Evening Cloudy
2020-11-05 1.4 Afternoon Sunny
TABLE I: List of campus data sets.

V-B Evaluation of Long-term Relocalization

To evaluate the success rate of relocalization in long-term scenarios, the earliest dataset (2020-10-12) was used to build the global semantic cluster map, and the other three datasets were used to create local semantic cluster maps for relocalization against the global map. A relocalization is defined as successful if

$$\lVert \hat{p} - p_{gt} \rVert < \delta, \tag{16}$$

where $\hat{p}$ and $p_{gt}$ represent the estimated position of relocalization and the ground truth given by GNSS, and $\delta$ denotes the maximum tolerable distance between the estimated position and the ground truth for a successful relocalization, set to 10 m.

The success rate of the SCI algorithm is reported in two categories, namely the success rate of the highest-scoring (top-1) frame and of the top-5 scoring frames. 120 positions were selected for the relocalization experiments on each dataset, distributed as shown in Fig.8. The parameters of cluster registration and association are listed in Table II, and the relocalization success rates are shown in Table III. In each cell, the left number is the number of successful relocalizations out of 120 experiments and the right number is the success rate in %. The results show that the success rate of the proposed relocalization algorithm is higher than that of the SCI algorithm on the self-made dataset.

Fig. 8: The positions distribution of the relocalization experiments. Red squares represent the selected positions.
Parameter Description Setting
SR search radius 50 m
threshold of sub-edge distance error 0.3 m
threshold of sub-edge angle error
threshold of sub-edge distance 0.2 m
threshold of sub-edge distance 0.25 m
minimum matched pairs of sub-edges 5
minimum matched pairs of edges 5
TABLE II: Parameters of cluster registration and association
Dataset SCI(TOP1) SCI(TOP5) Proposed
2020-10-18 99/82.5 114/95.0 120/100.0
2020-10-25 97/80.8 108/90.0 116/96.6
2020-11-05 103/85.8 112/93.3 118/98.3
TABLE III: Success rate comparison of SCI and the proposed method (successful relocalizations out of 120 / success rate in %).

Moreover, to verify how far the unmanned vehicle travels before relocalization succeeds, the travelled distances of successful relocalizations were collected and sorted from 116 experiments, and the resulting probability curve is shown in Fig.9(a). After travelling 24 m, the vehicle relocalizes successfully with a probability of 90%, and after 30 m and 43 m with probabilities of 95% and 99%, respectively. The result illustrates that, using the proposed relocalization algorithm in long-term scenarios, the travelled distance required for successful relocalization is short compared with the scale of the whole map (almost 1.4 km).

Fig. 9: (a) Relocalization success probability vs. the travelled distance. The cyan solid curve denotes the relocalization success probability, and the dotted lines denote different probability levels. (b) Relocalization success probability vs. the travelled distance under different cluster densities. C/m is the unit of density, denoting the number of semantic clusters per meter.

To evaluate the performance of the proposed relocalization algorithm under large environmental changes, different numbers of clusters in the global semantic map were randomly selected to associate with the local semantic cluster maps. Specifically, 827 semantic clusters were extracted from the semantic map built with the dataset 2020-10-12. When using the dataset 2020-11-05 for relocalization, a series of sub-global semantic cluster maps was created by randomly retaining 100%, 80% and 60% of the clusters of the global map, i.e., 827, 661 and 496 clusters, respectively. The relocalization experiments then compared these three sub-global maps (built from dataset 2020-10-12) against the local cluster maps built from dataset 2020-11-05. The number of semantic clusters per meter along the unmanned vehicle trajectory is defined as the semantic cluster density $\rho$:

$$\rho = \frac{N}{d}, \tag{17}$$

where $N$ denotes the number of semantic clusters used in the global map and $d$ denotes the travelled distance.

For the three sub-global maps, the semantic cluster densities calculated by (17) are 0.59, 0.473 and 0.354 clusters/m, respectively. 116, 68 and 62 relocalization experiments were performed on the dataset 2020-11-05 to measure the travelled distance at successful relocalization; the result is shown in Fig.9(b). It can be seen that a denser semantic cluster map leads to a shorter travelled distance. When the semantic cluster density is not less than 0.473 clusters/m, the unmanned vehicle needs to travel no more than 50 m to relocalize successfully with a probability of 90%.

V-C Long-term Localization Analysis

The long-term localization experiment mainly evaluates the position estimation accuracy in comparison with LiDAR odometry. The dataset 2020-10-12 was used to build the global semantic cluster map, and the other three datasets were used to create local semantic cluster maps for long-term localization. The Root Mean Square Error (RMSE) is used as the evaluation metric, and the results of the proposed long-term localization system and LOAM are compared against the ground truth, as shown in Table IV. The proposed approach does not need IMU data for relocalization and localization, and therefore it is compared with LOAM rather than LIO-SAM.

Dataset LOAM(m) Proposed(m)
2020-10-18 42.41 0.29
2020-10-25 44.32 0.34
2020-11-05 40.48 0.24
TABLE IV: RMSE of long-term localization.

Since LOAM relies only on LiDAR odometry, its RMSE is relatively large because of the accumulated odometry error. For the proposed method, the relocalization algorithm based on the semantic cluster map corrects the accumulated error, while the frame-to-frame position estimation of the LiDAR odometry remains accurate, so the RMSE of the proposed long-term localization system is much smaller. Taking the dataset 2020-11-05 as an example, the trajectories of the ground truth, LOAM and the proposed method are shown in Fig.10(a), and the position errors along the x, y and z axes are shown in Fig.10(b).

Fig. 10: (a) Trajectories of the ground truth, LOAM and the proposed long-term localization system. (b) Position errors along x-y-z axes within a time window.

VI Conclusion

A novel relocalization method based on a semantic cluster map is proposed to realize high-accuracy relocalization and real-time localization in long-term environments. A robust semantic cluster map is built by extracting pole-like objects from raw 3D LiDAR points to address the long-term challenge. A semantic cluster association algorithm based on geometric consistency verification is proposed to obtain the pose transformation between the local cluster map and the global cluster map. A long-term, real-time localization system is then developed based on the relocalization module and LiDAR odometry. Finally, several experiments illustrate the effectiveness of the proposed approach. In future work, we will exploit more object types in long-term environments to make the method more general, rather than relying heavily on pole-like objects.

References

  • [1] Raúl Mur-Artal and Juan D. Tardós. ORB-SLAM2: an open-source SLAM system for monocular, stereo and RGB-D cameras. IEEE Transactions on Robotics, 33(5):1255–1262, 2017.
  • [2] Ji Zhang and Sanjiv Singh. Loam: Lidar odometry and mapping in real-time. In Robotics: Science and Systems, volume 2, 2014.
  • [3] Tixiao Shan and Brendan Englot. Lego-loam: Lightweight and ground-optimized lidar odometry and mapping on variable terrain. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 4758–4765. IEEE, 2018.
  • [4] François Pomerleau, Philipp Krüsi, Francis Colas, Paul Furgale, and Roland Siegwart. Long-term 3d map maintenance in dynamic environments. In 2014 IEEE International Conference on Robotics and Automation (ICRA), pages 3712–3719. IEEE, 2014.
  • [5] W. Churchill and P. Newman. Experience-based navigation for long-term localisation. The International Journal of Robotics Research, 32:1645 – 1661, 2013.
  • [6] Y. Ye, Titus Cieslewski, Antonio Loquercio, and D. Scaramuzza. Place recognition in semi-dense maps: Geometric and learning-based approaches. In BMVC 2017, 2017.
  • [7] G. Kim and A. Kim. Scan context: Egocentric spatial descriptor for place recognition within 3d point cloud map. 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 4802–4809, 2018.
  • [8] R. Dubé, D. Dugas, E. Stumm, J. Nieto, R. Siegwart, and C. Cadena. Segmatch: Segment based place recognition in 3d point clouds. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pages 5266–5272, 2017.
  • [9] A. Schaefer, D. Büscher, J. Vertens, L. Luft, and W. Burgard. Long-term urban vehicle localization using pole landmarks extracted from 3-d lidar scans. In 2019 European Conference on Mobile Robots (ECMR), pages 1–7, 2019.
  • [10] Mans Larsson, Erik Stenborg, Lars Hammarstrand, Marc Pollefeys, Torsten Sattler, and Fredrik Kahl. A cross-season correspondence dataset for robust semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9532–9542, 2019.
  • [11] Tim Caselitz, Bastian Steder, Michael Ruhnke, and Wolfram Burgard. Matching geometry for long-term monocular camera localization. In ICRA Workshop: AI for long-term Autonomy, 2016.
  • [12] Giseop Kim, Byungjae Park, and Ayoung Kim. 1-Day Learning, 1-Year Localization: Long-Term LiDAR Localization Using Scan Context Image. IEEE Robotics and Automation Letters, 4(2):1948–1955, 2019.
  • [13] Xin Kong, Xuemeng Yang, Guangyao Zhai, Xiangrui Zhao, Xianfang Zeng, Mengmeng Wang, Yong Liu, Wanlong Li, and Feng Wen. Semantic graph based place recognition for point clouds. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 8216–8223. IEEE, 2020.
  • [14] S. Ratz, M. Dymczyk, R. Siegwart, and R. Dubé. Oneshot global localization: Instant lidar-visual pose estimation. In 2020 IEEE International Conference on Robotics and Automation (ICRA), pages 5415–5421, 2020.
  • [15] A. Milioto, I. Vizzo, J. Behley, and C. Stachniss. RangeNet++: Fast and Accurate LiDAR Semantic Segmentation. In IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), 2019.
  • [16] I. Bogoslavskyi and C. Stachniss. Fast range image-based segmentation of sparse 3d laser scans for online operation. In Proc. of The International Conference on Intelligent Robots and Systems (IROS), 2016.
  • [17] Tixiao Shan, Brendan Englot, Drew Meyers, Wei Wang, Carlo Ratti, and Rus Daniela. Lio-sam: Tightly-coupled lidar inertial odometry via smoothing and mapping. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 5135–5142. IEEE, 2020.
  • [18] R.B. Rusu and S. Cousins. 3d is here: Point cloud library (pcl). In Robotics and Automation (ICRA), 2011 IEEE International Conference on, pages 1 –4, May 2011.
  • [19] Zhichen Pan, Haoyao Chen, Silin Li, and Yunhui Liu. ClusterMap Building and Relocalization in Urban Environments for Unmanned Vehicles. pages 1–22, 2019.
  • [20] Mattia G Gollub, Renaud Dubé, Hannes Sommer, Igor Gilitschenski, and Roland Siegwart. A partitioned approach for efficient graph–based place recognition. In Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst./Workshop Planning, Perception Navigat. Intell. Veh., 2017.
  • [21] Konstantinos G Derpanis. Overview of the ransac algorithm. Image Rochester NY, 4(1):2–3, 2010.
  • [22] Nicholas Carlevaris-Bianco, Arash K Ushani, and Ryan M Eustice. University of michigan north campus long-term vision and lidar dataset. The International Journal of Robotics Research, 35(9):1023–1035, 2016.
  • [23] Jens Behley, Martin Garbade, Andres Milioto, Jan Quenzel, Sven Behnke, Cyrill Stachniss, and Jurgen Gall. SemanticKITTI: A dataset for semantic scene understanding of lidar sequences. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9297–9307, 2019.