I Introduction
Nowadays, autonomous driving has become one of the most attractive and cutting-edge topics. Although there is still a lot of work to do before fully autonomous driving arrives, semi-autonomous driving has already been achieved and will be widely introduced in the near future. Semi-autonomous vehicles must be able to avoid obstacles to ensure driving safety, and detecting the locations and orientations of surrounding vehicles is crucial for collision avoidance.
Light Detection And Ranging (LiDAR) has been widely used for detecting surrounding objects such as bicycles, vehicles, and pedestrians, due to its large field of view, lighting invariance, high data accuracy, and relatively low price. A common approach to processing LiDAR data is to segment the data into different clusters of points, from which meaningful features like line segments, rectangles, and circles can be extracted [1]. These features are then associated with a static map or tracked targets and used to update the target state through tracking methods such as Multiple Hypotheses Tracking (MHT) [2, 3] or its advanced version which integrates a Rao-Blackwellized Particle Filter (MHT-RBPF) [4, 5].
Another solution is similar to the approach widely used in computer vision: extracting handcrafted features and training classifiers. Image-based object detection is very popular in current autonomous driving research, covering road obstacle detection [6], mobility aids [7], and vehicle detection [8]. However, the sparse point data from a 2D LiDAR are usually insufficient for reliable object identification with this kind of method within a single scan. Although several solutions have been proposed, such as relying on sensor fusion [9], multi-layered sensor combinations [10], [11], or temporal integration from tracking, they often come with higher computational cost and complexity.

In this paper, we propose a highly efficient search-based L-Shape fitting algorithm for detecting a vehicle's position and orientation. L-Shape fitting is often treated as a complex optimization problem. Our approach instead decomposes it into two steps: L-Shape vertexes searching and L-Shape corner localization. Ensuring real-time performance of vehicle detection is extremely important, as it leaves time for computationally demanding tasks such as high-level path planning and decision making. Our method is demonstrated to be effective and efficient through experiments with a production-grade 2D laser scanner.
The remainder of this paper is organized as follows. Section II gives an overview of related research work. The search-based L-Shape fitting method is presented in Section III. In Section IV, we provide experimental results to evaluate the L-Shape fitting approach. Section V presents our conclusions from the experimental results.
II Related Work
In the past decade, the well-known DARPA grand challenge demonstrated the feasibility and the technical frameworks of autonomous driving. Supported by NSFC (the National Natural Science Foundation of China), China's event named the Intelligent Vehicle Future Challenge (IVFC), which is similar to the DARPA urban challenge, has been held since 2009. During the last eight years, over thirty universities and many companies have participated in this annual challenge, which is now recognized as the most influential autonomous driving event in China. As a latecomer to IVFC, the Tongji Intelligent Electric Vehicle (TiEV) project, funded by Tongji University, started in 2015 (see Fig. 1). Soon afterwards, TiEV took part in IVFC 2016 and 2017 and managed to complete most of the tasks, such as simulated traffic driving, passing through tunnels, and avoiding blockages, without any human intervention.
In the competition, the detection of curbs and the tracking of vehicles were accomplished using onboard sensors such as cameras and laser scanners. However, with laser-based range sensing we can only detect the parts of an object's contour that face towards the sensor. Since the contour of an object may not be fully observed by range sensors, these occlusions make the perception task of the autonomous vehicle even harder.
To address these difficulties, a vehicular shape model is widely used for detecting the position and orientation of a vehicle; the model is often assumed to be a box, an L-Shape, or two perpendicular lines [12, 13, 14]. Based on the vehicular L-Shape model, several fitting methods have been proposed. In [13], a weighted least-squares method is used to get rid of outliers and fit an incomplete contour to a rectangle model. Because of the occlusion problem, both right-angle corner fitting and line fitting are presented in [13]. In [12], the information of the scanning sequence is exploited to segment the points efficiently into two disjoint sets; two perpendicular lines corresponding to the two edges of the vehicle are then fitted to the two point sets respectively. More specifically, a pivot is detected based on the scanning sequence of all the 2D range points, and the points on either side of this pivot yield the two disjoint sets, i.e., the set of points scanned before the pivot and the set of points scanned after it. In [15], the laser scanning sequential information is not utilized for L-Shape fitting. This method searches for the optimal fitting angle of the L-Shape, and three criteria were proposed in [15] to select the best L-Shape fit.

Some other approaches were developed using volumetric data from 3D LiDARs, among which some use sequential projections of point clouds [16], [17], while others train neural networks that can cope with unordered point cloud data through abstract feature learning, as in VoxelNet and PointNet. However, these approaches consume considerable computational resources and need a large-scale labeled data set for training, not to mention that the sensors themselves are much more expensive than those for 2D ranging.
Compared with the methods above for vehicular shape fitting, we propose a different approach to address the problem. This paper makes four main contributions.

We propose an approach that decomposes the L-Shape fitting problem into two steps: L-Shape vertexes searching and L-Shape corner point locating.

The proposed approach is highly computationally efficient due to its low complexity; it outperforms other methods and achieves state-of-the-art results.

The proposed approach is robust and able to accommodate various situations.

Our method does not depend on the laser scanning sequential information, which means data from multiple laser scanners can be easily fused.
III L-Shape Fitting for Laser Scanning Data
Since the correspondence between the scanning data and the objects in the real world is usually complex, we first segment the data points into different clusters after obtaining the scans of the environmental objects with 2D LiDARs. These clusters typically correspond to bicycles, pedestrians, buildings, or vehicles and can be classified into separate categories. In this paper, we are only interested in L-Shape fitting for vehicles. Based on the assumption of an L-Shape vehicle model, for each segmented range data cluster we first find the two vertexes (not including the corner point) of the L-Shape and then localize the corner point based on a pre-specified criterion. After that, we obtain the optimal fitted rectangle that follows the three vertexes and contains all the points in the segment. Fig. 2 shows the flowchart of our approach.
III-A Segmentation
The laser scan data needs to be segmented into different clusters before performing L-Shape fitting. There are several classical clustering algorithms for this segmentation work. For this work, we evaluate two classical clustering methods: mean-shift clustering (MeanShift) [18] and density-based spatial clustering of applications with noise (DBSCAN). The mean-shift algorithm treats the input as samples from a probability density function, and the objective of the algorithm is to find the modes of this function [18]. These modes represent the centers of the discovered clusters. The input points are fed to a kernel density estimation, and gradient ascent is then applied to the density estimate. The density estimation kernel takes two inputs: the total number of points and the bandwidth, i.e., the size of the window [19]. The DBSCAN algorithm performs density-based spatial clustering in the presence of noise. For each point, the associated density is calculated by counting the number of points in a search area of specified radius, Eps, around the point. Points whose density exceeds the specified threshold value, MinPts, are classified as core points, while the rest are classified as non-core points.

By comparing the segmentation results in Fig. 3, we can see that both the DBSCAN algorithm and the mean-shift algorithm are able to find clusters of arbitrary shapes. However, the mean-shift algorithm is not capable of ignoring the influence of outliers. Furthermore, its iterative nature and density computations make the mean-shift algorithm slower than some alternative clustering algorithms. For these reasons, we use the DBSCAN algorithm to perform the segmentation due to its low complexity, fast execution time, and robust nature. It is worth mentioning that a graph-based index structure can be used to speed up the segmentation with the DBSCAN algorithm.
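As an illustration, the clustering step can be sketched with a minimal brute-force DBSCAN in Python. The parameter names Eps and MinPts follow the description above; the O(n²) distance matrix and the toy coordinates are our own simplifications, not the paper's implementation:

```python
import numpy as np

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: returns one cluster label per point, -1 for noise."""
    n = len(points)
    labels = np.full(n, -1)                 # -1 = noise / unassigned
    visited = np.zeros(n, dtype=bool)
    # Brute-force pairwise distances (fine for single-scan clusters).
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    cluster = 0
    for i in range(n):
        if visited[i]:
            continue
        visited[i] = True
        neighbors = list(np.where(dist[i] <= eps)[0])
        if len(neighbors) < min_pts:
            continue                        # non-core point: stays noise for now
        labels[i] = cluster                 # core point: start a new cluster
        while neighbors:
            j = neighbors.pop()
            if not visited[j]:
                visited[j] = True
                j_nb = np.where(dist[j] <= eps)[0]
                if len(j_nb) >= min_pts:    # j is also a core point: expand
                    neighbors.extend(j_nb)
            if labels[j] == -1:
                labels[j] = cluster
        cluster += 1
    return labels

# Example: two tight clusters and one isolated outlier (hypothetical data).
pts = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
                [5.0, 5.0], [5.1, 5.0], [5.0, 5.1],
                [10.0, 10.0]])
labels = dbscan(pts, eps=0.5, min_pts=3)    # points 0-2 and 3-5 cluster; point 6 is noise
```

A spatial index (e.g., a k-d tree or grid) would replace the full distance matrix in a production implementation, which is the speed-up alluded to above.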


III-B L-Shape Fitting
Since the two perpendicular lines of the L-Shape can be defined as x·cosθ + y·sinθ = c1 and −x·sinθ + y·cosθ = c2, a typical way to evaluate the fitting performance is least squares, which leads to the following optimization problem:

minimize over (θ, c1, c2, P1, P2):  Σ_{i∈P1} (x_i·cosθ + y_i·sinθ − c1)² + Σ_{j∈P2} (−x_j·sinθ + y_j·cosθ − c2)²   (1)

in which the optimization task is to find the two best partitions (P1, P2) of the clustered, preprocessed data and the optimal parameters (θ, c1, c2) of the two orthogonal lines. Here (x_i, y_i) are the scanning points, the squared terms are the point-to-line residuals, and |P1| and |P2| are the numbers of scanning points in the partitions (P1, P2).
Nevertheless, the above optimization problem turns out to be very difficult to solve due to the combinatorial complexity of the partitioning, since the order/sequence of the points in the segmented cluster is not accessible.
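To make the least-squares criterion concrete, the sketch below evaluates the squared fitting error for a fixed angle theta and a fixed partition (here simply the first `split` points versus the rest). The parameterization x·cosθ + y·sinθ = c1 and −x·sinθ + y·cosθ = c2 is one common choice; for a fixed θ, the optimal intercepts are the means of the projected coordinates:

```python
import numpy as np

def l_shape_cost(points, theta, split):
    """Squared-error cost of fitting the first `split` points to the line
    x*cos(theta) + y*sin(theta) = c1 and the remaining points to the
    perpendicular line -x*sin(theta) + y*cos(theta) = c2. For a fixed
    theta, the optimal intercepts c1, c2 are the projection means."""
    x, y = points[:, 0], points[:, 1]
    p1 = x * np.cos(theta) + y * np.sin(theta)    # projection onto direction 1
    p2 = -x * np.sin(theta) + y * np.cos(theta)   # projection onto direction 2
    r1 = p1[:split] - p1[:split].mean()           # residuals to the first edge
    r2 = p2[split:] - p2[split:].mean()           # residuals to the second edge
    return float(np.sum(r1 ** 2) + np.sum(r2 ** 2))

# A perfect axis-aligned L: three points on x = 0, two points on y = 0.
pts = np.array([[0.0, 1.0], [0.0, 2.0], [0.0, 3.0],
                [1.0, 0.0], [2.0, 0.0]])
perfect = l_shape_cost(pts, theta=0.0, split=3)   # both edges fit exactly
tilted = l_shape_cost(pts, theta=0.3, split=3)    # wrong orientation, cost > 0
```

Evaluating the cost for a given angle and partition is cheap; the difficulty lies entirely in searching over the partitions and the angle.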
To address this computational problem, a basic idea is to apply the RANSAC algorithm, since an L-Shape can be described by three key points. However, the original RANSAC algorithm also consumes plenty of time due to the large number of candidate combinations. To improve performance based on the three-point observation above, we decompose the L-Shape fitting problem into two steps: L-Shape vertexes searching and L-Shape corner point localizing.
III-B1 Detecting Two Vertexes
As the first procedure, we propose an algorithm to obtain the two vertexes of the L-Shape from clustered scanning data. The algorithm is presented in Alg. 1. The input of this algorithm is a specific cluster of scanning points, and the outputs are the two target vertexes of the L-Shape. It is worth mentioning that, to improve the robustness of the vertex searching algorithm, the returned vertexes are not actual scanning points but the geometric centers of several specific points.
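A possible Python sketch of this vertex search is shown below. The geometric-center trick follows the description of Alg. 1; the choice of k = 3 extreme points and the rule of picking the two candidates farthest apart are our own assumptions, since the paper's "predefined standard" is not spelled out here:

```python
import numpy as np

def find_vertexes(points, k=3):
    """Candidate vertexes are the geometric centers of the k extreme points
    along each coordinate axis; the two candidates farthest apart are
    returned. (k = 3 and the max-separation rule are illustrative choices.)"""
    cands = []
    for axis in (0, 1):
        order = np.argsort(points[:, axis])
        cands.append(points[order[:k]].mean(axis=0))    # low end of the axis
        cands.append(points[order[-k:]].mean(axis=0))   # high end of the axis
    cands = np.array(cands)
    best, pair = -1.0, (cands[0], cands[1])
    for i in range(len(cands)):
        for j in range(i + 1, len(cands)):
            d = np.linalg.norm(cands[i] - cands[j])
            if d > best:
                best, pair = d, (cands[i], cands[j])
    return pair

# Example: an L-shaped contour, corner at the origin, arms along +x and +y.
arm_x = np.array([[0.5 * i, 0.0] for i in range(9)])      # (0,0) .. (4,0)
arm_y = np.array([[0.0, 0.5 * i] for i in range(1, 7)])   # (0,0.5) .. (0,3)
v1, v2 = find_vertexes(np.vstack([arm_x, arm_y]))
```

Averaging the k extreme points instead of taking single extremes is what gives the robustness to individual noisy returns mentioned above.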
Firstly, we sort the points by their X and Y coordinates; subsequently, we select several points from the front end and the rear end of each sorted sequence and take their geometric centers as candidate vertexes of the L-Shape. After that, we select the two target vertexes according to a predefined standard. In some scenes, the first or last several points may have a large variance in the horizontal or vertical direction (as in Fig. 4). Under these circumstances, we can directly select the two calculated candidate vertexes of the orthogonal direction as the L-Shape's two vertexes to reduce the computational cost.

III-B2 Localizing Corner Point
When the "Vertexes Searching" procedure is completed, the second step of L-Shape fitting is to localize the corner point. Once this optimal corner point is obtained, the L-Shape feature for vehicle tracking is almost fully determined. A classical standard for evaluating the fitting result, minimizing the squared error, has been presented at the beginning of the "L-Shape Fitting" section.
As the two vertexes have been determined, a basic idea is to traverse all the scanning points to localize the optimal corner point. Note that the optimal corner point usually forms an angle of approximately 90° with the two given vertexes obtained from Alg. 1. Therefore, a pre-judgment procedure can be applied to the scanning points to filter out candidate corner points before the localization algorithm is applied to them.
The detailed algorithm is shown in Alg. 2. The inputs of this algorithm are the two vertexes and the corresponding point cluster. The outputs are the optimal corner point and the numbers of points in the two disjoint sets partitioned by the two vertexes and the optimal corner point.
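The corner localization can be sketched as follows. The near-90° pre-judgment and the squared-error criterion follow the description above; the 35° tolerance and the assignment of each point to the closer of the two candidate edges are our own illustrative assumptions:

```python
import numpy as np

def point_line_dist(p, a, b):
    """Perpendicular distances from points p (n x 2) to the line through a, b."""
    d = b - a
    n = np.array([-d[1], d[0]]) / np.linalg.norm(d)   # unit normal of the line
    return np.abs((p - a) @ n)

def locate_corner(points, v1, v2, angle_tol=np.deg2rad(35)):
    """Among points forming a near-right angle with the two vertexes, pick
    the corner minimizing the summed squared distance of every point to the
    closer of the two candidate edges (v1-c and v2-c)."""
    best_cost, best_c = np.inf, None
    for c in points:
        a, b = v1 - c, v2 - c
        na, nb = np.linalg.norm(a), np.linalg.norm(b)
        if na < 1e-9 or nb < 1e-9:
            continue                                  # c coincides with a vertex
        ang = np.arccos(np.clip(a @ b / (na * nb), -1.0, 1.0))
        if abs(ang - np.pi / 2) > angle_tol:
            continue                                  # pre-judgment: not near 90 deg
        d1 = point_line_dist(points, v1, c)           # distances to edge v1-c
        d2 = point_line_dist(points, v2, c)           # distances to edge v2-c
        cost = np.sum(np.minimum(d1, d2) ** 2)
        if cost < best_cost:
            best_cost, best_c = cost, c
    return best_c

# Example: the same L-shaped contour; the true corner is at the origin.
arm_x = np.array([[0.5 * i, 0.0] for i in range(9)])
arm_y = np.array([[0.0, 0.5 * i] for i in range(1, 7)])
pts = np.vstack([arm_x, arm_y])
corner = locate_corner(pts, np.array([4.0, 0.0]), np.array([0.0, 3.0]))
```

The angle pre-filter is what keeps the traversal cheap: most candidates are rejected before any point-to-line distances are computed.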
III-C Shape Fitting
Since range data points are never ideal, the angle formed by the two vertexes obtained from Alg. 1 and the optimal corner point acquired from Alg. 2 is usually not exactly a right angle. A logical idea is to select the edge with more scanning points to determine the L-Shape's direction. Since Alg. 2 returns the number of points on each edge as well as the optimal corner point, the L-Shape's direction can easily be determined from this information together with the two vertexes obtained from Alg. 1.
We use a rectangle oriented in that direction which contains all the scanning points to represent the L-Shape. Once this rectangle is obtained, the vehicle's pose can also be handily extracted. Since a rectangle is formed by four edges, and every edge can be represented as a line equation ax + by = c, the shape is acquired once these parameters are determined. Alg. 3 shows the rectangle fitting steps in detail. The inputs of this algorithm are the two vertexes obtained from Alg. 1 and the corner point and two partition point counts acquired from Alg. 2. The outputs of this algorithm are the parameters of the four edges of the target rectangle.
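Given the fitted direction, extracting the rectangle reduces to projecting all points onto the edge direction and its normal and taking the extreme projections. A minimal sketch, returning corner coordinates rather than the four edge-line parameters of Alg. 3:

```python
import numpy as np

def fit_rectangle(points, theta):
    """Given the L-Shape direction theta, return the four corners of the
    smallest rectangle oriented along theta that contains all points."""
    e1 = np.array([np.cos(theta), np.sin(theta)])    # edge direction
    e2 = np.array([-np.sin(theta), np.cos(theta)])   # normal direction
    u, v = points @ e1, points @ e2                  # coordinates in that frame
    return np.array([u.min() * e1 + v.min() * e2,
                     u.max() * e1 + v.min() * e2,
                     u.max() * e1 + v.max() * e2,
                     u.min() * e1 + v.max() * e2])

# Example: axis-aligned points; theta = 0 recovers the bounding-box corners.
pts = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 1.0], [0.0, 1.0], [1.0, 0.5]])
rect = fit_rectangle(pts, theta=0.0)
```

Each adjacent corner pair then determines one of the four edge-line parameter sets (a, b, c).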
IV Experimental Results
In this section, we provide experimental results to evaluate the correctness and efficiency of our algorithms. The experiments were conducted on Tongji's autonomous vehicle research and test platform "TiEV" (Fig. 1); the 2D LiDAR is mounted on the front end of the test platform, about 15 cm above the ground, with a slight upward elevation angle. Under these circumstances, most of the vehicles in the measurement range are scanned as L-Shapes. It is important to note that the scanning order/sequential information is not used in the experiments here.
IV-A Rectangle Fitting
Before performing L-Shape fitting, the laser scan data needs to be partitioned into different clusters. Fig. 5 shows the segmentation result of a single scan with the DBSCAN algorithm. After the laser scan data has been segmented into clusters, we use the fitting algorithms to search for the optimal L-Shape that fits the data points. Two different clusters, representing two vehicles in two separate orientations, are shown in Fig. 6. The key points obtained by Alg. 1 and Alg. 2 are presented in Fig. 6(a) and (b): the blue stars stand for the possible vertexes of the L-Shape, and the red diamonds are the best corner points for each L-Shape given the blue stars as its vertexes. With these key points and the other results obtained from Alg. 1 and Alg. 2, the optimal L-Shape for each cluster can be acquired by Alg. 3. In Fig. 6(c) and (d), the blue rectangles represent the optimal fitted L-Shapes. Fig. 7 shows the L-Shape fitting results for a single laser scan together with the vehicle pose estimation. Each blue box is the best-fitted L-Shape obtained by the fitting algorithms for the corresponding vehicle, and the directions of these rectangles are the orientations of the vehicles.
It should be noted that small clusters with fewer than four points are ignored in the implementation, since such clusters cannot correspond to vehicles.




IV-B Efficiency Evaluation
The efficiency of the algorithm is evaluated by its computational time. There are approximately 3000 laser scans in the data set collected by the test vehicle. Each laser range scan is segmented into clusters, and the fitting algorithms are carried out on each cluster. The computational time is presented in Table I. The calculations are implemented in MATLAB and run on a Windows laptop equipped with an Intel Core i5 CPU. The computational performance of the algorithm could be much better if it were implemented in a more efficient programming language such as C/C++ or on a more powerful platform.
Method  Average (ms)  Standard Deviation (ms)

Our approach  6.20  0.20
CMU's method [15]*  6.04  0.23

* Due to the difference in testing platforms, the computation time differs from that reported in [15].
V Conclusion
In this paper, we proposed a search-based L-Shape fitting approach. The algorithm can efficiently detect the optimal L-Shape fit in 2D LiDAR data by finding the three key points of an L-Shape, that is, two vertexes and one corner point. The proposed approach does not need the scan's ordering/sequential information, and therefore it allows fusion of raw laser data from multiple laser scanners. Furthermore, this approach is capable of accommodating various criteria, which means the approach is not only suitable for different fitting demands but also extensible for future applications. The experimental results show the correctness and efficiency of our algorithm.
References
 [1] C. Mertz, L. E. Navarro-Serment, R. MacLachlan, P. Rybski, A. Steinfeld, A. Suppé, C. Urmson, N. Vandapel, M. Hebert, C. Thorpe, D. Duggins, and J. Gowdy, “Moving object detection with laser scanners,” Journal of Field Robotics, vol. 30, no. 1, pp. 17–43, 2013.
 [2] S. S. Blackman, “Multiple hypothesis tracking for multiple target tracking,” IEEE Aerospace and Electronic Systems Magazine, vol. 19, no. 1, pp. 5–18, 2004.
 [3] I. J. Cox and S. L. Hingorani, “An efficient implementation of Reid’s multiple hypothesis tracking algorithm and its evaluation for the purpose of visual tracking,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 18, no. 2, pp. 138–150, 1996.
 [4] D. Schulz, D. Fox, and J. Hightower, “People tracking with anonymous and id-sensors using Rao-Blackwellised particle filters,” in IJCAI, pp. 921–928, 2003.
 [5] G. Grisetti, C. Stachniss, and W. Burgard, “Improved techniques for grid mapping with Rao-Blackwellized particle filters,” IEEE Transactions on Robotics, vol. 23, no. 1, pp. 34–46, 2007.
 [6] P. Merdrignac, E. Pollard, and F. Nashashibi, “2D laser based road obstacle classification for road safety improvement,” in Advanced Robotics and its Social Impacts (ARSO), 2015 IEEE International Workshop on, pp. 1–6, IEEE, 2015.
 [7] C. Weinrich, T. Wengefeld, C. Schroeter, and H.-M. Gross, “People detection and distinction of their walking aids in 2D laser range data based on generic distance-invariant features,” in Robot and Human Interactive Communication, 2014 RO-MAN: The 23rd IEEE International Symposium on, pp. 767–773, IEEE, 2014.
 [8] S. Sivaraman and M. M. Trivedi, “Looking at vehicles on the road: A survey of vision-based vehicle detection, tracking, and behavior analysis,” IEEE Transactions on Intelligent Transportation Systems, vol. 14, no. 4, pp. 1773–1795, 2013.
 [9] L. Spinello and R. Siegwart, “Human detection using multimodal and multidimensional features,” in Robotics and Automation, 2008. ICRA 2008. IEEE International Conference on, pp. 3264–3269, IEEE, 2008.
 [10] O. M. Mozos, R. Kurazume, and T. Hasegawa, “Multi-part people detection using 2D range data,” International Journal of Social Robotics, vol. 2, no. 1, pp. 31–40, 2010.
 [11] L. Spinello, K. O. Arras, R. Triebel, and R. Siegwart, “A layered approach to people detection in 3D range data,” in AAAI, vol. 10, pp. 1–1, 2010.
 [12] X. Shen, S. Pendleton, and M. H. Ang, “Efficient L-shape fitting of laser scanner data for vehicle pose estimation,” in Cybernetics and Intelligent Systems (CIS) and IEEE Conference on Robotics, Automation and Mechatronics (RAM), 2015 IEEE 7th International Conference on, pp. 173–178, IEEE, 2015.
 [13] R. MacLachlan and C. Mertz, “Tracking of moving objects from a moving vehicle using a scanning laser rangefinder,” in Intelligent Transportation Systems Conference, 2006. ITSC’06. IEEE, pp. 301–306, IEEE, 2006.
 [14] A. Petrovskaya and S. Thrun, “Model based vehicle detection and tracking for autonomous urban driving,” Autonomous Robots, vol. 26, no. 23, pp. 123–139, 2009.
 [15] X. Zhang, W. Xu, C. Dong, and J. M. Dolan, “Efficient L-shape fitting for vehicle detection using laser scanners,” in 2017 IEEE Intelligent Vehicles Symposium (IV), pp. 54–59, June 2017.

 [16] D. Maturana and S. Scherer, “Voxnet: A 3D convolutional neural network for real-time object recognition,” in Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International Conference on, pp. 922–928, IEEE, 2015.
 [17] P. Ondruska, J. Dequaire, D. Z. Wang, and I. Posner, “End-to-end tracking and semantic segmentation using recurrent neural networks,” arXiv preprint arXiv:1604.05091, 2016.
 [18] D. Comaniciu and P. Meer, “Mean shift: A robust approach toward feature space analysis,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 5, pp. 603–619, 2002.
 [19] G. Hinz, G. Chen, M. Aafaque, F. Röhrbein, J. Conradt, Z. Bing, Z. Qu, W. Stechele, and A. Knoll, “Online multi-object tracking-by-clustering for intelligent transportation system with neuromorphic vision sensor,” in Joint German/Austrian Conference on Artificial Intelligence (Künstliche Intelligenz), pp. 142–154, Springer, 2017.