I. Introduction
With the increasing number of vehicles and the shortage of parking spaces, traditional parking lots are forced to be upgraded. In addition to the existing mechanical three-dimensional parking lots, the fully autonomous parking robot also plays a fundamental role. Our contribution to this trend is the setup of a new electric X-by-wire parking robot car, characterized by two lateral spreading devices and a transverse deployment apparatus, with the aim of adapting to the wheelbases and tracks of different vehicles. The parking robot is designed to carry vehicles from one position to a given spot without human intervention, with the capability of environment perception, self-localization and path planning. The design sketch of our autonomous parking robot is shown in Fig. 2.
In indoor parking lots, however, it is not a straightforward task for a robot to localize itself, because GPS signals are attenuated by construction materials, especially when high-precision localization is needed. Existing approaches for indoor self-localization include techniques based on WiFi, Radio Frequency Identification (RFID), Ultra-Wideband (UWB), Bluetooth, etc. Most of these localization approaches suffer from high cost, instability or low precision. Here we adopt a novel localization solution using square-like landmarks detected by a single-line scanning LiDAR, the advantages of which lie in its low cost and high precision. Moreover, it is also workable in outdoor parking lots as long as sufficient landmarks are available; for example, outdoor charging piles can be designed as suitable landmarks for our approach.
This paper is organized as follows: After discussing related works in Section II, an overview of the presented localization approach is given in Section III. The following Sections IV and V focus on the detailed implementation of the approach. A simulation test and its results are then illustrated in Section VI. A conclusion in the last section ends this work.
II. Related Work
In the context of landmark-based self-localization for mobile robots, different techniques have been proposed in the literature. The main task of landmark-based localization is to collect and extract distinct features from either artificial or natural landmarks. Since the focus of our work primarily lies on localization in a specific area, a parking lot, we can further narrow down the scope of the literature review by concentrating mainly on strategies that are practicable in this case.
Vision-based localization methods are given in many works. Among these methods, the natural-landmark-based approach suffers limitations due to its sensitivity to variable environmental conditions and its requirement for high computational power for image processing [2]. Compared with natural landmarks, artificial landmarks tend to be easier and faster to sense and recognize, with better resistance to noise. In a parking lot, a QR code is likely to be an applicable artificial landmark; localization approaches using QR codes are proposed and tested in [3][4][5]. Ceiling landmarks [6][7] are also workable in this situation. These approaches, however, require relatively more effort to set up the artificial landmarks, which is not an easy task in a large parking lot. Moreover, image landmarks such as QR codes or ceiling marks tend to get blurry gradually due to dust or erosion, resulting in increasing error or even failure of the localization.
For LiDAR-based localization techniques in this context, pole-like landmarks [8][9] are a suitable option, but they are not common in indoor parking lots, which limits their application to outdoor places surrounded by a certain number of pole items such as trees, street lamps or traffic signs. Another shortcoming of such approaches is the need for a multi-line LiDAR to distinguish the pole items from other objects and extract their geometric features, considering that the horizontal size of lamps or signs is generally small, while the irregular geometry of trees limits the positioning accuracy.
In our localization approach based on square landmarks, several previously presented methods are applied. In order to represent a square landmark with four corners, we refer to the L-Shape Fitting method proposed in [10]. The subsequent position estimation stage employs a Particle Filter (also known as Monte Carlo Localization) [11].

III. Overview
An overview of the developed localization framework is illustrated in Fig. 4. The central component of the framework is the Particle Filter, which fuses the data from different sensors to deduce a continuously updated estimate of the robot pose. The pose initialization of the particles is accomplished by GPS or by a start area inside the parking lot, and an IMU serves as the information source for the motion update.
In the measurement update step, we apply a novel type of landmark, square-like structures, which are associated with a corresponding offline map to estimate the current pose of the robot. Square-like structures, including square pillars and charging piles, are common in parking lots; wall corners inside the parking lot can also fulfil the requirements of such landmarks. Moreover, it is feasible to artificially arrange some square pillars in the parking lot if necessary. These square-like structures enjoy the advantages of being time-invariant, robustly re-detectable and having distinct characteristics, making them usable as reliable landmarks for self-localization.
In this context, a single-line scanning LiDAR is employed to perceive and extract the square landmarks from the surroundings. Compared with the multi-line LiDAR commonly used on current smart vehicles or robots, the single-line LiDAR enjoys much lower cost and better reliability, with higher scanning speed and angular resolution. Although it can only perceive a single layer of 3D information, it is fully applicable to the detection of square-like landmarks, whose needed features lie in the corners of their cross sections. To complete this task, an L-Shape Fitting approach [10] is utilized, which deduces the coordinates of the four corner points for each detected square item. These corner coordinates are then filtered and used in the measurement update of the Particle Filter, in association with an offline map containing the ground-truth coordinates of the landmarks.
In order to verify the effectiveness of our approach, a simulation environment based on VREP was built; the results will be shown in Section VI.
IV. Square-like Landmark Extraction
In order to utilize specific square-like structures as landmarks for localization, other irrelevant objects, such as vehicles or temporarily stacked square items, should be excluded to avoid distraction. For this purpose, the single-line scanning LiDAR is installed at a higher position on the parking robot, 2 meters above the ground; in this case, most vehicles in a parking lot will not be detected by the LiDAR. After acquiring the scanning data of the environmental objects, segmentation is first employed, separating the data points into different clusters. Afterwards, we follow the L-shape fitting approach proposed in [10], which represents each eligible point cloud cluster with a rectangle. To apply this result to localization, we calculate the corners of all deduced rectangles. These corner coordinates can be utilized in both mapping and real-time positioning.
IV-A. Segmentation
The laser scan data need to be segmented into different clusters before performing L-shape fitting. Several classical clustering algorithms are capable of this segmentation work. In [10], an adaptive segmentation algorithm is employed; however, this algorithm is likely to produce more noise clusters, because isolated points are not excluded.
For accurate segmentation of the range points, we adopt the DBSCAN algorithm (Density-Based Spatial Clustering of Applications with Noise) [12], the basic idea of which is to divide the range points into different clusters based on a pre-assigned distance threshold. The input of the segmentation algorithm is the 2D coordinates of the n range scanning points p1, …, pn, which are relative to the pose of the LiDAR. The output of the algorithm is a set of segmented clusters, each of which potentially corresponds to an object in the real world.
The clusters produced by DBSCAN consist of core points and non-core points. A core point has at least MinPts points within the distance threshold, while a non-core point is directly reachable from a core point within the distance threshold. Points unreachable from any core point are outliers. In Fig. 6, the red points are core points and the green points are non-core points, while the blue points are outliers. The main procedure is to first detect the core points of the range data, and then explore the non-core points from these core points. Note that this segmentation algorithm is adaptive, because the distance threshold is proportional to the distance between the LiDAR and the objects. This is justified by the underlying fact that the laser scanning longitudinal resolution grows with the range distance [10]. Furthermore, if the scanning sequence is available, points' neighbors within the threshold distance can be found more efficiently, and a graph-based index structure can be used to speed up the neighbor search operations [13].
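As a concrete illustration, the following is a minimal Python sketch of DBSCAN over 2D range points. It uses a fixed distance threshold eps for brevity, whereas the adaptive variant described above would scale the threshold with the measured range; the parameter names are illustrative.

```python
def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: label each 2D point with a cluster id; -1 marks outliers."""
    n = len(points)
    labels = [None] * n
    cluster = -1

    def neighbors(i):
        px, py = points[i]
        return [j for j, (qx, qy) in enumerate(points)
                if (px - qx) ** 2 + (py - qy) ** 2 <= eps * eps]

    for i in range(n):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1           # provisionally mark as outlier
            continue
        cluster += 1                  # i is a core point: start a new cluster
        labels[i] = cluster
        queue = [j for j in nbrs if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster   # border point, previously marked outlier
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbors(j)
            if len(jn) >= min_pts:    # j is also a core point: expand from it
                queue.extend(jn)
    return labels
```

In a real scan the brute-force neighbor search would be replaced by the sequence- or graph-based lookup of [13]; the labeling logic stays the same.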
IV-B. L-Shape Fitting
In order to acquire the four corners of a given square point cloud, the rectangle that fits best to the cluster should be found. In [10], several L-Shape Fitting methods are proposed and compared, from which we apply the Search-Based Rectangle Fitting using the closeness criterion. A brief illustration of the approach follows.
The basic idea is that all possible orientations of the rectangle are iterated, and for each orientation a rectangle containing all the scan points is found. Thereafter we can obtain the distances of all the points to the rectangle's four edges. The pseudocode of the algorithm is shown in Alg. 1.
By using the closeness criterion, emphasis is laid on how close the points are to the two edges of the fitted corner. In the projected 2D plane, we can find the maximum and minimum projections, which specify the boundaries in one direction for all points, and the distance vectors to these two boundaries record all the points' distances, so that the closer boundary as well as the corresponding distance vector can be determined. The distance vector of the other direction is defined in the same way. Then we calculate the closeness score, defined as Σi 1/di, with di being the i-th point's distance to its closer boundary. By maximizing this score over all candidate orientations, we finally work out the best square fitting according to the closeness criterion.

IV-C. Corner Points and Landmark Map
After fitting the square clusters into rectangles, we acquire the four edge lines of each rectangle. By calculating their intersections, we obtain four corner coordinates, which will be directly utilized in the localization stage. These corners are stored in a digital map in the form of a vector. The perception of square structures in real time is carried out in the same way.
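The search-based fitting and corner computation can be sketched as follows, assuming a NumPy array of 2D cluster points. The angular step and the distance floor d0 of the closeness score are illustrative choices, not values from [10].

```python
import numpy as np

def closeness(c, d0=0.01):
    """Closeness score for one direction: sum of inverse distances of the
    projected points to their closer boundary, floored at d0 to stay bounded."""
    d = np.minimum(c.max() - c, c - c.min())
    return np.sum(1.0 / np.maximum(d, d0))

def fit_rectangle(points, angle_step=np.deg2rad(1.0)):
    """Search-based rectangle fitting with the closeness criterion.

    Iterates candidate headings in [0, 90 deg), scores the bounding rectangle
    at each heading, and returns the 4 corners of the best-scoring rectangle.
    """
    best = max(np.arange(0.0, np.pi / 2, angle_step),
               key=lambda t: closeness(points @ np.array([np.cos(t), np.sin(t)]))
                           + closeness(points @ np.array([-np.sin(t), np.cos(t)])))
    # Recompute the boundaries at the best heading and intersect the 4 edges.
    e1 = np.array([np.cos(best), np.sin(best)])
    e2 = np.array([-np.sin(best), np.cos(best)])
    c1, c2 = points @ e1, points @ e2
    return np.array([a * e1 + b * e2
                     for a in (c1.min(), c1.max())
                     for b in (c2.min(), c2.max())])
```

Since e1 and e2 are orthonormal, each corner is recovered directly from the extreme projections, which is equivalent to intersecting the four fitted edge lines.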
To construct a digital map for positioning, the ground-truth coordinates of the square structures need to be obtained. For an indoor parking lot, we can refer to the layout design, where the exact position of every pillar is explicit. For an outdoor parking lot, it is also feasible to generate the map with the aid of high-precision GPS and a LiDAR, using the algorithm presented above. It is noted that the relative positions of the landmarks need to differ to some extent, so that they can be distinguished in the localization stage. In the parking lot our robot is to work in, the newly placed charging piles are arranged non-equidistantly. In addition, if the current environment of a parking lot does not meet the positioning requirements, square pillars can be set up artificially as landmarks for robot localization, for example hanging from the ceiling or standing on the ground.
V. Localization
The objective of the localization of the parking robot is to deduce a continuous and precise estimate of its global pose in the parking lot. All forms of perception systems inevitably have inherent measurement inaccuracies; the accurate state of the observed system cannot be recovered directly from sensors such as IMU, GPS or LiDAR, especially when low cost is one of the system design goals. Hence, it is indispensable to employ stochastic approaches to generate a precise pose approximation for the automated robot. In our work, Monte Carlo Localization is implemented, in which a set of particles at time k and their associated weights are employed to maintain the pose estimate. The detailed implementation of the filter, and how it is associated with the previously extracted corners, is described in the following.
V-A. Particle State and Initialization
The state of each particle is represented in Universal Transverse Mercator (UTM) coordinates and comprises three components: easting x, northing y, and an orientation angle θ towards the east axis. If the application scenario is limited to a specific parking lot, we can narrow down the scope by setting the reference frame to a similar but smaller UTM cell using the same coordinate definition. The state vector of particle i at time step k is therefore defined as s_k^i = (x_k^i, y_k^i, θ_k^i).
To perform Monte Carlo Localization, all particles should first be distributed around the configuration space within the parking lot. For an outdoor parking lot, a normal GPS receiver gives the estimate of the particle origin, while for an indoor parking lot, initialization areas are set to provide the initial position of the particles, due to the absence of GPS signals indoors. With an initial pose estimate, N particles are generated, whose poses are drawn from normal distributions. The variance of each normal distribution is determined by the uncertainty of the initial pose.
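A minimal sketch of this initialization step; the particle count and standard deviations below are illustrative, not values from the paper.

```python
import random

def init_particles(x0, y0, theta0, n=500, sigma=(1.5, 1.5, 0.05)):
    """Sample n particles around the initial pose estimate (x0, y0, theta0).

    sigma holds the standard deviations for easting, northing and heading,
    reflecting the uncertainty of the initial fix.
    """
    return [(random.gauss(x0, sigma[0]),
             random.gauss(y0, sigma[1]),
             random.gauss(theta0, sigma[2]))
            for _ in range(n)]
```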
V-B. Motion Update
Before an update of the particle weights is performed, the hypothesized robot state should be predicted to account for the motion of the robot within the time span Δt. The odometry or IMU equipped on the parking robot measures the speed v and yaw rate ω of the robot, after which the prediction over Δt is carried out using the bicycle model. The well-known dead-reckoning formulation is shown as follows:
x_{k+1} = x_k + (v_k/ω_k)·[sin(θ_k + ω_k·Δt) − sin θ_k]   (1)

y_{k+1} = y_k + (v_k/ω_k)·[cos θ_k − cos(θ_k + ω_k·Δt)]   (2)

θ_{k+1} = θ_k + ω_k·Δt   (3)

x_{k+1} = x_k + v_k·Δt·cos θ_k,  y_{k+1} = y_k + v_k·Δt·sin θ_k  (for ω_k ≈ 0)   (4)
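The prediction step can be sketched as a per-particle update with additive Gaussian noise; the noise magnitudes are illustrative.

```python
import math
import random

def motion_update(x, y, theta, v, omega, dt, std=(0.05, 0.05, 0.01)):
    """Propagate one particle with bicycle-model dead reckoning plus noise.

    v and omega are the measured speed and yaw rate over the time span dt;
    std holds illustrative noise standard deviations for (x, y, theta).
    """
    if abs(omega) > 1e-6:
        # turning: integrate along the circular arc
        x += v / omega * (math.sin(theta + omega * dt) - math.sin(theta))
        y += v / omega * (math.cos(theta) - math.cos(theta + omega * dt))
    else:
        # nearly straight: degenerate linear update
        x += v * dt * math.cos(theta)
        y += v * dt * math.sin(theta)
    theta += omega * dt
    return (x + random.gauss(0, std[0]),
            y + random.gauss(0, std[1]),
            theta + random.gauss(0, std[2]))
```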
V-C. Measurement Update
Whenever a frame of LiDAR data is obtained, we merge it with the previous few frames to enhance the performance of the square landmark extraction described in Section IV. After the extraction is carried out, a vector of corner coordinates is acquired as the observed landmarks. The observed landmarks are then associated with the ground-truth values stored in the offline map. At this stage, filtering is performed to exclude unreliable pairs. According to the matching result, the Euclidean distances between the landmark coordinates and the particles' observations are calculated to determine the updated weight of each particle.
V-C.1 Landmark Association
In order to associate the observed landmarks with the digital map, a coordinate system conversion needs to be performed first. For each particle with pose (x_p, y_p, θ_p), we can convert a given observation coordinate (x_o, y_o) from the robot coordinate system to the map coordinate system:
x_m = x_p + x_o·cos θ_p − y_o·sin θ_p,  y_m = y_p + x_o·sin θ_p + y_o·cos θ_p   (5)
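This frame transform can be sketched as a small helper; the names are illustrative.

```python
import math

def obs_to_map(ox, oy, px, py, ptheta):
    """Rotate an observed corner (ox, oy) from the robot frame by the particle
    heading ptheta and translate by the particle position (px, py)."""
    mx = px + ox * math.cos(ptheta) - oy * math.sin(ptheta)
    my = py + ox * math.sin(ptheta) + oy * math.cos(ptheta)
    return mx, my
```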
For each converted observation coordinate, the closest landmark coordinate in the digital map is to be found. Here we apply a 2D nearest neighbor search based on a k-d tree. First, a k-d tree is constructed from the converted observations. Thereafter, ground-truth landmarks are selected from the digital map according to the previous pose estimate and the given LiDAR range. We then traverse all the selected landmarks (ground truth) to find the nearest node in the k-d tree (observations). In the end, each closest pair shares the same landmark ID.
To filter out wrong matches, the Euclidean distance of every pair of associated landmarks has to be less than a threshold. In this way, corners that were incorrectly detected during the landmark perception are excluded. In addition, each selected landmark is associated with at most one observation, in keeping with the scale requirement for landmarks when constructing the digital map.
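The association and filtering steps can be sketched as follows. A brute-force nearest neighbor search stands in for the k-d tree, which only changes efficiency, not the result; the distance threshold is illustrative.

```python
def associate(observations, landmarks, max_dist=0.5):
    """Greedily match each observed corner to its nearest map landmark.

    Pairs farther apart than max_dist are rejected, and each map landmark
    may be claimed by at most one observation (the closest one wins).
    Returns a list of (obs_index, landmark_index) pairs.
    """
    candidates = []
    for i, (ox, oy) in enumerate(observations):
        for j, (lx, ly) in enumerate(landmarks):
            d = ((ox - lx) ** 2 + (oy - ly) ** 2) ** 0.5
            if d <= max_dist:
                candidates.append((d, i, j))
    candidates.sort()                       # closest pairs first
    pairs, used_obs, used_lm = [], set(), set()
    for d, i, j in candidates:
        if i not in used_obs and j not in used_lm:
            pairs.append((i, j))
            used_obs.add(i)
            used_lm.add(j)
    return pairs
```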
An example illustrates this filtering process. The bottom left picture in Fig. 9 shows the LiDAR point cloud and its rectangle fitting result, and the right one shows the ground-truth corner positions (in green) and the estimated positions (in red). The big rectangle is actually a stairwell in the parking lot. This situation occurs when the robot senses part of the stairwell's outer wall, but a pillar happens to stand between the robot and the wall, blocking the laser and leaving a gap in the point cloud of the wall (shown in Fig. 9(a)). In this case, the L-Shape fitting method yields two rectangles, which means 8 corners are included in the observation set. After the filtering process, however, only the two points closest to the top left and bottom left corners are retained, due to the distance threshold (corresponding to the radius of the green marks in Fig. 9(b)). Two points (in the bottom left) might meet the distance requirement simultaneously, but only the closest one is kept; consequently this landmark is not counted twice in the weight update step.
V-C.2 Particle Weight Update
In this stage, we calculate how well the set of converted observations matches the stored landmark map. Note that the association has already been established between these two sets. Let Δx_j and Δy_j represent the lateral and longitudinal Euclidean distances between converted observation j and its associated landmark, and let σ_x and σ_y be the uncertainties of the landmark measurement in the two directions. Similar to the approach in [14], the likelihood of each pair of transformed observation and landmark is calculated as follows:
P_j = (1 / (2π·σ_x·σ_y)) · exp(−q_j)   (6)

with:

q_j = Δx_j² / (2σ_x²) + Δy_j² / (2σ_y²)   (7)

The weight of particle i is now defined as the product over all associated pairs:

w_i = ∏_j P_j   (8)
In this way, each particle acquires a weight, which indicates the reliability of the particle pose and serves as the basis of resampling.
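A minimal sketch of the per-particle weight computation, assuming the association pairs produced in the previous step; the σ values are illustrative.

```python
import math

def particle_weight(pairs, obs, landmarks, sx=0.1, sy=0.1):
    """Weight of one particle: product of per-pair Gaussian likelihoods.

    pairs holds (obs_index, landmark_index) associations; obs are the
    observations already converted to the map frame for this particle.
    """
    w = 1.0
    for i, j in pairs:
        dx = obs[i][0] - landmarks[j][0]
        dy = obs[i][1] - landmarks[j][1]
        q = dx * dx / (2 * sx * sx) + dy * dy / (2 * sy * sy)
        w *= math.exp(-q) / (2 * math.pi * sx * sy)
    return w
```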


TABLE I: Parameters of the LiDAR

Detection Range: 30 m, 270°
Accuracy:
Angular Resolution: 0.25°
V-D. Resampling and State Estimation
A discrete probability distribution is generated according to the weight vector of the particles. Based on this distribution, a random integer j on the interval [1, N] is selected, representing the j-th particle; the probability of each individual integer being selected is defined as:

P(j) = w_j / Σ_{m=1}^{N} w_m   (9)

Once a random integer j is produced, a copy of the j-th particle is added to the resampled particle list. After N iterations, a new particle list is generated according to the weights. In this way, the particle set moves in the direction of increasing weight during every measurement step. At the end of each measurement step, we select the particle with the highest weight as the estimate of the robot pose. Thus the self-localization of the robot is accomplished.
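The resampling and state estimation steps can be sketched as follows; random.choices performs the weighted draw with replacement described above.

```python
import random

def resample(particles, weights):
    """Draw N new particles with replacement, proportional to weight."""
    return random.choices(particles, weights=weights, k=len(particles))

def best_estimate(particles, weights):
    """Pose estimate: the particle carrying the highest weight."""
    return max(zip(weights, particles))[1]
```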
VI. Evaluation
In this section, the localization system is tested in a simulation environment built on VREP. The configuration of the simulation environment and the employed sensors is illustrated in Subsection A, after which the test results are shown and analyzed.
VI-A. Simulation Environment
An indoor parking lot is built in the robot simulator VREP, with 23 pillars and 18 charging piles in it. In order to get closer to a real parking scene, the layout of the parking lot is designed based on an existing parking lot, with a number of parked vehicles in it. Fig. 12 shows the simulation environment.
All sensor data are provided by the simulation environment. For position initialization, it offers an initial position with noise, the accuracy of which is 5 meters for the position and 2° for the heading. IMU data are updated at 100 Hz, LiDAR data at 5 Hz. The parameters of the LiDAR are presented in Table I.
VI-B. Experimental Results
The estimated trajectory and the ground-truth route are shown in Fig. 14. The total length of the driving route measures 1500 meters. In Fig. 18, the error for position as well as heading during the entire path is shown. The average lateral, longitudinal and orientation errors are listed in Table II. It is evident that the localization using square landmarks is able to maintain a position error below 0.2 m and an orientation error below 1° during the whole drive.
TABLE II: Average localization errors

Longitudinal error: 0.098 m
Lateral error: 0.085 m
Orientation error: 0.46°



VII. Conclusion
In this paper, a localization approach based on square-like landmarks and a digital map is presented. We demonstrate that square-like structures can serve as reliable landmarks for autonomous parking robot cars in a parking lot. The achieved accuracy is sufficient for the parking robot to locate itself and carry out its task. The next step is to conduct a field test of the approach with our parking robot; applying the method to a commercial vehicle is a further goal. It is also practical to combine other landmarks, including poles, lanes or pavements, with square landmarks, making it a widely applicable localization method.
References
[1] F. Zafari, A. Gkelias, and K. Leung, “A survey of indoor localization systems and technologies,” arXiv preprint arXiv:1709.01015, 2017.
[2] S. Se, D. G. Lowe, and J. J. Little, “Vision-based global localization and mapping for mobile robots,” IEEE Transactions on Robotics, vol. 21, no. 3, pp. 364–375, 2005.
[3] S.-J. Lee, G. Tewolde, J. Lim, and J. Kwon, “QR-code based localization for indoor mobile robot with validation using a 3D optical tracking instrument,” in Advanced Intelligent Mechatronics (AIM), 2015 IEEE International Conference on, pp. 965–970, IEEE, 2015.
[4] H. Zhang, C. Zhang, W. Yang, and C.-Y. Chen, “Localization and navigation using QR code for mobile robot in indoor environment,” in Robotics and Biomimetics (ROBIO), 2015 IEEE International Conference on, pp. 2501–2506, IEEE, 2015.
[5] P. Nazemzadeh, D. Fontanelli, D. Macii, and L. Palopoli, “Indoor localization of mobile robots through QR code detection and dead reckoning data fusion,” IEEE/ASME Transactions on Mechatronics, vol. 22, no. 6, pp. 2588–2599, 2017.
[6] W.-T. Huang, C.-L. Tsai, and H.-Y. Lin, “Mobile robot localization using ceiling landmarks and images captured from an RGB-D camera,” in Advanced Intelligent Mechatronics (AIM), 2012 IEEE/ASME International Conference on, pp. 855–860, IEEE, 2012.
[7] A. Rusdinar and S. Kim, “Vision-based indoor localization using artificial landmarks and natural features on the ceiling with optical flow and a Kalman filter,” International Journal of Fuzzy Logic and Intelligent Systems, vol. 13, no. 2, pp. 133–139, 2013.
[8] M. Sefati, M. Daum, B. Sondermann, K. D. Kreisköther, and A. Kampker, “Improving vehicle localization using semantic and pole-like landmarks,” in Intelligent Vehicles Symposium (IV), 2017 IEEE, pp. 13–19, IEEE, 2017.
[9] R. Spangenberg, D. Goehring, and R. Rojas, “Pole-based localization for autonomous vehicles in urban scenarios,” in Intelligent Robots and Systems (IROS), 2016 IEEE/RSJ International Conference on, pp. 2161–2166, IEEE, 2016.
[10] X. Zhang, W. Xu, C. Dong, and J. M. Dolan, “Efficient L-shape fitting for vehicle detection using laser scanners,” in 2017 IEEE Intelligent Vehicles Symposium (IV), 2017.
[11] S. Thrun, D. Fox, W. Burgard, and F. Dellaert, “Robust Monte Carlo localization for mobile robots,” tech. rep., Carnegie Mellon University, Pittsburgh, PA, Dept. of Computer Science, 2000.
[12] M. Ester, H.-P. Kriegel, J. Sander, X. Xu, et al., “A density-based algorithm for discovering clusters in large spatial databases with noise,” in KDD, vol. 96, pp. 226–231, 1996.
[13] K. M. Kumar and A. R. M. Reddy, “A fast DBSCAN clustering algorithm by accelerating neighbor searching using groups method,” Pattern Recognition, vol. 58, pp. 39–48, 2016.
[14] H. Deusch, J. Wiest, S. Reuter, D. Nuss, M. Fritzsche, and K. Dietmayer, “Multi-sensor self-localization based on maximally stable extremal regions,” in Intelligent Vehicles Symposium Proceedings, 2014 IEEE, pp. 555–560, IEEE, 2014.