Piecewise Linear De-skewing for LiDAR Inertial Odometry

Light detection and ranging (LiDAR) measurements taken on a moving agent can suffer from motion distortion caused by the simultaneous rotation of the LiDAR and the fast movement of the agent. An accurate piecewise linear de-skewing algorithm is proposed to correct these motion distortions for LiDAR inertial odometry (LIO) using the high-frequency motion information provided by an inertial measurement unit (IMU). Experimental results show that the proposed algorithm can be adopted to improve the performance of existing LIO algorithms, especially in cases of fast movement.


I Introduction

On-line localization of mobile agents plays an important role in autonomous navigation and has been intensively studied for many years [13, 7, 3, 16, 4]. Since a light detection and ranging (LiDAR) rangefinder provides highly reliable and precise distance measurements of the surrounding environment and is insensitive to ambient lighting and optical texture in a scene, it is one of the most widely adopted sensors for indoor navigation [29, 18, 6]. The pose of a rangefinder can be estimated by matching two different scans [21]. The Iterative Closest Point (ICP) algorithm is one of the dominant solutions to the scan matching problem [5, 1]. Existing ICP algorithms [5, 1, 27, 23, 8] assume the rotation of a laser beam to be the only motion of the scanning device. However, a rangefinder mounted on an autonomous vehicle or an unmanned aerial vehicle (UAV) usually moves while scanning a sweep due to the movement of the agent. This introduces motion distortions among the points in the sweep, especially when the agent is moving fast.
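As context for the scan matching step mentioned above, a minimal point-to-point ICP iteration can be sketched as follows. This is an illustrative sketch, not the implementation of any algorithm cited here; it uses a brute-force nearest-neighbour search and the closed-form SVD (Kabsch) alignment, and all names are ours:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Closed-form (SVD/Kabsch) rigid transform minimizing ||R src + t - dst||."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, mu_d - R @ mu_s

def icp(source, target, iters=30):
    """Point-to-point ICP with brute-force nearest-neighbour matching."""
    R_tot, t_tot = np.eye(3), np.zeros(3)
    pts = source.copy()
    for _ in range(iters):
        # Match every source point to its nearest target point.
        d = np.linalg.norm(pts[:, None, :] - target[None, :, :], axis=2)
        matched = target[np.argmin(d, axis=1)]
        # Align the matched pairs and accumulate the transform.
        R, t = best_rigid_transform(pts, matched)
        pts = pts @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot
```

Because the target is treated as static, this sketch inherits exactly the assumption criticized in the text: every point of the sweep is taken as if it were observed from a single rangefinder pose.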

Distortion caused by the movement of the agent affects the accuracy of LiDAR odometry (LO). This problem was addressed by Bezet and Cherfaoui [2] in terms of time error correction, under the assumption that the sensor measures the distance to the same object at the same angle in two successive frames. The algorithm in [2] is therefore not applicable if the agent rotates and moves fast. Zhang et al. [34] proposed an interesting LO algorithm that computes the transformation matrix from the first point to the last point of a sweep and linearly interpolates a transformation matrix to correct the motion distortion of each point in the sweep. Accurate LO requires knowledge of the LiDAR pose during continuous laser ranging to correct the motion distortion; fortunately, this knowledge can be provided by an IMU. The linear acceleration from the IMU was also utilized in [34] to address the nonlinear motion of the agent. Experimental results indicate that the performance of the ICP can indeed be improved using the IMU measurements. Ye et al. [32] proposed an interesting de-skewing algorithm using the IMU measurements: the transformation matrix from the first point to the last point of a sweep is first computed using all the IMU measurements during the sweep, and a transformation matrix is then linearly interpolated to correct the motion distortion of each point in the sweep. As in [34], the movement of the agent is assumed to be linear during the sweep. This assumption does not hold if the motion of the agent is complicated; for example, a sweep can be scanned while the agent goes straight and then turns. It is thus desirable to develop a new de-skewing algorithm without the linear assumption required by [34, 32]. Moreover, highly accurate pre-integrated IMU models were proposed recently [13, 14, 9, 10]. All these models are derived for the moving agent, with the IMU pre-integration considered in the world reference frame. On the other hand, the skewing distortion of 3D points occurs in the body frame. Therefore, the IMU pre-integration of 3D points in the body frame is worth studying when the de-skewing of LiDAR scans is studied.
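The interpolation-based correction of [34, 32] described above can be sketched as follows. This is a simplified sketch under the stated linear-motion assumption; the rotation is interpolated by scaling the end-of-sweep rotation vector (equivalent to slerp from identity), and all names are illustrative:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def linear_deskew(points, timestamps, R_end, p_end, t0, t1):
    """Correct each point by the fraction of the sweep elapsed at its
    timestamp, assuming the motion over [t0, t1] is linear.
    (R_end, p_end): pose change from the first to the last point."""
    rotvec_end = Rotation.from_matrix(R_end).as_rotvec()
    out = np.empty_like(points)
    for i, (pt, ts) in enumerate(zip(points, timestamps)):
        s = (ts - t0) / (t1 - t0)                        # elapsed fraction of the sweep
        R_s = Rotation.from_rotvec(s * rotvec_end).as_matrix()
        out[i] = R_s @ pt + s * p_end                    # interpolated correction
    return out
```

If the agent's angular velocity or acceleration changes during the sweep (e.g. going straight and then turning), the interpolated pose no longer matches the true pose at each timestamp, which is exactly the limitation the proposed piecewise linear algorithm removes.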

In this paper, an accurate piecewise linear de-skewing algorithm is proposed to correct the motion distortions caused by the movement of the agent. The proposed algorithm is based on the observation that, due to its high sampling rate, the IMU has the potential to provide the knowledge of the LiDAR pose during continuous laser ranging. An elegant discrete model is first derived for a static 3D point of the environment in the body frame under the same assumption as in [13, 14, 9], i.e., the linear acceleration and the angular velocity in the IMU frame are constant between two successive IMU measurements. The derivations in the body frame are much simpler than those in the world reference frame [13, 14, 9]. The new model is then applied to derive a closed-form IMU integration model for the static 3D point in the body frame. The proposed IMU motion integration model differs from the models in [13, 14, 9] in that only the proposed model is on the static 3D point. Finally, the proposed IMU integration model is adopted to correct the motion distortions caused by the movement of the agent. One point in the sweep is selected as the reference point, and each point in the sweep is corrected using a transformation matrix computed from all the IMU measurements between the reference point and that point. Clearly, the proposed algorithm does not assume the motion of the agent to be linear during the sweep, as required by [34, 32]. The corrected scans serve as the inputs of an existing LIO such as [32]. The proposed motion integration model is integrated with the LiDAR factor via a joint optimization with the mean measurement, the covariance matrix and the Jacobian matrix of the proposed model in [14]. As such, the proposed piecewise linear de-skewing algorithm and the LIO are seamlessly integrated to reduce the motion distortion caused by the movement of the agent.
The proposed de-skewing algorithm is complementary to existing LIO algorithms such as [32] and can be used to improve their robustness. Overall, the two major contributions of this paper are: 1) an elegant closed-form IMU integration model in the body frame for the static 3D point using the IMU measurements, and 2) a piecewise linear de-skewing algorithm for correcting the motion distortion of the LiDAR which can be adopted by any existing LIO algorithm.

The rest of the paper is organized as follows. A literature review of related works is provided in Section II. Section III introduces an IMU integration model for the 3D point, and Section IV introduces the proposed de-skewing algorithm. Extensive results are given in Section V to verify the proposed algorithm. Finally, some concluding remarks are provided in Section VI.

II Related Works

The existing ICP algorithms [5, 1, 27, 23, 8, 25] assume the rotation of a laser beam to be the only motion of a scanning device. Unfortunately, this is not always true [15]. An example is illustrated in Figure 1. The black line in Figure 1 represents an environment and the movement of the rangefinder is indicated by the green arrow while the red points are the raw scan data. Obviously, the set of the points is distorted due to the movement of the agent.

Fig. 1: Motion distortions in a sweep of LiDAR. Black points represent the given environment, red points a distorted scan, and blue points a motion-corrected scan.

Accurate LO requires knowledge of the LiDAR pose during continuous laser ranging to correct the motion distortion, and this knowledge can be provided by an IMU. The LiDAR and IMU measurements can be loosely coupled. Zhang et al. [34, 35] calculated the orientation of the robot pose from the IMU measurements in the LiDAR odometry and mapping (LOAM) algorithm. LOAM considers the IMU measurements only as a prior factor; as a result, the IMU measurements are not used in the optimization process. Tang et al. [31] proposed to fuse the LiDAR and IMU measurements using an extended Kalman filter (EKF). However, this algorithm was proposed for 2D scenarios and is not applicable to a 3D environment. Lynen et al. [22] proposed a robust multi-sensor fusion approach which is applicable to the 3D environment; the LiDAR and IMU measurements were again fused using the EKF. Generally, the loosely coupled paradigms are simpler but less accurate.

The LiDAR and IMU measurements can also be tightly coupled. Soloviev et al. [30] proposed an algorithm for 2D LiDAR scanning and matching in which IMU orientation measurements were used to compensate for the tilt of the LiDAR; moreover, the drift in the IMU measurements was corrected and updated using a Kalman filter (KF). Another tightly coupled algorithm was proposed by Hemann et al. [12]. It achieves good and robust results over long travelling periods, but it can fail without prior information about the environment. Inspired by the work in [26], Ye et al. [32] proposed a tightly coupled 3D fusion algorithm for the LiDAR and IMU measurements. One common way to address the motion distortion is a continuous-time representation, as in [24, 17]. Another technique is the linear interpolation approach [34], in which the movement of the agent is assumed to be linear during the sweep; its accuracy can be hindered by this linear motion assumption. It is thus desirable to design a new de-skewing algorithm that compensates for the motion distortion using the high-frequency IMU measurements.

III IMU Pre-integration of 3D Points

In this section, a new discrete motion pre-integration model is introduced for static 3D points of the environment in the LiDAR frame. As in [13, 14], the proposed model assumes that both the linear acceleration and the angular velocity in the IMU frame are constant between two successive IMU measurements. It is worth noting that the linear acceleration and the angular velocity in the LiDAR frame can be derived by multiplying the readouts of the IMU by the fixed rotation matrix between the IMU and the LiDAR. All notations of the proposed model are listed in Table I.

linear velocity of the agent in the LiDAR frame
rotation matrix from the world frame to the LiDAR frame
linear acceleration of the agent in the LiDAR frame
gravitational acceleration
angular velocity of the agent in the LiDAR frame
the i-th 3D point in the LiDAR frame
the i-th synchronized 3D point in the LiDAR frame
duration between two successive IMU measurements
TABLE I: Notations used in the proposed motion pre-integration model

The dynamics of the th static 3D point of the environment is represented as

(1)

where the matrix is given by

(2)

is an identity matrix, and is defined as

(3)

Since both and are constant in the interval between the two successive IMU measurements, the system (1) is a time-invariant linear system in this interval. It can be derived from equation (1) that

(4)

Notice that the matrix can be decomposed as

(5)

where the matrices and are

(6)

Since , it follows that [33]

(7)

Two exponential matrices and can be easily computed as [33]

(8)

where is . The matrix is defined as

(9)

and and .

Subsequently, can be computed as

(10)

Due to the new states in the equation (1), the derivation of is much simpler than the corresponding derivations in [13, 14, 9]. A discrete motion model between the two successive IMU measurements can be derived for the th point as

(11)

where , and are , and , respectively. The matrices , and are defined as

(12)
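The derivation above rests on the fact that, with constant inputs between two IMU samples, the point dynamics form a time-invariant linear system whose one-step propagation is given exactly by a matrix exponential. A generic numerical sketch of such an exact step is shown below; it uses the standard augmented-matrix trick rather than the paper's closed-form decomposition (5)-(10), and all names are illustrative:

```python
import numpy as np
from scipy.linalg import expm

def exact_discrete_step(A, b, x, dt):
    """Exact propagation of x' = A x + b over [0, dt] for constant A and b.
    The affine system is embedded in an augmented matrix so that a single
    matrix exponential yields both the state transition e^{A dt} and the
    integrated input term (int_0^dt e^{A s} ds) b."""
    n = A.shape[0]
    M = np.zeros((n + 1, n + 1))
    M[:n, :n] = A
    M[:n, n] = b
    Phi = expm(M * dt)
    return Phi[:n, :n] @ x + Phi[:n, n]
```

For example, a double integrator x'' = a recovers x = x0 + v0*dt + a*dt^2/2 exactly, which mirrors the constant-input assumption used between successive IMU measurements in (1)-(12).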

IV The Proposed Piecewise Linear De-skewing

Consider two time instances and . It should be pointed out that and in the interval between the th and th IMU measurements are different from those in the interval between the th and th IMU measurements. Therefore, the dynamics of the 3D point is a switched system between the two time instances [19, 20].

By accumulating from to and using the following two equations

(13)

the discrete motion pre-integration model between the two time instances can be derived for the th 3D point as

(14)

where

is a translation vector,

is a rotation matrix. , and are given as

(15)

and for all ’s, and the matrix is

(16)

It is worth noting that the proposed IMU motion integration model is different from the existing IMU integration models in [10, 14] which are on the motion of the moving agent rather than the static 3D points of the environment.

The proposed IMU integration model is applied to design a piecewise linear de-skewing algorithm for the 3D points in the th sweep. For simplicity, the transformation matrix from to is represented by

(17)

and the transformation matrix from to is

(18)

Suppose that the first point and the last point of the th sweep are scanned at the th IMU and the th IMU. The th point is scanned at . The synchronization point is selected as the th point in the th sweep. The corresponding IMU is the th one. Consider the following two cases:

Case 1: is smaller than . Further assume that there are IMU measurements in the interval . It can be easily derived that

(19)

where the matrix is

(20)

Case 2: is larger than . Further suppose that there are IMU measurements in the interval . Similarly, it can be derived that

(21)

where the matrix is

(22)

Using the above equation, all the points in the th sweep can be aligned with respect to the synchronization point of the th sweep. As such, all the motion distortions in the th sweep due to the movement of the agent can be removed. Clearly, the th point is corrected by using all the IMU measurements between the first (or last) point and the point. The proposed de-skewing algorithm does not assume the motion of the agent to be linear during the sweep as required by the de-skewing algorithms in [34, 32].
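The piecewise correction above can be illustrated with the following simplified sketch. It chains one pose increment per IMU interval between the synchronization point and the scanned point, instead of a single sweep-wide interpolation. The gravity-compensated inputs, the function names and the frame bookkeeping are our illustrative assumptions, not the paper's exact formulation (14)-(22):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def deskew_point(point, imu_samples, v0, dt):
    """Map `point` (scanned len(imu_samples) IMU intervals after the
    synchronization instant) into the synchronization frame.
    imu_samples: (omega, accel) pairs, assumed constant over each short
    interval, expressed in the current body frame and gravity-compensated."""
    R = np.eye(3)                # body orientation relative to the sync frame
    p = np.zeros(3)              # body position in the sync frame
    v = np.asarray(v0, float)    # body velocity in the sync frame
    for omega, accel in imu_samples:
        a = R @ np.asarray(accel, float)       # acceleration in the sync frame
        p = p + v * dt + 0.5 * a * dt ** 2     # constant-input update per interval
        v = v + a * dt
        R = R @ Rotation.from_rotvec(np.asarray(omega, float) * dt).as_matrix()
    return R @ point + p         # express the scanned point in the sync frame
```

A point scanned before the synchronization point (Case 1) is handled analogously by inverting the accumulated transform. Because one increment is used per IMU interval, a sweep that goes straight and then turns is handled without any sweep-wide linearity assumption.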

Let and be the biases of and in the interval . Since the biases of and affect the performance of the motion integration model, the values of and will be replaced by and , respectively [14]. The corrected scans serve as the inputs of an existing LIO such as [32]. The IMU integration model (14) and (15) can be adopted to derive an IMU integration model of the moving agent which is the same as the model in [16]. The IMU integration model of the moving agent is then integrated with the LiDAR factor via the joint optimization in [32] with the mean measurement, the covariance matrix and the Jacobian matrix of the proposed model in [14]. Due to the space limitation, the details are not repeated in this paper.

V Experimental Results

In this section, the proposed de-skewing algorithm is compared with the algorithms in [34], [32], [26], [10], and [28]. The proposed algorithm is first implemented on top of the open-source code of LIO_mapping (https://github.com/hyye/lio-mapping) in [32].

The six datasets with different motion speeds and movements in [32] are used to compare all these algorithms. The datasets in [32] were recorded indoors using a Velodyne VLP-16 LiDAR and a Xsens MTi-100 IMU. The sampling rates of the LiDAR and the IMU are 5 Hz and 400 Hz, respectively. The toolbox in [36] is used to align the results with the ground truth from a motion capture system with a sampling rate of 100 Hz. All the illustrated results are averaged over 20 runs due to the variation in the output estimation. The results are obtained using a desktop with 32 GB of RAM.
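The alignment-then-RMSE evaluation used throughout this section can be sketched as follows. This is a simplified stand-in for the toolbox in [36], aligning the first estimated positions to the ground truth with a Kabsch fit; names are illustrative:

```python
import numpy as np

def aligned_position_rmse(est, gt, n_align=50):
    """Rigidly align the first n_align estimated positions to the ground
    truth, then report the position RMSE over the whole trajectory."""
    A, B = est[:n_align], gt[:n_align]
    mu_a, mu_b = A.mean(axis=0), B.mean(axis=0)
    H = (A - mu_a).T @ (B - mu_b)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                     # best-fit rotation (no reflection)
    t = mu_b - R @ mu_a
    err = np.linalg.norm(est @ R.T + t - gt, axis=1)
    return np.sqrt(np.mean(err ** 2))
```

Aligning on only the first poses (rather than the whole trajectory) leaves the drift accumulated later in the run visible in the reported RMSE.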

V-A Comparison of Different Motion Integration Models

Sequence  Proposed Model                          Model in [32]                           Difference  Percentage
          Pos RMSE  std    Ori RMSE  std         Pos RMSE  std    Ori RMSE  std          [cm]        [%]
          [cm]      [cm]   [deg]     [deg]       [cm]      [cm]   [deg]     [deg]
Fast 1    11.96     4.37   0.1260    0.0971      16.25     4.79   0.1245    0.1043       4.29        26.40
Fast 2    34.60     17.42  0.2361    0.0775      37.68     16.64  0.1401    0.0645       3.08        8.17
Med 1     26.17     14.07  0.0852    0.0301      27.05     14.75  0.0785    0.0318       0.88        3.25
Med 2     31.08     13.57  0.3481    0.3080      32.81     14.72  0.3698    0.3080       1.73        5.27
Slow 1    45.56     23.93  0.3778    0.1560      45.25     23.47  0.3749    0.1529       -0.31       -0.68
Slow 2    46.53     22.92  0.2984    0.1975      44.75     19.15  0.2950    0.2069       -1.78       -3.98
TABLE II: IMU Odometry for Different Real Sequences with Different Motion Speeds (position RMSE and standard deviation in cm, orientation RMSE and standard deviation in deg; the difference and percentage are on the position RMSE)
Sequence  Proposed LIO                            LIO in [32]                             Difference  Percentage  Total Distance  Linear Velocity
          Pos RMSE  std    Ori RMSE  std         Pos RMSE  std    Ori RMSE  std          [cm]        [%]         [m]             [m/s]
          [cm]      [cm]   [deg]     [deg]       [cm]      [cm]   [deg]     [deg]
Fast 1    9.13      4.46   0.1781    0.1602      11.79     5.50   0.1507    0.1327       2.66        22.56       31.59           0.8904
Fast 2    11.94     6.09   0.2322    0.2098      14.47     7.63   0.2034    0.1747       2.53        17.48       33.12           0.9404
Med 1     24.91     13.18  0.0787    0.0582      25.87     13.21  0.0781    0.0568       0.96        3.71        38.76           0.7047
Med 2     22.06     11.25  0.3496    0.3302      24.39     11.87  0.3583    0.3358       2.33        9.55        40.33           0.8528
Slow 1    17.69     9.64   0.2870    0.2744      18.01     9.91   0.2839    0.2707       0.32        1.78        23.02           0.6511
Slow 2    12.21     6.70   0.3251    0.3139      14.17     7.72   0.3252    0.3131       1.96        13.83       24.43           0.6950
TABLE III: Comparison of the Proposed LIO Algorithm with the LIO Algorithm in [32] without Metric Maps (RMSE on position, orientation and their standard deviations for different real sequences with different motion speeds)

This subsection compares the closed-form motion integration of the moving agent with the IMU integration model in [10, 26], because LIO_mapping uses the discrete IMU model in [10, 26]. Note that the predicted IMU integration in [32] was used in this evaluation. The IMU integration period is .

The results in Table II are obtained using a sampling rate of 100 Hz for both the IMU odometry and the ground truth, as is the default in the open-source code. The IMU odometry has been aligned with the ground truth using the first 50 estimated poses with the trajectory evaluation toolbox in [36]. The results show that the proposed model outperforms the state-of-the-art one under complicated motion conditions. For example, the proposed model outperforms the state-of-the-art one on the Fast 1 and Fast 2 sequences by 26.40% and 8.17%, respectively. The reason is that the proposed model takes into consideration the highly dynamic change of the motion. On the contrary, the state-of-the-art one slightly outperforms the proposed model on the Slow 1 and Slow 2 sequences, because the high sampling rate of the IMU mitigates its discretization effect. These results are consistent with the evaluation in [13, 14].

V-B Evaluation of De-skewing and LIO without Metric Maps

In this subsection, the proposed de-skewing algorithm and the closed-form motion integration model are evaluated using the LIO framework without enabling any metric map. The proposed de-skewing algorithm (19)-(20) is also implemented in the open-source code of [32]. The proposed motion integration model is integrated with the LiDAR factor via a joint optimization with the mean measurement, the covariance matrix and the Jacobian matrix of the proposed model in [14]. Note that all the estimated states are aligned with the ground truth using the first 50 estimated poses with the toolbox in [36]. The sampling rate of the LiDAR is 5 Hz, and all other settings in the experiment are kept the same for a fair comparison with the LIO algorithm in [32].

As shown in Table III, the proposed LIO algorithm outperforms the state-of-the-art one in [32] by 22.56% and 13.83% on the Fast 1 and Slow 2 sequences, respectively. This is because the proposed motion integration model represents the IMU excitation well and takes into consideration the highly dynamic change between two consecutive IMU measurements for the fast motion. This conclusion is foreseeable and consistent with the results of the IMU odometry test in Table II.

(a) Fast 1
(b) Slow 2
Fig. 2: Estimated trajectories against the ground-truth trajectories
(a) RMSE on Position Error
(b) RMSE on Orientation Error
(c) Absolute Error
Fig. 3: RMSE and position analysis on Fast 1 Sequence for the proposed LIO and the LIO [32]
(a) RMSE on Position Error
(b) RMSE on Orientation Error
(c) Absolute Error
Fig. 4: RMSE and position analysis on Slow 2 Sequence for the proposed LIO and the LIO [32]

In the case of slow motion, the proposed algorithm outperforms the state-of-the-art one in [32] by 1.78% and 13.83% on the Slow 1 and Slow 2 sequences, respectively. These results may seem unexpected because the state-of-the-art motion integration model in [10, 26] outperforms the proposed motion integration model in the IMU odometry, as shown in Table II. The improvement comes from the proposed IMU model, which is tightly coupled with the LiDAR factor, and from the transformation matrix (20) or (22), which is more accurate than the corresponding matrix in the state-of-the-art one. The de-skewing algorithm in [32] first computes the transformation matrix from the first point to the last point, and then linearly interpolates a transformation matrix for each point to be corrected. The interpolated transformation matrix is not accurate if the motion of the agent is not linear.
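The inaccuracy of the interpolated transform under non-linear motion can be seen with a small numeric example: a sweep during which the yaw rate jumps from zero (going straight) to a constant turn. Piecewise integration recovers the true mid-sweep heading, while linearly interpolating the end-of-sweep rotation, as in [34, 32], does not. The numbers here are illustrative, not from the paper's datasets:

```python
import numpy as np

# 100 IMU intervals of 1 ms each: straight for the first half of the
# sweep, then a turn at 1 rad/s for the second half.
dt, n = 0.001, 100
yaw_rate = np.where(np.arange(n) < n // 2, 0.0, 1.0)

yaw = np.concatenate([[0.0], np.cumsum(yaw_rate) * dt])  # piecewise-integrated heading
true_mid = yaw[n // 2]            # true heading at mid-sweep: still 0 rad
end = yaw[-1]                     # heading at the end of the sweep: 0.05 rad
interp_mid = 0.5 * end            # linear interpolation assigns 0.025 rad at mid-sweep

error = abs(interp_mid - true_mid)   # ~0.025 rad of spurious rotation
```

The interpolation attributes half of the turn to the first, straight half of the sweep, so every point scanned there is rotated by a heading error the agent never had.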

Figure 2 shows the aligned estimated trajectories against the ground truth for the Fast 1 and Slow 2 sequences. Clearly, the Fast 1 trajectory is more complicated and has more turning points than the Slow 2 trajectory. The proposed algorithm behaves well on both trajectories. Figures 3 and 4 show the RMSE on position and orientation, and the absolute error analysis in the three dimensions, on the Fast 1 and Slow 2 datasets, respectively. The proposed algorithm performs well in terms of the RMSE over the full trajectory. It is also observed that the difference in state estimation between the proposed algorithm and the state-of-the-art one in [32] increases with the travelling time. Moreover, Table III shows the total travelling distance and linear velocity for each sequence.

Overall, the proposed LIO algorithm outperforms the state-of-the-art one in [32], especially for complicated movements. The proposed model shows reliability and robustness in state estimation.

V-C Evaluation of De-skewing and LIO with Metric Maps

In this subsection, the effect of metric maps on the LIO is studied and discussed. The proposed algorithm is compared with the state-of-the-art one in [32] and LOAM (https://github.com/laboshinl/loam_velodyne) [34]. LOAM is one of the most accurate state estimation algorithms, and it shows highly accurate estimation on the most popular dataset in the LO field, i.e., the KITTI dataset (http://www.cvlibs.net/datasets/kitti/eval_odometry.php) [11], which was recorded using a ground vehicle (GV). It is worth noting that the state estimation using LOAM is done without any assistance from the IMU.

Sequence  Proposed LIO with Metric Map   LIO in [32] with Metric Map             LOAM [34]
          Pos RMSE  std                  Pos RMSE  std    Diff    Pct            Pos RMSE  std     Diff    Pct
          [cm]      [cm]                 [cm]      [cm]   [cm]    [%]            [cm]      [cm]    [cm]    [%]
Fast 1    5.03      2.10                 5.09      2.03   0.06    1.05           155.49    69.37   -       -
Fast 2    9.65      5.75                 9.92      5.75   0.27    2.71           152.81    102.15  -       -
Med 1     10.94     5.32                 10.94     5.24   -       -              203.78    136.82  -       -
Med 2     7.74      6.19                 7.97      6.33   0.23    3.00           158.93    115.31  -       -
Slow 1    6.09      2.77                 5.80      2.67   -0.29   -4.97          15.57     8.24    9.48    60.89
Slow 2    5.13      2.20                 5.32      2.48   0.19    3.57           9.22      5.22    4.09    44.36
TABLE IV: Comparison of the Proposed LIO Algorithm with the LIO Algorithm in [32] with Metric Maps and the LOAM in [34] (RMSE on position for different real sequences with different motion speeds)

Table IV illustrates the RMSE on position over the full trajectory for the proposed algorithm, the LIO_mapping in [32], and LOAM. The results show that LOAM fails to estimate the state in the presence of fast or medium-speed motion; without any assistance from an IMU, LOAM fails under these conditions. The position RMSE of LOAM on the Fast and Med datasets is highlighted in green in Table IV, and the red highlight indicates that LOAM fails to estimate the position of the robot, as shown in Figure 5.

Fig. 5: Estimated trajectory via the LOAM against the ground-truth one on Fast 1 Dataset

The proposed algorithm shows superior performance compared to LOAM under these conditions, by 60.89% on the Slow 1 dataset and by 44.36% on the Slow 2 dataset. This is because 1) the proposed de-skewing algorithm does not assume the motion of the agent to be linear during the sweep, as required by LOAM; and 2) the proposed motion integration model is integrated with the LiDAR factor via the tightly coupled joint optimization in [32]. It should be pointed out that the accuracy of LOAM would be improved if an IMU were deployed as an additional sensor, as mentioned in [35].

The results in Table IV show that the proposed algorithm and the state-of-the-art one in [32] are comparable in the presence of metric maps. This is because the results highly depend on a metric map with salient features.

V-D Further Evaluation of The Proposed De-skewing Algorithm

Recently, an interesting LIO algorithm was proposed in [28]. The de-skewing algorithm in [28] is replaced by the proposed one, and the implementation is on top of the open-source code of LIO-SAM (https://github.com/TixiaoShan/LIO-SAM) using one dataset in [28]. Both GPS and loop closure are disabled to test the effectiveness of the proposed model.

The dataset in [28] was recorded in a park covered by vegetation, using a Velodyne VLP-16 LiDAR and a MicroStrain 3DM-GX5-25 IMU mounted on a UGV. The sampling rates of the LiDAR and the IMU are 10 Hz and 500 Hz, respectively.

Table V shows the root mean square error (RMSE) w.r.t. the ground truth provided by GPS. The RMSE results in Table V do not include the error in the z-direction or the orientation errors, because the ground truth in the park dataset does not include z-direction or orientation measurements. The results show that the proposed algorithm outperforms the state-of-the-art LIO-SAM by 11.53%. Figure 6 shows the estimated trajectories against the ground truth on the park dataset, while Figures 7.(a) and 7.(b) show the RMSE on 2D position and the absolute error analysis in the two dimensions on the park dataset, respectively. The proposed algorithm has a slightly better performance than the state-of-the-art one in terms of the RMSE over the full trajectory.

Sequence  Proposed           LIO-SAM [28]       Difference  Percentage  Total Distance  Linear Velocity
          RMSE [m]  std [m]  RMSE [m]  std [m]  [m]         [%]         [m]             [m/s]
park      25.63     11.56    28.97     13.21    3.34        11.53       656.85          1.1932
TABLE V: RMSE on 2D Position w.r.t. GPS
Fig. 6: Estimated trajectories against the ground-truth trajectories
(a) RMSE on Position Error
(b) Absolute Error
Fig. 7: RMSE and position analysis on Park dataset

VI Concluding Remarks

A novel piecewise linear de-skewing algorithm has been proposed for the LiDAR inertial odometry (LIO) of fast-moving agents using the high-frequency motion information provided by an inertial measurement unit (IMU). The robustness of an LIO algorithm can be enhanced by incorporating the proposed de-skewing algorithm. Experimental results have validated the performance of the proposed de-skewing algorithm. Due to its simplicity, the proposed algorithm is practical for real-world deployment.

Acknowledgement

This research is supported by SERC grant No. 192 25 00049 from the National Robotics Programme (NRP).

References

  • [1] P. J. Besl and N. D. McKay (1992) Method for registration of 3-d shapes. In Sensor fusion IV: control paradigms and data structures, Vol. 1611, pp. 586–606. Cited by: §I, §II.
  • [2] O. Bezet and V. Cherfaoui (2006) Time error correction for laser range scanner data. In 2006 9th International Conference on Information Fusion, pp. 1–7. Cited by: §I.
  • [3] B. Cao, C. Ritter, D. Göhring, and R. Rojas (2020) Accurate localization of autonomous vehicles based on pattern matching and graph-based optimization in urban environments. In 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), pp. 1–6. Cited by: §I.
  • [4] L. J. Chen, J. Henawy, B. B. Kocer, and G. G. L. Seet (2019) Aerial robots on the way to underground: an experimental evaluation of vins-mono on visual-inertial odometry camera. In 2019 International Conference on Data Mining Workshops (ICDMW), pp. 91–96. Cited by: §I.
  • [5] Y. Chen and G. G. Medioni (1992) Object modeling by registration of multiple range images.. Image Vision Comput. 10 (3), pp. 145–155. Cited by: §I, §II.
  • [6] J. F. Chow, B. B. Kocer, J. Henawy, G. Seet, Z. Li, W. Y. Yau, and M. Pratama (2019) Toward underground localization: lidar inertial odometry enabled aerial robot navigation. arXiv preprint arXiv:1910.13085. Cited by: §I.
  • [7] M. Demir and K. Fujimura (2019) Robust localization with low-mounted multiple lidars in urban environments. In 2019 IEEE Intelligent Transportation Systems Conference (ITSC), pp. 3288–3293. Cited by: §I.
  • [8] A. Diosi and L. Kleeman (2007) Fast laser scan matching using polar coordinates. The International Journal of Robotics Research 26 (10), pp. 1125–1153. Cited by: §I, §II.
  • [9] K. Eckenhoff, P. Geneva, and G. Huang (2019) Closed-form preintegration methods for graph-based visual–inertial navigation. The International Journal of Robotics Research 38 (5), pp. 563–586. Cited by: §I, §I, §III.
  • [10] C. Forster, L. Carlone, F. Dellaert, and D. Scaramuzza (2016) On-manifold preintegration for real-time visual–inertial odometry. IEEE Transactions on Robotics 33 (1), pp. 1–21. Cited by: §I, §IV, §V-A, §V-B, §V.
  • [11] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun (2013) Vision meets robotics: the kitti dataset. The International Journal of Robotics Research 32 (11), pp. 1231–1237. Cited by: §V-C.
  • [12] G. Hemann, S. Singh, and M. Kaess (2016) Long-range gps-denied aerial inertial navigation with lidar localization. In 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1659–1666. Cited by: §II.
  • [13] J. Henawy, Z. Li, W. Y. Yau, G. Seet, and K. W. Wan (2019) Accurate imu preintegration using switched linear systems for autonomous systems. In 2019 IEEE Intelligent Transportation Systems Conference (ITSC), pp. 3839–3844. Cited by: §I, §I, §I, §III, §III, §V-A.
  • [14] J. Henawy, Z. Li, W. Yau, and G. Seet (2021) Accurate imu factor using switched linear systems for vio. IEEE Transactions on Industrial Electronics 68 (8), pp. 7199–7208. External Links: Document Cited by: §I, §I, §III, §III, §IV, §IV, §V-A, §V-B.
  • [15] S. Hong, H. Ko, and J. Kim (2010) VICP: velocity updating iterative closest point algorithm. In 2010 IEEE International Conference on Robotics and Automation, pp. 1893–1898. Cited by: §II.
  • [16] E. Javanmardi, M. Javanmardi, Y. Gu, and S. Kamijo (2017) Autonomous vehicle self-localization based on probabilistic planar surface map and multi-channel lidar in urban area. In 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), pp. 1–8. Cited by: §I.
  • [17] C. Le Gentil, T. Vidal-Calleja, and S. Huang (2018) 3D LiDAR-IMU calibration based on upsampled preintegrated measurements for motion distortion correction. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 2149–2155. Cited by: §II.
  • [18] C. Le Gentil, T. Vidal-Calleja, and S. Huang (2019) IN2LAMA: inertial LiDAR localisation and mapping. In 2019 International Conference on Robotics and Automation (ICRA), pp. 6388–6394. Cited by: §I.
  • [19] Z. G. Li, Y. C. Soh, and C. Y. Wen (2001) Robust stability of a class of hybrid nonlinear systems. IEEE Transactions on Automatic Control 46 (6), pp. 897–903. Cited by: §IV.
  • [20] Z. Li, Y. Soh, and C. Wen (2005) Switched and impulsive systems: analysis, design and applications. Vol. 313, Springer Science & Business Media. Cited by: §IV.
  • [21] F. Lu and E. Milios (1997) Robot pose estimation in unknown environments by matching 2D range scans. Journal of Intelligent and Robotic Systems 18 (3), pp. 249–275. Cited by: §I.
  • [22] S. Lynen, M. W. Achtelik, S. Weiss, M. Chli, and R. Siegwart (2013) A robust and modular multi-sensor fusion approach applied to MAV navigation. In 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3923–3929. Cited by: §II.
  • [23] J. Minguez, L. Montesano, and F. Lamiraux (2006) Metric-based iterative closest point scan matching for sensor displacement estimation. IEEE Transactions on Robotics 22 (5), pp. 1047–1054. Cited by: §I, §II.
  • [24] C. Park, P. Moghadam, S. Kim, A. Elfes, C. Fookes, and S. Sridharan (2018) Elastic LiDAR fusion: dense map-centric continuous-time SLAM. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 1206–1213. Cited by: §II.
  • [25] F. Pomerleau, F. Colas, R. Siegwart, and S. Magnenat (2013) Comparing ICP variants on real-world data sets. Autonomous Robots 34 (3), pp. 133–148. Cited by: §II.
  • [26] T. Qin, P. Li, and S. Shen (2018) VINS-Mono: a robust and versatile monocular visual-inertial state estimator. IEEE Transactions on Robotics 34 (4), pp. 1004–1020. Cited by: §II, §V-A, §V-B, §V.
  • [27] S. Rusinkiewicz and M. Levoy (2001) Efficient variants of the ICP algorithm. In Proceedings Third International Conference on 3-D Digital Imaging and Modeling, pp. 145–152. Cited by: §I, §II.
  • [28] T. Shan, B. Englot, D. Meyers, W. Wang, C. Ratti, and D. Rus (2020) LIO-SAM: tightly-coupled LiDAR inertial odometry via smoothing and mapping. arXiv preprint arXiv:2007.00258. Cited by: §V-D, §V-D, TABLE V, §V.
  • [29] T. Shan and B. Englot (2018) LeGO-LOAM: lightweight and ground-optimized LiDAR odometry and mapping on variable terrain. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4758–4765. Cited by: §I.
  • [30] A. Soloviev, D. Bates, and F. Van Graas (2007) Tight coupling of laser scanner and inertial measurements for a fully autonomous relative navigation solution. Navigation 54 (3), pp. 189–205. Cited by: §II.
  • [31] J. Tang, Y. Chen, X. Niu, L. Wang, L. Chen, J. Liu, C. Shi, and J. Hyyppä (2015) LiDAR scan matching aided inertial navigation system in GNSS-denied environments. Sensors 15 (7), pp. 16710–16728. Cited by: §II.
  • [32] H. Ye, Y. Chen, and M. Liu (2019) Tightly coupled 3D LiDAR inertial odometry and mapping. In 2019 International Conference on Robotics and Automation (ICRA), pp. 3144–3150. Cited by: §I, §I, §II, §IV, §IV, Fig. 3, Fig. 4, §V-A, §V-B, §V-B, §V-B, §V-B, §V-B, §V-C, §V-C, §V-C, §V-C, TABLE II, TABLE III, TABLE IV, §V, §V.
  • [33] F. Zhang (2011) Matrix theory: basic results and techniques. Springer Science & Business Media. Cited by: §III, §III.
  • [34] J. Zhang and S. Singh (2014) LOAM: LiDAR odometry and mapping in real-time. In Robotics: Science and Systems, Vol. 2. Cited by: §I, §I, §II, §II, §IV, §V-C, TABLE IV, §V.
  • [35] J. Zhang and S. Singh (2017) Low-drift and real-time LiDAR odometry and mapping. Autonomous Robots 41 (2), pp. 401–416. Cited by: §II, §V-C.
  • [36] Z. Zhang and D. Scaramuzza (2018) A tutorial on quantitative trajectory evaluation for visual (-inertial) odometry. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 7244–7251. Cited by: §V-A, §V-B, §V.