R^3LIVE++: A Robust, Real-time, Radiance reconstruction package with a tightly-coupled LiDAR-Inertial-Visual state Estimator

09/08/2022
by Jiarong Lin, et al.

Simultaneous localization and mapping (SLAM) is crucial for autonomous robots (e.g., self-driving cars, autonomous drones), 3D mapping systems, and AR/VR applications. This work proposes a novel LiDAR-inertial-visual fusion framework, termed R^3LIVE++, that achieves robust and accurate state estimation while simultaneously reconstructing a radiance map on the fly. R^3LIVE++ consists of a LiDAR-inertial odometry (LIO) subsystem and a visual-inertial odometry (VIO) subsystem, both running in real time. The LIO subsystem utilizes measurements from a LiDAR to reconstruct the geometric structure (i.e., the positions of 3D points), while the VIO subsystem simultaneously recovers the radiance information of this geometric structure from the input images. R^3LIVE++ builds on R^3LIVE and further improves localization and mapping accuracy by accounting for the camera's photometric calibration (e.g., the non-linear response function and lens vignetting) and by estimating the camera exposure time online. We conduct extensive experiments on both public and private datasets to compare the proposed system against other state-of-the-art SLAM systems. Quantitative and qualitative results show that our system offers significant improvements over the others in both accuracy and robustness. In addition, to demonstrate the extensibility of our work, we develop several applications based on the reconstructed radiance maps, such as high-dynamic-range (HDR) imaging, virtual environment exploration, and 3D video gaming. Lastly, to share our findings and contribute to the community, we make our code, hardware design, and dataset publicly available on GitHub: github.com/hku-mars/r3live
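For context, the photometric calibration mentioned above is commonly described by the standard image-formation model from the photometric-calibration literature. The sketch below uses generic notation (not necessarily the paper's own symbols): f denotes the non-linear camera response function, V(x) the vignetting factor at pixel x, \tau_k the exposure time of frame k, and L(x) the scene radiance:

    I_k(\mathbf{x}) = f\big(\tau_k \, V(\mathbf{x}) \, L(\mathbf{x})\big)

Inverting this model indicates why calibrating f and V and estimating \tau_k online benefits radiance reconstruction: the radiance can then be recovered directly from each observed intensity I_k(x) as

    L(\mathbf{x}) = \frac{f^{-1}\big(I_k(\mathbf{x})\big)}{\tau_k \, V(\mathbf{x})}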
