Evolution of SLAM: Toward the Robust-Perception of Autonomy
Simultaneous localisation and mapping (SLAM) is the problem of an autonomous robot constructing or updating a map of an unknown, unstructured environment while simultaneously estimating its own pose within it. The trend towards self-driving vehicles has driven the development of robust SLAM techniques over the last 30 years. The problem is addressed by using a single sensor or a sensor array (ultrasonic sensors, LiDAR, cameras, Kinect RGB-D) together with sensor-fusion techniques to perform the perception step, and the sensing method is chosen according to the characteristics of the environment from which features must be extracted. We then discuss classical filter-based approaches, global optimisation approaches that are popular in visual SLAM, and convolutional neural network-based methods such as deep-learning SLAM, considering how each overcomes the localisation and mapping issues. The algorithms are compared with respect to robustness and scalability in long-term autonomy, performance, and other new directions. This paper reviews the previously published work with a critical perspective, from sensors to algorithm development, while discussing open challenges and new research frontiers.
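For reference, the full SLAM problem surveyed here is commonly stated as estimating the posterior over the robot trajectory and the map, while the graph-based (global optimisation) methods mentioned above solve the equivalent maximum-a-posteriori problem as a nonlinear least-squares objective over constraints. This is the standard textbook formulation and is not spelled out in the abstract itself:

\[
p(x_{0:t}, m \mid z_{1:t}, u_{1:t})
\qquad\text{(filtering / smoothing view)}
\]
\[
X^{*} = \arg\min_{X} \sum_{(i,j)} \left\| z_{ij} - h(x_i, x_j) \right\|^{2}_{\Omega_{ij}}
\qquad\text{(pose-graph optimisation view)}
\]

Here \(x_{0:t}\) is the trajectory, \(m\) the map, \(z\) the measurements, \(u\) the controls, \(h(\cdot)\) the measurement model for a constraint between poses \(x_i\) and \(x_j\), and \(\Omega_{ij}\) its information matrix.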