Robust Visual Teach and Repeat for UGVs Using 3D Semantic Maps

09/21/2021
by Mohammad Mahdavian, et al.

In this paper, we propose a Visual Teach and Repeat (VTR) algorithm using semantic landmarks extracted from environmental objects for ground robots with fixed-mount monocular cameras. The proposed algorithm is robust to changes in the starting pose of the camera/robot, where a pose is defined as the planar position plus the orientation around the vertical axis. VTR consists of a teach phase, in which a robot moves along a prescribed path, and a repeat phase, in which the robot tries to repeat the same path starting from the same or a different pose. Most available VTR algorithms are pose dependent and cannot perform well in the repeat phase when starting from an initial pose far from that of the teach phase. To achieve more robust pose independence, during the teach phase we collect the camera poses and the 3D point clouds of the environment using ORB-SLAM. We also detect objects in the environment using YOLOv3. We then combine the two outputs to build a 3D semantic map of the environment consisting of the 3D positions of the objects and the robot path. In the repeat phase, we relocalize the robot based on the detected objects and the stored semantic map. The robot is then able to move toward the teach path and repeat it in both forward and backward directions. The results show that our algorithm is highly robust with respect to pose variations as well as environmental alterations. Our code and data are available at the following GitHub page: https://github.com/mmahdavian/semantic_visual_teach_repeat
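The repeat phase described above hinges on recovering the planar pose offset (x, y, yaw) between the robot's current view and the stored semantic map, using matched object landmarks. A minimal sketch of that relocalization step, assuming the detected objects have already been matched to map objects (e.g., by class label) and using a 2D Kabsch/Umeyama rigid alignment; the function name and the choice of alignment method are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def estimate_planar_pose(map_pts, obs_pts):
    """Estimate the planar rigid transform mapping object positions
    observed in the robot frame onto the stored semantic-map frame.

    map_pts, obs_pts: (N, 2) arrays of matched object x-y positions
    (N >= 2, not all collinear). Returns (R, t) with R a 2x2 rotation
    and t a 2-vector such that map_pts ~= obs_pts @ R.T + t.
    """
    mu_m = map_pts.mean(axis=0)
    mu_o = obs_pts.mean(axis=0)
    # Cross-covariance of the centered point sets (2D Kabsch).
    H = (obs_pts - mu_o).T @ (map_pts - mu_m)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection solution.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = mu_m - R @ mu_o
    return R, t
```

The recovered yaw offset is `np.arctan2(R[1, 0], R[0, 0])`, and `t` gives the planar translation the robot must account for before rejoining the teach path. In practice one would wrap this in a RANSAC loop to reject object mismatches.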

research  07/17/2018
Wheeled Robots Path Planning and Tracking System Based on Monocular Visual SLAM
Warehouse logistics robots will work in different warehouse environments...

research  04/01/2020
Monocular Camera Localization in Prior LiDAR Maps with 2D-3D Line Correspondences
Light-weight camera localization in existing maps is essential for visio...

research  05/17/2023
TextSLAM: Visual SLAM with Semantic Planar Text Features
We propose a novel visual SLAM method that integrates text objects tight...

research  09/16/2019
Where are the Keys? -- Learning Object-Centric Navigation Policies on Semantic Maps with Graph Convolutional Networks
Emerging object-based SLAM algorithms can build a graph representation o...

research  01/24/2018
UAV Visual Teach and Repeat Using Only Semantic Object Features
We demonstrate the use of semantic object detections as robust features ...

research  01/17/2023
Position prediction using disturbance observer for planar pushing
The position and the orientation of a rigid body object pushed by a robo...

research  11/14/2017
Navigation without localisation: reliable teach and repeat based on the convergence theorem
We present a novel concept for teach-and-repeat visual navigation. The p...
