The Surprising Effectiveness of Visual Odometry Techniques for Embodied PointGoal Navigation

08/26/2021 ∙ by Xiaoming Zhao, et al.

It is fundamental for personal robots to reliably navigate to a specified goal. To study this task, PointGoal navigation has been introduced in simulated Embodied AI environments. Recent advances solve this PointGoal navigation task with near-perfect accuracy (99.6% success) in photo-realistically simulated environments, assuming noiseless egocentric vision, noiseless actuation, and most importantly, perfect localization. However, under realistic noise models for visual sensors and actuation, and without access to a "GPS and Compass sensor," the 99.6%-success agents drop to a mere 0.3% success. In this work, we demonstrate the surprising effectiveness of visual odometry for the task of PointGoal navigation in this realistic setting, i.e., with realistic noise models for perception and actuation and without access to GPS and Compass sensors. We show that integrating visual odometry techniques into navigation policies improves the state-of-the-art on the popular Habitat PointNav benchmark by a large margin, improving success from 64.5% to 71.7% while executing 6.4 times faster.
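The core idea the abstract describes — substituting a learned visual-odometry estimate for the ground-truth "GPS and Compass sensor" — can be sketched as simple dead-reckoning: a model predicts the agent's per-step egomotion from consecutive egocentric frames, those relative poses are integrated into an episode-frame pose, and the fixed goal is re-expressed in the agent's current frame before being fed to the navigation policy. The sketch below is illustrative only, not the paper's implementation; the egomotion deltas are assumed to come from some hypothetical VO model, and a planar (x, y, heading) state is assumed.

```python
import numpy as np

def integrate_egomotion(pose, delta):
    """Compose a new agent pose from a per-step relative pose estimate.

    pose  = (x, y, theta): position and heading in the episode frame.
    delta = (dx, dy, dtheta): egomotion in the agent's body frame, as would
            be predicted by a visual-odometry model from two consecutive
            egocentric observations (hypothetical model, not shown here).
    """
    x, y, theta = pose
    dx, dy, dtheta = delta
    # Rotate the body-frame translation into the episode frame, then add it.
    x += dx * np.cos(theta) - dy * np.sin(theta)
    y += dx * np.sin(theta) + dy * np.cos(theta)
    return (x, y, theta + dtheta)

def goal_in_agent_frame(pose, goal):
    """Re-express the fixed episode goal relative to the current pose.

    This is the quantity that replaces the ground-truth GPS+Compass
    observation as input to the navigation policy.
    """
    x, y, theta = pose
    gx, gy = goal[0] - x, goal[1] - y
    # Rotate the episode-frame offset into the agent's body frame.
    return (gx * np.cos(-theta) - gy * np.sin(-theta),
            gx * np.sin(-theta) + gy * np.cos(-theta))
```

For example, after one step of 1 m forward and a 90-degree left turn, a goal originally at (1, 1) in the episode frame ends up 1 m straight ahead of the agent, so the policy still receives a consistent goal vector even though no GPS or compass reading was ever used.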
