ViNL: Visual Navigation and Locomotion Over Obstacles

10/26/2022
by Simar Kareer et al.

We present Visual Navigation and Locomotion over obstacles (ViNL), which enables a quadrupedal robot to navigate unseen apartments while stepping over small obstacles that lie in its path (e.g., shoes, toys, cables), similar to how humans and pets lift their feet over objects as they walk. ViNL consists of: (1) a visual navigation policy that outputs linear and angular velocity commands that guide the robot to a goal coordinate in unfamiliar indoor environments; and (2) a visual locomotion policy that controls the robot's joints to avoid stepping on obstacles while following the provided velocity commands. Both policies are entirely "model-free", i.e., sensors-to-actions neural networks trained end-to-end. The two are trained independently in two entirely different simulators and then seamlessly co-deployed by feeding the velocity commands from the navigator to the locomotor, entirely "zero-shot" (without any co-training). While prior works have developed learning methods for visual navigation or visual locomotion, to the best of our knowledge, this is the first fully learned approach that leverages vision to accomplish both (1) intelligent navigation in new environments, and (2) intelligent visual locomotion that aims to traverse cluttered environments without disturbing obstacles. On the task of navigation to distant goals in unknown environments, ViNL using just egocentric vision significantly outperforms prior work on robust locomotion using privileged terrain maps (+32.8% success and -4.42 collisions per meter). Additionally, we ablate our locomotion policy to show that each aspect of our approach helps reduce obstacle collisions. Videos and code at http://www.joannetruong.com/projects/vinl.html
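Because the two policies communicate only through a two-dimensional velocity command, the zero-shot co-deployment amounts to chaining two forward passes at test time. The PyTorch sketch below illustrates that interface; the module names, observation shapes, and 12-joint action space are illustrative assumptions for a generic quadruped, not the paper's released architecture.

import torch
import torch.nn as nn

class NavPolicy(nn.Module):
    """Egocentric depth + polar goal -> (linear, angular) velocity command."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128), nn.ReLU())
        self.head = nn.Linear(128 + 2, 2)

    def forward(self, depth, goal):
        feat = self.encoder(depth)
        return torch.tanh(self.head(torch.cat([feat, goal], dim=-1)))

class LocoPolicy(nn.Module):
    """Proprioception + egocentric depth + commanded velocity -> 12 joint targets."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128), nn.ReLU())
        self.head = nn.Linear(128 + 48 + 2, 12)

    def forward(self, proprio, depth, cmd_vel):
        feat = self.encoder(depth)
        return self.head(torch.cat([feat, proprio, cmd_vel], dim=-1))

nav, loco = NavPolicy(), LocoPolicy()  # in practice, load independently trained weights

# One control step: the navigator picks a velocity, the locomotor follows it.
depth = torch.rand(1, 64, 64)       # egocentric depth image (shape is an assumption)
goal = torch.tensor([[3.0, 0.5]])   # goal as (distance, heading) in the robot frame
proprio = torch.rand(1, 48)         # joint positions/velocities, base state, etc.

cmd_vel = nav(depth, goal)                      # (v, w) command: the only interface
joint_targets = loco(proprio, depth, cmd_vel)   # step over clutter while tracking (v, w)

Because the locomotor conditions only on the command, not on the navigator's internals, either policy can be retrained or swapped independently, which is what makes the zero-shot pairing across two different simulators possible.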


