Visual Representation Learning for Preference-Aware Path Planning

09/18/2021
by Kavan Singh Sikand, et al.

Autonomous mobile robots deployed in outdoor environments must reason about different types of terrain for both safety (e.g., prefer dirt over mud) and deployer preferences (e.g., prefer dirt paths over flower beds). Most existing solutions to this preference-aware path planning problem use semantic segmentation to classify terrain types from camera images, and then ascribe costs to each type. Unfortunately, such approaches have three key limitations: they 1) require pre-enumeration of the discrete terrain types, 2) are unable to handle hybrid terrain types (e.g., grassy dirt), and 3) require expensive labeled data to train visual semantic segmentation. We introduce Visual Representation Learning for Preference-Aware Path Planning (VRL-PAP), an alternative approach that overcomes all three limitations: VRL-PAP leverages unlabeled human demonstrations of navigation to autonomously generate triplets for learning visual representations of terrain that are viewpoint-invariant and encode terrain types in a continuous representation space. The learned representations are then used, along with the same unlabeled human navigation demonstrations, to learn a mapping from the representation space to terrain costs. At run time, VRL-PAP maps from images to representations and then from representations to costs to perform preference-aware path planning. We present empirical results from challenging outdoor settings that demonstrate that VRL-PAP 1) successfully picks paths that reflect the demonstrated preferences, 2) executes comparably to geometric navigation with a highly detailed, manually annotated map (without requiring such annotations), and 3) generalizes to novel terrain types with minimal additional unlabeled demonstrations.
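The abstract describes two learned stages: a triplet-trained encoder that embeds terrain image patches in a continuous representation space, and a cost function learned on top of those embeddings from the same unlabeled demonstrations. The sketch below illustrates that two-stage idea in PyTorch; it is not the authors' released code, and the network shapes, helper names (TerrainEncoder, CostHead), and the margin-ranking form of the cost loss are assumptions made purely for illustration.

# Minimal illustrative sketch of VRL-PAP's two learning stages (not the
# authors' implementation). Assumed setup: a triplet pairs an anchor patch
# with a positive patch of the same terrain seen from a different viewpoint
# along a demonstration, plus a negative patch of terrain the demonstrator
# avoided.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TerrainEncoder(nn.Module):
    """Embeds an RGB terrain patch into a continuous representation space."""
    def __init__(self, embed_dim: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(32, embed_dim)

    def forward(self, patch: torch.Tensor) -> torch.Tensor:
        return self.proj(self.features(patch).flatten(1))

class CostHead(nn.Module):
    """Maps a terrain embedding to a positive scalar traversal cost."""
    def __init__(self, embed_dim: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim, 16), nn.ReLU(),
            nn.Linear(16, 1), nn.Softplus(),  # Softplus keeps costs > 0
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)

def representation_step(encoder, opt, anchor, positive, negative, margin=0.5):
    """Stage 1: triplet loss pulls same-terrain patches (seen from
    different viewpoints) together and pushes avoided-terrain patches
    apart, yielding a viewpoint-invariant embedding."""
    loss = F.triplet_margin_loss(
        encoder(anchor), encoder(positive), encoder(negative), margin=margin)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

def cost_step(encoder, cost_head, opt, on_path, off_path, margin=1.0):
    """Stage 2: with the encoder frozen, learn costs so that terrain the
    demonstrator drove on is cheaper than nearby terrain they avoided
    (a margin-ranking formulation, assumed here for illustration)."""
    with torch.no_grad():
        z_on, z_off = encoder(on_path), encoder(off_path)
    c_on = cost_head(z_on).squeeze(1)
    c_off = cost_head(z_off).squeeze(1)
    loss = F.margin_ranking_loss(c_off, c_on, torch.ones_like(c_on),
                                 margin=margin)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

At run time, each terrain patch along a candidate path would be encoded and priced by the cost head, and the planner would sum those costs over candidate paths so that, e.g., a dirt path is preferred over a flower bed.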


Related Research

09/18/2023
Wait, That Feels Familiar: Learning to Extrapolate Human Preferences for Preference Aligned Path Planning
Autonomous mobility tasks such as last-mile delivery require reasoning ab...

03/18/2021
S2P2: Self-Supervised Goal-Directed Path Planning Using RGB-D Data for Robotic Wheelchairs
Path planning is a fundamental capability for autonomous navigation of r...

03/01/2022
Embodied Active Domain Adaptation for Semantic Segmentation via Informative Path Planning
This work presents an embodied agent that can adapt its semantic segment...

03/01/2018
Learning Human-Aware Path Planning with Fully Convolutional Networks
This work presents an approach to learn path planning for robot social n...

08/30/2023
Predicting Energy Consumption and Traversal Time of Ground Robots for Outdoor Navigation on Multiple Types of Terrain
The outdoor navigation capabilities of ground robots have improved signi...

12/21/2017
Unifying Map and Landmark Based Representations for Visual Navigation
This work presents a formulation for visual navigation that unifies map...

06/15/2023
Path Generation for Wheeled Robots Autonomous Navigation on Vegetated Terrain
Wheeled robot navigation has been widely used in urban environments, but...
