Polyline Based Generative Navigable Space Segmentation for Autonomous Visual Navigation

10/29/2021
by Zheng Chen, et al.

Detecting navigable space is a fundamental capability for mobile robots navigating in unknown or unmapped environments. In this work, we treat visual navigable space segmentation as a scene decomposition problem and propose Polyline Segmentation Variational AutoEncoder Networks (PSV-Nets), a representation-learning-based framework that enables robots to learn navigable space segmentation in an unsupervised manner. Current segmentation techniques rely heavily on supervised learning strategies, which demand large amounts of pixel-level annotated images. In contrast, the proposed framework leverages a generative model, a Variational AutoEncoder (VAE), together with an AutoEncoder (AE), to learn in an unsupervised way a polyline representation that compactly outlines the boundary of the desired navigable space. We also propose a visual receding horizon planning method that uses the learned navigable space and a Scaled Euclidean Distance Field (SEDF) to achieve autonomous navigation without an explicit map. Through extensive experiments, we validate that PSV-Nets can learn the visual navigable space with high accuracy, even without a single label. We further show that the predictions of PSV-Nets can be improved with a small number of labels (if available), and can significantly outperform state-of-the-art fully supervised segmentation methods.
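To make the polyline representation concrete, the sketch below shows one plausible way such a boundary could be decoded into a segmentation mask: each image column stores a single boundary row, and everything below the boundary (closer to the camera) is marked navigable. This is a minimal NumPy illustration under stated assumptions, not the paper's actual decoder; the function name `polyline_to_mask` and the column-wise parameterization are hypothetical.

```python
import numpy as np

def polyline_to_mask(boundary_y, height):
    """Rasterize a column-wise boundary polyline into a binary
    navigable-space mask (hypothetical decoding, for illustration).

    boundary_y : sequence of length W, boundary row index per image column
    height     : image height H
    Returns an (H, W) uint8 mask, where 1 = navigable (at or below the
    boundary row, i.e. the ground region near the camera).
    """
    boundary_y = np.asarray(boundary_y)
    rows = np.arange(height)[:, None]               # (H, 1) row indices
    mask = (rows >= boundary_y[None, :]).astype(np.uint8)  # broadcast to (H, W)
    return mask

# Tiny example: a 4x5 image with boundary rows [2, 1, 3, 0, 2]
mask = polyline_to_mask([2, 1, 3, 0, 2], height=4)
```

Because the boundary is a single row per column, a W-vertex polyline replaces a full H x W pixel-wise labeling, which is what makes the representation compact enough for unsupervised learning.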

