Visual Navigation Among Humans with Optimal Control as a Supervisor

03/20/2020
by Varun Tolani, et al.

Real-world navigation requires robots to operate in unfamiliar, dynamic environments, sharing spaces with humans. Navigating around humans is especially difficult because it requires predicting their future motion, which can be quite challenging. We propose a novel framework for navigation around humans that combines learning-based perception with model-based optimal control. Specifically, we train a Convolutional Neural Network (CNN)-based perception module that maps the robot's visual input to a waypoint, or next desired state. This waypoint is then fed into planning and control modules that convey the robot safely and efficiently to the goal. To train the CNN, we contribute a photo-realistic benchmarking dataset for autonomous robot navigation in the presence of humans; the CNN is trained using supervised learning on images rendered from this dataset. The proposed framework learns to anticipate and react to people's motion based only on a monocular RGB image, without explicitly predicting future human motion. Our method generalizes well to unseen buildings and humans in both simulation and real-world environments. Furthermore, our experiments demonstrate that combining model-based control and learning leads to better and more data-efficient navigational behaviors than a purely learning-based approach. Videos describing our approach and experiments are available on the project website.
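The abstract describes a pipeline in which a CNN maps the robot's monocular RGB observation (together with the goal) to a waypoint, which a model-based planner and controller then track. The sketch below only illustrates that interface under stated assumptions: `WaypointCNN`, `plan_spline`, the network architecture, and the linear-interpolation stand-in for the planner are hypothetical simplifications, not the authors' released implementation.

```python
# Minimal sketch of the perception -> planning -> control interface described
# in the abstract. Module names, network shapes, and the toy planner are
# illustrative assumptions, not the paper's actual implementation.
import torch
import torch.nn as nn
import numpy as np


class WaypointCNN(nn.Module):
    """Maps a monocular RGB image plus the goal (in the robot frame) to a
    waypoint (x, y, theta): the next desired state handed to the planner."""

    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Image features are concatenated with the relative goal (x, y, theta).
        self.head = nn.Sequential(
            nn.Linear(64 + 3, 128), nn.ReLU(),
            nn.Linear(128, 3),  # predicted waypoint (x, y, theta)
        )

    def forward(self, image, goal):
        feat = self.backbone(image)
        return self.head(torch.cat([feat, goal], dim=1))


def plan_spline(start, waypoint, horizon=20):
    """Toy planner: interpolate a state trajectory from the current pose to
    the predicted waypoint. The paper uses model-based planning and control to
    track the waypoint; this stand-in only conveys the interface."""
    start, waypoint = np.asarray(start, dtype=float), np.asarray(waypoint, dtype=float)
    alphas = np.linspace(0.0, 1.0, horizon)[:, None]
    return (1.0 - alphas) * start + alphas * waypoint


if __name__ == "__main__":
    model = WaypointCNN().eval()
    image = torch.rand(1, 3, 224, 224)       # monocular RGB observation
    goal = torch.tensor([[5.0, 2.0, 0.0]])   # goal expressed in the robot frame
    with torch.no_grad():
        waypoint = model(image, goal)[0].numpy()
    trajectory = plan_spline(start=[0.0, 0.0, 0.0], waypoint=waypoint)
    print("predicted waypoint:", waypoint)
    print("planned trajectory shape:", trajectory.shape)
```

Consistent with the title and the abstract's mention of supervised learning, the waypoint labels would come from a model-based optimal control expert, and training would regress the CNN's output against those expert waypoints (e.g., with an L2 loss); the exact supervision procedure is described in the full paper.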


