Mapping Navigation Instructions to Continuous Control Actions with Position-Visitation Prediction

11/10/2018
by Valts Blukis, et al.

We propose an approach for mapping natural language instructions and raw observations to continuous control of a quadcopter drone. Our model predicts interpretable position-visitation distributions indicating where the agent should go during execution and where it should stop, and uses the predicted distributions to select the actions to execute. This two-step model decomposition allows for simple and efficient training using a combination of supervised learning and imitation learning. We evaluate our approach with a realistic drone simulator, and demonstrate absolute task-completion accuracy improvements of 16.85% over two state-of-the-art instruction-following methods.
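The two-step decomposition lends itself to a compact illustration. The sketch below is not the authors' implementation: all names (predict_visitation, select_action, GRID), the uniform placeholder distributions, and the 2D velocity command are hypothetical stand-ins for the learned components. It only shows the structure the abstract describes: a first stage that predicts a visitation distribution and a stop distribution over positions, and a second stage that converts those distributions into a continuous control action.

```python
# Minimal sketch of a two-stage position-visitation controller.
# Assumptions (not from the paper): a GRID x GRID overhead position grid,
# uniform placeholder distributions instead of learned networks, and a
# 2D velocity command instead of full quadcopter control.
import numpy as np

GRID = 32  # side length of the hypothetical position grid


def predict_visitation(observation: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Stage 1 (supervised learning in the paper's training scheme):
    map an observation to two distributions over grid positions --
    p_visit: where the agent should travel, and p_stop: where it
    should halt. Uniform placeholders stand in for a learned model."""
    p_visit = np.full((GRID, GRID), 1.0 / GRID**2)
    p_stop = np.full((GRID, GRID), 1.0 / GRID**2)
    return p_visit, p_stop


def select_action(p_visit, p_stop, position, stop_threshold=0.5):
    """Stage 2 (imitation learning in the paper's training scheme):
    turn the predicted distributions into a continuous command.
    This toy version steers toward the mode of p_visit and stops
    once the stop mass at the current cell exceeds a threshold."""
    if p_stop[tuple(position)] > stop_threshold:
        return np.zeros(2), True  # zero velocity, stop flag raised
    goal = np.unravel_index(np.argmax(p_visit), p_visit.shape)
    direction = np.array(goal, dtype=float) - position
    norm = np.linalg.norm(direction)
    velocity = direction / norm if norm > 0 else np.zeros(2)
    return velocity, False


obs = np.zeros((GRID, GRID, 3))  # dummy image observation
p_visit, p_stop = predict_visitation(obs)
velocity, done = select_action(p_visit, p_stop, position=np.array([0, 0]))
print(velocity, done)
```

Splitting the model this way means each stage can be trained against a signal that is easy to obtain: the distribution predictor from annotated demonstrations with supervised learning, and the action selector by imitating an expert that already knows the target distributions.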

Related research

Learning to Map Natural Language Instructions to Physical Quadcopter Control using Simulated Flight (10/21/2019)
We propose a joint simulation and real-world learning framework for mapp...

Situated Mapping of Sequential Instructions to Actions with Single-step Reward Observation (05/25/2018)
We propose a learning approach for mapping context-dependent sequential ...

Mapping Instructions to Actions in 3D Environments with Visual Goal Prediction (09/04/2018)
We propose to decompose instruction execution to goal prediction and act...

Learn by Observation: Imitation Learning for Drone Patrolling from Videos of A Human Navigator (08/30/2020)
We present an imitation learning method for autonomous drone patrolling ...

Following High-level Navigation Instructions on a Simulated Quadcopter with Imitation Learning (05/31/2018)
We introduce a method for following high-level navigation instructions b...

Imitating Latent Policies from Observation (05/21/2018)
We describe a novel approach to imitation learning that infers latent po...

Bridging the Gap Between Learning in Discrete and Continuous Environments for Vision-and-Language Navigation (03/05/2022)
Most existing works in vision-and-language navigation (VLN) focus on eit...
