Learning Vision-and-Language Navigation from YouTube Videos

07/22/2023
by   Kunyang Lin, et al.

Vision-and-language navigation (VLN) requires an embodied agent to navigate realistic 3D environments by following natural language instructions. Existing VLN methods suffer from training on small-scale environments or on unreasonable path-instruction datasets, which limits their generalization to unseen environments. YouTube hosts massive numbers of house tour videos that provide abundant real navigation experiences and layout information, yet these videos have not previously been explored for VLN. In this paper, we propose to learn an agent from such videos by creating a large-scale dataset of reasonable path-instruction pairs from house tour videos and pre-training the agent on it. Doing so requires tackling two challenges: automatically constructing path-instruction pairs, and exploiting real layout knowledge from raw, unlabeled videos. To address them, we first leverage an entropy-based method to construct the nodes of a path trajectory. We then propose an action-aware generator that produces instructions from unlabeled trajectories. Finally, we devise a trajectory judgment pretext task that encourages the agent to mine layout knowledge. Experimental results show that our method achieves state-of-the-art performance on two popular benchmarks (R2R and REVERIE). Code is available at https://github.com/JeremyLinky/YouTube-VLN
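The three steps above can be made concrete with small sketches. First, a minimal sketch of the entropy-based node construction, assuming nodes are chosen as frames whose intensity-histogram entropy is a local maximum; this selection rule and the helper names (`frame_entropy`, `select_nodes`) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def frame_entropy(gray: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy of a grayscale frame's intensity histogram."""
    hist, _ = np.histogram(gray, bins=bins, range=(0, 256))
    p = hist[hist > 0] / hist.sum()  # drop empty bins before taking logs
    return float(-(p * np.log2(p)).sum())

def select_nodes(frames: list, min_gap: int = 30) -> list:
    """Keep frames that are local entropy maxima, spaced at least
    `min_gap` frames apart, as trajectory nodes (hypothetical rule)."""
    ents = [frame_entropy(f) for f in frames]
    nodes, last = [], -min_gap
    for i in range(1, len(ents) - 1):
        if ents[i - 1] <= ents[i] >= ents[i + 1] and i - last >= min_gap:
            nodes.append(i)
            last = i
    return nodes
```

Second, one plausible reading of the trajectory judgment pretext task: a binary classifier that must tell real node orderings from shuffled ones, so that solving the task forces the agent to internalize layout regularities. The model and data helper below are a hedged sketch under that assumption; the paper's task may differ in detail.

```python
import random
import torch
import torch.nn as nn

class TrajectoryJudge(nn.Module):
    """Binary judge over a sequence of node features (hypothetical sketch)."""
    def __init__(self, dim: int = 512):
        super().__init__()
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, 2)

    def forward(self, nodes: torch.Tensor) -> torch.Tensor:
        # nodes: (batch, num_nodes, dim) -> logits over {shuffled, real}
        _, h = self.encoder(nodes)
        return self.head(h.squeeze(0))

def make_example(node_feats: torch.Tensor, p_neg: float = 0.5):
    """Label 1 for the real node order, 0 for a shuffled (implausible) one."""
    if random.random() < p_neg:
        perm = torch.randperm(node_feats.size(0))
        return node_feats[perm], 0
    return node_feats, 1
```

In pre-training, examples from `make_example` would be scored by `TrajectoryJudge` with a standard cross-entropy loss alongside the agent's other objectives; how the paper actually combines its losses is not shown here.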

