EnvEdit: Environment Editing for Vision-and-Language Navigation

03/29/2022
by   Jialu Li, et al.

In Vision-and-Language Navigation (VLN), an agent must navigate through an environment based on natural language instructions. Because available training data is limited and the diversity of navigation environments is finite, it is challenging for the agent to generalize to new, unseen environments. To address this problem, we propose EnvEdit, a data augmentation method that creates new environments by editing existing ones; the edited environments are then used to train a more generalizable agent. Our augmented environments can differ from the seen environments in three aspects: style, object appearance, and object classes. Training on these edit-augmented environments prevents the agent from overfitting to existing environments and helps it generalize better to new, unseen environments. Empirically, on both the Room-to-Room and the multi-lingual Room-Across-Room datasets, our proposed EnvEdit method achieves significant improvements in all metrics for both pre-trained and non-pre-trained VLN agents, and sets a new state of the art on the test leaderboard. We further ensemble VLN agents trained on different edited environments and show that these editing methods are complementary. Code and data are available at https://github.com/jialuli-luka/EnvEdit.
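To give a rough sense of the style-editing axis described above, the sketch below applies a random channel-wise color transform to a panorama image array. This is only a minimal illustration of the general idea of producing style-varied copies of an environment; it is not the paper's actual method, which relies on learned image editing. The function name `style_edit` and the specific gain/bias ranges are hypothetical choices for this example.

```python
import numpy as np

def style_edit(panorama: np.ndarray, seed: int = 0) -> np.ndarray:
    """Create a style-shifted copy of a panorama (H x W x 3, uint8).

    A random per-channel affine transform (gain and bias) loosely mimics
    changing the environment's color style while keeping its layout intact.
    """
    rng = np.random.default_rng(seed)
    gain = rng.uniform(0.8, 1.2, size=(1, 1, 3))   # per-channel contrast
    bias = rng.uniform(-20.0, 20.0, size=(1, 1, 3))  # per-channel brightness
    edited = panorama.astype(np.float32) * gain + bias
    return np.clip(edited, 0, 255).astype(np.uint8)

# Dummy gray panorama standing in for a real environment view.
pano = np.full((64, 128, 3), 128, dtype=np.uint8)
edited = style_edit(pano, seed=42)  # same scene, shifted color style
```

Because the edit is seeded, each seed yields one reproducible "new" environment, so a training pipeline could generate several style variants per scene and mix them into the agent's training data.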


Related research:

- PanoGen: Text-Conditioned Panoramic Environment Generation for Vision-and-Language Navigation (05/30/2023)
- Masked Path Modeling for Vision-and-Language Navigation (05/23/2023)
- Anticipating the Unseen Discrepancy for Vision and Language Navigation (09/10/2022)
- Visual Room Rearrangement (03/30/2021)
- Counterfactual Vision-and-Language Navigation via Adversarial Path Sampling (11/17/2019)
- Environment-agnostic Multitask Learning for Natural Language Grounded Navigation (03/01/2020)
- Diagnosing the Environment Bias in Vision-and-Language Navigation (05/06/2020)
