Contrastive Instruction-Trajectory Learning for Vision-Language Navigation

12/08/2021
by   Xiwen Liang, et al.

The vision-language navigation (VLN) task requires an agent to reach a target under the guidance of a natural language instruction. Previous works learn to navigate step by step following an instruction. However, they may fail to discriminate between the similarities and discrepancies across instruction-trajectory pairs, and they ignore the temporal continuity of sub-instructions. These problems hinder agents from learning distinctive vision-and-language representations, harming the robustness and generalizability of the navigation policy. In this paper, we propose a Contrastive Instruction-Trajectory Learning (CITL) framework that exploits invariance across similar data samples and variance across dissimilar ones to learn distinctive representations for robust navigation. Specifically, we propose: (1) a coarse-grained contrastive learning objective that enhances vision-and-language representations by contrasting the semantics of full trajectory observations and instructions, respectively; (2) a fine-grained contrastive learning objective that perceives instructions by leveraging the temporal information of sub-instructions; (3) a pairwise sample-reweighting mechanism that mines hard samples and thereby mitigates the influence of data-sampling bias in contrastive learning. CITL integrates easily with VLN backbones to form a new learning paradigm and achieves better generalizability in unseen environments. Extensive experiments show that a model trained with CITL surpasses previous state-of-the-art methods on R2R, R4R, and RxR.
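The coarse- and fine-grained objectives are both instances of contrastive learning, and the reweighting mechanism up-weights hard samples. A minimal sketch of the underlying idea, using a generic InfoNCE loss plus a hypothetical hardness-based reweighting (the function names, cosine scoring, and `beta` weighting scheme are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """Generic InfoNCE contrastive loss for a single anchor.

    anchor, positive: (d,) embeddings; negatives: (n, d) embeddings.
    Pulls the positive toward the anchor, pushes negatives away.
    """
    sp = cosine(anchor, positive) / temperature
    sn = np.array([cosine(anchor, n) for n in negatives]) / temperature
    # negative log softmax probability of the positive pair
    return -(sp - np.log(np.exp(sp) + np.exp(sn).sum()))

def reweighted_info_nce(anchor, positive, negatives,
                        temperature=0.1, beta=2.0):
    """Illustrative pairwise reweighting: negatives that are more
    similar to the anchor (hard negatives) receive larger weights."""
    sp = cosine(anchor, positive) / temperature
    sn = np.array([cosine(anchor, n) for n in negatives]) / temperature
    w = np.exp(beta * sn)
    w = w / w.mean()  # normalize so the weights average to 1
    return -(sp - np.log(np.exp(sp) + (w * np.exp(sn)).sum()))
```

Because the weights average to one but concentrate mass on hard negatives, the reweighted loss is never smaller than the plain one, so gradient signal focuses on the pairs the model currently confuses.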


research

04/06/2020
Sub-Instruction Aware Vision-and-Language Navigation
Vision-and-language navigation requires an agent to navigate through a r...

10/18/2022
ULN: Towards Underspecified Vision-and-Language Navigation
Vision-and-Language Navigation (VLN) is a task to guide an embodied agen...

09/01/2023
Language-Conditioned Change-point Detection to Identify Sub-Tasks in Robotics Domains
In this work, we present an approach to identify sub-tasks within a demo...

02/13/2023
Actional Atomic-Concept Learning for Demystifying Vision-Language Navigation
Vision-Language Navigation (VLN) is a challenging task which requires an...

07/23/2021
Adversarial Reinforced Instruction Attacker for Robust Vision-Language Navigation
Language instruction plays an essential role in the natural language gro...

07/03/2019
Chasing Ghosts: Instruction Following as Bayesian State Tracking
A visually-grounded navigation instruction can be interpreted as a seque...

07/22/2023
Learning Vision-and-Language Navigation from YouTube Videos
Vision-and-language navigation (VLN) requires an embodied agent to navig...
