Language and Visual Entity Relationship Graph for Agent Navigation

10/19/2020
by Yicong Hong, et al.

Vision-and-Language Navigation (VLN) requires an agent to navigate in a real-world environment following natural language instructions. From both the textual and visual perspectives, we find that the relationships among the scene, its objects, and directional clues are essential for the agent to interpret complex instructions and correctly perceive the environment. To capture and utilize these relationships, we propose a novel Language and Visual Entity Relationship Graph for modelling the inter-modal relationships between text and vision, and the intra-modal relationships among visual entities. We propose a message passing algorithm for propagating information between language elements and visual entities in the graph, which we then combine to determine the next action to take. Experiments show that by taking advantage of the relationships we are able to improve over the state-of-the-art. On the Room-to-Room (R2R) benchmark, our method achieves the new best performance on the test unseen split with a success rate weighted by path length (SPL) of 52%. On the Room-for-Room (R4R) dataset, our method significantly improves on the previous best result of 13% success rate weighted by normalized Dynamic Time Warping (SDTW). Code is available at: https://github.com/YicongHong/Entity-Graph-VLN.

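The approach builds a graph whose nodes hold the instruction (language) features and the visual entities named in the abstract (scene, objects, and direction), connected by inter-modal and intra-modal edges along which messages are passed before predicting the next action. The sketch below illustrates what one such message-passing round could look like in PyTorch. It is a simplified illustration, not the authors' released implementation (see the repository linked above); the module names, the attention-based inter-modal message, and the GRU-style node update are assumptions made for this example.

```python
import torch
import torch.nn as nn


class EntityRelationGraphStep(nn.Module):
    """One illustrative round of language-visual message passing.

    NOTE: this is a hypothetical sketch, not the authors' code. Node names
    (scene, object, direction) follow the abstract; the update rules are
    assumptions chosen for clarity.
    """

    def __init__(self, dim):
        super().__init__()
        # inter-modal edge: ground each visual node in the instruction tokens
        self.lang_attn = nn.MultiheadAttention(dim, num_heads=1, batch_first=True)
        # intra-modal edges between pairs of visual entity nodes
        self.edge_msg = nn.Linear(2 * dim, dim)
        # recurrent update of each node state after receiving messages
        self.update = nn.GRUCell(dim, dim)

    def forward(self, lang_feats, nodes):
        # lang_feats: (B, L, D) instruction token features
        # nodes: dict of (B, D) features for 'scene', 'object', 'direction'
        names = list(nodes.keys())
        new_nodes = {}
        for k in names:
            # inter-modal message: attend over the instruction tokens
            q = nodes[k].unsqueeze(1)                            # (B, 1, D)
            lang_msg, _ = self.lang_attn(q, lang_feats, lang_feats)
            lang_msg = lang_msg.squeeze(1)                       # (B, D)
            # intra-modal messages: aggregate from the other visual nodes
            others = [nodes[o] for o in names if o != k]
            intra = torch.stack(
                [self.edge_msg(torch.cat([nodes[k], o], dim=-1)) for o in others]
            ).mean(0)                                            # (B, D)
            # combine both messages and update the node state
            new_nodes[k] = self.update(lang_msg + intra, nodes[k])
        return new_nodes


if __name__ == "__main__":
    # usage sketch with random features
    B, L, D = 2, 20, 256
    step = EntityRelationGraphStep(D)
    lang = torch.randn(B, L, D)
    nodes = {k: torch.randn(B, D) for k in ("scene", "object", "direction")}
    nodes = step(lang, nodes)  # one propagation round; repeat for more rounds
    # action logits would then be scored from the fused node states
```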
Related research:
- Improving Cross-Modal Alignment in Vision Language Navigation via Syntactic Information (04/19/2021)
- Constructing a Visual Relationship Authenticity Dataset (10/11/2020)
- Continuous Scene Representations for Embodied AI (03/31/2022)
- CLEAR: Improving Vision-Language Navigation with Cross-Lingual, Environment-Agnostic Representations (07/05/2022)
- The Regretful Agent: Heuristic-Aided Navigation through Progress Estimation (03/05/2019)
- Take the Scenic Route: Improving Generalization in Vision-and-Language Navigation (03/31/2020)
- Are You Looking? Grounding to Multiple Modalities in Vision-and-Language Navigation (06/02/2019)