Multi-View Learning for Vision-and-Language Navigation
Learning to navigate in a visual environment following natural language instructions is a challenging task because natural language instructions are highly variable, ambiguous, and under-specified. In this paper, we present a novel training paradigm, Learn from EveryOne (LEO), which leverages multiple instructions (as different views) for the same trajectory to resolve language ambiguity and improve generalization. By sharing parameters across instructions, our approach learns more effectively from limited training data and generalizes better in unseen environments. On the recent Room-to-Room (R2R) benchmark dataset, LEO achieves a 16% absolute improvement over the base agent (25.3%) in Success Rate weighted by Path Length (SPL). Further, LEO is complementary to most existing models for vision-and-language navigation, allowing for easy integration with existing techniques, leading to LEO+, which sets a new state of the art, pushing the R2R benchmark to 62%.
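The core idea, training one shared set of parameters on several instructions (views) that describe the same trajectory, can be illustrated with a deliberately minimal sketch. This is not the paper's actual agent: the bag-of-words scorer, the hinge-style loss, and the perceptron update below are all stand-in assumptions; only the multi-view structure (averaging over views, updating shared weights from every view) reflects the training paradigm described above.

```python
# Minimal multi-view training sketch (illustrative assumptions, not the
# paper's model): several instruction "views" of the same trajectory step
# all update one shared parameter set.

VOCAB = ["walk", "go", "turn", "left", "right", "stop", "past", "table"]
ACTIONS = ["forward", "left", "right", "stop"]

# Shared parameters: one weight per (word, action) pair, reused by every view.
weights = {(w, a): 0.0 for w in VOCAB for a in ACTIONS}

def score(instruction, action):
    """Score an action for a tokenized instruction with the shared weights."""
    return sum(weights[(w, action)] for w in instruction if (w, action) in weights)

def multi_view_loss(views, gold_action):
    """Average a simple hinge-style loss over all instruction views of the
    same trajectory -- the 'learn from every one' part of the sketch."""
    total = 0.0
    for inst in views:
        gold = score(inst, gold_action)
        best_other = max(score(inst, a) for a in ACTIONS if a != gold_action)
        total += max(0.0, 1.0 + best_other - gold)
    return total / len(views)

def train_step(views, gold_action, lr=0.1):
    """Perceptron-style update: every view of the trajectory pushes the
    SAME shared weights toward the gold action."""
    for inst in views:
        predicted = max(ACTIONS, key=lambda a: score(inst, a))
        if predicted != gold_action:
            for w in inst:
                if (w, gold_action) in weights:
                    weights[(w, gold_action)] += lr
                if (w, predicted) in weights:
                    weights[(w, predicted)] -= lr

# Three variable, ambiguous instructions describing the same "turn left" step.
views = [["turn", "left"], ["go", "left"], ["walk", "left", "past", "table"]]
for _ in range(10):
    train_step(views, "left")

# After training, every view should agree on the gold action.
for inst in views:
    print(inst, "->", max(ACTIONS, key=lambda a: score(inst, a)))
```

Because the parameters are shared, a word like "left" accumulates evidence from all three phrasings at once, which is the intuition behind learning more effectively from limited data.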