Active Visual Information Gathering for Vision-Language Navigation

07/15/2020
by Hanqing Wang, et al.

Vision-language navigation (VLN) is the task of requiring an agent to carry out navigational instructions inside photo-realistic environments. A key challenge in VLN is how to navigate robustly despite the uncertainty caused by ambiguous instructions and insufficient observation of the environment. Agents trained with current approaches typically suffer from this uncertainty and consequently take random, inefficient actions at every step. In contrast, when humans face such a challenge, they can still navigate robustly by actively exploring their surroundings to gather more information and thus make more confident navigation decisions. This work draws inspiration from human navigation behavior and endows an agent with an active information-gathering ability for a more intelligent vision-language navigation policy. To achieve this, we propose an end-to-end framework for learning an exploration policy that decides i) when and where to explore, ii) what information is worth gathering during exploration, and iii) how to adjust the navigation decision after exploration. The experimental results show that promising exploration strategies emerge from training, leading to a significant boost in navigation performance. On the R2R challenge leaderboard, our agent achieves promising results in all three VLN settings, i.e., single run, pre-exploration, and beam search.
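The three exploration decisions described above can be illustrated with a minimal sketch. This is not the paper's actual model: the entropy-based exploration trigger, the `adjust` re-weighting, and the action dictionaries are hypothetical stand-ins for the learned policy, used only to show the when-to-explore / gather / re-decide control flow.

```python
import math

def entropy(dist):
    """Shannon entropy of an action distribution (dict action -> prob)."""
    return -sum(p * math.log(p) for p in dist.values() if p > 0)

def adjust(dist, evidence):
    """iii) Re-weight the action distribution with gathered evidence
    (a dict of multiplicative confidence boosts), then renormalize."""
    raw = {a: p * evidence.get(a, 1.0) for a, p in dist.items()}
    z = sum(raw.values())
    return {a: v / z for a, v in raw.items()}

def act(dist, explore_fn, threshold=1.0):
    """i) Decide whether to explore: a high-entropy (uncertain) action
    distribution triggers exploration.  ii) Gather information via
    explore_fn (stands in for glancing at nearby viewpoints).
    Then commit to the most confident action."""
    if entropy(dist) > threshold:
        dist = adjust(dist, explore_fn())
    return max(dist, key=dist.get)

# Ambiguous step: near-uniform distribution triggers exploration,
# and the gathered evidence (favoring "left") flips the decision.
ambiguous = {"forward": 0.34, "left": 0.33, "right": 0.33}
choice = act(ambiguous, explore_fn=lambda: {"left": 3.0})

# Confident step: low entropy, so the agent skips exploration.
confident = {"forward": 0.9, "left": 0.05, "right": 0.05}
direct = act(confident, explore_fn=lambda: {"left": 3.0})
```

Here `choice` is `"left"` (exploration overturned the near-tie) while `direct` is `"forward"` (no exploration needed), mirroring the idea that the agent only pays the cost of extra observation when its navigation decision is uncertain.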
