Accessible Instruction-Following Agent

05/08/2023
by Kairui Zhou, et al.

Humans can collaborate and complete tasks based on visual signals and instructions from the environment. Training a robot to do the same is difficult, especially because it must understand the instructions and act within a complicated environment. Previous instruction-following agents are biased toward English-centric corpora, making them impractical for users who speak other languages, including low-resource ones. Moreover, these agents are pre-trained under the assumption that the user can observe the environment, which limits their accessibility. In this work, we generalize the success of instruction-following agents to non-English languages with scarce corpus resources and improve their interactivity and accessibility. We introduce UVLN (Universal Vision-Language Navigation), a novel machine-translation-augmented framework for cross-lingual vision-language navigation that couples a state-of-the-art large language model (GPT-3) with an image-captioning model (BLIP). We first collect a multilingual vision-language navigation dataset via machine translation. We then extend the standard VLN training objectives to the multilingual setting with a cross-lingual language encoder. The alignment between different languages is captured through a shared vision and action context using a cross-modal transformer, which encodes the language instruction, visual observations, and action decision sequence. To improve interactivity, we connect our agent to the large language model, which reports the situation and current state to the user and explains the agent's action decisions. Experiments on the Room-Across-Room (RxR) dataset demonstrate the effectiveness of our approach, and qualitative results show the promising interactivity and accessibility of our instruction-following agent.
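As a rough illustration of the machine-translation augmentation step described above, the sketch below translates English VLN-style instructions into several target languages. The MarianMT checkpoints, the language list, and the example instructions are illustrative assumptions, not the paper's exact pipeline.

```python
# A minimal sketch (not the authors' released code) of building a multilingual
# VLN instruction set via machine translation.
from transformers import MarianMTModel, MarianTokenizer

# Hypothetical English R2R/RxR-style navigation instructions.
english_instructions = [
    "Walk down the stairs and stop at the second door on the left.",
    "Turn right at the kitchen and wait next to the table.",
]

# Target languages and (assumed) MarianMT checkpoints used for augmentation.
TARGETS = {
    "de": "Helsinki-NLP/opus-mt-en-de",
    "fr": "Helsinki-NLP/opus-mt-en-fr",
    "hi": "Helsinki-NLP/opus-mt-en-hi",
}

def translate(sentences, checkpoint):
    """Translate a batch of English sentences with a MarianMT model."""
    tokenizer = MarianTokenizer.from_pretrained(checkpoint)
    model = MarianMTModel.from_pretrained(checkpoint)
    batch = tokenizer(sentences, return_tensors="pt", padding=True, truncation=True)
    generated = model.generate(**batch, max_length=128)
    return tokenizer.batch_decode(generated, skip_special_tokens=True)

# Each trajectory keeps its visual observations and action sequence,
# but gains instructions in several languages.
multilingual_dataset = {"en": english_instructions}
for lang, ckpt in TARGETS.items():
    multilingual_dataset[lang] = translate(english_instructions, ckpt)

print(multilingual_dataset["de"][0])
```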
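The cross-modal encoding could look roughly like the following sketch: a shared multilingual text encoder, projected visual features, and an embedded action history are concatenated and fused by a transformer before predicting the next action. The choice of XLM-RoBERTa, the feature dimensions, and the pooling/prediction head are assumptions for illustration only.

```python
import torch
import torch.nn as nn
from transformers import XLMRobertaModel, XLMRobertaTokenizer

class CrossModalVLNEncoder(nn.Module):
    """Sketch of a cross-lingual, cross-modal VLN encoder (assumed configuration)."""

    def __init__(self, d_model=768, n_actions=6, n_layers=4, n_heads=8):
        super().__init__()
        # Cross-lingual language encoder shared across all instruction languages.
        self.text_encoder = XLMRobertaModel.from_pretrained("xlm-roberta-base")
        # Project pre-extracted panoramic image features (e.g. 2048-d) to d_model.
        self.vision_proj = nn.Linear(2048, d_model)
        # Embedding for the discrete action history (forward, turn left, ...).
        self.action_embed = nn.Embedding(n_actions, d_model)
        # Modality type embeddings: 0 = text, 1 = vision, 2 = action.
        self.type_embed = nn.Embedding(3, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.action_head = nn.Linear(d_model, n_actions)

    def forward(self, input_ids, attention_mask, vis_feats, action_hist):
        # Contextual instruction tokens in any supported language: (B, L_t, d).
        txt = self.text_encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        vis = self.vision_proj(vis_feats)      # (B, L_v, d)
        act = self.action_embed(action_hist)   # (B, L_a, d)
        txt = txt + self.type_embed.weight[0]
        vis = vis + self.type_embed.weight[1]
        act = act + self.type_embed.weight[2]
        fused = self.fusion(torch.cat([txt, vis, act], dim=1))
        # Predict the next action from the fused sequence (mean-pooled here).
        return self.action_head(fused.mean(dim=1))

tokenizer = XLMRobertaTokenizer.from_pretrained("xlm-roberta-base")
batch = tokenizer(["Geh die Treppe hinunter und halte an der zweiten Tür."],
                  return_tensors="pt", padding=True)
vis_feats = torch.randn(1, 36, 2048)             # e.g. 36 panoramic view features
action_hist = torch.zeros(1, 5, dtype=torch.long)
model = CrossModalVLNEncoder()
logits = model(batch["input_ids"], batch["attention_mask"], vis_feats, action_hist)
```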
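The interactivity component might be wired up along these lines: an image-captioning model summarizes the current observation, and a language model turns the caption, instruction, and chosen action into a user-facing explanation. The BLIP checkpoint, the prompt wording, and the use of the OpenAI completion endpoint are assumptions, not the authors' released implementation.

```python
import openai
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
captioner = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base")

def describe_observation(image: Image.Image) -> str:
    """Caption the agent's current view with BLIP."""
    inputs = processor(images=image, return_tensors="pt")
    out = captioner.generate(**inputs, max_new_tokens=30)
    return processor.decode(out[0], skip_special_tokens=True)

def explain_decision(caption: str, instruction: str, action: str) -> str:
    """Ask a GPT-3-style model to explain the situation and action to the user."""
    prompt = (
        f"The navigation instruction is: '{instruction}'.\n"
        f"The agent currently sees: '{caption}'.\n"
        f"The agent decided to: {action}.\n"
        "Briefly explain the current situation and why this action was chosen."
    )
    response = openai.Completion.create(model="text-davinci-003",
                                        prompt=prompt, max_tokens=80)
    return response["choices"][0]["text"].strip()

view = Image.open("current_view.jpg")   # hypothetical observation frame
caption = describe_observation(view)
print(explain_decision(caption,
                       "Walk down the stairs and stop at the second door.",
                       "move forward"))
```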
