
Multimodal Text Style Transfer for Outdoor Vision-and-Language Navigation

by   Wanrong Zhu, et al.
University of California Santa Cruz
The Regents of the University of California
University of California, San Diego

In the vision-and-language navigation (VLN) task, an agent follows natural language instructions and navigates in visual environments. Compared to indoor navigation, which has been broadly studied, navigation in real-life outdoor environments remains a significant challenge due to its complicated visual inputs and the insufficient amount of instructions that illustrate intricate urban scenes. In this paper, we introduce a Multimodal Text Style Transfer (MTST) learning approach that mitigates the problem of data scarcity in outdoor navigation tasks by effectively leveraging external multimodal resources. We first enrich the navigation data by transferring the style of the instructions generated by the Google Maps API, then pre-train the navigator with the augmented external outdoor navigation dataset. Experimental results show that our MTST learning approach is model-agnostic and significantly outperforms the baseline models on the outdoor VLN task, improving the task completion rate on the test set by 22% relative and achieving new state-of-the-art performance.
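The two-stage recipe in the abstract (rewrite machine-generated instructions into a human-like style, then pre-train on the augmented data before fine-tuning) can be sketched as follows. This is a minimal illustrative outline, not the authors' implementation: `transfer_style` stands in for the learned style-transfer model, and all function names and data structures are hypothetical.

```python
# Hypothetical sketch of the MTST two-stage pipeline; the real style
# transfer is a learned model, replaced here by a trivial rewrite rule.

def transfer_style(template_instruction):
    # Stand-in for the text style transfer model that rewrites
    # machine-generated (Google Maps API style) instructions into the
    # richer style of human-written navigation instructions.
    return template_instruction.replace("Head", "Walk").replace("toward", "towards")

def augment_external_data(external_routes):
    # Pair each external route's visual inputs with a style-transferred
    # instruction to build the augmented pre-training set.
    return [
        {"panoramas": r["panoramas"],
         "instruction": transfer_style(r["instruction"])}
        for r in external_routes
    ]

def train_navigator(pretrain_data, target_data):
    # Stage 1: pre-train on the augmented external outdoor data.
    # Stage 2: fine-tune on the target outdoor VLN dataset.
    return [("pretrain", len(pretrain_data)), ("finetune", len(target_data))]

external = [{"panoramas": ["p1", "p2"],
             "instruction": "Head north toward Main St"}]
augmented = augment_external_data(external)
phases = train_navigator(augmented,
                         target_data=[{"panoramas": ["q1"],
                                       "instruction": "Walk past the cafe"}])
```

The point of the sketch is only the ordering: instruction style transfer produces the augmented external dataset, which feeds pre-training before fine-tuning on the target task.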


Diagnosing Vision-and-Language Navigation: What Really Matters

Vision-and-language navigation (VLN) is a multimodal task where an agent...

Learning to Stop: A Simple yet Effective Approach to Urban Vision-Language Navigation

Vision-and-Language Navigation (VLN) is a natural language grounding tas...

Reasoning about the Unseen for Efficient Outdoor Object Navigation

Robots should exist anywhere humans do: indoors, outdoors, and even unma...

Touchdown: Natural Language Navigation and Spatial Reasoning in Visual Street Environments

We study the problem of jointly reasoning about language and vision thro...

Analyzing Generalization of Vision and Language Navigation to Unseen Outdoor Areas

Vision and language navigation (VLN) is a challenging visually-grounded ...

RUN through the Streets: A New Dataset and Baseline Models for Realistic Urban Navigation

Following navigation instructions in natural language requires a composi...

Guide Your Agent with Adaptive Multimodal Rewards

Developing an agent capable of adapting to unseen environments remains a...