ArraMon: A Joint Navigation-Assembly Instruction Interpretation Task in Dynamic Environments

by Hyounghun Kim, et al.

For embodied agents, navigation is an important ability but not an isolated goal. Agents are also expected to perform specific tasks after reaching the target location, such as picking up objects and assembling them into a particular arrangement. We combine Vision-and-Language Navigation, the assembly of collected objects, and object referring expression comprehension to create a novel joint navigation-and-assembly task, named ArraMon. During this task, the agent (similar to a PokeMON GO player) is asked to find and collect different target objects one-by-one by navigating based on natural language instructions in a complex, realistic outdoor environment, and then to ARRAnge the collected objects part-by-part in an egocentric grid-layout environment. To support this task, we implement a 3D dynamic environment simulator and collect a dataset (in English, also extended to Hindi) with human-written navigation and assembly instructions, and the corresponding ground-truth trajectories. We also filter the collected instructions via a verification stage, leading to a total of 7.7K task instances (30.8K instructions and paths). We present results for several baseline models (integrated and biased) and metrics (nDTW, CTC, rPOD, and PTC), and the large model-human performance gap demonstrates that our task is challenging and presents a wide scope for future work. Our dataset, simulator, and code are publicly available at:
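Among the metrics listed above, nDTW (normalized Dynamic Time Warping) scores how closely a predicted navigation path follows the reference path. As a rough illustration (not the paper's implementation), a minimal sketch of the standard nDTW definition follows, where `d_th` is an assumed success-threshold distance and paths are sequences of 2D coordinates:

```python
import math

def dtw(pred, ref):
    """Classic dynamic-programming DTW over Euclidean point distances."""
    n, m = len(pred), len(ref)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(pred[i - 1], ref[j - 1])
            # extend the cheapest of the three possible alignments
            cost[i][j] = d + min(cost[i - 1][j],      # skip a predicted point
                                 cost[i][j - 1],      # skip a reference point
                                 cost[i - 1][j - 1])  # match both points
    return cost[n][m]

def ndtw(pred, ref, d_th=3.0):
    """Normalized DTW: 1.0 for an exact match, decaying as paths diverge."""
    return math.exp(-dtw(pred, ref) / (len(ref) * d_th))
```

For example, a path compared against itself scores exactly 1.0, while a path shifted uniformly away from the reference scores strictly between 0 and 1; the threshold `d_th` controls how quickly the score decays with divergence.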





