
Out of the Box: Embodied Navigation in the Real World

by Roberto Bigazzi, et al.

The research field of Embodied AI has witnessed substantial progress in visual navigation and exploration thanks to powerful simulation platforms and the availability of photorealistic 3D data of indoor environments. These two factors have opened the doors to a new generation of intelligent agents capable of achieving nearly perfect PointGoal Navigation. However, such architectures are commonly trained with millions, if not billions, of frames and tested only in simulation. Alongside the enthusiasm, these results raise a question: how many researchers will effectively benefit from these advances? In this work, we detail how to transfer the knowledge acquired in simulation to the real world. To that end, we describe the architectural discrepancies that damage the Sim2Real adaptation ability of models trained on the Habitat simulator, and we propose a novel solution tailored to deployment in real-world scenarios. We then deploy our models on a LoCoBot, a Low-Cost Robot equipped with a single Intel RealSense camera. Unlike previous work, our testing scene is unavailable to the agent in simulation. The environment is also inaccessible to the agent beforehand, so it cannot rely on scene-specific semantic priors. In this way, we reproduce a setting in which a research group (potentially from another field) needs to employ the agent's visual navigation capabilities as-a-Service. Our experiments indicate that it is possible to achieve satisfying results when deploying the obtained model in the real world. Our code and models are available at

