Seeing the Un-Scene: Learning Amodal Semantic Maps for Room Navigation
We introduce a learning-based approach for room navigation using semantic maps. Our proposed architecture learns to predict top-down belief maps of regions that lie beyond the agent's field of view while modeling architectural and stylistic regularities in houses. First, we train a model to generate amodal semantic top-down maps indicating beliefs about the location, size, and shape of rooms by learning the underlying architectural patterns in houses. Next, we use these maps to predict a point that lies in the target room and train a policy to navigate to that point. We empirically demonstrate that by predicting semantic maps, the model learns common correlations found in houses and generalizes to novel environments. We also demonstrate that reducing the task of room navigation to point navigation further improves performance.
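The second stage of this pipeline, reducing room navigation to point navigation, can be sketched minimally as follows. All names, the room label set, and the belief-map format are illustrative assumptions; in the paper the belief maps come from a learned amodal map predictor, whereas here they are a toy array:

```python
import numpy as np

# Hypothetical room label set (the paper's actual categories may differ).
ROOM_CLASSES = ["bedroom", "kitchen", "bathroom", "living_room"]

def target_point_from_belief(belief_maps: np.ndarray, target_room: str) -> tuple:
    """Pick the grid cell where the predicted belief for the target room
    peaks, turning a room-navigation goal into a point-navigation goal.

    belief_maps: (num_rooms, H, W) array of per-room top-down beliefs,
    as would be produced by an amodal semantic map predictor.
    """
    channel = ROOM_CLASSES.index(target_room)
    flat_idx = np.argmax(belief_maps[channel])
    row, col = np.unravel_index(flat_idx, belief_maps[channel].shape)
    return int(row), int(col)

# Toy example: an 8x8 grid whose kitchen belief peaks at cell (5, 2).
beliefs = np.zeros((len(ROOM_CLASSES), 8, 8))
beliefs[ROOM_CLASSES.index("kitchen"), 5, 2] = 0.9
print(target_point_from_belief(beliefs, "kitchen"))  # → (5, 2)
```

The selected cell would then be handed to a point-navigation policy as its goal, which is the reduction the abstract reports as improving performance.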