Look, Listen, and Act: Towards Audio-Visual Embodied Navigation

by   Chuang Gan, et al.

A crucial aspect of mobile intelligent agents is their ability to integrate evidence from multiple sensory inputs in an environment and plan a sequence of actions to achieve their goals. In this paper, we address the problem of Audio-Visual Embodied Navigation: planning the shortest path from a random starting location to a sound source in an indoor environment, given only raw egocentric visual and audio sensory data. To accomplish this task, the agent must learn across modalities, i.e., relate the audio signal to the visual environment. Here we describe an approach to audio-visual embodied navigation that takes advantage of both visual and audio evidence. Our solution is based on three key ideas: a visual perception mapper module that constructs a spatial memory of the environment, a sound perception module that infers the relative location of the sound source with respect to the agent, and a dynamic path planner that plans a sequence of actions based on the audio-visual observations and the spatial memory of the environment, and then navigates towards the goal. Experimental results on a newly collected Visual-Audio-Room dataset using a simulated multi-modal environment demonstrate the effectiveness of our approach over several competitive baselines.
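To make the three-module decomposition concrete, the sketch below shows how a dynamic path planner might replan over the mapper's spatial memory as the sound module's goal estimate is updated. This is a minimal illustration, not the authors' implementation: the occupancy grid, the `plan_path` routine (plain Dijkstra on a 4-connected grid), and the replanning loop are all assumptions standing in for the learned mapper, sound-localization, and planning components.

```python
import heapq

def plan_path(occupancy, start, goal):
    """Dijkstra shortest path on a 4-connected grid.

    occupancy: 2D list from the (hypothetical) visual mapper's spatial
    memory, where 1 = traversable and 0 = blocked.
    goal: cell estimated by the (hypothetical) sound perception module.
    Returns the list of cells from start to goal, or None if unreachable.
    """
    rows, cols = len(occupancy), len(occupancy[0])
    dist = {start: 0}
    prev = {}
    pq = [(0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and occupancy[nr][nc]:
                nd = d + 1
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(pq, (nd, (nr, nc)))
    if goal not in dist:
        return None
    # Reconstruct the path by walking predecessors back to the start.
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]

# Toy example: a 3x3 room with a wall in the middle row.
grid = [[1, 1, 1],
        [0, 0, 1],
        [1, 1, 1]]
route = plan_path(grid, start=(0, 0), goal=(2, 0))
```

In the full system, this planning step would be re-run whenever the mapper extends the occupancy map or the sound module revises its estimate of the source location, which is what makes the planner "dynamic".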

