Move2Hear: Active Audio-Visual Source Separation

05/15/2021
by Sagnik Majumder, et al.

We introduce the active audio-visual source separation problem, where an agent must move intelligently in order to better isolate the sounds coming from an object of interest in its environment. The agent hears multiple audio sources simultaneously (e.g., a person speaking down the hall in a noisy household) and must use its eyes and ears to automatically separate out the sounds originating from the target object within a limited time budget. Towards this goal, we introduce a reinforcement learning approach that trains movement policies controlling the agent's camera and microphone placement over time, guided by the improvement in predicted audio separation quality. We demonstrate our approach in scenarios motivated by both augmented reality (system is already co-located with the target object) and mobile robotics (agent begins arbitrarily far from the target object). Using state-of-the-art realistic audio-visual simulations in 3D environments, we demonstrate our model's ability to find minimal movement sequences with maximal payoff for audio source separation. Project: http://vision.cs.utexas.edu/projects/move2hear.
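The abstract says the movement policy is "guided by the improvement in predicted audio separation quality." A minimal sketch of that reward shaping idea, under assumptions not taken from the paper (the quality score here is a simple negative spectrogram distance standing in for the paper's learned quality predictor, and the function names are hypothetical):

```python
import numpy as np

def separation_quality(pred_spec: np.ndarray, target_spec: np.ndarray) -> float:
    """Stand-in quality score: negative L2 distance between the predicted
    separated magnitude spectrogram and the (hypothetical) target one.
    Higher is better; 0.0 means a perfect match."""
    return -float(np.linalg.norm(pred_spec - target_spec))

def step_reward(prev_quality: float, curr_quality: float) -> float:
    """Dense per-step reward: the *improvement* in predicted separation
    quality after the agent moves, as described in the abstract."""
    return curr_quality - prev_quality
```

With this shaping, a move that brings the predicted separation closer to the target yields a positive reward, and a move that degrades it yields a negative one, so the policy is pushed toward minimal movement sequences with maximal separation payoff.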


Related research

Active Audio-Visual Separation of Dynamic Sound Sources (02/02/2022)
We explore active audio-visual separation for dynamic sound sources, whe...

Co-Separating Sounds of Visual Objects (04/16/2019)
Learning how objects sound from video is challenging, since they often h...

Language-Guided Audio-Visual Source Separation via Trimodal Consistency (03/28/2023)
We propose a self-supervised approach for learning to perform audio sour...

Visual Scene Graphs for Audio Source Separation (09/24/2021)
State-of-the-art approaches for visually-guided audio source separation ...

Music source separation conditioned on 3D point clouds (02/03/2021)
Recently, significant progress has been made in audio source separation ...

Multichannel-based learning for audio object extraction (02/11/2021)
The current paradigm for creating and deploying immersive audio content ...

Chat2Map: Efficient Scene Mapping from Multi-Ego Conversations (01/04/2023)
Can conversational videos captured from multiple egocentric viewpoints r...
