Hear to Segment: Unmixing the Audio to Guide the Semantic Segmentation

05/12/2023
by Yuhang Ling et al.

In this paper, we focus on a recently proposed task called Audio-Visual Segmentation (AVS), which requires establishing fine-grained correspondence between an audio stream and image pixels. Learning such correspondence faces two key challenges: (1) audio signals are inherently information-dense, as sounds produced by multiple objects are entangled within the same audio stream; (2) audio signals from objects of the same category tend to have similar frequency content, which hampers distinguishing the target object and consequently leads to ambiguous segmentation results. To this end, we propose an Audio Unmixing and Semantic Segmentation Network (AUSS), which encourages unmixing complicated audio signals and distinguishing similar sounds. Technically, our AUSS unmixes the audio signal into a set of audio queries and interacts them with visual features through masked attention mechanisms. To encourage these audio queries to capture distinctive features embedded within the audio, two self-supervised losses are also introduced as additional supervision at both the class and mask levels. Extensive experimental results on the AVSBench benchmark show that AUSS sets a new state of the art on both the single-source and multi-source subsets, demonstrating its effectiveness in bridging the gap between the audio and vision modalities.
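The abstract does not give implementation details, but the core interaction it describes (a set of audio queries attending to visual features under a mask that restricts where each query may look) can be sketched as a single masked cross-attention step. The following is a minimal, hypothetical numpy sketch, not the authors' implementation; the function name, shapes, and the mask-as-blocking convention are assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def masked_cross_attention(audio_queries, visual_feats, attn_mask=None):
    """One masked cross-attention step (illustrative only).

    audio_queries: (N, d)  -- N unmixed audio queries
    visual_feats:  (HW, d) -- flattened spatial visual features
    attn_mask:     (N, HW) boolean, True where attention is BLOCKED,
                   e.g. derived from a coarse mask prediction.
    Returns updated queries of shape (N, d) with a residual connection.
    """
    d = audio_queries.shape[-1]
    scores = audio_queries @ visual_feats.T / np.sqrt(d)  # (N, HW)
    if attn_mask is not None:
        scores = np.where(attn_mask, -1e9, scores)        # suppress blocked positions
    weights = softmax(scores, axis=-1)                    # rows sum to 1
    return audio_queries + weights @ visual_feats         # residual update
```

In a full model each query would then be decoded into a per-pixel mask and a class prediction; here the sketch only shows how masking confines each audio query's attention to a region of the image.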


Related research:

- Discovering Sounding Objects by Audio Queries for Audio Visual Segmentation (09/18/2023): Audio visual segmentation (AVS) aims to segment the sounding objects for...

- Look, Listen and Learn (05/23/2017): We consider the question: what can be learnt by looking at and listening...

- A Closer Look at Audio-Visual Semantic Segmentation (04/06/2023): Audio-visual segmentation (AVS) is a complex task that involves accurate...

- Language-Guided Audio-Visual Source Separation via Trimodal Consistency (03/28/2023): We propose a self-supervised approach for learning to perform audio sour...

- Class-aware Sounding Objects Localization via Audiovisual Correspondence (12/22/2021): Audiovisual scenes are pervasive in our daily life. It is commonplace fo...

- I'm Sorry for Your Loss: Spectrally-Based Audio Distances Are Bad at Pitch (12/08/2020): Growing research demonstrates that synthetic failure modes imply poor ge...

- Slow-Fast Auditory Streams For Audio Recognition (03/05/2021): We propose a two-stream convolutional network for audio recognition, tha...
