
Hybrid Spectrogram and Waveform Source Separation

by Alexandre Défossez, et al.

Source separation models work in either the spectrogram or the waveform domain. In this work, we show how to perform end-to-end hybrid source separation, letting the model decide which domain is best suited for each source, and even combine both. The proposed hybrid version of the Demucs architecture won the Music Demixing Challenge 2021 organized by Sony. The architecture also comes with additional improvements, such as compressed residual branches, local attention, and singular value regularization. Overall, a 1.4 dB improvement in signal-to-distortion ratio (SDR) was observed across all sources as measured on the MusDB HQ dataset, an improvement confirmed by human subjective evaluation, with overall quality rated at 2.83 out of 5 (2.36 for the non-hybrid Demucs) and absence of contamination at 3.04 (against 2.37 for the non-hybrid Demucs and 2.44 for the second-ranking model submitted to the competition).
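The core hybrid idea described above can be illustrated with a minimal NumPy sketch: one branch processes the mixture directly in the time domain, another processes its STFT in the spectrogram domain, and the final estimate combines both branch outputs. This is a toy illustration under simplified assumptions (non-overlapping rectangular-window STFT, placeholder `time_branch`/`spec_branch` functions standing in for learned networks), not the Demucs implementation.

```python
import numpy as np

def stft(x, n_fft=64):
    # Non-overlapping frames with a rectangular window: trivially invertible.
    frames = x[: len(x) // n_fft * n_fft].reshape(-1, n_fft)
    return np.fft.rfft(frames, axis=1)

def istft(X, n_fft=64):
    return np.fft.irfft(X, n=n_fft, axis=1).reshape(-1)

def hybrid_separate(mix, time_branch, spec_branch, n_fft=64):
    # Time-domain branch operates directly on the samples.
    wave_est = time_branch(mix)
    # Spectrogram branch operates on the complex STFT frames.
    spec_est = istft(spec_branch(stft(mix, n_fft)), n_fft)
    # The final source estimate sums the two branch outputs.
    n = min(len(wave_est), len(spec_est))
    return wave_est[:n] + spec_est[:n]

# Toy branches: each contributes half of the mixture, so the
# combined estimate reconstructs it exactly.
mix = np.sin(np.linspace(0, 8 * np.pi, 1024))
est = hybrid_separate(mix, lambda x: 0.5 * x, lambda X: 0.5 * X)
assert np.allclose(est, mix[: len(est)])
```

In the actual architecture the two branches are learned encoder/decoder networks and the model is trained end-to-end, so it can route each source through whichever domain suits it, or blend both; the sketch only shows the combination mechanism.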

