Hybrid Transformers for Music Source Separation

11/15/2022
by Simon Rouard, et al.

A natural question arising in Music Source Separation (MSS) is whether long-range contextual information is useful, or whether local acoustic features are sufficient. In other fields, attention-based Transformers have shown their ability to integrate information over long sequences. In this work, we introduce Hybrid Transformer Demucs (HT Demucs), a hybrid temporal/spectral bi-U-Net based on Hybrid Demucs, where the innermost layers are replaced by a cross-domain Transformer encoder, using self-attention within one domain and cross-attention across domains. While it performs poorly when trained only on MUSDB, we show that it outperforms Hybrid Demucs (trained on the same data) by 0.45 dB of SDR when using 800 extra training songs. Using sparse attention kernels to extend its receptive field, and per-source fine-tuning, we achieve state-of-the-art results on MUSDB with extra training data, with 9.20 dB of SDR.
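The cross-domain Transformer encoder described above can be sketched as follows. This is a minimal, hypothetical illustration (not the authors' implementation): one layer applies self-attention within each domain (spectral and temporal token sequences), then cross-attention where each domain attends to the other. All class and variable names here are assumptions for illustration.

```python
import torch
import torch.nn as nn


class CrossDomainTransformerLayer(nn.Module):
    """Hypothetical sketch of one cross-domain Transformer layer:
    self-attention within each domain, then cross-attention across domains."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.self_spec = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.self_wave = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_spec = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_wave = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_s1 = nn.LayerNorm(dim)
        self.norm_w1 = nn.LayerNorm(dim)
        self.norm_s2 = nn.LayerNorm(dim)
        self.norm_w2 = nn.LayerNorm(dim)

    def forward(self, spec: torch.Tensor, wave: torch.Tensor):
        # Self-attention within each domain, with residual connections.
        s_attn, _ = self.self_spec(spec, spec, spec)
        w_attn, _ = self.self_wave(wave, wave, wave)
        s = self.norm_s1(spec + s_attn)
        w = self.norm_w1(wave + w_attn)
        # Cross-attention: each domain queries the other domain's tokens.
        s_cross, _ = self.cross_spec(s, w, w)
        w_cross, _ = self.cross_wave(w, s, s)
        return self.norm_s2(s + s_cross), self.norm_w2(w + w_cross)


# Toy sequences: spectral and temporal branches may have different lengths.
spec = torch.randn(1, 128, 64)  # (batch, spectral tokens, embedding dim)
wave = torch.randn(1, 256, 64)  # (batch, temporal tokens, embedding dim)
layer = CrossDomainTransformerLayer(dim=64)
out_spec, out_wave = layer(spec, wave)
```

Note that cross-attention places no constraint on the two sequence lengths matching, which is what lets the temporal and spectral branches exchange information despite operating at different resolutions.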
