MMDenseLSTM: An efficient combination of convolutional and recurrent neural networks for audio source separation

05/07/2018
by Naoya Takahashi, et al.

Deep neural networks have become an indispensable technique for audio source separation (ASS). It was recently reported that a CNN variant called MMDenseNet successfully solves the ASS problem of estimating source amplitudes, achieving state-of-the-art results on the DSD100 dataset. To further enhance MMDenseNet, we propose a novel architecture that integrates long short-term memory (LSTM) at multiple scales, with skip connections, to efficiently model the long-term structure of audio. Experimental results show that the proposed method outperforms MMDenseNet, LSTM, and a blend of the two networks, while requiring significantly fewer parameters and less processing time than simple blending. Furthermore, on a singing voice separation task, the proposed method yields better results than ideal binary masks.
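Conceptually, MMDenseLSTM inserts an LSTM block at selected scales of the multi-band dense CNN, so recurrent features computed along the time axis are concatenated with the convolutional features at that scale. The following is a minimal PyTorch-style sketch of that combination at a single scale; the module names, shapes, and hyperparameters are illustrative assumptions, not the authors' reference implementation.

# Minimal sketch of the dense-CNN + LSTM combination at one scale.
# All names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """DenseNet-style block: each layer sees all preceding feature maps."""
    def __init__(self, in_ch, growth, n_layers):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(ch), nn.ReLU(),
                nn.Conv2d(ch, growth, kernel_size=3, padding=1)))
            ch += growth
        self.out_ch = ch

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)

class LSTMBlock(nn.Module):
    """Runs a bidirectional LSTM over the time frames of a (B, C, F, T)
    feature map, projects the result back to the frequency axis, and
    concatenates it with the CNN features as one extra channel."""
    def __init__(self, in_ch, n_freq, hidden):
        super().__init__()
        self.lstm = nn.LSTM(in_ch * n_freq, hidden,
                            batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, n_freq)

    def forward(self, x):
        b, c, f, t = x.shape
        seq = x.permute(0, 3, 1, 2).reshape(b, t, c * f)  # one vector per frame
        out, _ = self.lstm(seq)                           # (B, T, 2*hidden)
        out = self.proj(out)                              # (B, T, F)
        out = out.permute(0, 2, 1).unsqueeze(1)           # (B, 1, F, T)
        return torch.cat([x, out], dim=1)                 # append LSTM channel

# Usage: one scale of the network, combining dense CNN and LSTM features.
x = torch.randn(2, 32, 128, 64)          # (batch, channels, freq, time)
dense = DenseBlock(32, growth=12, n_layers=4)
lstm = LSTMBlock(dense.out_ch, n_freq=128, hidden=128)
y = lstm(dense(x))                        # (2, 81, 128, 64)

Applying such a block at multiple scales of an encoder-decoder lets the LSTM capture long-term temporal structure cheaply at downsampled resolutions, which is why the combination needs far fewer parameters than blending a full CNN with a full LSTM.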

