MITAS: A Compressed Time-Domain Audio Separation Network with Parameter Sharing

12/09/2019
by Chao-I Tuan, et al.

Deep learning methods have brought substantial advancements in speech separation (SS). Nevertheless, deploying deep-learning-based models on edge devices remains challenging, so finding an effective way to compress these large models without hurting SS performance has become an important research topic. Recently, TasNet and Conv-TasNet were proposed and achieved state-of-the-art results on several standardized SS tasks; their low-latency nature also makes them well suited for real-time on-device applications. In this study, we propose two parameter-sharing schemes that reduce the memory consumption of TasNet and Conv-TasNet, from which we derive a new model called MiTAS (Mini TasNet). Our experimental results first confirm the robustness of MiTAS to two types of perturbations in mixed audio. We also design a series of ablation experiments to analyze the relation between SS performance and the number of parameters in the model. The results show that MiTAS reduces the model size by a factor of four while maintaining comparable SS performance with improved stability compared to TasNet and Conv-TasNet, suggesting that MiTAS is better suited for real-time, low-latency applications.
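
To illustrate the parameter-sharing idea at a high level, the sketch below reuses a single stack of dilated 1-D convolutional blocks across several repeats of a Conv-TasNet-style separator, so only one copy of the block weights is stored rather than one copy per repeat. This is a minimal PyTorch sketch under our own assumptions; the module names, layer sizes, and the specific sharing scheme are illustrative and are not taken from the paper.

```python
import torch
import torch.nn as nn


class ConvBlock(nn.Module):
    """One dilated 1-D conv block (pointwise -> depthwise -> pointwise) with a residual path."""

    def __init__(self, channels=128, hidden=256, kernel_size=3, dilation=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, hidden, 1),
            nn.PReLU(),
            nn.Conv1d(hidden, hidden, kernel_size,
                      padding=dilation * (kernel_size - 1) // 2,
                      dilation=dilation, groups=hidden),
            nn.PReLU(),
            nn.Conv1d(hidden, channels, 1),
        )

    def forward(self, x):
        return x + self.net(x)


class SharedSeparator(nn.Module):
    """Applies one shared stack of blocks `repeats` times instead of storing
    `repeats` independent stacks, cutting the separator's parameter count by
    roughly a factor of `repeats` (4 here, matching the four-fold reduction
    mentioned in the abstract; all sizes are illustrative assumptions)."""

    def __init__(self, channels=128, blocks_per_repeat=8, repeats=4):
        super().__init__()
        self.stack = nn.ModuleList(
            ConvBlock(channels, dilation=2 ** b) for b in range(blocks_per_repeat)
        )
        self.repeats = repeats

    def forward(self, x):
        for _ in range(self.repeats):   # same weights reused on every pass
            for block in self.stack:
                x = block(x)
        return x


if __name__ == "__main__":
    x = torch.randn(2, 128, 500)        # (batch, feature channels, frames)
    model = SharedSeparator()
    n_params = sum(p.numel() for p in model.parameters())
    print(f"shared separator parameters: {n_params}")
    print(model(x).shape)               # torch.Size([2, 128, 500])
```

A non-shared variant would instantiate `repeats` separate stacks; the shared version trades that extra capacity for a roughly four-fold smaller separator while keeping the same receptive field per pass.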

Related research:

SkiM: Skipping Memory LSTM for Low-Latency Real-Time Continuous Speech Separation (01/26/2022)
Continuous speech separation for meeting pre-processing has recently bec...

TasNet: time-domain audio separation network for real-time, single-channel speech separation (11/01/2017)
Robust speech processing in multi-talker environments requires effective...

Short-Term Memory Convolutions (02/08/2023)
The real-time processing of time series signals is a critical issue for ...

Real-time binaural speech separation with preserved spatial cues (02/16/2020)
Deep learning speech separation algorithms have achieved great success i...

Audio Spectral Enhancement: Leveraging Autoencoders for Low Latency Reconstruction of Long, Lossy Audio Sequences (08/08/2021)
With active research in audio compression techniques yielding substantia...

Low Latency Time Domain Multichannel Speech and Music Source Separation (04/12/2022)
The goal is to obtain a simple multichannel source separation with very ...

A Unified Framework for Speech Separation (12/17/2019)
Speech separation refers to extracting each individual speech source in ...
