X-DC: Explainable Deep Clustering based on Learnable Spectrogram Templates

09/18/2020
by Chihiro Watanabe, et al.

Deep neural networks (DNNs) have achieved substantial predictive performance in various speech processing tasks. In particular, it has been shown that monaural speech separation can be solved successfully with a DNN-based method called deep clustering (DC), which uses a DNN to assign a continuous embedding vector to each time-frequency (TF) bin and to measure how likely each pair of TF bins is to be dominated by the same speaker. In DC, the DNN is trained so that the embedding vectors of TF bins dominated by the same speaker are drawn close to each other. One concern regarding DC is that the embedding process described by a DNN is a black box, which is usually very hard to interpret. A potential weakness of this non-interpretable black-box structure is that it lacks the flexibility to address mismatches between training and test conditions (caused, for instance, by reverberation). To overcome this limitation, in this paper we propose the concept of explainable deep clustering (X-DC), whose network architecture can be interpreted as a process of fitting learnable spectrogram templates to an input spectrogram followed by Wiener filtering. During training, the elements of the spectrogram templates and their activations are constrained to be non-negative, which promotes sparsity in their values and thus improves interpretability. The main advantage of this framework is that its physically interpretable structure naturally allows a model adaptation mechanism to be incorporated into the network. We show experimentally that the proposed X-DC lets us visualize and understand the clues the model uses to determine the embedding vectors, while achieving speech separation performance comparable to that of the original DC models.
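The DC training objective mentioned above can be illustrated with a small NumPy sketch. All sizes, the random data, and the variable names here are hypothetical stand-ins for a trained network's output; the point is the standard DC loss ||VVᵀ − YYᵀ||²_F, expanded so the large N×N affinity matrices need never be formed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: N time-frequency bins, D embedding dimensions, C speakers.
N, D, C = 500, 20, 2

# Unit-norm embedding vectors, one per TF bin (stand-ins for a DNN's output).
V = rng.standard_normal((N, D))
V /= np.linalg.norm(V, axis=1, keepdims=True)

# One-hot dominance labels: which speaker dominates each TF bin.
Y = np.eye(C)[rng.integers(0, C, size=N)]

# DC objective ||V V^T - Y Y^T||_F^2, expanded as
# ||V^T V||_F^2 - 2 ||V^T Y||_F^2 + ||Y^T Y||_F^2
# so that only small D x D, D x C, and C x C matrices are formed.
loss = (np.linalg.norm(V.T @ V, "fro") ** 2
        - 2 * np.linalg.norm(V.T @ Y, "fro") ** 2
        + np.linalg.norm(Y.T @ Y, "fro") ** 2)

# Reference computation with the explicit N x N affinity matrices.
loss_ref = np.linalg.norm(V @ V.T - Y @ Y.T, "fro") ** 2
```

Minimizing this loss pushes embeddings of same-speaker TF bins together and different-speaker bins apart; at test time the embeddings are clustered (e.g. with k-means) to obtain separation masks.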
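The template-fitting-plus-Wiener-filtering interpretation behind X-DC can likewise be sketched numerically. This is not the paper's actual network: it is a minimal NMF-style stand-in, assuming random data in place of a real spectrogram and multiplicative updates in place of gradient-based training, that shows how non-negative templates fitted to a mixture yield per-speaker Wiener masks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: F frequency bins, T frames, K templates per speaker, C speakers.
F, T, K, C = 64, 100, 8, 2

# Mixture power spectrogram (random stand-in for an STFT power spectrogram).
X = rng.random((F, T)) + 1e-3

# Non-negative spectrogram templates W and activations H, grouped per speaker.
W = rng.random((F, C * K)) + 1e-3   # C groups of K templates
H = rng.random((C * K, T)) + 1e-3   # activations

for _ in range(100):
    # Multiplicative updates for the squared error ||X - W H||_F^2;
    # non-negativity of W and H is preserved automatically.
    H *= (W.T @ X) / (W.T @ (W @ H) + 1e-12)
    W *= (X @ H.T) / ((W @ H) @ H.T + 1e-12)

# Per-speaker reconstruction: sum the contributions of each speaker's templates.
S = np.stack([W[:, c*K:(c+1)*K] @ H[c*K:(c+1)*K, :] for c in range(C)])  # (C, F, T)

# Wiener-style masks: each speaker's share of the total modeled power,
# so the masks are in [0, 1] and sum to one at every TF bin.
masks = S / (S.sum(axis=0, keepdims=True) + 1e-12)
separated = masks * X
```

Because every template column is a non-negative spectral pattern, one can inspect which templates activate where, which is the kind of interpretability the black-box DC embedding lacks.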

