Improving the Generalizability of Deep Neural Network Based Speech Enhancement
Enhancing noisy speech is an important task to restore its quality and to improve its intelligibility. In traditional non-machine-learning (ML) based approaches, the parameters required for noise reduction are estimated blindly from the noisy observation, while the actual filter functions are derived analytically based on statistical assumptions. Even though such approaches generalize well to many different acoustic conditions, their noise suppression capability in transient noises is low. More recently, ML and especially deep learning has been employed for speech enhancement, and studies show promising results in noise types where non-ML based approaches fail. However, due to their data-driven nature, the generalizability of ML based approaches to unknown noise types is still debated. To improve the generalizability of ML based algorithms and to enhance the noise suppression of non-ML based methods, we propose a combination of both approaches. For this, we employ the a priori signal-to-noise ratio (SNR) and the a posteriori SNR estimated by non-ML based algorithms as input features in a deep neural network (DNN) based enhancement scheme. We show that this approach allows ML based speech estimators to generalize quickly to unknown noise types even if only a few noise conditions have been seen during training. Instrumental measures such as the Perceptual Evaluation of Speech Quality (PESQ) and the segmental SNR indicate strong improvements in unseen conditions when using the proposed features. Listening experiments clearly confirm the improved generalization of our proposed combination.
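The abstract does not specify which non-ML estimators produce the SNR features, so the following is only a minimal sketch of one common choice: the a posteriori SNR as the ratio of the noisy periodogram to an estimated noise PSD, and the a priori SNR via the decision-directed approach. The function name, arguments, and log-compression step are illustrative assumptions, not the paper's actual feature pipeline.

```python
import numpy as np

def snr_features(noisy_power, noise_power, prev_clean_power, alpha=0.98):
    """Sketch: per-frequency a posteriori and a priori SNR features.

    noisy_power:      |Y(k)|^2 of the current noisy frame
    noise_power:      estimated noise PSD lambda_N(k)
    prev_clean_power: |S_hat(k)|^2 of the previous enhanced frame
    alpha:            decision-directed smoothing factor (typical ~0.98)
    """
    eps = 1e-12  # guard against division by zero / log of zero
    # A posteriori SNR: noisy periodogram over estimated noise PSD.
    gamma = noisy_power / np.maximum(noise_power, eps)
    # Decision-directed a priori SNR: recursive blend of the previous
    # clean-speech estimate and the maximum-likelihood term (gamma - 1).
    xi = (alpha * prev_clean_power / np.maximum(noise_power, eps)
          + (1.0 - alpha) * np.maximum(gamma - 1.0, 0.0))
    # Log-compress both SNRs before stacking them as DNN input features.
    xi_db = 10.0 * np.log10(np.maximum(xi, eps))
    gamma_db = 10.0 * np.log10(np.maximum(gamma, eps))
    return xi_db, gamma_db
```

In a full system, these per-bin SNR features would be stacked (possibly with temporal context) and fed to the DNN in place of, or alongside, raw spectral magnitudes.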