Position-Agnostic Multi-Microphone Speech Dereverberation

by Yochai Yemini, et al.

Neural networks (NNs) have been widely applied to speech processing tasks, in particular those employing microphone arrays. Nevertheless, most existing NN architectures can only deal with fixed, position-specific microphone arrays. In this paper, we present an NN architecture that can cope with microphone arrays about which no prior knowledge is presumed, and demonstrate its applicability to the speech dereverberation problem. To this end, our approach harnesses recent advances in the Deep Sets framework to design an architecture that enhances the reverberant log-spectrum. We provide a setup for training and testing such a network. Our experiments, using the REVERB challenge datasets, show that the proposed position-agnostic setup performs comparably with the position-aware framework, and sometimes slightly better, even with fewer microphones. In addition, it substantially improves performance over a single-microphone architecture.
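The core Deep Sets idea behind such a position-agnostic design can be sketched as follows: a shared encoder is applied identically to every microphone channel, the per-channel embeddings are pooled with a symmetric operation (here, the mean), and a decoder maps the pooled embedding to an enhanced log-spectrum frame. This is a minimal illustrative sketch, not the paper's actual architecture; the dimensions, weights, and function names are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the paper):
# F frequency bins, M microphones, H embedding size.
F, M, H = 257, 3, 16

# Shared per-channel encoder phi: a single linear layer with random
# placeholder weights (a real model would learn these).
W_phi = rng.standard_normal((H, F))
# Decoder rho mapping the pooled embedding back to an enhanced frame.
W_rho = rng.standard_normal((F, H))

def deep_sets_enhance(log_spectra):
    """log_spectra: (M, F) array, one log-spectrum frame per microphone.

    Returns a single (F,) enhanced frame that does not depend on the
    ordering (and hence the labelling) of the microphones.
    """
    # phi is applied identically to every channel (weight sharing),
    # so the network needs no knowledge of each microphone's position.
    embeddings = np.tanh(log_spectra @ W_phi.T)   # (M, H)
    # Mean pooling is symmetric in the channel axis, which is what
    # makes the whole map permutation-invariant; it also works for
    # any number of microphones M.
    pooled = embeddings.mean(axis=0)              # (H,)
    return W_rho @ pooled                         # (F,)

x = rng.standard_normal((M, F))
y1 = deep_sets_enhance(x)
y2 = deep_sets_enhance(x[[2, 0, 1]])  # shuffle the microphone order
assert np.allclose(y1, y2)  # output is invariant to mic ordering
```

Because the pooling step is agnostic to both the order and the number of channels, the same weights can be evaluated on arrays with different microphone counts, which is the property the position-agnostic setup relies on.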
