Data Augmentation and Deep Convolutional Neural Networks for Blind Room Acoustic Parameter Estimation

09/09/2019
by   Nicholas J. Bryan, et al.

Reverberation time (T60) and the direct-to-reverberant ratio (DRR) are two commonly used parameters to characterize acoustic environments. Both parameters are useful for various speech processing applications and can be measured from an acoustic impulse response (AIR). In many scenarios, however, AIRs are not available, motivating blind estimation methods that operate directly from recorded speech. While many methods exist to solve this problem, neural networks are an appealing approach. Such methods, however, require large, balanced amounts of realistic training data (i.e., AIRs), which are expensive and time-consuming to collect. To address this problem, we propose an AIR augmentation procedure that can parametrically control the T60 and DRR of real AIRs, allowing us to expand a small dataset of real AIRs into a balanced dataset that is orders of magnitude larger. To show the validity of the method, we train a baseline convolutional neural network to predict both T60 and DRR from speech convolved with our augmented AIRs. We compare the performance of our estimators to prior work via the ACE Challenge evaluation tools and benchmarked results. Results suggest our baseline estimators outperform past single- and multi-channel state-of-the-art T60 and DRR algorithms in terms of the Pearson correlation coefficient and bias, and are either better or comparable in terms of MSE.
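To make the idea of parametrically controlling T60 concrete, the sketch below reshapes an AIR's exponential decay envelope so its T60 matches a target value, using a Schroeder energy-decay-curve estimator. This is an illustrative, single-band simplification under assumed full-band decay; the function names and details are not the paper's exact augmentation procedure.

```python
import numpy as np

def estimate_t60(air, sr, decay_db=30.0):
    """Estimate T60 from the Schroeder energy decay curve (EDC).

    Fits a line to the EDC between -5 dB and -(5 + decay_db) dB,
    then extrapolates the slope to a 60 dB decay.
    """
    edc = np.cumsum(air[::-1] ** 2)[::-1]          # backward energy integration
    edc_db = 10.0 * np.log10(edc / edc.max() + 1e-12)
    i0 = np.argmax(edc_db <= -5.0)                 # start of the fit region
    i1 = np.argmax(edc_db <= -(5.0 + decay_db))    # end of the fit region
    t = np.arange(len(air)) / sr
    slope = (edc_db[i1] - edc_db[i0]) / (t[i1] - t[i0])  # dB per second
    return -60.0 / slope

def augment_t60(air, sr, target_t60):
    """Rescale an AIR's decay envelope so its T60 matches target_t60.

    Multiplies by exp(-(a_new - a_old) * t), where a is the decay rate
    (in nepers/s) implied by a T60: a = 3 * ln(10) / T60.
    """
    a_old = 3.0 * np.log(10.0) / estimate_t60(air, sr)
    a_new = 3.0 * np.log(10.0) / target_t60
    t = np.arange(len(air)) / sr
    return air * np.exp(-(a_new - a_old) * t)
```

Applying `augment_t60` to one real AIR with a grid of target T60 values yields many synthetic-but-realistic AIRs, which is the kind of expansion that turns a small AIR collection into a balanced training set.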
