deepbeam
Deep learning based Speech Beamforming
Multi-channel speech enhancement with ad-hoc sensors has been a challenging task. Speech-model-guided beamforming algorithms are able to recover natural sounding speech, but the speech models tend to be oversimplified, or the inference would otherwise be too complicated. On the other hand, deep learning based enhancement approaches are able to learn complicated speech distributions and perform efficient inference, but they are unable to deal with a variable number of input channels, and they introduce many errors, particularly in the presence of unseen noise types and settings. We have therefore proposed an enhancement framework called DeepBeam, which combines these two complementary classes of algorithms. DeepBeam introduces a beamforming filter to produce natural sounding speech, but the filter coefficients are determined with the help of a monaural speech enhancement neural network. Experiments on synthetic and real-world data show that DeepBeam is able to produce clean, dry and natural sounding speech, and is robust against unseen noise.
Multi-channel speech enhancement with ad-hoc sensors has long been a challenging task [1]. Although beamforming algorithms are the traditional benchmark in multi-channel enhancement tasks, they do not work well with ad-hoc microphones. This is because most beamformers need to calibrate the speaker location as well as the interference characteristics, so that they can steer their beams toward the speaker while suppressing the interference. However, neither of these two vital pieces of information can be accurately measured, due to the missing sensor position information and microphone heterogeneity [2].
Another class of beamforming algorithms avoids measuring the speaker position and interference. Instead, these algorithms introduce prior knowledge on speech, and find the optimal beamformer by maximizing a "speechness" criterion, such as sample kurtosis [3], negentropy [4], speech prior distributions [5, 6], or the fit of the glottal residual [7]. In particular, the GRAB algorithm [7] is able to outperform the closest microphone strategy even in very adverse real-world scenarios. Despite their success, these algorithms are bottlenecked by their oversimplified prior knowledge. For example, GRAB only models glottal energy, resulting in vocal tract ambiguity.

On the other hand, deep learning techniques are well known for their ability to capture complex probabilistic dependencies and perform efficient inference, and thus have been widely used in single-channel speech enhancement tasks
[8, 9, 10, 11, 12, 13]. Unfortunately, directly applying deep enhancement networks to multi-channel enhancement suffers from two difficulties. First, deep enhancement techniques often produce many artifacts and nonlinear distortions [11, 12], which are perceptually undesirable. Second, neural networks often generalize poorly to unseen noise and configurations, whereas in speech enhancement with ad-hoc sensors such variability is large.

It turns out that these problems can in turn be resolved by traditional beamforming. Therefore, several algorithms [14, 15, 16, 17, 18]
have been proposed that apply deep learning to predict time-frequency masks, and then use beamforming to produce the enhanced speech. However, these methods are confined to the frequency domain, which raises two problems for our application. First, they do not work well for ad-hoc microphones, because of spatial correlation estimation errors. Second, our application targets human listeners, and frequency-domain methods suffer from phase distortions and discontinuities, which impede perceptual quality.
Motivated by this observation, we have proposed an enhancement framework for ad-hoc microphones called DeepBeam, which combines deep learning and beamforming, and which works directly on the waveform. DeepBeam introduces a time-domain beamforming filter to produce natural sounding speech, but the filter coefficients are iteratively determined with the help of WaveNet [19]. It can be shown that, despite the error-prone enhancement network, DeepBeam converges approximately to the optimal beamformer under some assumptions. Experiments on both simulated and real-world data show that DeepBeam is able to produce clean, dry and natural sounding speech, and generalizes well to various settings.
To formally define the problem, denote the clean speech signal as $x(t)$. Suppose there are $C$ channels of observed signals, $y_1(t), \dots, y_C(t)$, which are represented as

$$y_i(t) = h_i(t) * x(t) + g_i(t) * u(t), \quad i = 1, \dots, C \tag{1}$$

where $*$ denotes discrete convolution and $u(t)$ denotes additive noise; $h_i(t)$ and $g_i(t)$ are the impulse responses of the signal reverberation and noise reverberation in the $i$-th channel, respectively. Our goal is to design an $N$-tap beamformer $w_1(t), \dots, w_C(t)$, whose output $z(t)$ is defined as

$$z(t) = \sum_{i=1}^{C} w_i(t) * y_i(t) \tag{2}$$
For notational brevity, define

$$\mathbf{x} = [x(0), \dots, x(T-1)]^T, \quad \mathbf{z} = [z(0), \dots, z(T-1)]^T, \quad \mathbf{y}_i = [y_i(0), \dots, y_i(T-1)]^T,$$
$$\mathbf{y} = [\mathbf{y}_1^T, \dots, \mathbf{y}_C^T]^T, \quad \mathbf{w}_i = [w_i(0), \dots, w_i(N-1)]^T, \quad \mathbf{w} = [\mathbf{w}_1^T, \dots, \mathbf{w}_C^T]^T, \tag{3}$$

which are all random vectors. Also define the $T \times N$ convolutional matrices

$$(\mathbf{Y}_i)_{t,n} = y_i(t - n), \quad t = 0, \dots, T-1, \; n = 0, \dots, N-1, \tag{4}$$

with $y_i(t) = 0$ for $t < 0$, and

$$\mathbf{Y} = [\mathbf{Y}_1, \mathbf{Y}_2, \dots, \mathbf{Y}_C]. \tag{5}$$

With these notations, Eq. (2) can be simplified as

$$\mathbf{z} = \mathbf{Y}\mathbf{w}. \tag{6}$$
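As a concrete illustration of Eqs. (1)–(6), the following NumPy sketch builds the convolution matrices of the noisy channels and checks that the matrix form of the beamformer output equals per-channel filter-and-sum beamforming. All sizes and signals are made-up stand-ins, not the paper's actual data.

```python
import numpy as np

# Sketch: matrix form z = Yw versus direct filter-and-sum beamforming.
rng = np.random.default_rng(0)
T, N, C = 64, 8, 3                       # signal length, taps, channels
y = rng.standard_normal((C, T))          # stand-in noisy observations y_i(t)
w = rng.standard_normal((C, N))          # stand-in beamformer taps w_i(t)

def conv_matrix(sig, n_taps):
    """T x N matrix M with M[t, n] = sig[t - n] (zero for t < n)."""
    M = np.zeros((len(sig), n_taps))
    for n in range(n_taps):
        M[n:, n] = sig[:len(sig) - n]
    return M

Y = np.hstack([conv_matrix(y[i], N) for i in range(C)])   # stack Y_i
z_matrix = Y @ w.ravel()                                  # matrix form

# Reference: direct time-domain beamforming (sum of convolutions)
z_direct = sum(np.convolve(w[i], y[i])[:T] for i in range(C))
assert np.allclose(z_matrix, z_direct)
```

The equivalence holds because each column of the convolution matrix is a delayed copy of the channel signal, so a matrix-vector product implements the tapped-delay-line filter.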
The target of designing the beamformer is to minimize the weighted mean squared error (MSE):

$$\mathbf{w}^* = \arg\min_{\mathbf{w}} \; \mathbb{E}\left[(\mathbf{x} - \mathbf{Y}\mathbf{w})^T \boldsymbol{\Lambda} (\mathbf{x} - \mathbf{Y}\mathbf{w}) \mid \mathbf{y}\right] \tag{7}$$

where $\boldsymbol{\Lambda}$ is a positive definite weight matrix, which, in our case, is a diagonal matrix determined by the posterior variances of $x(t)$.
Eq. (7) is a Wiener filtering problem [20], whose solution is

$$\mathbf{z}^* = \mathbf{Y}\mathbf{w}^* = \mathbf{P}\,\mathbb{E}[\mathbf{x} \mid \mathbf{y}] \tag{8}$$

where

$$\mathbf{P} = \mathbf{Y}\left(\mathbf{Y}^T \boldsymbol{\Lambda} \mathbf{Y}\right)^{-1} \mathbf{Y}^T \boldsymbol{\Lambda} \tag{9}$$

is in fact the projection matrix onto the beamforming output space, i.e., the column space of $\mathbf{Y}$, under the inner product induced by $\boldsymbol{\Lambda}$. So by Eq. (8), $\mathbf{z}^*$ is essentially the projection of $\mathbb{E}[\mathbf{x} \mid \mathbf{y}]$ onto the space that is representable by the beamforming filter.
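The weighted least-squares solution and its projection interpretation can be verified numerically. The sketch below uses random stand-in data (not the paper's quantities) and checks that the solution equals the projected target, that the projection matrix is idempotent, and that the residual is orthogonal to the beamforming space under the weighted inner product.

```python
import numpy as np

# Sketch of the Wiener/weighted-least-squares beamformer and its projection.
rng = np.random.default_rng(1)
T, D = 64, 24                            # signal length, total taps (C*N)
Y = rng.standard_normal((T, D))          # stand-in for the convolution matrix
lam = rng.uniform(0.5, 2.0, size=T)      # positive diagonal of Lambda
Lam = np.diag(lam)
x_hat = rng.standard_normal(T)           # stand-in for the posterior mean

# Weighted least squares: w* = (Y^T Lam Y)^{-1} Y^T Lam x_hat
w_star = np.linalg.solve(Y.T @ Lam @ Y, Y.T @ Lam @ x_hat)
P = Y @ np.linalg.solve(Y.T @ Lam @ Y, Y.T @ Lam)    # projection matrix
z_star = Y @ w_star

assert np.allclose(z_star, P @ x_hat)    # z* is the projected target
assert np.allclose(P @ P, P)             # P is idempotent: a projection
# The residual is Lambda-orthogonal to the beamforming space:
assert np.allclose(Y.T @ Lam @ (x_hat - z_star), np.zeros(D))
```

Idempotence and weighted orthogonality are exactly the algebraic facts that make the "projection onto the space representable by the beamforming filter" reading valid.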
As shown by Eq. (8), solving the Wiener filtering problem requires computing $\mathbb{E}[\mathbf{x} \mid \mathbf{y}]$, which, due to the complex probabilistic dependencies, we would like to learn with a deep neural network. However, as discussed, training a neural network to directly predict $\mathbf{x}$ from the multi-channel input suffers from inflexible input dimensions, artifacts and poor generalization. DeepBeam resolves these problems and finds an approximate solution.
In this section, we describe the DeepBeam algorithm. We first outline the algorithm, then describe the neural network structure it uses, and finally present a convergence analysis.
As mentioned, DeepBeam introduces a deep enhancement network to learn the posterior expectation, while addressing its limitations. First, DeepBeam is regularized by the beamformer, so that it generalizes well to unseen noise and microphone configurations. Second, it tolerates the distortions and artifacts generated by the neural network. Formally, the neural network outputs an inaccurate prediction $\hat{\mathbf{x}}$ of the posterior expectation,

$$\hat{\mathbf{x}} = \mathbb{E}[\mathbf{x} \mid \mathbf{y}_i] + \mathbf{e} \tag{10}$$

where $\mathbf{y}_i$ is a single-channel noisy observation, and $\mathbf{e}$ is the prediction error. The goal of DeepBeam is to approximate the optimal beamformer given the inaccurate enhancement network. Alg. 1 describes the DeepBeam algorithm. A graph of the DeepBeam framework is shown in Fig. 1.
Alg. 1 initializes the beamformer output as $\mathbf{z}^{(0)} = \mathbf{y}_{i_0}$, where the initial channel $i_0$ is chosen by finding the channel that has the smallest 0.4 quantile of its squared sample points. Then, for $k = 1, 2, \dots$, it iterates

$$\hat{\mathbf{x}}^{(k)} = \mathbb{E}[\mathbf{x} \mid \mathbf{z}^{(k-1)}] + \mathbf{e}^{(k)} \tag{11}$$

$$\mathbf{z}^{(k)} = \mathbf{P}\,\hat{\mathbf{x}}^{(k)} \tag{12}$$

until convergence.
Alg. 1 essentially alternates between taking the posterior expectation and projecting. It will be shown in section 3.3 that, as long as the error term $\mathbf{e}^{(k)}$ is not too large, this iteration approximately converges to the optimal beamformer output.
One elegant property of DeepBeam is that $\mathbf{z}^{(k)}$ can be regarded as a noisy observation, which shares the statistical structure of the true noisy observations $\mathbf{y}_i$. To see this, notice that by Eq. (12), $\mathbf{z}^{(k)}$ is the output of a beamformer on $\mathbf{y}$. Therefore, it can be shown that $\mathbf{z}^{(k)}$ also takes the form of Eq. (1), with the same speech and noise sources, but with different impulse responses. This justifies using one monaural enhancement network to process all the $\mathbf{z}^{(k)}$.
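The iterate-and-project loop of Alg. 1 can be exercised end-to-end on toy data. In the sketch below, a made-up "denoiser" that simply pulls its input halfway toward the known clean signal stands in for the WaveNet posterior estimator, and the weight matrix is taken as identity for simplicity; none of this is the paper's actual implementation.

```python
import numpy as np

# Toy sketch of Alg. 1: alternate a (fake) posterior-expectation step with a
# projection onto the beamforming space, and watch the error shrink.
rng = np.random.default_rng(2)
T, N, C = 256, 16, 4
x = np.sin(2 * np.pi * np.arange(T) / 32)            # stand-in "clean speech"
h = rng.standard_normal((C, 4)) * [1, .3, .1, .03]   # toy per-channel reverb
y = np.stack([np.convolve(h[i], x)[:T] + 0.3 * rng.standard_normal(T)
              for i in range(C)])

def conv_matrix(sig, n_taps):
    M = np.zeros((len(sig), n_taps))
    for n in range(n_taps):
        M[n:, n] = sig[:len(sig) - n]
    return M

Y = np.hstack([conv_matrix(y[i], N) for i in range(C)])
P = Y @ np.linalg.solve(Y.T @ Y, Y.T)                # projection (Lambda = I)

def toy_denoiser(z):
    # Oracle stand-in for E[x | z]; a real system would use a trained network.
    return 0.5 * z + 0.5 * x

# Initialize with the channel whose squared samples have the smallest
# 0.4 quantile (a rough proxy for the least-noisy channel).
z = y[np.argmin(np.quantile(y**2, 0.4, axis=1))]
errors = []
for _ in range(5):
    x_hat = toy_denoiser(z)                          # expectation step
    z = P @ x_hat                                    # projection step
    errors.append(np.linalg.norm(z - x))
assert errors[-1] < errors[0]                        # error shrinks over iterations
```

Because each projected iterate lies in the column space of Y, it is itself a beamformer output, mirroring the observation above that every intermediate signal looks like another noisy observation.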
DeepBeam is a general framework, in which the choice of the neural network structure is not fixed. The following is just one structure that produces competitive results.
The enhancement network applied here is similar to [12], which is inspired by WaveNet [19]. Formally, denote the quantized speech samples as $\bar{x}(t)$, and the samples of $\mathbf{z}^{(k-1)}$ as $z^{(k-1)}(t)$. Then the enhancement network predicts the posterior probability mass function (PMF) of each quantized sample:

$$p\left(\bar{x}(t) \,\middle|\, z^{(k-1)}(t-R), \dots, z^{(k-1)}(t+R)\right) \tag{13}$$

Here we have restricted the probabilistic dependency to span $2R+1$ time steps. Cross-entropy is applied as the loss function.
Similar to WaveNet, the enhancement network consists of two modules. The first module, called the dilated convolution module, contains a stack of dilated convolutional layers with residual connections and skip outputs. The second module, called the post processing module, sums all the skip outputs and feeds them into a stack of fully connected layers before producing the final output.
There are two major differences from the standard WaveNet structure. First, the input to the enhancement network is the noisy observation waveform instead of the clean speech. Second, to account for the future dependencies, the convolutional layers are noncausal instead of causal.
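The effect of noncausality can be seen in a minimal NumPy sketch of a single dilated convolution layer (kernel size 3, chosen here only for illustration; it is not the network's actual layer): the output at time $t$ mixes inputs at $t-d$, $t$, and $t+d$, so information flows from the future as well as the past.

```python
import numpy as np

# Sketch: one noncausal dilated convolution with a 3-tap kernel.
def dilated_conv_noncausal(x, kernel, dilation):
    """out[t] = k0*x[t-d] + k1*x[t] + k2*x[t+d], zero-padded at the edges."""
    assert len(kernel) == 3
    T, pad = len(x), dilation
    xp = np.pad(x, pad)
    return (kernel[0] * xp[pad - dilation: pad - dilation + T]
            + kernel[1] * xp[pad: pad + T]
            + kernel[2] * xp[pad + dilation: pad + dilation + T])

x = np.zeros(32)
x[16] = 1.0                                  # unit impulse
out = dilated_conv_noncausal(x, np.array([0.25, 0.5, 0.25]), dilation=4)
# The impulse spreads both backward and forward in time:
assert out[12] == 0.25 and out[16] == 0.5 and out[20] == 0.25
```

A causal layer would instead place all nonzero taps at or before $t$, which is what the standard WaveNet uses for autoregressive generation.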
After the posterior distribution is predicted, the posterior moments $\mathbb{E}[x(t) \mid \mathbf{z}^{(k-1)}]$ and $\mathbb{E}[x^2(t) \mid \mathbf{z}^{(k-1)}]$ (for computing $\boldsymbol{\Lambda}$) are computed as the moments of the predicted PMF.

In order to analyze the convergence property of DeepBeam, we assume the following bound on the error term $\mathbf{e}^{(k)}$:
$$\left\|\mathbf{P}\mathbf{e}^{(k)}\right\|_{\boldsymbol{\Lambda}} \le \alpha \left\|\mathbf{z}^{(k-1)} - \mathbf{x}\right\|_{\boldsymbol{\Lambda}} \tag{14}$$
where $\alpha$ is some constant and $\|\cdot\|_{\boldsymbol{\Lambda}}$ denotes the $\boldsymbol{\Lambda}$-weighted norm. This assumption is actually not very stringent, because it does not bound the weighted norm of $\mathbf{e}^{(k)}$ itself, but that of its projected value $\mathbf{P}\mathbf{e}^{(k)}$. In fact, the projection can drastically reduce the weighted norm of the error term: most of the artifacts and nonlinear distortions that the enhancement network introduces cannot possibly be generated by beamforming on $\mathbf{y}$, and will therefore be removed by the projection. The only errors that are likely to remain are residual noise and reverberation. This is one advantage of combining a beamforming filter with a neural network. The assumption is also intuitive: it means that the projected output error is always smaller than the input error.
Then, we have the following theorem: if Eq. (14) holds with $\alpha < 1$, then

$$\mathbb{E}^{\frac{1}{2}}\left[\left\|\mathbf{z}^{(k)} - \mathbf{z}^*\right\|_{\boldsymbol{\Lambda}}^2\right] \le \alpha^k\, \mathbb{E}^{\frac{1}{2}}\left[\left\|\mathbf{z}^{(0)} - \mathbf{z}^*\right\|_{\boldsymbol{\Lambda}}^2\right] + \frac{\epsilon}{1-\alpha} \tag{15}$$

where

$$\epsilon = \max_k \left\{ \alpha\, \mathbb{E}^{\frac{1}{2}}\left[\left\|\mathbf{z}^* - \mathbf{x}\right\|_{\boldsymbol{\Lambda}}^2\right] + \mathbb{E}^{\frac{1}{2}}\left[\left\|\mathbb{E}[\mathbf{x} \mid \mathbf{z}^{(k)}] - \mathbb{E}[\mathbf{x} \mid \mathbf{y}]\right\|_{\boldsymbol{\Lambda}}^2\right] \right\}.$$

If $\epsilon = 0$, then Eq. (15) implies mean square convergence to the optimal beamformer output. In actuality, $\epsilon$ is nonzero, but it tends to be very small. The first term of $\epsilon$ measures the distance between the optimal beamformer output and the true speech. According to our empirical study, when the number of channels is sufficient, the optimal beamformer is able to recover the true speech very well, so the first term is small. The second term of $\epsilon$ measures the distance between two posterior expectations, $\mathbb{E}[\mathbf{x} \mid \mathbf{z}^{(k)}]$ and $\mathbb{E}[\mathbf{x} \mid \mathbf{y}]$. The former is conditioned on single-channel noisy speech, and the latter on multi-channel noisy speech. Considering that the speech sample space is highly structured, and that $\mathbf{z}^{(k)}$ is relatively clean already, both posterior expectations should be close to the true speech, and thereby close to each other. In a nutshell, with a small $\epsilon$, the DeepBeam prediction is highly accurate. Section 4.4 will verify the convergence behavior of DeepBeam empirically.
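The qualitative behavior of such a contraction bound can be illustrated numerically: an error recursion of the form $d_k = \alpha d_{k-1} + \epsilon$ with $\alpha < 1$ decays geometrically toward the fixed point $\epsilon / (1 - \alpha)$. The constants below are arbitrary stand-ins, not values from the paper.

```python
# Sketch: geometric decay of a contraction-plus-bias error recursion.
alpha, eps = 0.6, 0.05
d = 10.0                                   # stand-in initial error
history = []
for k in range(40):
    d = alpha * d + eps                    # one iteration of the recursion
    history.append(d)

fixed_point = eps / (1 - alpha)            # limit of the recursion
assert abs(history[-1] - fixed_point) < 1e-6
assert all(a >= b for a, b in zip(history, history[1:]))  # monotone decrease
```

This mirrors the bound's message: the initial error is forgotten at rate $\alpha^k$, and the residual floor is set entirely by the bias term.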
This section first describes how the enhancement network is configured and trained, and then presents the results of experiments on both simulated and real-world data. Audio samples can be found at http://tiny.cc/a1qjoy .
The enhancement network hyperparameter configurations follow [19]. It has 4 blocks of 10 dilated convolution layers, and two post-processing layers. The hidden node dimension is 32, and the skip node dimension is 256. The clean speech is quantized into 256 levels via μ-law companding, so the output dimension is 256. The activation function in the dilated convolutional layers is the gated activation unit; that in the post-processing layers is the ReLU function. The output activation is softmax.
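The 256-level μ-law quantization and the recovery of posterior moments from a 256-bin PMF can be sketched as follows. Here $\mu = 255$ is the standard companding constant, and the PMF is a made-up example standing in for the network's softmax output.

```python
import numpy as np

# Sketch: mu-law companding to 256 levels, and posterior moments from a PMF.
MU = 255.0

def mu_law_compand(x):
    """Map amplitudes in [-1, 1] to companded values in [-1, 1]."""
    return np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)

def mu_law_expand(f):
    """Inverse companding: companded values back to amplitudes."""
    return np.sign(f) * ((1 + MU) ** np.abs(f) - 1) / MU

levels = mu_law_expand(np.arange(256) / 127.5 - 1)   # amplitude of each bin

# Quantization round trip on a toy waveform
x = 0.5 * np.sin(np.linspace(0, 2 * np.pi, 1000))
q = np.round((mu_law_compand(x) + 1) * 127.5).astype(int)  # bins 0..255
assert q.min() >= 0 and q.max() <= 255
assert np.max(np.abs(x - levels[q])) < 0.02          # small round-trip error

# Posterior mean and variance from a (made-up) PMF over the 256 bins
pmf = np.exp(-0.5 * ((np.arange(256) - 140) / 3.0) ** 2)
pmf /= pmf.sum()
mean = pmf @ levels                                  # first moment
var = pmf @ levels**2 - mean**2                      # second central moment
assert -1.0 <= mean <= 1.0 and var >= 0.0
```

The first and second moments computed this way are exactly the quantities a per-sample weighted beamformer needs: a point estimate and an uncertainty for each time step.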
The enhancement network is trained on simulated data only, which is generated in the same way as in [7]. The speech source, noise source and eight microphones are randomly placed in a randomly sized cubic room. The impulse response from each source to each microphone is generated using the image-source method [21, 22]. The noisy observations are generated according to Eq. (1). The reverberation time (in ms) and the energy ratio between the speech source and the noise source (in dB) are each uniformly randomly drawn from a fixed range. The speech content is drawn from VCTK [23], which contains 109 speakers. The noise content contains 90 minutes of audio drawn from [24, 25, 26]. The total duration of the training audio is 8 hours. The enhancement network is trained using the ADAM optimizer for 400,000 iterations.
The simulated data for evaluation are generated the same way as the training data, with two differences. First, the source energy ratio is set to four levels: -10 dB, 0 dB, 10 dB, and 20 dB. Second, both the speaker and the noise can be either seen or unseen in the training set, leading to four scenarios that test generalizability. It is worth highlighting that the unseen speaker utterances and unseen noise are drawn from corpora different from training, TIMIT [27] and FreeSFX [28] respectively. Each utterance is 3 seconds long. The total length of the dataset is 12 minutes.
DeepBeam is compared with GRAB [7], MVDR [29] (with clean speech given for voice activity detection), IVA [5] and the closest channel (CLOSEST), in terms of two criteria:

Signal-to-Noise Ratio (SNR): the energy ratio, in dB, of processed clean speech over processed noise.

Direct-to-Reverberant Ratio (DRR): the ratio, in dB, of the energy of the direct-path speech in the processed output over that of its reverberation. Direct path and reverberation are defined as the clean dry speech convolved with the peak portion and tail portion of the processed room impulse response, respectively; the peak portion is a short window around the highest peak, and the tail portion is everything beyond it.
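The two criteria can be sketched in a few lines. The peak-window length below (`peak_win_ms`) is a made-up placeholder, since the paper's exact window values are not reproduced here, and the impulse response is a toy example.

```python
import numpy as np

# Sketch of the two evaluation metrics: SNR and DRR.
def snr_db(proc_speech, proc_noise):
    """Energy ratio of processed clean speech over processed noise, in dB."""
    return 10 * np.log10(np.sum(proc_speech**2) / np.sum(proc_noise**2))

def drr_db(impulse_response, sr, peak_win_ms=8.0):
    """Direct-to-reverberant ratio: peak window vs. tail of the IR, in dB."""
    h = np.asarray(impulse_response, dtype=float)
    peak = np.argmax(np.abs(h))
    half = int(sr * peak_win_ms / 1000)
    lo, hi = max(0, peak - half), peak + half + 1
    direct = np.sum(h[lo:hi] ** 2)
    tail = np.sum(h[hi:] ** 2) + np.sum(h[:lo] ** 2)
    return 10 * np.log10(direct / tail)

# Toy check: an exponentially decaying IR with a dominant early peak
sr = 16000
h = 0.05 * np.exp(-np.arange(sr // 4) / 200.0)
h[10] = 1.0
assert drr_db(h, sr) > 0                 # direct path dominates this toy IR
assert abs(snr_db(np.ones(100), np.ones(100))) < 1e-9
```

Splitting the processed impulse response rather than the raw one is what lets DRR credit an algorithm for suppressing reverberant tails, not just noise.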
Table 1: SNR and DRR on the simulated evaluation data, at four source energy ratios.

| Metric | Algorithm | -10 dB | 0 dB | 10 dB | 20 dB |
|---|---|---|---|---|---|
| SNR (dB) | DeepBeam S1 | 18.5 | 22.0 | 26.5 | 28.4 |
| SNR (dB) | DeepBeam S2 | 17.1 | 20.3 | 25.9 | 27.4 |
| SNR (dB) | DeepBeam S3 | 15.3 | 19.5 | 24.1 | 27.6 |
| SNR (dB) | DeepBeam S4 | 14.1 | 19.0 | 23.1 | 28.5 |
| SNR (dB) | GRAB | 2.48 | 12.5 | 21.6 | 25.4 |
| SNR (dB) | CLOSEST | -5.13 | 3.38 | 14.9 | 24.8 |
| SNR (dB) | MVDR | 8.41 | 12.9 | 22.6 | 26.7 |
| SNR (dB) | IVA | 10.3 | 13.3 | 16.8 | 19.2 |
| DRR (dB) | DeepBeam S1 | 3.45 | 8.97 | 11.2 | 11.5 |
| DRR (dB) | DeepBeam S2 | 7.38 | 11.9 | 12.6 | 11.5 |
| DRR (dB) | DeepBeam S3 | 5.60 | 4.85 | 8.43 | 9.78 |
| DRR (dB) | DeepBeam S4 | 2.11 | 6.68 | 7.10 | 9.31 |
| DRR (dB) | GRAB | -0.83 | 1.70 | 3.63 | 3.68 |
| DRR (dB) | CLOSEST | 8.56 | 7.32 | 7.67 | 8.44 |
| DRR (dB) | MVDR | -2.17 | -3.47 | -3.42 | -4.13 |
| DRR (dB) | IVA | -8.92 | -8.77 | -8.81 | -8.99 |

S1: seen speaker, seen noise; S2: seen speaker, unseen noise; S3: unseen speaker, seen noise; S4: unseen speaker, unseen noise.
Table 1 shows the results. As expected, DeepBeam's performance drops from S1, where both noise and speaker are seen during training, to S4, where neither is seen. However, in terms of SNR, even DeepBeam S4 significantly outperforms MVDR, the benchmark in noise suppression. In terms of DRR, DeepBeam matches or surpasses CLOSEST except at -10 dB. GRAB performs worse than reported in [7], because each utterance is reduced from 10 seconds to 3 seconds, which is more realistic but more challenging. In short, of "cleanness" and "dryness", most algorithms can achieve only one, but DeepBeam achieves both with superior performance.
DeepBeam and the baselines are also evaluated on the real-world dataset introduced in [7], which consists of two utterances by two speakers mixed with five types of noise, all recorded in a real conference room using eight randomly positioned microphones. The source energy ratio is set such that the SNR for the closest microphone is 10 dB. The utterance in each scenario is around 1 minute long, so the total length of the dataset is around 10 minutes.
Besides SNR, a subjective test similar to [7] is performed on Amazon Mechanical Turk. Each utterance is broken into six sentences. In each test unit, called a HIT, a subject is presented with one sentence processed by each of the five algorithms, and asked to assign a mean opinion score (MOS) [30] to each of them. Each HIT is assigned to 10 subjects.
Table 2: SNR and MOS on the real-world data, by noise type.

| Metric | Algorithm | N1 | N2 | N3 | N4 | N5 |
|---|---|---|---|---|---|---|
| SNR (dB) | DeepBeam | 20.1 | 20.0 | 16.9 | 19.6 | 18.7 |
| SNR (dB) | GRAB | 18.9 | 17.4 | 12.4 | 18.5 | 17.4 |
| SNR (dB) | CLOSEST | 10.0 | 10.0 | 10.0 | 10.0 | 10.0 |
| SNR (dB) | MVDR | 10.8 | 16.5 | 7.72 | 14.0 | 13.4 |
| SNR (dB) | IVA | 11.7 | 9.74 | 6.83 | 12.4 | 15.9 |
| MOS | DeepBeam | 3.83 | 3.72 | 3.63 | 4.09 | 4.20 |
| MOS | GRAB | 3.10 | 3.06 | 2.93 | 3.71 | 3.45 |
| MOS | CLOSEST | 2.74 | 2.68 | 3.02 | 3.55 | 3.50 |
| MOS | MVDR | 2.05 | 2.40 | 2.28 | 2.71 | 2.62 |
| MOS | IVA | 1.73 | 2.03 | 1.75 | 1.78 | 2.08 |

N1: cell phone; N2: CombBind machine; N3: paper shuffle; N4: door slide; N5: footsteps.
Table 2 shows the results. As can be seen, DeepBeam outperforms the other algorithms by a large margin. In particular, DeepBeam achieves an MOS above 4 for some noise types. These results are remarkable, because DeepBeam is trained on simulated data only. The real-world data differ significantly from the simulated data in terms of speakers, noise types and recording environment. What's more, some microphones are contaminated by strong electric noise, which is not accounted for in Eq. (1). Still, DeepBeam handles all of these unexpected conditions: although neural networks are typically vulnerable to unseen scenarios, DeepBeam remains robust.
In order to empirically test the convergence of DeepBeam, 10 sets of eight-channel simulated data are generated with the S1 setting. To study different numbers of channels, in each sub-test, $C$ channels are randomly drawn from each set of data for DeepBeam prediction, and the resulting SNR convergence curves of the 10 sets are averaged. $C$ runs from 3 to 8.
Fig. 2 shows the averaged convergence curves. As can be seen, DeepBeam converges well in all the sub-tests, which supports the convergence discussion in section 3.3. Also, the more channels DeepBeam has, the higher the SNR level it converges to, which shows that DeepBeam can accommodate different numbers of channels using only one monaural network. The marginal benefit of each additional channel diminishes, however.
We have proposed DeepBeam as a solution to multi-channel speech enhancement with ad-hoc sensors. DeepBeam combines the complementary beamforming and deep learning techniques, and exhibits superior performance and generalizability in terms of noise suppression, reverberation cancellation and perceptual quality. DeepBeam comes one step closer to resolving the long-standing problems of low perceptual quality and poor generalizability in deep enhancement networks, and demonstrates the power of bridging the signal processing and deep learning fields.