
Deep Learning Based Speech Beamforming

Multi-channel speech enhancement with ad-hoc sensors has been a challenging task. Speech model guided beamforming algorithms are able to recover natural sounding speech, but the speech models tend to be oversimplified, or the inference would otherwise be too complicated. On the other hand, deep learning based enhancement approaches are able to learn complicated speech distributions and perform efficient inference, but they are unable to deal with a variable number of input channels. Also, deep learning approaches introduce significant errors, particularly in the presence of unseen noise types and settings. We have therefore proposed an enhancement framework called DeepBeam, which combines the two complementary classes of algorithms. DeepBeam introduces a beamforming filter to produce natural sounding speech, but the filter coefficients are determined with the help of a monaural speech enhancement neural network. Experiments on synthetic and real-world data show that DeepBeam is able to produce clean, dry and natural sounding speech, and is robust against unseen noise.



1 Introduction

Multi-channel speech enhancement with ad-hoc sensors has long been a challenging task [1]. As the traditional benchmark in multi-channel enhancement tasks, beamforming algorithms do not work well with ad-hoc microphones. This is because most beamformers need to calibrate the speaker location as well as the interference characteristics, so that they can steer their beams toward the speaker while suppressing the interference. However, neither of these two vital pieces of information can be accurately measured, due to the missing sensor position information and microphone heterogeneity [2].

Another class of beamforming algorithms avoids measuring the speaker position and interference. Instead, these algorithms introduce prior knowledge on speech, and find the optimal beamformer by maximizing "speechness" criteria, such as sample kurtosis [3], negentropy [4], speech prior distributions [5, 6], or the fit to the glottal residual [7]. In particular, the GRAB algorithm [7] is able to outperform the closest-microphone strategy even in very adverse real-world scenarios. Despite their success, these algorithms are bottlenecked by their oversimplified prior knowledge. For example, GRAB only models glottal energy, resulting in vocal tract ambiguity.

On the other hand, deep learning techniques are well known for their ability to capture complex probability dependencies and perform efficient inference, and thus have been widely used in single-channel speech enhancement tasks [8, 9, 10, 11, 12, 13]. Unfortunately, directly applying deep enhancement networks to multi-channel enhancement suffers from two difficulties. First, deep enhancement techniques often produce many artifacts and nonlinear distortions [11, 12], which are perceptually undesirable. Second, neural networks often generalize poorly to unseen noise and configurations, and in speech enhancement with ad-hoc sensors such variability is large.

It turns out that these problems can in turn be resolved by traditional beamforming. Therefore, several algorithms [14, 15, 16, 17, 18] have been proposed that apply deep learning to predict time-frequency masks, and then apply beamforming to produce the enhanced speech. However, these methods are confined to the frequency domain, which causes two problems for our application. First, they do not work well for ad-hoc microphones, because of spatial correlation estimation errors. Second, our application targets human listeners, but frequency-domain methods suffer from phase distortions and discontinuities, which impede perceptual quality.

Motivated by this observation, we have proposed an enhancement framework for ad-hoc microphones called DeepBeam, which combines deep learning and beamforming, and which operates directly on the waveform. DeepBeam introduces a time-domain beamforming filter to produce natural sounding speech, but the filter coefficients are iteratively determined with the help of WaveNet [19]. It can be shown that, despite the error-prone enhancement network, DeepBeam converges approximately to the optimal beamformer under some assumptions. Experiments on both simulated and real-world data show that DeepBeam is able to produce clean, dry and natural sounding speech, and generalizes well to various settings.

2 Problem Formulation

To formally define the problem, denote the clean speech signal as $x(t)$. Suppose there are $C$ channels of observed signals, $y_1(t), \dots, y_C(t)$, which are represented as

$$y_c(t) = h_c(t) * x(t) + g_c(t) * n(t), \quad c = 1, \dots, C, \tag{1}$$

where $*$ denotes discrete convolution, $n(t)$ denotes additive noise; $h_c(t)$ and $g_c(t)$ are the impulse responses of the signal reverberation and noise reverberation in the $c$-th channel respectively. Our goal is to design a $T$-tap beamformer $\{w_c(t)\}_{c=1}^{C}$, whose output is defined as

$$\hat{x}(t) = \sum_{c=1}^{C} w_c(t) * y_c(t). \tag{2}$$

For notational brevity, define

$$\boldsymbol{x} = [x(0), \dots, x(N-1)]^T, \quad \hat{\boldsymbol{x}} = [\hat{x}(0), \dots, \hat{x}(N-1)]^T, \quad \boldsymbol{y}_c = [y_c(0), \dots, y_c(N-1)]^T, \tag{3}$$

which are all random vectors. Also define convolutional matrices

$$\boldsymbol{Y}_c = \begin{bmatrix} y_c(0) & 0 & \cdots & 0 \\ y_c(1) & y_c(0) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ y_c(N-1) & y_c(N-2) & \cdots & y_c(N-T) \end{bmatrix}, \tag{4}$$

$$\boldsymbol{Y} = [\boldsymbol{Y}_1, \dots, \boldsymbol{Y}_C], \quad \boldsymbol{w} = [\boldsymbol{w}_1^T, \dots, \boldsymbol{w}_C^T]^T, \quad \boldsymbol{w}_c = [w_c(0), \dots, w_c(T-1)]^T. \tag{5}$$

With these notations, Eq. (2) can be simplified as

$$\hat{\boldsymbol{x}} = \boldsymbol{Y} \boldsymbol{w}. \tag{6}$$

The target of designing the beamformer is to minimize the weighted mean squared error (MSE):

$$\boldsymbol{w}^* = \operatorname*{argmin}_{\boldsymbol{w}} \, \mathbb{E}\left[ (\boldsymbol{x} - \boldsymbol{Y}\boldsymbol{w})^T \boldsymbol{\Lambda} (\boldsymbol{x} - \boldsymbol{Y}\boldsymbol{w}) \,\middle|\, \boldsymbol{y} \right], \tag{7}$$

where $\boldsymbol{y} = [\boldsymbol{y}_1^T, \dots, \boldsymbol{y}_C^T]^T$; $\boldsymbol{\Lambda}$ is a positive definite weight matrix, which, in our case, is a diagonal matrix determined by the posterior moments of the speech (Section 3.2).

Eq. (7) is a Wiener filtering problem [20], whose solution is

$$\boldsymbol{w}^* = (\boldsymbol{Y}^T \boldsymbol{\Lambda} \boldsymbol{Y})^{-1} \boldsymbol{Y}^T \boldsymbol{\Lambda} \, \mathbb{E}[\boldsymbol{x} \mid \boldsymbol{y}], \qquad \hat{\boldsymbol{x}}^* = \boldsymbol{Y}\boldsymbol{w}^* = \boldsymbol{P} \, \mathbb{E}[\boldsymbol{x} \mid \boldsymbol{y}], \tag{8}$$

where

$$\boldsymbol{P} = \boldsymbol{Y} (\boldsymbol{Y}^T \boldsymbol{\Lambda} \boldsymbol{Y})^{-1} \boldsymbol{Y}^T \boldsymbol{\Lambda} \tag{9}$$

is in fact the projection matrix onto the beamforming output space. So by Eq. (8), $\hat{\boldsymbol{x}}^*$ is essentially the projection of $\mathbb{E}[\boldsymbol{x} \mid \boldsymbol{y}]$ onto the space that is representable by the beamforming filter.
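To make Eq. (8) concrete, the following is a minimal numpy sketch of the weighted least-squares solve. The helper names are hypothetical, a toy posterior mean stands in for the true posterior expectation, and the weight matrix is taken as identity:

```python
import numpy as np

def conv_matrix(y, n_taps):
    """N x T Toeplitz matrix so that conv_matrix(y, T) @ w == np.convolve(y, w)[:N]."""
    N = len(y)
    Y = np.zeros((N, n_taps))
    for tau in range(n_taps):
        Y[tau:, tau] = y[:N - tau]
    return Y

def wiener_beamformer(ys, x_post, weights, n_taps):
    """Weighted Wiener solution of Eq. (8).

    ys      : list of C single-channel observations, each of length N
    x_post  : estimate of the posterior mean E[x | y], length N
    weights : diagonal of the positive definite weight matrix, length N
    """
    Y = np.hstack([conv_matrix(y, n_taps) for y in ys])   # N x (C*T)
    YtL = Y.T * weights                                   # Y^T Lambda (Lambda diagonal)
    w = np.linalg.solve(YtL @ Y, YtL @ x_post)            # beamformer coefficients
    return w, Y @ w                                       # output is the projection Y w

# Toy check: when one channel is exactly the target, the target lies in the
# beamforming output space, so the projection reproduces it.
rng = np.random.default_rng(0)
x = rng.standard_normal(200)
ys = [x, x + 0.5 * rng.standard_normal(200)]
w, x_hat = wiener_beamformer(ys, x, np.ones(200), n_taps=8)
```

The solve is an ordinary weighted least-squares problem once the posterior mean is available; all of the modeling difficulty is in estimating that mean.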

As shown by Eq. (8), solving the Wiener filtering problem requires computing $\mathbb{E}[\boldsymbol{x} \mid \boldsymbol{y}]$, which, due to the complex probabilistic dependencies, we would like to learn with a deep neural network. However, as discussed, training a neural network to directly predict $\mathbb{E}[\boldsymbol{x} \mid \boldsymbol{y}]$ from the multi-channel input suffers from inflexible input dimensions, artifacts and poor generalization. DeepBeam tries to resolve these problems and find an approximate solution.

3 The DeepBeam Framework

In this section, we describe the DeepBeam algorithm. We first outline the algorithm, and then describe the neural network structure it uses. Finally, a convergence analysis is presented.

3.1 The Algorithm Overview

As mentioned, DeepBeam introduces a deep enhancement network to learn the posterior expectation, while addressing its limitations. First, DeepBeam is regularized by the beamformer to generalize well to unseen noise and microphone configurations. Second, it tolerates the distortions and artifacts generated by the neural network. Formally, the neural network outputs an inaccurate prediction of the posterior expectation:

$$\boldsymbol{u} = \mathbb{E}[\boldsymbol{x} \mid \boldsymbol{y}_c] + \boldsymbol{\epsilon}, \tag{10}$$

where $\boldsymbol{y}_c$ is a single-channel noisy observation, and $\boldsymbol{\epsilon}$ is the prediction error. The goal of DeepBeam is to approximate the optimal beamformer given the inaccurate enhancement network. Alg. 1 describes the DeepBeam algorithm. A diagram of the DeepBeam framework is shown in Fig. 1.

Figure 1: DeepBeam framework.

Input: multi-channel noisy speech observations $\boldsymbol{y}_1, \dots, \boldsymbol{y}_C$; a neural network that predicts $\mathbb{E}[\boldsymbol{x} \mid \boldsymbol{y}_c] + \boldsymbol{\epsilon}$ (Eq. (10)) from any single-channel noisy observation.
Output: beamformer output $\hat{\boldsymbol{x}}^{(K)}$.

1:  Find the 'cleanest' channel $c_0$ by finding the channel that has the smallest 0.4 quantile of its squared sample points.
2:  Set $\hat{\boldsymbol{x}}^{(0)} = \boldsymbol{y}_{c_0}$.
3:  for $k = 1$ to the maximum number of iterations $K$ do
4:     Feed $\hat{\boldsymbol{x}}^{(k-1)}$ to the monaural enhancement network, and obtain its output
       $$\boldsymbol{u}^{(k)} = \mathbb{E}[\boldsymbol{x} \mid \hat{\boldsymbol{x}}^{(k-1)}] + \boldsymbol{\epsilon}^{(k)} \tag{11}$$
5:     Update the beamformer coefficients and output
       $$\boldsymbol{w}^{(k)} = (\boldsymbol{Y}^T \boldsymbol{\Lambda} \boldsymbol{Y})^{-1} \boldsymbol{Y}^T \boldsymbol{\Lambda} \boldsymbol{u}^{(k)}, \qquad \hat{\boldsymbol{x}}^{(k)} = \boldsymbol{Y} \boldsymbol{w}^{(k)} = \boldsymbol{P} \boldsymbol{u}^{(k)} \tag{12}$$
6:  end for
7:  return $\hat{\boldsymbol{x}}^{(K)}$
Algorithm 1: The DeepBeam algorithm.

Alg. 1 essentially alternates between taking the posterior expectation and projecting it. It will be shown in Section 3.3 that, as long as the error term is not too large, this iteration approximately converges to the optimal beamformer output.

An elegant property of DeepBeam is that $\hat{\boldsymbol{x}}^{(k)}$ can be regarded as a noisy observation, which shares some statistical structure with the true noisy observations $\boldsymbol{y}_1, \dots, \boldsymbol{y}_C$. To see this, notice that by Eq. (12), $\hat{\boldsymbol{x}}^{(k)}$ is the output of a beamformer applied to $\boldsymbol{y}_1, \dots, \boldsymbol{y}_C$. Therefore, it can be shown that $\hat{\boldsymbol{x}}^{(k)}$ also takes the form of Eq. (1), with the same speech and noise source, but with different impulse responses. This justifies using one monaural enhancement network to handle all the $\hat{\boldsymbol{x}}^{(k)}$.
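The whole loop of Alg. 1 can be sketched in a few lines of numpy. Here `enhance` is a stand-in for the monaural network (in this toy example, a function that pulls its input halfway toward the clean speech), and the weight matrix is taken as identity for brevity:

```python
import numpy as np

def conv_matrix(y, n_taps):
    """N x T Toeplitz matrix implementing convolution with an n_taps filter."""
    N = len(y)
    Y = np.zeros((N, n_taps))
    for tau in range(n_taps):
        Y[tau:, tau] = y[:N - tau]
    return Y

def deepbeam(ys, enhance, n_taps=8, n_iters=5):
    """Sketch of Alg. 1. `enhance` maps one noisy channel to an estimate of E[x | channel]."""
    # Step 1: pick the 'cleanest' channel -- smallest 0.4 quantile of squared samples.
    c0 = int(np.argmin([np.quantile(y ** 2, 0.4) for y in ys]))
    x_hat = ys[c0]
    Y = np.hstack([conv_matrix(y, n_taps) for y in ys])
    for _ in range(n_iters):
        u = enhance(x_hat)                      # Eq. (11): posterior mean plus error
        w = np.linalg.solve(Y.T @ Y, Y.T @ u)   # Eq. (12), with identity weights
        x_hat = Y @ w                           # project u onto the beamforming space
    return x_hat

rng = np.random.default_rng(1)
x = rng.standard_normal(300)
ys = [x + 0.3 * rng.standard_normal(300) for _ in range(4)]
out = deepbeam(ys, enhance=lambda y: 0.5 * y + 0.5 * x)  # toy 'network' that halves the error
```

Even with this crude stand-in for the network, the projection step suppresses the part of the enhancement error that is not representable by any beamformer, so the output ends up closer to the clean speech than any single input channel.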

3.2 Enhancement Network Structure

DeepBeam is a general framework, in which the choice of the neural network structure is not fixed. The following network structure is just one choice that produces competitive results.

The enhancement network applied here is similar to [12], which is inspired by WaveNet [19]. Formally, denote the quantized speech samples as $\tilde{x}(t)$, and the samples of $\hat{\boldsymbol{x}}^{(k)}$ as $\hat{x}^{(k)}(t)$. Then the enhancement network predicts the posterior probability mass function (PMF) of each sample:

$$p\left( \tilde{x}(t) \,\middle|\, \hat{x}^{(k)}(t-R), \dots, \hat{x}^{(k)}(t+R) \right). \tag{13}$$

Here we have restricted the probabilistic dependency to span $2R+1$ time steps. Cross-entropy is applied as the loss function.

Similar to WaveNet, the enhancement network consists of two modules. The first module, called the dilated convolution module, contains a stack of dilated convolutional layers with residual connections and skip outputs. The second module, called the post processing module, sums all the skip outputs and feeds them into a stack of fully connected layers before producing the final output.

There are two major differences from the standard WaveNet structure. First, the input to the enhancement network is the noisy observation waveform instead of the clean speech. Second, to account for future dependencies, the convolutional layers are noncausal instead of causal.
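The noncausal variant amounts to padding symmetrically rather than only on the past side, so each output sample sees both past and future context. The helper below is an illustrative single-channel sketch, not the actual network layer:

```python
import numpy as np

def noncausal_dilated_conv(x, kernel, dilation):
    """1-D dilated convolution with symmetric (noncausal) zero padding:
    output sample t depends on x[t - d*(k-1)/2 .. t + d*(k-1)/2]."""
    k = len(kernel)
    half = dilation * (k - 1) // 2
    xp = np.pad(x, (half, half))
    return sum(kernel[i] * xp[i * dilation : i * dilation + len(x)] for i in range(k))

x = np.arange(8, dtype=float)
# Difference of samples two steps in the past and two steps in the future.
y = noncausal_dilated_conv(x, np.array([1.0, 0.0, -1.0]), dilation=2)
# y == [-2., -3., -4., -4., -4., -4., 4., 5.]
```

A causal layer would instead pad only on the left by `dilation * (k - 1)` samples, so that each output depends on the past alone.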

After the posterior distribution is predicted, the posterior moments, $\mathbb{E}[\tilde{x}(t) \mid \cdot]$ and $\mathbb{E}[\tilde{x}^2(t) \mid \cdot]$ (the latter for computing $\boldsymbol{\Lambda}$), are computed as the moments of the predicted PMF.
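Computing these moments from the predicted PMF is a pair of dot products with the quantization bin centers; a small sketch with toy probabilities:

```python
import numpy as np

def posterior_moments(pmf, bin_centers):
    """First and second posterior moments of each sample from the predicted PMF.
    pmf: (T, Q) array of per-sample probabilities over Q quantization levels."""
    mean = pmf @ bin_centers
    second = pmf @ bin_centers ** 2
    return mean, second  # second - mean**2 gives the posterior variance

centers = np.array([-1.0, 0.0, 1.0])
pmf = np.array([[0.25, 0.5, 0.25],   # symmetric: mean 0, second moment 0.5
                [0.0, 0.0, 1.0]])    # certain: mean 1, second moment 1
m, s = posterior_moments(pmf, centers)
```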

3.3 Convergence Analysis

In order to analyze the convergence property of DeepBeam, we assume the following bound on the error term $\boldsymbol{\epsilon}^{(k)}$:

$$\left\| \boldsymbol{P} \boldsymbol{\epsilon}^{(k)} \right\|_{\boldsymbol{\Lambda}} \le \alpha \left\| \hat{\boldsymbol{x}}^{(k-1)} - \boldsymbol{x} \right\|_{\boldsymbol{\Lambda}}, \quad 0 \le \alpha < 1, \tag{14}$$

where $\|\boldsymbol{v}\|_{\boldsymbol{\Lambda}} = \sqrt{\mathbb{E}[\boldsymbol{v}^T \boldsymbol{\Lambda} \boldsymbol{v}]}$ denotes the weighted norm and $\alpha$ is some constant. This assumption is actually not very stringent, because it does not bound the weighted norm of $\boldsymbol{\epsilon}^{(k)}$ itself, but that of its projected value $\boldsymbol{P}\boldsymbol{\epsilon}^{(k)}$. In fact, the projection can drastically reduce the weighted norm of the error term. For example, most of the artifacts and nonlinear distortions that the enhancement network introduces cannot possibly be generated by beamforming on $\boldsymbol{y}_1, \dots, \boldsymbol{y}_C$, and therefore will be removed by the projection. The only errors that are likely to remain are residual noise and reverberation. This is one advantage of combining the beamforming filter with the neural network. The assumption is also intuitive: it means that the projected output error is always smaller than the input error.

Then, we have the following theorem.

Theorem 1.

Suppose Eq. (14) holds. Then

$$\limsup_{k \to \infty} \left\| \hat{\boldsymbol{x}}^{(k)} - \hat{\boldsymbol{x}}^* \right\|_{\boldsymbol{\Lambda}} \le \frac{\delta}{1 - \alpha}, \tag{15}$$

where

$$\delta = \alpha \left\| \hat{\boldsymbol{x}}^* - \boldsymbol{x} \right\|_{\boldsymbol{\Lambda}} + \sup_k \left\| \mathbb{E}[\boldsymbol{x} \mid \hat{\boldsymbol{x}}^{(k)}] - \mathbb{E}[\boldsymbol{x} \mid \boldsymbol{y}] \right\|_{\boldsymbol{\Lambda}}. \tag{16}$$

On one hand, from Eqs. (11) and (12),

$$\hat{\boldsymbol{x}}^{(k)} - \hat{\boldsymbol{x}}^* = \boldsymbol{P} \left( \mathbb{E}[\boldsymbol{x} \mid \hat{\boldsymbol{x}}^{(k-1)}] - \mathbb{E}[\boldsymbol{x} \mid \boldsymbol{y}] \right) + \boldsymbol{P} \boldsymbol{\epsilon}^{(k)}. \tag{17}$$

On the other hand, by the orthogonality principle, $\boldsymbol{P}$ is an orthogonal projection under the weighted inner product, and is therefore non-expansive:

$$\left\| \boldsymbol{P} \boldsymbol{v} \right\|_{\boldsymbol{\Lambda}} \le \left\| \boldsymbol{v} \right\|_{\boldsymbol{\Lambda}} \quad \text{for any } \boldsymbol{v}. \tag{18}$$

Combining Eqs. (14), (17) and (18), we have

$$\left\| \hat{\boldsymbol{x}}^{(k)} - \hat{\boldsymbol{x}}^* \right\|_{\boldsymbol{\Lambda}} \le \alpha \left\| \hat{\boldsymbol{x}}^{(k-1)} - \hat{\boldsymbol{x}}^* \right\|_{\boldsymbol{\Lambda}} + \delta. \tag{19}$$

Create an auxiliary sequence

$$a_k = \alpha a_{k-1} + \delta, \quad a_0 = \left\| \hat{\boldsymbol{x}}^{(0)} - \hat{\boldsymbol{x}}^* \right\|_{\boldsymbol{\Lambda}}, \tag{20}$$

whose closed form is $a_k = \alpha^k a_0 + \delta (1 - \alpha^k)/(1 - \alpha)$. Then by Eq. (19),

$$\left\| \hat{\boldsymbol{x}}^{(k)} - \hat{\boldsymbol{x}}^* \right\|_{\boldsymbol{\Lambda}} \le a_k. \tag{21}$$

Taking $\limsup_{k \to \infty}$ on both sides of Eq. (21) concludes the proof. ∎

If $\delta = 0$, then Eq. (15) implies mean square convergence to the optimal beamformer output. In practice, $\delta$ is nonzero, but it tends to be very small. The first term of $\delta$ measures the distance between the optimal beamformer output and the true speech. According to our empirical study, when the number of channels is sufficient, the optimal beamformer is able to recover the true speech very well, so the first term is small. The second term of $\delta$ measures the distance between two posterior expectations, $\mathbb{E}[\boldsymbol{x} \mid \hat{\boldsymbol{x}}^{(k)}]$ and $\mathbb{E}[\boldsymbol{x} \mid \boldsymbol{y}]$. The former is conditioned on single-channel noisy speech, and the latter on multi-channel noisy speech. Considering that the speech sample space is highly structured, and that the noisy speech $\hat{\boldsymbol{x}}^{(k)}$ is relatively clean already, both posterior expectations should be close to the true speech, and thereby close to each other. In a nutshell, with a small $\delta$, the DeepBeam prediction is highly accurate. Section 4.4 will verify the convergence behavior of DeepBeam empirically.
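The geometric recursion behind the proof is easy to check numerically; the constants below are arbitrary:

```python
# The recursion of Eq. (20): a_k = alpha * a_{k-1} + delta.
# For alpha < 1 it contracts geometrically to the fixed point delta / (1 - alpha),
# regardless of the starting value a_0.
alpha, delta = 0.6, 0.05
a = 10.0
for _ in range(100):
    a = alpha * a + delta
limit = delta / (1 - alpha)  # = 0.125 for these constants
```

After 100 iterations the remaining gap is on the order of `alpha ** 100`, i.e. numerically zero.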

4 Experiments

This section first introduces how the enhancement network is configured and trained, and then presents the results of experiments on both simulated and real-world data. Audio samples are available online.

4.1 Enhancement Network Configurations

The enhancement network hyperparameter configurations follow [19]. It has 4 blocks of 10 dilated convolution layers each. There are two post-processing layers. The hidden node dimension is 32, and the skip node dimension is 256. The clean speech is quantized into 256 levels via μ-law companding, and thus the output dimension is 256. The activation function in the dilated convolutional layers is the gated activation unit; that in the post-processing layers is the ReLU function. The output activation is softmax.
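The μ-law companding and 256-level quantization can be sketched as follows; `mu_law_quantize` and `mu_law_expand` are illustrative helper names, not the paper's code:

```python
import numpy as np

def mu_law_quantize(x, q=256, mu=255.0):
    """mu-law compand x in [-1, 1] and quantize to q discrete levels."""
    companded = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)
    return np.clip(((companded + 1) / 2 * (q - 1)).round(), 0, q - 1).astype(int)

def mu_law_expand(idx, q=256, mu=255.0):
    """Inverse: map level indices back to amplitudes in [-1, 1]."""
    companded = 2 * idx / (q - 1) - 1
    return np.sign(companded) * np.expm1(np.abs(companded) * np.log1p(mu)) / mu

x = np.linspace(-1, 1, 101)
err = np.max(np.abs(mu_law_expand(mu_law_quantize(x)) - x))
```

The companding spends more levels near zero, where speech amplitudes concentrate, so the worst-case round-trip error stays small even with only 256 levels.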

The enhancement network is trained on simulated data only, which is generated in the same way as in [7]. The speech source, noise source and eight microphones are randomly placed in a randomly sized cubic room. The impulse response from each source to each microphone is generated using the image-source method [21, 22]. The noisy observations are generated according to Eq. (1). The reverberation time is uniformly randomly drawn from [, ] ms. The energy ratio between the speech source and noise source is uniformly randomly drawn from [, ] dB. The speech content is drawn from VCTK [23], which contains 109 speakers. The noise content contains 90 minutes of audio drawn from [24, 25, 26]. The total duration of the training audio is 8 hours. The enhancement network is trained using the Adam optimizer for 400,000 iterations.

4.2 Simulated Data Evaluation

The simulated data for evaluation is generated in the same way as the training data, except for two differences. First, the source energy ratio is set to four levels: -10 dB, 0 dB, 10 dB, and 20 dB. Second, both the speaker and the noise can be either seen or unseen in the training set, leading to four different scenarios that test generalizability. It is worth highlighting that the unseen speaker utterances and unseen noise are drawn from corpora different from the training ones, TIMIT [27] and FreeSFX [28] respectively. Each utterance is 3 seconds in length. The total length of the dataset is 12 minutes.

DeepBeam is compared with GRAB [7], MVDR [29] (with clean speech given for voice activity detection), IVA [5] and the closest channel (CLOSEST), in terms of two criteria:

Signal-to-Noise Ratio (SNR): the energy ratio of the processed clean speech over the processed noise, in dB.

Direct-to-Reverberant Ratio (DRR): the ratio of the energy of the direct-path speech in the processed output over that of its reverberation, in dB. The direct path and reverberation are defined as the clean dry speech convolved with the peak portion and the tail portion of the processed room impulse response, respectively. The peak portion is a short window around the highest peak; the tail portion is everything beyond it.
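Both criteria reduce to energy ratios; a sketch follows, in which `peak_win_ms` is a placeholder window length, since the exact peak/tail split is not restated here:

```python
import numpy as np

def snr_db(processed_speech, processed_noise):
    """SNR: energy ratio of processed clean speech over processed noise, in dB."""
    return 10 * np.log10(np.sum(processed_speech ** 2) / np.sum(processed_noise ** 2))

def drr_db(rir, fs, peak_win_ms=8.0):
    """DRR sketch: energy of the RIR within peak_win_ms of its highest peak
    (the 'direct path') over the energy of the tail beyond that window."""
    peak = int(np.argmax(np.abs(rir)))
    half = int(fs * peak_win_ms / 1000)
    lo, hi = max(0, peak - half), peak + half + 1
    return 10 * np.log10(np.sum(rir[lo:hi] ** 2) / np.sum(rir[hi:] ** 2))

# Toy check: speech 10x the noise energy gives 20 dB; a unit peak with a
# tail holding one tenth of its energy gives 10 dB.
s, n = np.ones(1000), 0.1 * np.ones(1000)
rir = np.zeros(100)
rir[10] = 1.0
rir[50:60] = 0.1
snr_val = snr_db(s, n)            # 20 dB
drr_val = drr_db(rir, fs=1000)    # 10 dB
```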

Source energy ratio (dB)    -10     0     10     20
SNR (dB)  DeepBeam S1      18.5   22.0   26.5   28.4
          DeepBeam S2      17.1   20.3   25.9   27.4
          DeepBeam S3      15.3   19.5   24.1   27.6
          DeepBeam S4      14.1   19.0   23.1   28.5
          GRAB              2.48  12.5   21.6   25.4
          CLOSEST          -5.13   3.38  14.9   24.8
          MVDR              8.41  12.9   22.6   26.7
          IVA              10.3   13.3   16.8   19.2
DRR (dB)  DeepBeam S1       3.45   8.97  11.2   11.5
          DeepBeam S2       7.38  11.9   12.6   11.5
          DeepBeam S3       5.60   4.85   8.43   9.78
          DeepBeam S4       2.11   6.68   7.10   9.31
          GRAB             -0.83   1.70   3.63   3.68
          CLOSEST           8.56   7.32   7.67   8.44
          MVDR             -2.17  -3.47  -3.42  -4.13
          IVA              -8.92  -8.77  -8.81  -8.99

S1: seen speaker, seen noise;      S2: seen speaker, unseen noise;
S3: unseen speaker, seen noise;    S4: unseen speaker, unseen noise.

Table 1: Simulated Data Evaluation Results.

Table 1 shows the results. As expected, DeepBeam’s performance drops from S1, where both the noise and the speaker are seen during training, to S4, where neither is seen. However, in terms of SNR, even DeepBeam S4 significantly outperforms MVDR, which is the benchmark in noise suppression. In terms of DRR, DeepBeam matches or surpasses CLOSEST except at -10 dB. GRAB performs worse than in [7], because each utterance is reduced from 10 seconds to 3 seconds, which is more realistic but also more challenging. In short, of “cleanness” and “dryness”, most algorithms can achieve only one, but DeepBeam achieves both with superior performance.

4.3 Real-world Data Evaluation

DeepBeam and the baselines are also evaluated on the real-world dataset introduced in [7], which consists of two utterances by two speakers mixed with five types of noises, all recorded in a real conference room using eight randomly positioned microphones. The source energy ratio is set such that the SNR for the closest microphone is 10 dB. The utterance in each scenario is around 1 minute long, so the total length of the dataset is around 10 minutes.

Besides SNR, a subjective test similar to that of [7] is performed on Amazon Mechanical Turk. Each utterance is broken into six sentences. In each test unit, called a HIT, a subject is presented with one sentence processed by each of the five algorithms, and asked to assign a mean opinion score (MOS) [30] to each of them. Each HIT is assigned to 10 subjects.

Noise Type          N1     N2     N3     N4     N5
SNR (dB)  DeepBeam  20.1   20.0   16.9   19.6   18.7
          GRAB      18.9   17.4   12.4   18.5   17.4
          CLOSEST   10.0   10.0   10.0   10.0   10.0
          MVDR      10.8   16.5    7.72  14.0   13.4
          IVA       11.7    9.74   6.83  12.4   15.9
MOS       DeepBeam   3.83   3.72   3.63   4.09   4.20
          GRAB       3.10   3.06   2.93   3.71   3.45
          CLOSEST    2.74   2.68   3.02   3.55   3.50
          MVDR       2.05   2.40   2.28   2.71   2.62
          IVA        1.73   2.03   1.75   1.78   2.08

N1: cell phone;    N2: CombBind machine;    N3: paper shuffle;
N4: door slide;    N5: footsteps.

Table 2: Real-world Data Evaluation Results.

Table 2 shows the results. As can be seen, DeepBeam outperforms the other algorithms by a large margin. In particular, DeepBeam achieves an MOS above 4 for some noise types. These results are remarkable, because DeepBeam is trained on simulated data only. The real-world data differ significantly from the simulated data in terms of speakers, noise types and recording environment. What is more, some microphones are contaminated by strong electric noise, which is not accounted for in Eq. (1). Still, DeepBeam copes with all of these unexpected conditions. Neural networks have typically been vulnerable to unseen scenarios, but DeepBeam makes them robust.

4.4 Empirical Convergence Analysis

Figure 2: SNR convergence curves with different numbers of channels.

In order to empirically test whether DeepBeam converges well, 10 sets of eight-channel simulated data are generated with the S1 setting. To study different numbers of channels, in each sub-test, $C$ channels are randomly drawn from each set of data for DeepBeam prediction, and the resulting SNR convergence curves of the 10 sets are averaged. $C$ runs from 3 to 8.

Fig. 2 shows all the averaged convergence curves. As can be seen, DeepBeam converges well in all the sub-tests, which supports the convergence analysis in Section 3.3. Also, the more channels DeepBeam has, the higher the convergence level it reaches, which shows that DeepBeam is able to accommodate different numbers of channels using only one monaural network. We also see that the marginal benefit of each additional channel diminishes.

5 Conclusion

We have proposed DeepBeam as a solution to multi-channel speech enhancement with ad-hoc sensors. DeepBeam combines the complementary beamforming and deep learning techniques, and exhibits superior performance and generalizability in terms of noise suppression, reverberation cancellation and perceptual quality. DeepBeam moves one step closer to resolving the long-standing problems of low perceptual quality and poor generalizability in deep enhancement networks, and demonstrates the power of bridging the signal processing and deep learning fields.


  • [1] Michael Brandstein and Darren Ward, Microphone arrays: signal processing techniques and applications, Springer Science & Business Media, 2013.
  • [2] Shmulik Markovich-Golan, Alexander Bertrand, Marc Moonen, and Sharon Gannot, “Optimal distributed minimum-variance beamforming approaches for speech enhancement in wireless acoustic sensor networks,” Signal Processing, vol. 107, pp. 4–20, 2015.
  • [3] Bradford W Gillespie, Henrique S Malvar, and Dinei AF Florêncio, “Speech dereverberation via maximum-kurtosis subband adaptive filtering,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2001, vol. 6, pp. 3701–3704.
  • [4] Kenichi Kumatani, John McDonough, Barbara Rauch, Dietrich Klakow, Philip N Garner, and Weifeng Li, “Beamforming with a maximum negentropy criterion,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 17, no. 5, pp. 994–1008, 2009.
  • [5] Taesu Kim, Hagai T Attias, Soo-Young Lee, and Te-Won Lee, “Blind source separation exploiting higher-order frequency dependencies,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 15, no. 1, pp. 70–79, 2007.
  • [6] Daichi Kitamura, Nobutaka Ono, Hiroshi Sawada, Hirokazu Kameoka, and Hiroshi Saruwatari, “Determined blind source separation unifying independent vector analysis and nonnegative matrix factorization,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 24, no. 9, pp. 1626–1641, 2016.
  • [7] Yang Zhang, Dinei Florêncio, and Mark Hasegawa-Johnson, “Glottal model based speech beamforming for ad-hoc microphone arrays,” INTERSPEECH, pp. 2675–2679, 2017.
  • [8] Jitong Chen and Deliang Wang, “Long short-term memory for speaker generalization in supervised speech separation,” in INTERSPEECH, 2016, pp. 3314–3318.
  • [9] Po-Sen Huang, Minje Kim, Mark Hasegawa-Johnson, and Paris Smaragdis, “Deep learning for monaural speech separation,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2014, pp. 1562–1566.
  • [10] Felix Weninger, John R Hershey, Jonathan Le Roux, and Björn Schuller, “Discriminatively trained recurrent neural networks for single-channel speech separation,” in IEEE Global Conference on Signal and Information Processing (GlobalSIP), 2014, pp. 577–581.
  • [11] Kaizhi Qian, Yang Zhang, Shiyu Chang, Xuesong Yang, Dinei Florêncio, and Mark Hasegawa-Johnson, “Speech enhancement using Bayesian Wavenet,” INTERSPEECH, pp. 2013–2017, 2017.
  • [12] Dario Rethage, Jordi Pons, and Xavier Serra, “A Wavenet for speech denoising,” arXiv preprint arXiv:1706.07162, 2017.
  • [13] Santiago Pascual, Antonio Bonafonte, and Joan Serrà, “SEGAN: Speech enhancement generative adversarial network,” arXiv preprint arXiv:1703.09452, 2017.
  • [14] Jahn Heymann, Lukas Drude, and Reinhold Haeb-Umbach, “Neural network based spectral mask estimation for acoustic beamforming,” in Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on. IEEE, 2016, pp. 196–200.
  • [15] Hakan Erdogan, John R Hershey, Shinji Watanabe, Michael I Mandel, and Jonathan Le Roux, “Improved mvdr beamforming using single-channel mask prediction networks.,” in INTERSPEECH, 2016, pp. 1981–1985.
  • [16] Xiong Xiao, Shengkui Zhao, Douglas L Jones, Eng Siong Chng, and Haizhou Li, “On time-frequency mask estimation for mvdr beamforming with application in robust speech recognition,” in Acoustics, Speech and Signal Processing (ICASSP), 2017 IEEE International Conference on. IEEE, 2017, pp. 3246–3250.
  • [17] Xueliang Zhang, Zhong-Qiu Wang, and DeLiang Wang, “A speech enhancement algorithm by iterating single-and multi-microphone processing and its application to robust asr,” in Acoustics, Speech and Signal Processing (ICASSP), 2017 IEEE International Conference on. IEEE, 2017, pp. 276–280.
  • [18] Lukas Pfeifenberger, Matthias Zöhrer, and Franz Pernkopf, “DNN-based speech mask estimation for eigenvector beamforming,” in Acoustics, Speech and Signal Processing (ICASSP), 2017 IEEE International Conference on. IEEE, 2017, pp. 66–70.
  • [19] Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu, “WaveNet: A generative model for raw audio,” arXiv preprint arXiv:1609.03499, 2016.
  • [20] Norbert Wiener, Extrapolation, Interpolation, and Smoothing of Stationary Time Series, vol. 7, MIT Press, Cambridge, MA, 1949.
  • [21] Jont B Allen and David A Berkley, “Image method for efficiently simulating small-room acoustics,” The Journal of the Acoustical Society of America, vol. 65, no. 4, pp. 943–950, 1979.
  • [22] Eric A Lehmann and Anders M Johansson, “Diffuse reverberation model for efficient image-source simulation of room impulse responses,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 18, no. 6, pp. 1429–1439, 2010.
  • [23] Junichi Yamagishi, “English multi-speaker corpus for CSTR voice cloning toolkit.”
  • [24] Anurag Kumar and Dinei Florêncio, “Speech enhancement in multiple-noise conditions using deep neural networks,” INTERSPEECH, 2016.
  • [25] “Freesound,” 2015.
  • [26] Guoning Hu, “100 nonspeech sounds,” 2015.
  • [27] John S Garofolo, Lori F Lamel, William M Fisher, Jonathon G Fiscus, and David S Pallett, “DARPA TIMIT acoustic-phonetic continuous speech corpus CD-ROM. NIST speech disc 1-1.1,” NASA STI/Recon technical report n, vol. 93, 1993.
  • [28] “FreeSFX,” 2017.
  • [29] Lloyd Griffiths and CW Jim, “An alternative approach to linearly constrained adaptive beamforming,” IEEE Transactions on antennas and propagation, vol. 30, no. 1, pp. 27–34, 1982.
  • [30] Flávio Ribeiro, Dinei Florêncio, Cha Zhang, and Michael Seltzer, “CrowdMOS: An approach for crowdsourcing mean opinion score studies,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2011, pp. 2416–2419.