Investigation of Independent Monaural Front-End Processing for Robust ASR without Retraining and Joint-Training

10/22/2018 · Zhihao Du et al. · Harbin Institute of Technology

In recent years, monaural speech separation has been formulated as a supervised learning problem, which has been systematically studied and shown to yield dramatic improvements in speech intelligibility and quality for human listeners. However, it has not been well investigated whether these methods can serve as front-end processing that directly improves the performance of a machine listener, i.e., an automatic speech recognizer (ASR), without retraining or jointly training the acoustic model. In this paper, we explore the effectiveness of independent front-end processing for multi-condition trained ASR on the CHiME-3 challenge. We find that directly feeding the enhanced features to the recognizer yields 36.40% and 11.78% relative WER reductions for the GMM-based and DNN-based ASR, respectively. We also investigate the effect of the noisy phase and the generalization ability under unmatched noise conditions.


1 Introduction

Monaural speech separation aims to separate speech from noisy backgrounds using a single microphone. In recent years, speech separation has been formulated as a supervised learning problem, and thanks to the rise of deep learning, supervised speech separation has made significant progress [1].

Speech separation for improving human speech intelligibility and quality has been systematically evaluated and successfully deployed. In general, speech separation methods can be divided into three groups: masking-based methods, mapping-based methods, and signal approximation. Masking-based methods predict a mask computed from the premixed noise and clean speech, e.g., the ideal ratio mask [2], the phase-sensitive mask [3], and the complex ratio mask [4]. Mapping-based methods enhance speech by learning a mapping function from the noisy features to the spectrum of the clean speech [5]. The idea of signal approximation (SA) is to train a ratio mask estimator that minimizes the difference between the spectral magnitude of the clean speech and that of the estimated speech [6]. Many learning machines have been introduced for speech separation. In [2, 4], deep neural networks (DNNs) are employed to predict ideal masks. Lu et al. used a deep denoising auto-encoder (DDAE) to obtain a clean Mel-frequency power spectrogram (fbank) from a noisy one [7], and convolutional neural networks (CNNs) have been introduced in [8, 9]. Besides feed-forward networks, recurrent neural networks (RNNs) have also become a popular choice in the speech separation community [3]. As for features, Wang et al. proposed a complementary feature set [10], and Chen et al. found that the multi-resolution cochleagram is a better feature under low signal-to-noise-ratio conditions [11].

Compared with human listeners, ASR is more sensitive to noise interference and speech distortion. In general, three strategies have been introduced to improve the robustness of ASR. The first is to use a separation front-end to enhance both the training and test sets and to retrain the acoustic model with the enhanced features [12, 13]. The second is to jointly train the front-end enhancement model with the back-end acoustic model [14, 15]. The third is multi-condition training, which performs acoustic modeling on noisy speech; at test time, the extracted features are directly fed to the acoustic model for decoding. This strategy is effective under matched conditions but gives unremarkable performance on unseen noise [16].

All the above strategies require retraining or jointly training an acoustic model, which can be time-consuming and complicated. Compared with speech separation, it is relatively hard to collect training data for speech recognition, which requires hand-crafted transcriptions. In practice, a preferred choice is to train the front-end speech separation model and the back-end ASR independently, and we ask whether supervised speech separation methods can directly improve the performance of ASR without retraining or joint training under real noisy conditions. Wang et al. evaluated a masking-based method on a simulated noisy dataset derived from the Google Voice dataset, which yielded a 0.3% improvement for a multi-condition trained ASR [17]. Wang et al. investigated the effectiveness of front-end processing under reverberant conditions [18]. However, there is still no work that systematically examines the ability of supervised speech separation methods to help a multi-condition trained ASR. In this paper, different speech separation methods based on various time-frequency (T-F) representations are investigated on the third CHiME challenge.

2 Speech separation methods

In the speech separation community, RNNs with long short-term memory (LSTM) units have been widely employed to leverage the sequential information of speech signals and have shown superior performance compared with DNNs and CNNs [3, 18]. For optimization objectives, ratio masking, direct mapping, and signal approximation are three popular choices. Note that all these methods can be applied to different T-F representations, such as the log-power spectrogram and log-fbank features. In this investigation, we ask which combination of optimization objective and T-F representation is most appropriate for robust ASR. Therefore, we fix our learning machine as an RNN with bidirectional long short-term memory (BiLSTM) layers [19] and focus on the different optimization objectives and T-F representations.

2.1 Optimization objectives

The general training objective of supervised speech separation is defined as

$$\mathcal{L}(\theta) = \frac{1}{T}\sum_{t=1}^{T} D\big(\hat{\mathbf{y}}_t, f_\theta(\mathbf{x}_t)\big), \qquad (1)$$

where $\hat{\mathbf{y}}_t$ is the desired output at frame $t$, $\mathbf{x}_t$ is the noisy T-F representation and the input of the separation model $f_\theta$, which is parameterized by $\theta$, and $D$ denotes the squared loss, which is defined as

$$D(\mathbf{a}, \mathbf{b}) = \|\mathbf{a} - \mathbf{b}\|_2^2, \qquad (2)$$

where $\|\cdot\|_2$ is the 2-norm of a vector.
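For concreteness, the frame-wise objective of Eqs. (1)-(2) can be written in a few lines of NumPy. This is only an illustrative sketch: the toy identity "separator" and the array sizes are placeholders, not part of the paper.

```python
import numpy as np

def squared_loss(a, b):
    """Eq. (2): squared 2-norm of the difference between two frame vectors."""
    return np.sum((a - b) ** 2)

def separation_objective(f, noisy_frames, target_frames):
    """Eq. (1): average frame-wise squared loss between the desired output
    and the separator's prediction on the noisy T-F representation."""
    losses = [squared_loss(y_t, f(x_t)) for x_t, y_t in zip(noisy_frames, target_frames)]
    return np.mean(losses)

# Toy usage with a placeholder "separator" (the identity function).
T, F = 100, 257                         # frames x frequency bins (illustrative sizes)
noisy = np.abs(np.random.randn(T, F))
clean = np.abs(np.random.randn(T, F))
print(separation_objective(lambda x: x, noisy, clean))
```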

2.1.1 Ratio Masking

The masking-based methods learn a mapping function from the noisy T-F representations to the T-F masks of the clean speech, so the desired output in Eq. (1) becomes the ratio mask and the training objective is

$$\mathcal{L}_{mask}(\theta) = \frac{1}{T}\sum_{t=1}^{T} D\big(\mathbf{m}_t, f_\theta(\mathbf{x}_t)\big), \qquad (3)$$

where $\mathbf{m}_t$ is the desired ratio mask at frame $t$. We investigate a direct masking method, in which the mask is defined as

$$\mathbf{m}_t = \frac{\mathbf{s}_t}{\mathbf{x}_t}, \qquad (4)$$

where $\mathbf{s}_t$ and $\mathbf{x}_t$ are the T-F representations of the clean and noisy speech at frame $t$, respectively, and the division is element-wise. Because the direct masks are not well bounded, we clip them to $[0, 1]$ for training stability.
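The direct mask of Eq. (4) is easy to compute offline from parallel clean/noisy features; a minimal sketch follows. The small epsilon guarding the division is my own addition for numerical safety and is not described in the paper.

```python
import numpy as np

def direct_mask(clean, noisy, eps=1e-8):
    """Eq. (4): element-wise ratio of clean to noisy T-F representations,
    clipped to [0, 1] for training stability (Sec. 2.1.1)."""
    mask = clean / (noisy + eps)          # eps avoids division by zero (assumption)
    return np.clip(mask, 0.0, 1.0)

# Toy usage on magnitude-like features (frames x frequency bins).
clean = np.abs(np.random.randn(50, 257))
noisy = clean + np.abs(np.random.randn(50, 257))   # mixture is "noisier" than the clean part
mask = direct_mask(clean, noisy)
assert mask.min() >= 0.0 and mask.max() <= 1.0
```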

2.1.2 Direct Mapping

Mapping-based methods train the learning machine to predict the T-F representation of the clean speech directly from the noisy speech. The optimization objective of direct mapping is defined as

$$\mathcal{L}_{map}(\theta) = \frac{1}{T}\sum_{t=1}^{T} D\big(\mathbf{s}_t, f_\theta(\mathbf{x}_t)\big), \qquad (5)$$

where $\mathbf{s}_t$ and $\mathbf{x}_t$ are the T-F representations of the clean and noisy speech at frame $t$, respectively.

2.1.3 Signal Approximation

SA-based methods implicitly learn a ratio mask from the noisy T-F representations. Unlike the masking-based methods, which directly reduce the training loss between the desired mask and the predicted one, SA-based methods reduce the loss between the T-F representations of the target speech and the estimated speech. The SA-based optimization objective is defined as

$$\mathcal{L}_{SA}(\theta) = \frac{1}{T}\sum_{t=1}^{T} D\big(\mathbf{s}_t, f_\theta(\mathbf{x}_t) \otimes \mathbf{x}_t\big), \qquad (6)$$

where $\otimes$ denotes element-wise multiplication. The output of $f_\theta$ is restricted to the range $[0, 1]$ and thus bounded like the ratio mask.
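To make the distinction between Eqs. (3), (5), and (6) concrete, the sketch below writes the three objectives for a single utterance in PyTorch. It assumes a sigmoid-bounded mask for the masking and SA cases and an unbounded estimate for the mapping case (as in Sec. 3); the tensor shapes and the mean-squared reduction of mse_loss are my own choices, not the authors' exact setup.

```python
import torch
import torch.nn.functional as F

def masking_loss(pred_mask, clean, noisy, eps=1e-8):
    """Eq. (3): match the predicted mask to the (clipped) direct mask."""
    target_mask = torch.clamp(clean / (noisy + eps), 0.0, 1.0)
    return F.mse_loss(pred_mask, target_mask)

def mapping_loss(pred_clean, clean):
    """Eq. (5): match the predicted T-F representation to the clean one."""
    return F.mse_loss(pred_clean, clean)

def sa_loss(pred_mask, clean, noisy):
    """Eq. (6): apply the predicted mask to the noisy input and match
    the masked result to the clean T-F representation."""
    return F.mse_loss(pred_mask * noisy, clean)

# Toy usage: frames x frequency bins.
noisy = torch.rand(100, 257)
clean = noisy * torch.rand(100, 257)                 # pretend the clean part is a fraction of the mixture
pred_mask = torch.sigmoid(torch.randn(100, 257))
print(masking_loss(pred_mask, clean, noisy), sa_loss(pred_mask, clean, noisy))
```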

2.2 Target domains

The above optimization objectives can be applied in different target domains. In the ASR community, log-fbank is the most commonly used feature, so we optimize our models in the log-fbank domain. Because log-fbank features can be directly extracted from spectrograms (the fft domain), we also perform the optimization in the fft domain and its logarithmic counterpart (log-fft).
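For reference, the four T-F representations can be derived from a waveform roughly as follows. This is a sketch using librosa; the STFT size, hop length, number of mel bands, flooring constant, and the example file name are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np
import librosa

def tf_representations(wav, sr=16000, n_fft=512, hop=256, n_mels=40, eps=1e-8):
    """Compute the fft, log-fft, fbank and log-fbank domains used in Sec. 2.2."""
    mag = np.abs(librosa.stft(wav, n_fft=n_fft, hop_length=hop))   # "fft" domain (magnitude spectrogram)
    log_fft = np.log(mag + eps)                                    # log-fft domain
    mel_fb = librosa.filters.mel(sr=sr, n_fft=n_fft, n_mels=n_mels)
    fbank = mel_fb @ (mag ** 2)                                    # fbank domain (mel-filterbank power)
    log_fbank = np.log(fbank + eps)                                # log-fbank domain
    return mag, log_fft, fbank, log_fbank

wav, sr = librosa.load("example.wav", sr=16000)                    # hypothetical file name
mag, log_fft, fbank, log_fbank = tf_representations(wav, sr)
```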

2.3 Features

Different learning tasks benefit from different features. Log-fbank features are widely used for training acoustic models, while log-fft spectrograms are usually fed to speech separation models. In this paper, the targets in the log-fbank and fbank domains are predicted from log-fbank features, and the log-fft spectrograms are fed to the models operating in the log-fft and fft domains. The input features, output domains, and optimization objectives of the evaluated methods are shown in Table 1.

Evaluated methods    Input domain    Output domain    Optimization objective
log-fbank mapping    log-fbank       log-fbank        mapping
log-fbank SA         log-fbank       log-fbank        SA
log-fbank masking    log-fbank       log-fbank        ratio masking
log-fft mapping      log-fft         log-fft          mapping
log-fft SA           log-fft         log-fft          SA
log-fft masking      log-fft         log-fft          ratio masking
fbank masking        log-fbank       fbank            ratio masking
fft masking          log-fft         fft              ratio masking
Table 1: The input features, output domains and optimization objectives of the evaluated methods.

3 Experimental settings

We perform our investigation on the CHiME-3 challenge [20], which provides multi-channel data for distant-talking automatic speech recognition; we only use the fifth channel in this paper.

In the ASR training phase, we follow the CHiME-3 recipe in the newest Kaldi release to build our baseline, with two differences from the default setup. First, we train the recognizer with a multi-condition training (MCT) strategy, i.e., we train the GMM-based and DNN-based acoustic models on the clean utterances, the simulated noisy utterances of the fifth channel, the real noisy utterances of the fifth channel, and the real close-talk utterances of channel zero, while the default recipe trains only on the real and simulated noisy utterances of the fifth channel. The intuition behind this MCT setup is that, since the front-end processing tries to reconstruct clean features, training the recognizer only on noisy utterances would clearly be unreasonable. Second, we train the recognizer with fbank features instead of MFCCs; fbank features are widely used in the robust speech recognition community [21]. With the MCT strategy and fbank features, our ASR baseline achieves performance similar to that reported in the CHiME-3 challenge.

For the front-end processing, we employ a 4-layer RNN with 512 bidirectional LSTM cells in each layer. A dense output layer with softplus activations follows for the mapping-based methods, while the sigmoid function is employed for the masking-based and SA-based methods. The different methods are evaluated in the log-fft and log-fbank domains; the fft and fbank domains are evaluated only with the masking-based method because of their large value range. To evaluate the effect of the noisy phase, the recognizer is also fed with synthesized waveforms that are reconstructed from the noisy phases and the estimated magnitudes via the inverse STFT. In the training phase of the front-end models, the T-F representations extracted from the simulated and real noisy utterances are fed to the models, and the corresponding clean counterparts are estimated. We also expand the training set by mixing the clean utterances with the noise recordings of the training set at 0 dB, 3 dB, and 6 dB SNR.
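A minimal PyTorch sketch of such a separator is shown below. The layer count, hidden size, and output activations follow the description above, but the input/output dimensions, batch handling, and initialization are assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiLSTMSeparator(nn.Module):
    """4-layer BiLSTM front-end with 512 cells per direction (Sec. 3).
    Sigmoid output for masking/SA, softplus output for mapping."""
    def __init__(self, feat_dim=257, hidden=512, num_layers=4, objective="masking"):
        super().__init__()
        self.blstm = nn.LSTM(input_size=feat_dim, hidden_size=hidden,
                             num_layers=num_layers, bidirectional=True,
                             batch_first=True)
        self.out = nn.Linear(2 * hidden, feat_dim)
        self.objective = objective

    def forward(self, x):                 # x: (batch, frames, feat_dim)
        h, _ = self.blstm(x)
        y = self.out(h)
        if self.objective in ("masking", "sa"):
            return torch.sigmoid(y)       # bounded mask in [0, 1]
        return F.softplus(y)              # non-negative T-F estimate for mapping

# Toy forward pass on random log-fft frames.
model = BiLSTMSeparator(feat_dim=257, objective="masking")
mask = model(torch.randn(2, 100, 257))
print(mask.shape)                         # torch.Size([2, 100, 257])
```

The SNR-based training-set expansion mentioned at the end of the paragraph can be sketched as follows; the noise-looping and the power-based scaling are standard practice but are my own reconstruction, not taken from the paper.

```python
import numpy as np

def mix_at_snr(clean, noise, snr_db):
    """Mix a clean utterance with a noise recording at a target SNR (in dB)."""
    if len(noise) < len(clean):                        # loop the noise if it is too short
        noise = np.tile(noise, int(np.ceil(len(clean) / len(noise))))
    noise = noise[:len(clean)]
    clean_power = np.mean(clean ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(clean_power / (noise_power * 10 ** (snr_db / 10.0)))
    return clean + scale * noise

# Expand the training set at 0, 3 and 6 dB as described above.
clean = np.random.randn(32000).astype(np.float32)      # placeholder clean utterance
noise = np.random.randn(48000).astype(np.float32)      # placeholder noise recording
mixtures = [mix_at_snr(clean, noise, snr) for snr in (0, 3, 6)]
```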

In the evaluation phase, the word error rate (WER) is calculated on the simulated and real noisy utterances of the development and test sets. The front-end processing is also applied to the clean and close-talk utterances to check whether it degrades performance on relatively clean speech.

Methods dt_bth dt_close dt_simu dt_real et_bth et_close et_simu et_real dt_avg et_avg
Baseline 5.63 7.52 20.26 21.29 5.60 14.31 25.00 38.39 20.78 31.70
log-fbank mapping 6.31 7.60 16.87 16.48 6.39 11.05 18.40 28.56 16.68 23.48
log-fbank SA 5.68 6.98 14.99 15.28 5.81 9.99 16.88 25.87 15.14 21.38
log-fbank masking 5.74 7.15 15.15 15.54 5.85 10.04 16.98 25.65 15.35 21.32
log-fft mapping 6.31 8.25 18.99 19.71 6.13 12.14 22.22 30.26 19.35 26.24
    +noisy phases 6.42 8.13 18.00 19.76 6.52 12.20 20.73 30.34 18.88 25.54
log-fft SA 5.93 7.37 17.40 17.87 6.01 11.07 19.83 28.18 17.64 24.01
    +noisy phases 5.94 7.30 16.56 17.56 5.85 11.04 18.77 27.88 17.06 23.33
log-fft masking 5.78 7.44 16.66 17.54 5.85 11.56 19.54 27.94 17.10 23.74
    +noisy phases 6.11 7.30 16.27 16.92 5.66 11.45 18.69 27.85 16.60 23.27
fbank masking 5.69 7.15 14.19 17.01 5.88 9.66 15.36 24.95 15.60 20.16
fft masking 5.56 7.09 14.48 16.19 5.73 10.22 16.16 24.84 15.34 20.50
    +noisy phases 5.99 7.26 14.51 16.39 5.77 14.38 17.06 27.67 15.45 22.37
Table 2: WERs (%) of the GMM-based ASR on the development and test sets.
Methods dt_bth dt_close dt_simu dt_real et_bth et_close et_simu et_real dt_avg et_avg
Baseline 3.42 4.92 12.68 14.19 4.03 8.11 15.14 25.44 13.44 20.29
log-fbank mapping 4.04 5.18 13.64 14.40 4.58 8.36 15.10 26.34 14.02 20.72
log-fbank SA 3.36 4.77 12.30 12.95 4.09 7.13 13.78 22.73 12.63 18.26
log-fbank masking 3.36 4.73 12.08 12.70 3.92 7.12 13.44 22.36 12.39 17.90
log-fft mapping 3.92 5.94 16.57 16.14 4.52 9.74 19.38 25.87 16.36 22.63
    +noisy phases 3.98 5.93 16.49 16.09 4.76 10.07 19.35 25.90 16.29 22.63
log-fft SA 3.47 5.04 14.84 14.72 4.11 8.03 17.25 24.78 14.78 21.02
    +noisy phases 3.89 5.08 14.66 14.34 4.20 8.17 16.81 24.20 14.50 20.51
log-fft masking 3.50 5.13 14.59 14.17 4.09 8.73 17.26 24.89 14.38 21.08
    +noisy phases 3.69 5.18 14.49 13.97 4.30 8.93 17.08 24.70 14.23 20.89
fbank masking 3.32 4.87 12.46 14.63 4.13 6.93 13.42 23.50 13.55 18.46
fft masking 3.38 4.93 12.09 13.42 4.15 7.46 13.65 22.26 12.76 17.96
    +noisy phases 3.73 5.08 12.23 13.13 4.24 10.57 14.35 24.21 13.74 19.28
Table 3: WERs (%) of the DNN-based ASR on the development and test sets.

4 Results and discussions

Tables 2 and 3 show the WERs of the GMM-based and DNN-based ASR, respectively. The dt_* and et_* columns report results on the development and test sets. The WERs of utterances recorded in the booth and in real noisy environments are given in the *_bth and *_real columns. The *_close columns report results on the close-talk utterances of channel zero, and the WERs of the simulated noisy speech of the fifth channel are shown in the *_simu columns. The rows marked "+noisy phases" indicate that we reconstruct time-domain waveforms and extract the ASR features from those waveforms; we do this because, in many real scenarios, speech enhancement runs on a local system while the ASR is located on a cloud server, and the interface between them requires waveforms. The average performance over the simulated and real noisy utterances is given in the *_avg columns.
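The "+noisy phases" resynthesis amounts to combining the estimated magnitude with the phase of the noisy mixture and inverting the STFT. A small sketch using librosa is given below; the STFT parameters are illustrative assumptions.

```python
import numpy as np
import librosa

def resynthesize_with_noisy_phase(est_mag, noisy_wav, n_fft=512, hop=256):
    """Reconstruct a time-domain waveform from an estimated magnitude
    spectrogram and the phase of the noisy mixture (the "+noisy phases" rows)."""
    noisy_stft = librosa.stft(noisy_wav, n_fft=n_fft, hop_length=hop)
    noisy_phase = np.angle(noisy_stft)
    est_stft = est_mag * np.exp(1j * noisy_phase)        # combine estimated magnitude with noisy phase
    return librosa.istft(est_stft, hop_length=hop)       # inverse STFT back to the waveform

# Toy usage: here the "estimated" magnitude is simply the noisy magnitude itself.
noisy_wav = np.random.randn(16000).astype(np.float32)
est_mag = np.abs(librosa.stft(noisy_wav, n_fft=512, hop_length=256))
wav_hat = resynthesize_with_noisy_phase(est_mag, noisy_wav)
```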

For the GMM-based ASR (see Table 2), the masking-based method in the fbank domain achieves the best performance on the noisy test set, reducing the average WER from 31.70% to 20.16%, i.e., a (31.70 − 20.16)/31.70 ≈ 36.40% relative improvement. SA in the log-fbank domain obtains the lowest WER on the noisy development set. The mapping-based method does not appear to be a good choice for automatic speech recognition. When the noisy phase is involved, the masking-based method in the fft domain degrades significantly on the test set. Although the methods in the log-fft domain are only slightly affected by the noisy phase, their performance is much worse than that of the methods in the fft domain.

For the DNN-based ASR (see Table 3), the masking-based method in the log-fbank domain is a good choice, achieving 7.78% and 11.78% relative improvements on the noisy development and test sets, respectively. The masking-based method in the fft domain obtains lower WERs than all methods in the log-fft domain, but it is significantly degraded by the noisy phase. The mapping-based front-end processing and the methods in the log-fft domain no longer improve ASR performance.

These front-end processing methods cause very little degradation on the relatively clean utterances (see the *_bth and *_close columns). Surprisingly, some methods even improve ASR performance on the close-talk utterances of the test set, possibly because the close-talk utterances are not perfectly clean but slightly noisy.

From Tables 2 and 3, we can see that independent front-end processing can dramatically improve ASR performance under the matched noise condition. To evaluate the generalization ability, we compute WERs on noisy utterances corrupted by babble noise, which appears in neither the ASR nor the speech enhancement training data; the results are given in Table 4. The masking-based method in the log-fbank domain achieves the best performance on the unseen babble noise and also obtains the lowest WER under the matched noise condition. We find that ASR with the MCT strategy does not generalize well to unseen noise, whereas the speech enhancement front-end leverages the noise information more efficiently and performs better under the unmatched condition.

5 Conclusions

In this paper, we investigate independent front-end processing methods for ASR without retraining or joint training, on the CHiME-3 challenge. The masking-based, mapping-based, and SA-based methods are evaluated in the log-fbank and log-fft domains and their linear counterparts. From this investigation, we find that the masking-based method is a good choice for ASR: direct masking in the log-fbank domain achieves the lowest WER under both matched and unmatched noise conditions when compared against the baseline, a strong multi-condition trained DNN-based acoustic model.

Methods 0 dB 3 dB 6 dB
Baseline 38.22 21.53 13.23
log-fbank masking 32.48 18.21 10.54
fbank masking 34.89 19.29 11.09
log-fft masking 39.52 22.96 13.76
fft masking 34.56 19.73 11.82
Table 4: WERs (%) of the masking-based methods under the unmatched noise condition.

The noisy phase leads to considerable degradation for the masking-based method in the fft domain, while its effect in the log-fft domain is very slight. The independent front-end also generalizes better to unseen noise than MCT. In the future, we will try to further reduce the WER of the DNN-based ASR with independent front-end processing.

6 Acknowledgements

This research was supported by the National Science Foundation of China under Grant No. 61876214, the National Key Research and Development Program of China under Grant 2017YFB1002102, and the National Natural Science Foundation of China under Grant U1736210.

References

  • [1] DeLiang Wang and Jitong Chen, “Supervised speech separation based on deep learning: An overview,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2018.
  • [2] Yuxuan Wang, Arun Narayanan, and DeLiang Wang, “On training targets for supervised speech separation,” IEEE Transactions on Audio, Speech and Language Processing, vol. 22, no. 12, pp. 1849–1858, 2014.
  • [3] Hakan Erdogan, John R Hershey, Shinji Watanabe, and Jonathan Le Roux, “Phase-sensitive and recognition-boosted speech separation using deep recurrent neural networks,” in IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2015, pp. 708–712.
  • [4] Donald S Williamson, Yuxuan Wang, and DeLiang Wang, “Complex ratio masking for monaural speech separation,” IEEE Transactions on Audio, Speech and Language Processing, vol. 24, no. 3, pp. 483–492, 2016.
  • [5] Yong Xu, Jun Du, Li-Rong Dai, and Chin-Hui Lee, “A regression approach to speech enhancement based on deep neural networks,” IEEE/ACM Transactions on Audio, Speech and Language Processing (TASLP), vol. 23, no. 1, pp. 7–19, 2015.
  • [6] Felix Weninger, John R Hershey, Jonathan Le Roux, and Björn Schuller, “Discriminatively trained recurrent neural networks for single-channel speech separation,” in GlobalSIP, Atlanta, GA, USA, 2014.
  • [7] Xugang Lu, Yu Tsao, Shigeki Matsuda, and Chiori Hori, “Speech enhancement based on deep denoising autoencoder,” in Interspeech, 2013, pp. 436–440.
  • [8] Like Hui, Meng Cai, Cong Guo, Liang He, Wei-Qiang Zhang, and Jia Liu, “Convolutional maxout neural networks for speech separation,” in IEEE International Symposium on Signal Processing and Information Technology. IEEE, 2015, pp. 24–27.
  • [9] Ke Tan, Jitong Chen, and DeLiang Wang, “Gated residual networks with dilated convolutions,” in IEEE International Conference on Acoustics, Speech and Signal Processing, 2018, p. 5.
  • [10] Yuxuan Wang, Kun Han, and DeLiang Wang, “Exploring monaural features for classification-based speech segregation,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 21, no. 2, pp. 270–279, 2013.
  • [11] J. Chen, Y. Wang, and D. Wang, “A feature study for classification-based speech separation at very low signal-to-noise ratio,” in IEEE International Conference on Acoustics, Speech and Signal Processing, May 2014, pp. 7039–7043.
  • [12] Kun Han, Yanzhang He, Deblin Bagchi, Eric Fosler-Lussier, and DeLiang Wang, “Deep neural network based spectral feature mapping for robust speech recognition,” in Sixteenth Annual Conference of the International Speech Communication Association, 2015.
  • [13] Felix Weninger, Hakan Erdogan, Shinji Watanabe, Emmanuel Vincent, Jonathan Le Roux, John R Hershey, and Björn Schuller, “Speech enhancement with lstm recurrent neural networks and its application to noise-robust asr,” in International Conference on Latent Variable Analysis and Signal Separation. Springer, 2015, pp. 91–99.
  • [14] Zhong-Qiu Wang and DeLiang Wang, “A joint training framework for robust automatic speech recognition,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 24, no. 4, pp. 796–806, 2016.
  • [15] Bin Liu, Shuai Nie, Yaping Zhang, Dengfeng Ke, Shan Liang, and Wenju Liu, “Boosting noise robustness of acoustic model via deep adversarial training,” arXiv preprint arXiv:1805.01357, 2018.
  • [16] Feipeng Li, Phani S Nidadavolu, and Hynek Hermansky, “A long, deep and wide artificial neural net for robust speech recognition in unknown noise,” in Interspeech, 2014.
  • [17] Yuxuan Wang, Ananya Misra, and Kean K Chin, “Time-frequency masking for large scale robust speech recognition,” in Sixteenth Annual Conference of the International Speech Communication Association, 2015.
  • [18] Ke Wang, Junbo Zhang, Sining Sun, Yujun Wang, Fei Xiang, and Lei Xie, “Investigating generative adversarial networks based speech dereverberation for robust speech recognition,” in Proc. Interspeech 2018, 2018, pp. 1581–1585.
  • [19] Mike Schuster and Kuldip K Paliwal, “Bidirectional recurrent neural networks,” IEEE Transactions on Signal Processing, vol. 45, no. 11, pp. 2673–2681, 1997.
  • [20] Jon Barker, Ricard Marxer, Emmanuel Vincent, and Shinji Watanabe, “The third ‘CHiME’ speech separation and recognition challenge: Dataset, task and baselines,” in Automatic Speech Recognition and Understanding (ASRU), 2015 IEEE Workshop on. IEEE, 2015, pp. 504–511.
  • [21] Jinyu Li, Li Deng, Yifan Gong, and Reinhold Haeb-Umbach, “An overview of noise-robust automatic speech recognition,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 22, no. 4, pp. 745–777, 2014.