Robustness against background speech is one of the key factors for automatic speech recognition (ASR). Even when multiple talkers surround us and speak simultaneously, we can focus on a specific target speaker. This ability, realized by our auditory system, is called the cocktail-party effect. Current ASR systems, on the other hand, cannot handle such a situation, so background speech usually causes serious performance degradation. One practical example of this scenario is a smart speaker located in a living room, where background speech from surrounding people, television, and radio is expected to overlap the target speech.
A simple way to realize robust ASR against background speech is to introduce blind speech separation before recognition. Blind speech separation has been studied for decades, and it can be divided into single-channel and multi-channel approaches. The single-channel approach is based on the spectral characteristics of each source signal: non-negative matrix factorization (NMF) and time-frequency masking are well-known methods, and recent deep learning-based techniques such as permutation invariant training (PIT) and deep clustering have attracted a great deal of attention. The multi-channel approach is based on spatial information: independent component analysis (ICA) and full-rank spatial covariance model-based methods are typical solutions, and deep learning-based techniques have also been studied recently.
The mixture of a target speech signal with background speech can be separated into individual source signals by the blind speech separation algorithms above. However, these algorithms cannot identify which output signal corresponds to the target speech to be recognized. This is an instance of the permutation problem, and there has been some work that tries to solve it by imposing constraints on speaker gender or signal intensity. However, such constraints are not necessarily satisfied in practice. Žmolíková et al. proposed a method named SpeakerBeam, which extracts the target speech directly without any constraints on signal characteristics. Their method instead assumes that the target speaker is known in advance, and requires a pre-recorded clean utterance from the target speaker.
In this work, we propose an alternative approach to realize robust ASR of a target speaker in background speech while avoiding the permutation problem. The novelty of our method is that we focus on a wakeup keyword such as "okay Google" or "Alexa", which is used for activating ASR systems such as smart speakers. These systems usually assume that the target speaker speaks a specific keyword followed by a command to be recognized. Therefore, the keyword utterance naturally provides important cues about the target speaker, which is beneficial for recognizing the subsequent command utterance. Motivated by this, the proposed method utilizes the keyword utterance to estimate spatial information about the target speaker. From the same point of view, King et al. also proposed utilizing the keyword to calculate the mean value for the feature normalization used in acoustic modeling. Our proposed method first separates the mixture signal into the keyword and the remaining background speech using a specially designed DNN-based mask estimator. The separated signals are then used for calculating a beamforming filter that enhances the subsequent utterances from the target speaker. An advantage of the proposed method over SpeakerBeam is that it can enhance the utterances of any speaker who speaks the keyword, without requiring a pre-recording procedure.
Signal-level and ASR evaluations are performed to verify the effectiveness of our proposed method. We use two Japanese test sets for the evaluation: the first is composed of simulated mixtures of two speakers, and the second consists of realistic utterances recorded with television sound in the background.
The rest of the paper is structured as follows. Sections 2 and 3 describe the details of our proposed method. The training and test sets used are described in Section 4. The experimental evaluations and results are shown in Sections 5 and 6. Section 7 presents our conclusions and future work.
2 DNN-based keyword mask estimation
2.1 Network configuration
Our DNN structure is shown in Fig. 1. Given a mixture signal of a keyword with background speech, this DNN outputs two kinds of masks. The first is a keyword mask, which works as an extractor of the keyword. The other is a non-keyword mask, which works as a remover of the keyword and accordingly extracts the remaining background speech. The signal of interest for extraction or removal is always the specific keyword. Therefore, this DNN can be trained efficiently, because the acoustic variation to be considered is far smaller than in standard speech separation problems.
The mixed signal is first converted to feature vectors. In this work, magnitude spectra are used as the features; the speech analysis conditions are shown in Table 1. A context splicing block then extends the 256-dimensional magnitude spectra with their 20 neighboring context frames (10 frames to the left and 10 to the right), resulting in 5,376 dimensions. Next, mean and variance normalization is performed using global statistics calculated over the entire training data. Finally, the normalized feature vectors are fed into the DNN.
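As an illustration, the feature pipeline above (magnitude spectra, context splicing, normalization) can be sketched with numpy as follows; the function and variable names are hypothetical, and global training-set statistics are replaced by per-utterance statistics only to keep the sketch self-contained:

```python
import numpy as np

def splice_context(feats, left=10, right=10):
    """Extend each frame with its neighboring context frames; edge frames
    are padded by repeating the first/last frame."""
    T = feats.shape[0]
    padded = np.concatenate([np.repeat(feats[:1], left, axis=0),
                             feats,
                             np.repeat(feats[-1:], right, axis=0)], axis=0)
    # Stack the (left + 1 + right) shifted copies along the feature axis
    return np.concatenate([padded[i:i + T] for i in range(left + 1 + right)],
                          axis=1)

# 256-dim magnitude spectra spliced with +/-10 frames -> 21 * 256 = 5,376 dims
spec = np.abs(np.random.randn(100, 256))
spliced = splice_context(spec)
# The paper normalizes with global mean/variance from the training data;
# per-utterance statistics are substituted here for brevity.
normalized = (spliced - spliced.mean(axis=0)) / (spliced.std(axis=0) + 1e-8)
```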
Our DNN has 3 fully connected (FC) hidden layers. Each hidden layer has 1,024 nodes, and the output layer has 256 nodes for each of the two masks. The hidden layers use the rectified linear unit (ReLU) activation function, while the output layer uses the sigmoid function in order to limit the DNN outputs to the range between 0 and 1.
2.2 Parameter optimization
The network parameters of the DNN are trained to minimize the error between the two output masks and given references. In this work, ideal binary masks (IBMs) are used as the references, and the cross entropy function is adopted as the error criterion. The mini-batch size for the stochastic gradient descent (SGD) algorithm is set to 128. Dropout is also used, with a rate of 0.2 for the input layer and 0.5 for each hidden layer. The learning rate is set to 0.01, and the number of training epochs is 50.
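Since the keyword and background signals are mixed artificially during training, the IBM references can be computed directly from the two source spectra; a minimal sketch (hypothetical names) is:

```python
import numpy as np

def ideal_binary_masks(keyword_mag, background_mag):
    """A time-frequency bin is assigned to the keyword when the keyword
    magnitude dominates, and to the background otherwise."""
    kw_mask = (keyword_mag > background_mag).astype(np.float32)
    return kw_mask, 1.0 - kw_mask   # keyword mask, non-keyword mask

kw = np.abs(np.random.randn(50, 256))   # stand-in keyword magnitude spectra
bg = np.abs(np.random.randn(50, 256))   # stand-in background magnitude spectra
m_kw, m_nk = ideal_binary_masks(kw, bg)
```

The cross entropy between the sigmoid outputs of the DNN and these 0/1 targets then serves as the training loss.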
Table 1: Conditions for speech analysis.
|Sampling frequency|16 kHz|
|Frame length|32 ms|
|Frame shift|16 ms|
3 Proposed system
3.1 System overview

A schematic diagram of the proposed system is presented in Fig. 2. The proposed system is activated when the existence of a keyword utterance is reported by a keyword detection method, which is defined outside of the system. Given a detected keyword with its estimated time region, the observed signal within the keyword region is fed into the trained DNN shown in Fig. 1. The keyword and the remaining background speech are then separated by applying the obtained keyword mask and non-keyword mask to the original mixture signal, respectively. This process is repeated for each of the microphone channels, and the separated multi-channel signals are then used for calculating a beamforming filter. A well-known minimum variance distortionless response (MVDR) beamformer is employed in this work. After that, the beamforming process is applied to the subsequent signal in order to enhance the target speaker's utterances while reducing background speech. Note that the beamforming filter is calculated once after the keyword and is not updated during the subsequent command. Finally, the enhanced target speaker utterance is input to the ASR system.
3.2 MVDR filter estimation
MVDR filter estimation can be explained as follows. Given the spatial covariance matrix $\mathbf{R}_{\mathrm{N}}$ of the background (non-keyword) speech and the steering vector $\mathbf{h}$ of the keyword speech, the MVDR filter $\mathbf{w} \in \mathbb{C}^{M}$ can be calculated by the following equation:

$$\mathbf{w} = \frac{\mathbf{R}_{\mathrm{N}}^{-1}\,\mathbf{h}}{\mathbf{h}^{\mathsf{H}}\,\mathbf{R}_{\mathrm{N}}^{-1}\,\mathbf{h}},$$

where $M$ denotes the number of microphones, and $(\cdot)^{\mathsf{T}}$ and $(\cdot)^{\mathsf{H}}$ indicate the transpose and conjugate transpose of a matrix, respectively. Note that the frequency index is omitted from the above equation and the following discussion where not necessary. In this work, $\mathbf{R}_{\mathrm{N}}$ is estimated using the observed multichannel spectra $\mathbf{y}(t)$ and the non-keyword mask $m_{\mathrm{N}}(t)$, with $t$ denoting the time frame index. $m_{\mathrm{N}}(t)$ is defined as the median over the channels $c$ of $\hat{m}_{\mathrm{N}}^{(c)}(t)$, with $\hat{m}_{\mathrm{N}}^{(c)}(t)$ denoting the non-keyword mask estimated from channel $c$. Based on these, $\mathbf{R}_{\mathrm{N}}$ is calculated, following the mask-based beamforming formulation of Heymann et al., as:

$$\mathbf{R}_{\mathrm{N}} = \frac{1}{|\mathcal{T}|}\sum_{t \in \mathcal{T}} m_{\mathrm{N}}(t)\,\mathbf{y}(t)\,\mathbf{y}(t)^{\mathsf{H}},$$

where $\mathcal{T}$ indicates the set of time frame indices within the keyword region.

The steering vector $\mathbf{h}$ is estimated from a covariance matrix $\mathbf{R}_{\mathrm{K}}$, which is calculated in the same way as $\mathbf{R}_{\mathrm{N}}$ but with the keyword mask $m_{\mathrm{K}}(t)$:

$$\mathbf{R}_{\mathrm{K}} = \frac{1}{|\mathcal{T}|}\sum_{t \in \mathcal{T}} m_{\mathrm{K}}(t)\,\mathbf{y}(t)\,\mathbf{y}(t)^{\mathsf{H}}.$$

An eigenvalue decomposition of $\mathbf{R}_{\mathrm{K}}$ is calculated, and $\mathbf{h}$ is estimated as the eigenvector with the maximum eigenvalue.

The estimated beamforming filter is then applied to the subsequent mixture signal as $\hat{s}(t) = \mathbf{w}^{\mathsf{H}}\,\mathbf{y}(t)$, and the enhanced target signal $\hat{s}(t)$ is obtained.
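Assuming the multichannel complex spectra of the keyword region and the two estimated masks are available, the per-frequency-bin filter estimation above can be sketched with numpy (all names are hypothetical, and the toy signals below are synthetic):

```python
import numpy as np

def mvdr_from_masks(Y, m_kw, m_nk):
    """Y: (M, T) complex spectra of one frequency bin over the keyword region.
    m_kw, m_nk: (T,) keyword / non-keyword masks. Returns the (M,) MVDR filter."""
    # Mask-weighted spatial covariance matrices of keyword and background
    R_kw = (m_kw * Y) @ Y.conj().T / len(m_kw)
    R_nk = (m_nk * Y) @ Y.conj().T / len(m_nk)
    # Steering vector: eigenvector of R_kw with the maximum eigenvalue
    _, eigvecs = np.linalg.eigh(R_kw)   # eigh sorts eigenvalues ascending
    h = eigvecs[:, -1]
    # MVDR: w = R_nk^{-1} h / (h^H R_nk^{-1} h)
    Rinv_h = np.linalg.solve(R_nk, h)
    return Rinv_h / (h.conj() @ Rinv_h)

# Toy check: target arriving with steering vector h0, interference with h1
rng = np.random.default_rng(0)
M, T = 4, 200
h0 = np.exp(1j * np.arange(M) * 0.5)
h1 = np.exp(1j * np.arange(M) * 2.0)
s = rng.normal(size=T) + 1j * rng.normal(size=T)   # target source
n = rng.normal(size=T) + 1j * rng.normal(size=T)   # interfering source
Y = np.outer(h0, s) + np.outer(h1, n)
# Oracle per-frame masks stand in for the DNN outputs in this sketch
w = mvdr_from_masks(Y, m_kw=np.abs(s) ** 2, m_nk=np.abs(n) ** 2)
enhanced = w.conj() @ Y   # beamformed output for this bin
```

In the actual system this estimation is repeated independently for every frequency bin, and the resulting filters are applied to the subsequent command signal.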
4 Data sets

4.1 Training data for keyword mask estimator
In this work, we used our in-house Japanese data for the training and test sets. Keyword and background speech were recorded independently in the same quiet room using a 4-channel microphone array. For the training data, the recorded 4-channel signal was treated as four single-channel signals. Speakers were located at several points in the room, and the distances between the microphone array and the speakers ranged from 1 to 3 meters. 1,660 keyword utterances from 35 speakers and 1,400 background speech utterances from 25 speakers were recorded. Various combination pairs of keyword and background speech yielded a total of 116,200 mixed utterances, which were used as the training set for our DNN-based mask estimator. Note that the keyword and background speech for each mixture were selected from different speakers, whose genders may be the same. The average signal-to-noise ratio (SNR) of the mixtures was 3.2 dB, with a standard deviation of 3.4 dB. The keyword used in this work was a single Japanese word composed of 3 syllables with an average duration of 0.7 seconds.
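The SNR-controlled mixing used to build such a training set can be sketched as follows (a simplified single-channel illustration; names are hypothetical):

```python
import numpy as np

def mix_at_snr(target, noise, snr_db):
    """Scale `noise` so the target-to-noise power ratio equals `snr_db` dB,
    then return the mixture (signals are trimmed to a common length)."""
    n = min(len(target), len(noise))
    target, noise = target[:n], noise[:n]
    p_t = np.mean(target ** 2)
    p_n = np.mean(noise ** 2)
    scale = np.sqrt(p_t / (p_n * 10 ** (snr_db / 10)))
    return target + scale * noise

rng = np.random.default_rng(0)
keyword = rng.normal(size=16000)      # stand-in for a keyword waveform
background = rng.normal(size=16000)   # stand-in for background speech
mixture = mix_at_snr(keyword, background, snr_db=3.2)
```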
4.2 Evaluation data
Two test sets were used to verify the effectiveness of the proposed method. They were recorded in the same room as the training data, with the same microphone array. The first set was recorded in the same manner as the training set using different speakers. In this test set, the target speaker spoke a keyword and then a command; the command utterances were designed assuming a personal assistant system. 120 target utterances from 4 speakers and 120 interfering utterances from another 4 speakers were randomly mixed, and 10 different combination patterns resulted in a test set comprising 1,200 utterances. The two speakers were located at different angles from the microphone array. We call this test set the simu-set.
Another test set was also recorded in the same room, but under a more realistic situation. The recording setting is presented in Fig. 3. In this test set, one target speaker spoke under television sound. The distance from the microphone array to the television was 1.2 meters, and that to the target speaker was also 1.2 meters. The recording was performed while changing the speaker location (shown as A and B in the figure) and the volume of the television. The number of utterances was 4,396 from 67 speakers. We call this test set the real-set.
5 Signal-level evaluation
5.1 Evaluation metrics
First, the signal-level evaluation was performed to verify that the keyword mask extracts only the keyword signal from the mixture, while the non-keyword mask likewise extracts the remaining signal. As the evaluation measure, the signal-to-distortion ratio improvement (SDRi) was used. The SDRi represents the degree of reduction of the undesired signal and extraction of the desired signal; when this figure is positive, the estimated mask is considered to work properly. Given the magnitude spectra $s(f)$ of the desired signal, $n(f)$ of the undesired signal, and the mask $m(f)$, the SDRi is calculated by:

$$\mathrm{SDRi} = 10\log_{10}\frac{\sum_{f \in \mathcal{F}} \left(m(f)\,s(f)\right)^{2}}{\sum_{f \in \mathcal{F}} \left(m(f)\,n(f)\right)^{2}} - \mathrm{SDR},$$

where $f$, $F$, and $\mathcal{F}$ denote the frequency bin index, the total number of frequency bins, and the set of all frequency indices ($|\mathcal{F}| = F$), respectively. $\mathrm{SDR}$ represents the SDR before masking and is defined as follows:

$$\mathrm{SDR} = 10\log_{10}\frac{\sum_{f \in \mathcal{F}} s(f)^{2}}{\sum_{f \in \mathcal{F}} n(f)^{2}}.$$
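Consistent with the definitions above, the SDRi can be computed as in the following sketch (hypothetical names; a small epsilon guards against division by zero):

```python
import numpy as np

def sdr_improvement(mask, desired, undesired, eps=1e-12):
    """SDRi in dB: SDR after applying the mask minus the SDR before masking.
    Inputs are magnitude spectra of identical shape (frames, frequency bins)."""
    sdr_before = 10 * np.log10(np.sum(desired ** 2) /
                               (np.sum(undesired ** 2) + eps))
    sdr_after = 10 * np.log10((np.sum((mask * desired) ** 2) + eps) /
                              (np.sum((mask * undesired) ** 2) + eps))
    return sdr_after - sdr_before

# A mask attenuating the bins dominated by the undesired signal gives a
# positive SDRi; a constant mask leaves the SDR unchanged (SDRi = 0)
d = np.array([[1.0, 1.0, 1.0, 1.0]])
u = np.array([[0.1, 0.1, 1.0, 1.0]])
good_mask = np.array([[1.0, 1.0, 0.1, 0.1]])
sdri = sdr_improvement(good_mask, d, u)
```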
5.2 Results

The simu-set was used for this evaluation. In this work, the evaluation was performed using manually annotated keyword regions, in order to decouple the accuracy of keyword detection from that of the proposed method. We compared four kinds of mask: the estimated keyword mask, the estimated non-keyword mask, the IBM corresponding to the keyword, and the IBM corresponding to the non-keyword. Note that the keyword signal was regarded as the desired signal when evaluating the keyword masks, and as the undesired signal when evaluating the non-keyword masks. Table 2 shows the average and standard deviation of the SDRi, calculated over all utterances of the simu-set. The table shows that the trained DNN-based masks worked properly, because the SDRi values of the two estimated masks were both positive.
5.3 Example of the processing result
An example of the DNN-based mask estimation is presented in Fig. 4, which shows the spectrograms of the original keyword speech, the background speech, and their mixture. The estimated keyword mask and non-keyword mask are also presented. From the figure, we can see that the two estimated masks selectively extracted the keyword and the background speech, respectively, which shows that the trained DNN modeled the desired behavior. However, Fig. 4 (d) also shows that the high-frequency part of the keyword signal was erroneously missing from the keyword mask (see around frame 20). We speculate that this caused the SDRi gap between the estimated keyword mask and its IBM counterpart seen in Table 2.
Furthermore, the beamforming result obtained with the estimated keyword mask and non-keyword mask shown in Fig. 4 (d) and (e) is also presented. Fig. 5 (a) and (b) show spectrograms of a command utterance from the target speaker and its mixture with background speech, which are the signals subsequent to those in Fig. 4 (a) and (c). Note that the spectrograms shown are from one of the four recorded channels. The beamforming result is shown in Fig. 5 (c). From the figure, the background speech was significantly reduced by the beamforming, so our proposed method works well for this example.
6 ASR evaluation
6.1 ASR system
Next, the effectiveness of beamforming with the trained DNN-based masks was verified through ASR experiments. Our ASR system used a DNN-HMM (deep neural network hidden Markov model) acoustic model. The DNN had 5 fully connected hidden layers with 1,024 nodes each, and the model parameters were trained with the cross entropy error criterion. 1,800 hours of speech data, collected through our voice services including search, dialogue, and car navigation, were used for training the acoustic model. The training utterances were split into three subsets: 20% were used directly, 40% were mixed with various kinds of daily-life noise, and reverberation filters generated by a room simulator were applied to the remaining 40%. The language model was a tri-gram model trained on text queries from the Yahoo! JAPAN search engine and transcriptions of mobile voice search queries; the vocabulary size was about 1.6 million words. Our decoder was an internally developed single-pass WFST decoder. Prior to recognition, DNN-based voice activity detection (VAD) similar to that of Zhang and Wu was performed to minimize insertion errors.
6.2 Results for simu-set
The results for the simu-set are presented first. We compared the proposed method with several reference signals, and Table 3 summarizes the results. The table shows the character error rate (CER) and the relative error reduction rate (RERR) with respect to the result obtained when the mixed signal is input directly to the ASR system, shown as 'Mixed'. 'Clean' indicates the result for the target signal before mixing, and 'Proposed' indicates the proposed method. 'Oracle (IBM)' indicates an oracle experiment that runs the proposed method with IBMs calculated over the keyword region instead of the estimated masks; its result can therefore be regarded as an upper limit for the proposed method. In addition, 'BeamformIt' shows the result of BeamformIt, a well-known beamforming method.
We start our discussion by comparing 'Clean' and 'Mixed'. From the table, we can see that the error rate was drastically increased by mixing in background speech. 'BeamformIt' improved the error rate over 'Mixed', but the improvement was not significant. This result was expected, because BeamformIt simply estimates the beamforming filter from the observed signal, which makes it difficult to selectively extract the target speech from the mixture. On the other hand, 'Proposed' improved the CER significantly, even though the beamforming filter was estimated from only the short keyword utterance. From this result, we confirmed the effectiveness of the proposed method for ASR under background speech. However, the CER of 'Oracle (IBM)' was smaller than that of 'Proposed', and the performance gap was not trivial, which means there is still room to further improve our mask estimator.
Table 4: CER (%) and RERR (%, in parentheses) for the real-set.
|Location|TV volume|Mixed|BeamformIt|Proposed|
|A|Medium|34.3|30.8 (10.2)|26.2 (23.6)|
|A|Large|48.9|48.2 (1.4)|37.9 (22.5)|
|B|Medium|26.7|29.4 (-10.1)|24.1 (9.7)|
|B|Large|38.1|43.4 (-13.9)|32.5 (14.7)|
6.3 Results for real-set
Table 4 shows the results for the real-set. As described in Section 4.2, the evaluation was performed for four different acoustic conditions including two speaker locations and two levels of the television volume (medium, large). Note that the results of IBM are not presented in this table as IBM could not be calculated from real recorded data. The table shows that the CER was significantly decreased by our proposed method in all acoustic conditions. Thus, the effectiveness of the proposed method could also be confirmed in the realistic situation.
7 Conclusions

This paper described one solution for robust ASR under background speech. The novelty of our approach is that it utilizes the wakeup keyword utterance to estimate the spatial characteristics of the target speaker. The proposed method first separates the mixture signal into the keyword and the remaining background speech using a DNN-based mask estimator, and the separated signals are then used for calculating a beamforming filter that enhances the subsequent utterances from the target speaker. The signal-level evaluation showed that our DNN-based mask estimator could selectively separate these signals, and the effectiveness of the proposed method was also confirmed with ASR experiments.
Our future work includes improving the mask estimator with a more sophisticated neural network architecture and more training data. Verification of the proposed method under various noise conditions is also left for future work.
-  Neville Moray, “Attention in dichotic listening: Affective cues and the influence of instructions,” Quarterly journal of experimental psychology, vol. 11, no. 1, pp. 56–60, 1959.
-  Paris Smaragdis, “Convolutive speech bases and their application to supervised speech separation,” IEEE Trans. Audio, Speech, and Language Processing, vol. 15, no. 1, pp. 1–12, 2007.
-  Ozgur Yilmaz and Scott Rickard, “Blind separation of speech mixtures via time-frequency masking,” IEEE Trans. signal processing, vol. 52, no. 7, pp. 1830–1847, 2004.
-  Dong Yu, Morten Kolbæk, Zheng-Hua Tan, and Jesper Jensen, “Permutation invariant training of deep models for speaker-independent multi-talker speech separation,” in Proc. ICASSP, 2017, pp. 241–245.
-  John R Hershey, Zhuo Chen, Jonathan Le Roux, and Shinji Watanabe, “Deep clustering: Discriminative embeddings for segmentation and separation,” in Proc. ICASSP, 2016, pp. 31–35.
-  Paris Smaragdis, “Blind separation of convolved mixtures in the frequency domain,” Neurocomputing, vol. 22, no. 1-3, pp. 21–34, 1998.
-  Taesu Kim, Torbjørn Eltoft, and Te-Won Lee, “Independent vector analysis: An extension of ICA to multivariate components,” in Proc. ICA, 2006, pp. 165–172.
-  Ngoc QK Duong, Emmanuel Vincent, and Rémi Gribonval, “Under-determined reverberant audio source separation using a full-rank spatial covariance model,” IEEE Trans. Audio, Speech, and Language Processing, vol. 18, no. 7, pp. 1830–1840, 2010.
-  Hiroshi Sawada, Hirokazu Kameoka, Shoko Araki, and Naonori Ueda, “Multichannel extensions of non-negative matrix factorization with complex-valued data,” IEEE Trans. Audio, Speech, and Language Processing, vol. 21, no. 5, pp. 971–982, 2013.
-  Takuya Yoshioka, Hakan Erdogan, Zhuo Chen, and Fil Alleva, “Multi-microphone neural speech separation for far-field multi-talker speech recognition,” in Proc. ICASSP, 2018, pp. 5739–5743.
-  Marc Delcroix, Kateřina Žmolíková, Keisuke Kinoshita, Atsunori Ogawa, and Tomohiro Nakatani, “Single channel target speaker extraction and recognition with speaker beam,” in Proc. ICASSP, 2018, pp. 5554–5558.
-  Chao Weng, Dong Yu, Michael L Seltzer, and Jasha Droppo, “Deep neural networks for single-channel multi-talker speech recognition,” IEEE Trans. Audio, Speech, and Language Processing, vol. 23, no. 10, pp. 1670–1679, 2015.
-  Yannan Wang, Jun Du, Li-Rong Dai, and Chin-Hui Lee, “Unsupervised single-channel speech separation via deep neural network for different gender mixtures,” in Proc. APSIPA, 2016, pp. 1–4.
-  Kateřina Žmolíková, Marc Delcroix, Keisuke Kinoshita, Takuya Higuchi, Atsunori Ogawa, and Tomohiro Nakatani, “Speaker-aware neural network based beamformer for speaker extraction in speech mixtures,” in Proc. Interspeech, 2017, pp. 2655–2659.
-  Brian King, I-Fan Chen, Yonatan Vaizman, Yuzong Liu, Roland Maas, Sree Hari Krishnan Parthasarathi, and Björn Hoffmeister, “Robust speech recognition via anchor word representations,” Proc. Interspeech, pp. 2471–2475, 2017.
-  Jahn Heymann, Lukas Drude, and Reinhold Haeb-Umbach, “Neural network based spectral mask estimation for acoustic beamforming,” in Proc. ICASSP, 2016, pp. 196–200.
-  Emmanuel Vincent, Rémi Gribonval, and Cédric Févotte, “Performance measurement in blind audio source separation,” IEEE Trans. Audio, Speech, and Language Processing, vol. 14, no. 4, pp. 1462–1469, 2006.
-  Ken-ichi Iso, Edward Whittaker, Tadashi Emori, and Junpei Miyake, “Improvements in Japanese voice search,” in Proc. Interspeech, 2012, pp. 2109–2112.
-  Xiao-Lei Zhang and Ji Wu, “Deep belief networks based voice activity detection,” IEEE Trans. Audio, Speech, and Language Processing, vol. 21, no. 4, pp. 697–710, 2013.
-  Xavier Anguera, Chuck Wooters, and Javier Hernando, “Acoustic beamforming for speaker diarization of meetings,” IEEE Trans. Audio, Speech, and Language Processing, vol. 15, no. 7, pp. 2011–2022, 2007.