Keyword spotting (KWS) is the task of detecting a pre-defined keyword in a continuous stream of speech. KWS has recently drawn increasing attention because it is used for wake-up word detection on mobile devices. In this setting, a KWS model must satisfy the requirements of high accuracy, low latency, and a small footprint.
Methods based on large-vocabulary continuous speech recognition (LVCSR) systems process the audio offline [1]: they generate rich lattices and search them for the keyword. Due to their high latency, these methods are not suitable for mobile devices. Another competitive technique for KWS is the keyword/filler hidden Markov model (HMM) [2], in which separate HMMs are trained for keyword and non-keyword segments. At runtime, a Viterbi search is needed to locate the keyword, which can be computationally expensive depending on the HMM topology.
With the success of neural networks (NNs) in automatic speech recognition (ASR), both keyword and non-keyword audio segments are now used to train a single NN-based acoustic model. For example, in the Deep KWS model proposed by [3], a single DNN outputs the posterior probabilities of the keyword's sub-word units, and the KWS decision is made from a confidence score computed with posterior smoothing. To improve performance, more powerful networks such as convolutional neural networks (CNNs) [4] and recurrent neural networks (RNNs) [5] have been used in place of the DNN. Inspired by these models, several end-to-end models have been proposed that directly output the probability of the whole keyword rather than its sub-word units, without any search or posterior handling [6, 7, 8].
Although tremendous improvements have been made, previous models focus mainly on single-channel KWS. In industry, a microphone array is usually used in more complicated acoustic conditions [9], so signal processing techniques must be applied to convert the multi-channel signal into a single channel. However, such pre-processing is sub-optimal because it is not optimized towards the final goal of interest. There is extensive literature on learning useful representations from multi-channel input in speech recognition. For example, [10] concatenates the multi-channel signals at the network input; in [11], a CNN is used to implicitly exploit the spatial relationships between channels; and attention-based methods have been proposed to model auditory attention in multi-channel speech recognition [12].
Inspired by the application of attention mechanisms in ASR [12], we propose an attention-based end-to-end model for multi-channel KWS. Compared with these attention mechanisms, ours is computationally cheaper, which makes it more suitable for the KWS task. Transfer learning and multi-target spectral mapping are incorporated into the model to achieve better results on noisy evaluation data.
We describe the proposed model in Section 2. The experimental data, setup, and results follow in Section 3. Section 4 concludes the paper.
2 The Proposed Model
As illustrated in Fig. 1, the model consists of three main components: (i) the attention mechanism, (ii) sequence-to-sequence training, and (iii) decoding smoothing.
2.1 Attention Mechanism
The attention mechanism we use is the standard soft attention. For each time step $t$, we compute a 6-dimensional attention weight vector $\alpha_t$ as follows:
$$e_{t,i} = \tanh(x_{t,i} W + b)\, v, \qquad \alpha_{t,i} = \frac{\exp(e_{t,i})}{\sum_{j=1}^{6} \exp(e_{t,j})},$$
where $x_{t,i}$ is the $i$-th row of the $6 \times 40$ input feature matrix $X_t$, $W$ is a $40 \times 128$ weight matrix, $b$ is a 128-dimensional bias vector, and $v$ is a 128-dimensional vector; the softmax function normalizes the scores across the six channels. The enhanced feature $\hat{x}_t = \sum_{i=1}^{6} \alpha_{t,i}\, x_{t,i}$ is the weighted sum of the multi-channel inputs.
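As a concrete illustration, the per-frame channel attention described above can be sketched in NumPy. This is a minimal sketch with the paper's stated dimensions (6 channels, 40-dimensional features, 128 hidden units); the function and variable names are ours, not from the paper:

```python
import numpy as np

def soft_attention(X, W, b, v):
    """Soft attention over C channels for one frame.
    X: (C, D) multi-channel features (C=6 channels, D=40 filter-banks);
    W: (D, H) weight matrix, b: (H,) bias, v: (H,) scoring vector (H=128).
    Returns the channel weights (C,) and the enhanced feature (D,)."""
    scores = np.tanh(X @ W + b) @ v        # (C,) unnormalized channel scores
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                   # softmax across channels
    x_hat = alpha @ X                      # weighted sum over channels -> (D,)
    return alpha, x_hat
```

Because the weights are normalized across channels, the output is a convex combination of the six channel features, i.e. a learned channel-weighting rather than a fixed beamformer.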
2.2 Sequence-to-Sequence Training
The training framework we use is sequence-to-sequence. The encoder learns higher-level representations of the enhanced speech features. With a linear transformation and a softmax function, the probability of the whole keyword can then be predicted at each frame.
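The per-frame prediction step can be sketched as follows. The two-way output (keyword vs. filler) and all names here are our illustrative assumptions; the paper only specifies a linear transformation followed by a softmax:

```python
import numpy as np

def frame_posteriors(H, W_out, b_out):
    """Map encoder outputs to per-frame keyword probabilities.
    H: (T, 128) encoder hidden states; W_out: (128, 2), b_out: (2,)
    (two classes assumed here: keyword vs. filler).
    Returns a (T, 2) matrix whose rows sum to 1."""
    logits = H @ W_out + b_out                            # linear transformation
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)               # frame-wise softmax
```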
Multi-task Learning. To improve performance, we take spectral mapping as an auxiliary task for our KWS model: the model learns the nonlinear mapping between the multi-channel speech features and single-channel speech features, inspired by [14]. The target speech features come from traditional signal processing techniques. Although we argue against separating front-end signal processing from the acoustic model, we investigate whether this multi-task framework can improve performance. The loss function is
$$L = \lambda L_{\mathrm{kws}} + (1 - \lambda) L_{\mathrm{map}},$$
where $L_{\mathrm{kws}}$ is the keyword classification loss, $L_{\mathrm{map}}$ is the spectral mapping loss, and $\lambda \in [0, 1]$ balances the two tasks.
Transfer Learning. We also adopt transfer learning to improve performance in noisy environments. Transfer learning [15] refers to initializing the model parameters with the corresponding parameters of an already-trained model. Here we initialize the network with the proposed multi-channel KWS model trained on relatively clean data, and fine-tune it with only noisy data.
Multi-target Mapping. Since it is difficult to train the model with only noisy data, we propose multi-target spectral mapping. We conjecture that, with more mapping targets, the spectral mapping can converge better than when learning the nonlinear relationship between the noisiest input and the cleanest output alone. Compared with the spectral mapping described above, two extra mapping targets are involved during training (detailed in sub-section 3.1). The loss function is
$$L = \lambda_1 L_{\mathrm{kws}} + \lambda_2 L_{\mathrm{map}_1} + \lambda_3 L_{\mathrm{map}_2} + \lambda_4 L_{\mathrm{map}_3},$$
with the constraint that $\sum_{i=1}^{4} \lambda_i = 1$.
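The weighted combination of the four losses can be sketched in plain Python. The default weights below are the values reported in sub-section 3.2; the function name and signature are illustrative assumptions:

```python
def multi_target_loss(l_kws, l_maps, lambdas=(0.5, 0.2, 0.2, 0.1)):
    """Combine the KWS loss with three spectral-mapping losses.
    l_kws: scalar keyword loss; l_maps: three scalar mapping losses.
    The weights must satisfy the paper's constraint of summing to 1."""
    assert abs(sum(lambdas) - 1.0) < 1e-9, "weights must sum to 1"
    terms = [l_kws] + list(l_maps)
    assert len(terms) == len(lambdas)
    return sum(w * l for w, l in zip(lambdas, terms))
```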
2.3 Decoding Smoothing
When decoding, our model takes a feature matrix as input and outputs the keyword probability at each frame. We adopt a posterior probability smoothing method: the final decision is based on the average probability over a window of frames.
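The smoothing-and-decision step can be sketched as follows. The window size and threshold are illustrative values, not taken from the paper:

```python
import numpy as np

def smoothed_decision(posteriors, window=30, threshold=0.5):
    """posteriors: (T,) per-frame keyword probabilities from the model.
    Fire when the running average over the last `window` frames exceeds
    `threshold`. Returns the firing frame index, or -1 if never triggered."""
    posteriors = np.asarray(posteriors, dtype=float)
    for t in range(len(posteriors)):
        lo = max(0, t - window + 1)
        if posteriors[lo:t + 1].mean() > threshold:
            return t
    return -1
```

Averaging over a window suppresses single-frame spikes, trading a small amount of latency (up to `window` frames) for a lower false-alarm rate.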
3 Experiments
3.1 Experiment data
The training data consist of 240k keyword utterances (120k with echo and 120k without) and 200 hours of negative examples, with 10% of the data used for validation. The evaluation data include 50 hours of filler data and 48k keyword utterances (50% with echo and 50% without). We also recorded 1k noisy keyword utterances, as Fig. 2 illustrates. They consist of two equal parts, recorded in two situations. The first half (referred to as the hard-noisy data, with an average SNR of about -20 dB) was recorded as shown in Fig. 2(a), where the recording device is close to the music noise source and the speaker is 3 meters away from the device. The other half (referred to as the easy-noisy data, with an average SNR of about -18 dB) was recorded as Fig. 2(b) shows: the distance between the device and the speaker remains unchanged, but the distance between the device and the music source is 1 meter.
In addition, we recorded 50 hours of music for the multi-target mapping experiment. We randomly add music to the 120k non-echo keywords and the 200 hours of negative examples to generate the noisy training data. Algorithm 1 shows the procedure for creating the multiple mapping targets.
3.2 Experiment setups
In the baseline single-channel KWS model, the front-end mainly consists of beamforming and acoustic echo cancellation (AEC). These two blocks are constructed as proposed in [16] and [17], respectively.
The input feature in all experiments is the trainable PCEN frontend [18]. The 40-dimensional filter-bank features are extracted using a 25 ms window with a 10 ms shift. The encoder consists of two GRU (gated recurrent unit) [19] layers and one fully-connected (FC) layer; both the GRU and FC layers have 128 units, with a 0.9 dropout rate. In the multi-task models, the two tasks use separate FC output layers. The Adam optimizer [20] is used to update the parameters, with a batch size of 64 and an initial learning rate of 0.001. The four loss weights in the multi-target spectral mapping are set to 0.5, 0.2, 0.2, and 0.1, respectively, since these values performed reasonably well on the development data.
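Since PCEN is central to the front-end, a minimal non-trainable NumPy sketch may help. The parameter values below are commonly cited defaults from the PCEN literature, not values reported in this paper; in the trainable variant they are learned jointly with the network:

```python
import numpy as np

def pcen(E, s=0.025, alpha=0.98, delta=2.0, r=0.5, eps=1e-6):
    """Per-channel energy normalization on a (T, F) filter-bank matrix E.
    A first-order IIR filter M tracks the smoothed energy per band, then
    each frame is gain-normalized and root-compressed."""
    E = np.asarray(E, dtype=float)
    M = np.zeros_like(E)
    M[0] = E[0]
    for t in range(1, len(E)):
        M[t] = (1 - s) * M[t - 1] + s * E[t]   # smoothed energy estimate
    return (E / (eps + M) ** alpha + delta) ** r - delta ** r
```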
3.3 Impact of Attention mechanism
We first evaluate the baseline model and the proposed multi-channel models. The baseline uses the signal processing techniques described in sub-section 3.2. Its input is seven channels, while the input of the proposed model (Attention) is only six channels, without the reference signal for AEC.
Attention outperforms the baseline model on all evaluation sets. At 0.5 false alarms (FA) per hour, Attention gains absolute improvements of 4% and 7% on the non-echo and echo data, respectively (Fig. 2(a) and Fig. 2(b)).
The performance gap widens on the noisy data (Fig. 2(c) and Fig. 2(d)): the improvements are 40% and 60% on the hard-noisy and easy-noisy data, respectively. This large difference may be attributed chiefly to the signal processing techniques not being robust to noisy environments, especially when the noise source is close to the wake-up device.
3.4 Impact of multi-task learning
As shown in Fig. 2(a) and Fig. 2(b), the proposed model with spectral mapping (Mapping) slightly outperforms Attention, which to some degree confirms our conjecture. However, the results get worse on the noisy data (Fig. 2(c) and Fig. 2(d)). This difference stems from the mismatch between the training data and the noisy testing data.
3.5 Impact of transfer learning and multi-target mapping
To increase the noise-robustness of the model, we initialize the model with the parameters of Attention and fine-tune it with the artificially generated noisy training data (detailed in sub-section 3.1). As shown in Fig. 2(a) and Fig. 2(b), all models with transfer learning (Transfer, Transfer_Map, and Tran_Multi_Map) perform worse than Attention on both the non-echo and echo test data, owing to the mismatch between the training and testing data. However, the main target of transfer learning and multi-target mapping is the noisy data. As illustrated in Fig. 2(c) and Fig. 2(d), transfer learning with single-target spectral mapping does not yield better results than Attention, which confirms the difficulty of training the model with only noisy data. However, the model with both transfer learning and multi-target mapping (Tran_Multi_Map) outperforms all other models by a large margin on the noisy data: at 0.5 false alarms per hour, Tran_Multi_Map gains absolute improvements of 30% and 10% over Attention on the hard-noisy and easy-noisy data, respectively.
4 Conclusions
Without the reference signal for AEC, the proposed attention-based model for multi-channel KWS outperforms the baseline model on all test data. With spectral mapping, performance gains a slight improvement when the training and testing data are similar. In addition, transfer learning and multi-target spectral mapping enhance the model's robustness to noisy environments, which sheds light on NN-based speech enhancement for ASR.
Acknowledgements
The authors would like to thank their colleagues in the acoustics group for useful discussions.
References
[1] David R. H. Miller, Michael Kleber, Chia-Lin Kao, Owen Kimball, Thomas Colthurst, Stephen A. Lowe, Richard M. Schwartz, and Herbert Gish. Rapid and accurate spoken term detection. In Proc. Interspeech, 2007.
[2] Richard C. Rose and Douglas B. Paul. A hidden Markov model based keyword recognition system. In Proc. ICASSP, pages 129–132, 1990.
[3] Guoguo Chen, Carolina Parada, and Georg Heigold. Small-footprint keyword spotting using deep neural networks. In Proc. ICASSP, pages 4087–4091, 2014.
[4] Tara N. Sainath and Carolina Parada. Convolutional neural networks for small-footprint keyword spotting. In Proc. Interspeech, 2015.
[5] Ming Sun, Anirudh Raju, George Tucker, Sankaran Panchapagesan, Gengshen Fu, Arindam Mandal, Spyros Matsoukas, Nikko Strom, and Shiv Vitaladevuni. Max-pooling loss training of long short-term memory networks for small-footprint keyword spotting. In Proc. SLT, pages 474–480, 2016.
[6] Ye Bai, Jiangyan Yi, Hao Ni, Zhengqi Wen, Bin Liu, Ya Li, and Jianhua Tao. End-to-end keywords spotting based on connectionist temporal classification for Mandarin. In Proc. ISCSLP, pages 1–5, 2016.
[7] Sercan O. Arik, Markus Kliegl, Rewon Child, Joel Hestness, Andrew Gibiansky, Chris Fougner, Ryan Prenger, and Adam Coates. Convolutional recurrent neural networks for small-footprint keyword spotting. arXiv preprint arXiv:1703.05390, 2017.
[8] Changhao Shan, Junbo Zhang, Yujun Wang, and Lei Xie. Attention-based end-to-end models for small-footprint keyword spotting. arXiv preprint arXiv:1803.10916, 2018.
[9] Michael L. Seltzer. Bridging the gap: Towards a unified framework for hands-free speech recognition using microphone arrays. In Proc. HSCMA, pages 104–107, 2008.
[10] Yulan Liu, Pengyuan Zhang, and Thomas Hain. Using neural network front-ends on far field multiple microphones based speech recognition. In Proc. ICASSP, pages 5542–5546, 2014.
[11] Steve Renals and Pawel Swietojanski. Neural networks for distant speech recognition. In Proc. HSCMA, pages 172–176, 2014.
[12] Suyoun Kim and Ian Lane. Recurrent models for auditory attention in multi-microphone distance speech recognition. arXiv preprint arXiv:1511.06407, 2015.
[13] F. A. Chowdhury, Quan Wang, Ignacio Lopez Moreno, and Li Wan. Attention-based models for text-dependent speaker verification. arXiv preprint arXiv:1710.10470, 2017.
[14] Zhuo Chen, Shinji Watanabe, Hakan Erdogan, and John R. Hershey. Speech enhancement and recognition using multi-task learning of long short-term memory recurrent neural networks. In Proc. Interspeech, 2015.
[15] Sinno Jialin Pan and Qiang Yang. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345–1359, 2010.
[16] François Grondin, Dominic Létourneau, François Ferland, Vincent Rousseau, and François Michaud. The ManyEars open framework. Autonomous Robots, 34(3):217–232, 2013.
[17] Jean-Marc Valin. On adjusting the learning rate in frequency domain echo cancellation with double-talk. IEEE Transactions on Audio, Speech, and Language Processing, 15(3):1030–1034, 2007.
[18] Yuxuan Wang, Pascal Getreuer, Thad Hughes, Richard F. Lyon, and Rif A. Saurous. Trainable frontend for robust and far-field keyword spotting. In Proc. ICASSP, pages 5670–5674, 2017.
[19] Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.
[20] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.