Query-by-example on-device keyword spotting

10/11/2019, by Byeonggeun Kim, et al.

A keyword spotting (KWS) system determines the existence of a, usually predefined, keyword in a continuous speech stream. This paper presents a query-by-example on-device KWS system which is user-specific. The proposed system consists of two main steps: query enrollment and testing. In the query enrollment step, phonetic posteriors are output by a small-footprint automatic speech recognition model based on connectionist temporal classification. Using the phonetic-level posteriorgram, a hypothesis graph of a finite-state transducer (FST) is built, so that any keyword can be enrolled, avoiding the out-of-vocabulary problem. In testing, a log-likelihood score is computed for the input audio using the FST. We also propose a threshold prediction method that uses only the user-specific keyword hypothesis: the system generates query-specific negatives by rearranging each query utterance in waveform, and the threshold is decided based on the enrollment queries and the generated negatives. We tested two keywords in English, and the proposed work shows promising performance while preserving simplicity.


1 Introduction

Keyword spotting (KWS) has been widely used in personal devices such as mobile phones and home appliances to detect keywords, which are usually composed of one or two words. The goal is to detect the keywords in a real-time audio stream. For practical use, the system needs to achieve a low false rejection rate (FRR) while keeping false alarms (FAs) per hour low.

Many previous works consider predefined keywords to reach promising performance; keywords such as “Alexa”, “Okay/Hey Google”, “Hey Siri” and “Xiaovi Xiaovi” are examples. They collect numerous variations of a specific keyword utterance and train neural networks (NNs), which have been a promising method in the field.

[9, 4] have an acoustic encoder and a sequence matching decoder as separate modules. The NN-based acoustic models (AMs) predict senone-level posteriors. Sequence matching, traditionally modeled by hidden Markov models (HMMs), interprets the AM outputs into keyword and background parts. Meanwhile, [19, 21, 6, 18] have end-to-end NN architectures that directly determine the presence of keywords. They use recurrent neural networks (RNNs) with attention layers [19, 21], a dilated convolution network [6], or filters based on singular value decomposition [18].

On the other hand, there have been query-by-example approaches which detect query keywords of any kind. Early approaches use automatic speech recognition (ASR) phonetic posteriors as a posteriorgram and exploit dynamic time warping (DTW) to compare keyword samples and test utterances [10, 23, 1]. [24] also used a posteriorgram and an edit distance metric with a long short-term memory (LSTM) - connectionist temporal classification (CTC) ASR. Furthermore, [3] computes simple similarity scores of LSTM output vectors between enrollment and test utterances. Recently, end-to-end NN based query-by-example systems have been suggested [11, 2]. [11] uses a recurrent neural network transducer (RNN-T) model biased with attention over the keyword. [2] suggests using a text query instead of audio.

Meanwhile, other groups have explored related keyword spotting problems. [14, 20, 16, 7] address multiple keyword detection. [12, 5] focus on KWS tasks with small datasets: [12] uses DTW to augment the data, and [5] suggests a few-shot meta-learning approach.

In this paper, we propose a simple yet powerful query-by-example on-device KWS approach using user-specific queries. Our system provides a user-specific model by utilizing a few keyword utterances spoken by a single user. The system uses a posteriorgram-based graph matching algorithm built on a small-footprint ASR. A CTC-based ASR [8] outputs phonetic posteriors, and we build a hypothesis graph of a finite-state transducer (FST). The posteriorgram consists of phonetic outputs, which frees the model from the out-of-vocabulary problem. At test time, the system determines whether an input audio contains the keyword through a log-likelihood score on the graph, which encodes the phonetic hypothesis as constraints. Despite score normalization, score-based query-by-example on-device KWS systems usually suffer from threshold decision, because there are not enough negative examples in an on-device system. We predict a user-specific threshold from the keyword hypothesis graphs: we generate query-specific negatives by rearranging the positives in waveform, and then predict a threshold using the positives and the generated negatives. While keeping this simplicity, our approach shows comparable performance with recent KWS systems.

The rest of the paper is organized as follows. In Section 2, the KWS system is described including the acoustic model, the FST in the decoder, and the threshold prediction method. The performance evaluation results are discussed in Section 3 followed by the conclusion in Section 4.

2 Query-by-example KWS system

Our system consists of three parts: the acoustic model, the decoder, and the threshold prediction part. In the subsections, we denote the acoustic model input features as $X = (x_1, \dots, x_T)$, where $t$ is a time frame index. The corresponding label sequence is $Y = (y_1, \dots, y_U)$, and usually $U \le T$.

2.1 Acoustic model

We exploit a CTC acoustic model [8]. We denote the activation of the ASR as $O = (o^1, \dots, o^T)$ and let $o^t_n$ be the activation of unit $n$ at time $t$. Thus $o^t_n$ is the probability of observing $n$ at time $t$. CTC uses an extra blank output $\varnothing$. We denote $L' = L \cup \{\varnothing, \textit{space}\}$, where $L$ is the set of 39 context-independent phonemes. The space output implies a short pause between words. We let $L'^T$ be the set of sequences of length $T$ whose elements are in $L'$. Then, the conditional probability of a path $\pi \in L'^T$ given $X$ is $p(\pi \mid X) = \prod_{t=1}^{T} o^t_{\pi_t}$.

[8] suggests a many-to-one mapping $\mathcal{B}$ which maps an activation path $\pi$ to a label sequence $Y$. The mapping collapses repeats and removes the blank output $\varnothing$, e.g. $\mathcal{B}(aa\varnothing abb) = aab$. The conditional probability marginalizes over the possible paths for $Y$ and is defined as

$$p(Y \mid X) = \sum_{\pi \in \mathcal{B}^{-1}(Y)} p(\pi \mid X). \tag{1}$$
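As a concrete illustration of the collapse mapping $\mathcal{B}$ and the single-path probability, the following Python sketch operates on a posteriorgram of shape (T, |L'|); the names (`BLANK`, `collapse_ctc`, `path_prob`) are ours, not from the paper.

```python
import numpy as np

BLANK = "<b>"  # stand-in symbol for the CTC blank output

def collapse_ctc(path):
    """The many-to-one mapping B: collapse repeats, then remove blanks,
    e.g. ['a', 'a', '<b>', 'a', 'b', 'b'] -> ['a', 'a', 'b']."""
    collapsed = [p for i, p in enumerate(path) if i == 0 or p != path[i - 1]]
    return [p for p in collapsed if p != BLANK]

def path_prob(posteriors, path, labels):
    """p(path | X) = prod_t o^t_{path_t} for a posteriorgram of shape (T, |L'|),
    where `labels` gives the unit order of the posteriorgram columns."""
    idx = [labels.index(p) for p in path]
    return float(np.prod(posteriors[np.arange(len(path)), idx]))
```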

2.2 Keyword spotting decoder

The keyword spotting decoder operates in two phases: an enrollment step and testing. In the enrollment step, using the AM output of the query utterance, the model finds the hypothesis and builds FSTs for the path. In testing, the model calculates the score and determines whether the input utterance contains the keyword using the hypothesis.

2.2.1 Query enrollment

In the enrollment step, the system uses a few clean utterances of a keyword spoken by a single user. We use a simple and heuristic method, max-decoding: we follow the component of maximum posterior at each time frame. For each time step $t$, we choose $P_t = \arg\max_n o^t_n$ and get a path $P$. The hypothesis $h$ is defined by the mapping $\mathcal{B}$, as $h = \mathcal{B}(P)$.

A keyword ‘Hey Snapdragon’ gives a hypothesis like ‘HH.EY. .S.N.AE.P.T. .A.AE.G.AH.N’. With the hypothesis as a sequential phonetic constraint, we generate left-to-right FST systems.
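As an illustration, max-decoding reduces to a frame-wise argmax followed by the collapse mapping; this sketch reuses `collapse_ctc` from the earlier snippet and assumes `labels` matches the column order of the posteriorgram.

```python
import numpy as np

def max_decode(posteriors, labels):
    """Greedy max-decoding: take the most likely unit at each frame,
    then apply the CTC collapse B to obtain the phonetic hypothesis."""
    path = [labels[n] for n in np.argmax(posteriors, axis=1)]
    return collapse_ctc(path)
```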

Figure 1: Example of a generated negative from a query utterance, ‘Hey Snapdragon’. The query utterance is divided into three parts in waveform and shuffled.

2.2.2 Keyword spotting

In testing, the system calculates a score of a test utterance for the hypothesis FSTs. Assume that the FST has $N+1$ distinct possible states $\{s_0, s_1, \dots, s_N\}$, where $s_0$ denotes the blank state. The FST is left-to-right and therefore has an ordered label hypothesis $h = (h_1, \dots, h_N)$, where $h_i \in L'$. Given the hypothesis, the score is the log likelihood of a test input, $\log p(X \mid h)$.

At time step $t$, the activation of the AM is $o^t$, and we denote the corresponding FST state as $q_t$. The transition probability is $a_{ij} = p(q_t = s_j \mid q_{t-1} = s_i)$. The hypothesis limits the transition probabilities as in Eq. (2): from a state, the path can only remain in the same state or advance to the next state of the left-to-right graph. The hypothesis $h$ is usually shorter than $T$ because we use the mapping $\mathcal{B}$ to get $h$; therefore it is more likely to remain at the current state than to move to the next. We naively choose the transition probabilities to reflect this scenario.

$$a_{ij} = 0 \quad \text{unless } j \in \{i, i+1\}, \qquad 0 \le i, j \le N. \tag{2}$$

Snapdragon is a registered trademark of Qualcomm Incorporated.

The log likelihood is

$$\log p(X \mid h) = \log \Big[\, p(q_1) \prod_{t=2}^{T} p(q_t \mid q_{t-1}) \prod_{t=1}^{T} p(x_t \mid q_t) \,\Big], \tag{3}$$

where $p(q_1)$ denotes the initial state probability and $q_{1:T}$ is a given path. The term $\prod_t p(q_t \mid q_{t-1})$ is the product of transition probabilities, and the likelihood $p(x_t \mid q_t) \propto p(q_t \mid x_t) / p(q_t)$ is proportional to the posteriors of the AM. Here $p(q_1)$ and the state prior $p(q_t)$ are assumed to be uniform.

We normalize the score by dividing Eq. (3) by the number of non-blank states, $N$. We find the path $q_{1:T}$ and the initial time $t_0$ which maximize Eq. (3) by beam search. During the search, we consider each time step as a candidate initial time $t_0$. By doing this, the system can spot the keyword in a long audio stream.
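The following is a minimal sketch of this left-to-right scoring, assuming only self-loop and advance transitions between hypothesis states; the actual transition values of Eq. (2) are not specified here, so `p_stay` is a placeholder, and a streaming system would re-run the search with every frame as a candidate initial time $t_0$.

```python
import numpy as np

BLANK = "<b>"  # same stand-in blank symbol as in the earlier snippet

def fst_log_likelihood(log_post, hypothesis, labels, p_stay=0.9):
    """Viterbi-style log likelihood of a left-to-right hypothesis FST.

    log_post:   (T, |L'|) frame-level log posteriors from the AM
    hypothesis: list of phonetic units h_1 ... h_N (output of max-decoding)
    p_stay:     placeholder self-loop probability (Eq. (2) values not given)
    The returned score is normalized by the number of non-blank states."""
    idx = [labels.index(h) for h in hypothesis]
    T, N = log_post.shape[0], len(idx)
    log_stay, log_move = np.log(p_stay), np.log(1.0 - p_stay)
    delta = np.full(N, -np.inf)          # best log score ending in each state
    delta[0] = log_post[0, idx[0]]
    for t in range(1, T):
        stay = delta + log_stay
        move = np.concatenate(([-np.inf], delta[:-1] + log_move))
        delta = np.maximum(stay, move) + log_post[t, idx]
    n_non_blank = sum(1 for h in hypothesis if h != BLANK)
    return delta[-1] / max(n_non_blank, 1)
```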

Figure 2: A histogram of query, negative and generated negative log likelihood scores for hypothesis FSTs of a single speaker. Colored histogram shows generated negatives.

2.3 On-device threshold prediction

In this section, the query set is $Q = \{X^{(1)}, \dots, X^{(K)}\}$, and the corresponding hypothesis set is $H = \{h^{(1)}, \dots, h^{(K)}\}$. $f_h(X)$ is a mapping from a test utterance $X$ to the log likelihood score for a hypothesis $h$. We denote negative utterances as $X_n$. Each hypothesis computes positive scores from the other queries. The threshold $\theta$ is defined as

$$\theta = \alpha \, \overline{S}_{\text{pos}} + (1 - \alpha) \, \overline{S}_{\text{neg}}, \tag{4}$$

where $\alpha$ is a hyperparameter in $[0, 1]$, $\overline{S}_{\text{pos}}$ is the mean of the positive scores, and $\overline{S}_{\text{neg}}$ is the mean of the negative scores. Eq. (4) places the threshold between the mean of the positive scores and that of the negative scores.
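Under this reading of Eq. (4), the threshold is a one-line interpolation; the sketch below uses our own function and argument names.

```python
import numpy as np

def predict_threshold(pos_scores, neg_scores, alpha):
    """Eq. (4): interpolate between the mean positive and mean negative score."""
    return alpha * np.mean(pos_scores) + (1.0 - alpha) * np.mean(neg_scores)
```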

We generate query-specific negatives from the queries. Figure 1 shows an example for the keyword ‘Hey Snapdragon’. Each positive is divided into sub-parts and shuffled in waveform. We overlap 16 samples at each part boundary and apply one-sided triangular windows to guarantee a smooth waveform transition and to prevent undesirable discontinuities, i.e. impulsive noises. Figure 2 plots an example of histograms of queries, negatives, and generated negatives for the hypothesis FSTs of a single speaker. A probability distribution is drawn over the histogram assuming a Gaussian distribution for better visualization. We use the generated negatives as $X_n$ in Eq. (4).
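A plausible sketch of the negative generation: split each query into equal parts, shuffle them, and cross-fade 16 samples at every boundary with one-sided triangular windows; the paper's exact windowing may differ. With three parts there are $3! - 1 = 5$ shuffles per query, giving the 15 generated negatives per enrollment used in Section 3.2.3.

```python
import numpy as np
from itertools import permutations

def generate_negatives(wave, n_parts=3, overlap=16):
    """Query-specific negatives: split the waveform into equal parts, shuffle
    them, and cross-fade `overlap` samples at each boundary with one-sided
    triangular windows to avoid impulsive discontinuities."""
    parts = np.array_split(np.asarray(wave, dtype=float), n_parts)
    fade_in = np.linspace(0.0, 1.0, overlap)
    negatives = []
    for order in permutations(range(n_parts)):
        if order == tuple(range(n_parts)):
            continue  # skip the original ordering
        out = parts[order[0]].copy()
        for k in order[1:]:
            nxt = parts[k].copy()
            # one-sided triangular cross-fade over the 16-sample boundary
            out[-overlap:] = out[-overlap:] * fade_in[::-1] + nxt[:overlap] * fade_in
            out = np.concatenate([out, nxt[overlap:]])
        negatives.append(out)
    return negatives  # 3 parts -> 5 shuffled negatives per query
```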

Figure 3: Comparison of the baseline S-DTW with the FST constrained by the phonetic hypothesis.

3 Experiments

3.1 Experimental setup

3.1.1 Query and testing data

Many previous works experiment with their own data, which are not accessible. In some of the literature, only relative performances are reported; thus the results are hard to compare with each other and are not reproducible. To avoid this issue, we use public and well-known data.

 

Method   Keyword          clean   10 dB   6 dB   0 dB   Avg.
S-DTW    Hey Snapdragon   1.35    3.84    8.01   21.6   8.70
S-DTW    Hey Snips        10.5    15.8    20.7   32.8   19.9
FST      Hey Snapdragon   0.53    0.83    3.22   12.2   4.19
FST      Hey Snips        1.85    5.36    8.59   24.7   10.13

Table 1: FRR (%) at 0.05 FAs per hour for clean positives and for positives at SNR levels {10 dB, 6 dB, 0 dB}.

 

Method              Keyword            Params   SNR     FRR @ 1 FA/hr   FRR @ 0.5 FA/hr   FRR @ 0.05 FA/hr
Shan et al. [19]    Xiao ai tong xue   84 k     -       1.02            -                 -
Coucke et al. [6]   Hey Snips          222 k    5 dB²   -               1.60              -
Wang et al. [21]    Hai xiao wen       -        -       4.17            -                 -
He et al. [11]      Personal name³     -        -       -               -                 8.9
S-DTW               Hey Snapdragon     211 k    6 dB    3.12            4.46              8.01
S-DTW               Hey Snips          211 k    6 dB    13.30           15.07             20.69
FST                 Hey Snapdragon     211 k    6 dB    0.62            1.04              3.22
FST                 Hey Snips          211 k    6 dB    2.79            3.77              8.58

² Coucke et al. [6] augmented the positive dev and test datasets only at 5 dB, while our 6 dB applies only to the positive dev set; our test dataset is augmented at {10, 6, 0} dB.
³ He et al. [11] used queries like ‘Olivia’ and ‘Erica’.

Table 2: Comparison of FRR (%) of various KWS systems at given FAs per hour levels.

We use two query keywords in English, ‘Hey Snapdragon’ and ‘Hey Snips’. The audio data of ‘Hey Snips’ is introduced in [6]. We select 61 speakers who have at least 11 ‘Hey Snips’ utterances each and use 993 utterances from the data. The ‘Hey Snapdragon’ utterances are from a publicly available dataset¹. There are 50 speakers, and each of them speaks the keyword 22 or 23 times; in total, there are 1,112 ‘Hey Snapdragon’ utterances. For each user-specific test, 3 query utterances are randomly picked and the rest are used as positive test samples. We augment the positive utterances using five types of noise, {babble, car, music, office, typing}, at three signal-to-noise ratios (SNRs), {10 dB, 6 dB, 0 dB}.

¹ To be published with the publication of this work in ASRU 2019.

We use WSJ-SI200 [17] as negative samples. We sampled 24 hours of WSJ-SI200 and segmented the whole audio stream into 2-second-long segments. We augment each segment with one of the five noise types, {babble, car, music, office, typing}, and one SNR among {10 dB, 6 dB, 0 dB}; the noise type and SNR are randomly selected.
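For reference, a common recipe for mixing noise into speech at a target SNR (our own sketch, not necessarily the authors' exact augmentation code) is:

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Add noise to speech at a target signal-to-noise ratio in dB."""
    noise = np.resize(noise, speech.shape)          # tile/crop noise to length
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(p_speech / (p_noise * 10.0 ** (snr_db / 10.0)))
    return speech + scale * noise
```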

3.1.2 Acoustic model details

The model is trained on the Librispeech [15] data. Noises, {babble, music, street}, are added at uniform random SNRs within a fixed dB range. For a more generalized model, we also distorted the data by speech rate, power, and reverberation. We changed the speech rate with uniform random rates within a fixed range. For reverberation, we used several room impulse responses measured in moderately reverberant meeting rooms in an office building. By ‘power’, we mean input level augmentation, for which we changed the peak amplitudes of the input waveforms to random values within a fixed dB range in the normalized full scale.

The input features are 40-dimensional per-channel energy normalization (PCEN) mel-filterbank energies [22] with a 30 ms window and a 10 ms frame shift. The model has two convolutional layers followed by five unidirectional LSTM layers. Each convolutional layer is followed by batch normalization and an activation function. Each LSTM layer has 256 LSTM cells. On top, there are a fully-connected layer and a softmax layer. Through the trade-off between ASR performance and network size, the model has 211 k parameters and shows 16.61 % phoneme error rate (PER) and 48.04 % word error rate (WER) on the Librispeech test-clean dataset without prior linguistic information.
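The PyTorch sketch below mirrors the described stack (two convolutional layers with batch normalization and an activation, five unidirectional LSTM layers of 256 cells, a fully-connected layer and a softmax over 41 CTC units: 39 phonemes, space, and blank). Kernel sizes, strides, and channel counts are our guesses and are not tuned to match the reported 211 k parameter budget.

```python
import torch
import torch.nn as nn

class CTCAcousticModel(nn.Module):
    """Sketch of the described AM; layer hyperparameters are illustrative."""
    def __init__(self, n_mels=40, n_units=41, hidden=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
        )
        self.lstm = nn.LSTM(32 * n_mels, hidden, num_layers=5, batch_first=True)
        self.fc = nn.Linear(hidden, n_units)

    def forward(self, feats):                         # feats: (batch, time, n_mels) PCEN
        x = self.conv(feats.unsqueeze(1))             # (batch, 32, time, n_mels)
        x = x.permute(0, 2, 1, 3).flatten(2)          # (batch, time, 32 * n_mels)
        x, _ = self.lstm(x)
        return torch.log_softmax(self.fc(x), dim=-1)  # frame-level log posteriors
```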

3.2 Results

We tested 111 user-specific KWS systems: 50 use the query ‘Hey Snapdragon’ and the rest use ‘Hey Snips’. We used three queries from a given speaker for enrollment. When we use one or two queries instead, the relative increase of FRR (%) at 0.5 FA per hour is 222.05 % or 2.47 %, respectively, at 6 dB SNR. The scores from the three hypotheses are averaged for each test.

3.2.1 Baseline

Some previous works exploit DTW to compare the query and the test sample [10, 23, 1]. We exploit DTW as our baseline while using the CTC-based AM. We use the KL divergence as the DTW distance and allow a subsequence as the optimal path, which is referred to as subsequence DTW (S-DTW) [13]. The score is normalized by the input length of the DTW corresponding to the optimal path.
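A sketch of this baseline under our reading: KL divergence as the local distance between posteriorgram frames, a free start and end in the test utterance, and normalization by the optimal path length. This is an illustration, not the authors' exact implementation.

```python
import numpy as np

def s_dtw_score(query_post, test_post, eps=1e-8):
    """Subsequence DTW [13] between a query posteriorgram (Tq, D) and a test
    posteriorgram (Tx, D); returns a path-length-normalized score (higher is
    a better match)."""
    q = np.clip(query_post, eps, 1.0)
    x = np.clip(test_post, eps, 1.0)
    # local distance d[i, j] = KL(q_i || x_j)
    d = np.einsum('id,id->i', q, np.log(q))[:, None] - q @ np.log(x).T
    Tq, Tx = d.shape
    acc = np.full((Tq, Tx), np.inf)
    length = np.zeros((Tq, Tx), dtype=int)
    acc[0, :] = d[0, :]                  # free starting point in the test utterance
    length[0, :] = 1
    for i in range(1, Tq):
        for j in range(Tx):
            prev = [(acc[i - 1, j], length[i - 1, j])]
            if j > 0:
                prev += [(acc[i - 1, j - 1], length[i - 1, j - 1]),
                         (acc[i, j - 1], length[i, j - 1])]
            best_cost, best_len = min(prev)
            acc[i, j] = best_cost + d[i, j]
            length[i, j] = best_len + 1
    norm = acc[-1, :] / length[-1, :]    # free end point in the test utterance
    j_end = int(np.argmin(norm))
    return -norm[j_end]
```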

3.2.2 FST constrained by phonetic hypothesis

We build three hypothesis FSTs for each system. We tested all 111 user-specific models and averaged the results by keyword. Table 1 compares the baseline S-DTW with the FST method, and we average the performance over the four SNR levels to plot the ROC curve shown in Figure 3. The FST method consistently outperforms the S-DTW while using the same queries, and ‘Hey Snapdragon’ performs better than ‘Hey Snips’. The query word ‘Hey Snips’ is short, so false alarms are more likely to occur. The performance is heavily influenced by the type of keyword, and this is also noted in [11].

In Figure 4, we plot a histogram which shows the FRR by user. Most user models show low FRR except for some outliers.

Due to the limited data access, direct result comparison with previous works is difficult. Nevertheless, we compare our results with others in Table 2 to show that they are comparable to those of predefined KWS systems [19, 6, 18] and a query-by-example system [11]. Blanks in the table imply unknown information.

Figure 4: Histograms of FRRs (%) at 0.05 FA/hr per user model.
Figure 5: Comparison of the baseline with the query-specific generated negatives. The graphs show the relationship between the mean positive and mean negative scores, with best-fit lines.

3.2.3 On-device threshold prediction

We tested a naive threshold prediction approach as a baseline. The baseline assumes a scenario in which a device stores 100 randomly chosen general negatives; 50 negatives are from clean data and the rest are from the augmented data mentioned in Section 3.1.1. These serve as the negatives in Eq. (4).

The proposed method exploits query-specific negatives. For each query, we divide the waveform into three parts of equal length, so there are five ways to shuffle them that differ from the original signal. There are three queries per enrollment, and therefore we have 15 generated negatives. Each hypothesis from a query uses the other two queries as positives and their generated negatives as negatives, i.e. 2 positives and 10 negatives per hypothesis.

Figure 5 shows the mean of the positive scores and that of the negative scores for the 111 user-specific models. The baseline shows low and even negative correlation coefficient (R) values: the R values for ‘Hey Snapdragon’ and ‘Hey Snips’ are -0.04 and -0.21, respectively. Meanwhile, the proposed method shows positive R values, 0.25 for ‘Hey Snapdragon’ and 0.40 for ‘Hey Snips’. If there is a common tendency between positives and negatives across keywords, we can expect useful threshold decision rules from them. Here we use the simple linear interpolation introduced in Section 2.3.

We search for $\alpha$ in Eq. (4) by brute force so that the average over the 111 models is near 0.05 FAs per hour. We set $\alpha$ to 0.82 for the baseline and 0.38 for the proposed method, and the resulting FAs per hour are 0.049 for the baseline and 0.050 for the proposed method on average.
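A sketch of this brute-force search under simplifying assumptions: each user model contributes its enrollment positive scores, its generated-negative scores, and the scores of a windowed negative stream, and false alarms are approximated by counting negative-stream scores above the threshold. The data layout and helper names are ours, not from the paper.

```python
import numpy as np

def search_alpha(models, stream_hours, target=0.05, grid=np.linspace(0.0, 1.0, 101)):
    """Pick the alpha in Eq. (4) whose per-model thresholds give an average
    FA/hr closest to the target.  `models` is a list of tuples
    (pos_scores, neg_scores, negative_stream_scores)."""
    def avg_fa(alpha):
        rates = []
        for pos, neg, stream_scores in models:
            thr = alpha * np.mean(pos) + (1.0 - alpha) * np.mean(neg)
            rates.append(np.sum(np.asarray(stream_scores) > thr) / stream_hours)
        return float(np.mean(rates))
    return min(grid, key=lambda a: abs(avg_fa(a) - target))
```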

Both methods find an $\alpha$ that reaches the target FAs per hour level on average; however, the two methods differ dramatically across keywords. The inter-keyword difference should be small in order for a query-by-example system to work on any kind of keyword. For the baseline, ‘Hey Snapdragon’ shows 0.001 FAs per hour while ‘Hey Snips’ shows 0.088 FAs per hour. Despite using 6 to 7 times fewer negatives, the proposed method shows exactly the same 0.050 FAs per hour for both keywords, ‘Hey Snapdragon’ and ‘Hey Snips’. The baseline shows 17.77 % FRR on 6 dB noisy positives due to its low FAs per hour, while the proposed method shows 3.95 % FRR for ‘Hey Snapdragon’. These results differ from Table 2 because Table 2 uses the given FAs per hour level for each model, while this section uses the averaged FAs per hour.

4 Conclusions

In this paper, we suggest a simple and powerful approach for the query-by-example on-device keyword spotting task. Our system uses user-specific queries, and a CTC-based AM outputs phonetic posteriorgrams. We decode the output and build left-to-right FSTs as a hypothesis. A log likelihood is calculated as the score for testing. For on-device testing, we suggest a method to predict a proper user- and query-specific threshold from the hypothesis: we generate query-specific negatives by shuffling the query in waveform. While many previous KWS approaches are not reproducible due to limited data access, we tested our methods on public and well-known data. In the experiments, our approach showed promising performance, comparable to the latest predefined and query-by-example methods. This work is limited by the lack of public data, and we suggest only a naive approach for utilizing the generated negatives. As future work, we will study more advanced ways to predict the threshold using the query-specific negatives and test various keywords.

References

  • [1] X. Anguera and M. Ferrarons (2013) Memory efficient subsequence dtw for query-by-example spoken term detection. In 2013 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6. Cited by: §1, §3.2.1.
  • [2] K. Audhkhasi, A. Rosenberg, A. Sethy, B. Ramabhadran, and B. Kingsbury (2017) End-to-end asr-free keyword search from speech. IEEE Journal of Selected Topics in Signal Processing 11 (8), pp. 1351–1359. Cited by: §1.
  • [3] G. Chen, C. Parada, and T.N. Sainath (2015) Query-by-example keyword spotting using long short-term memory networks. In ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings, pp. 5236–5240. Cited by: §1.
  • [4] M. Chen, S. Zhang, M. Lei, Y. Liu, H. Yao, and J. Gao (2018) Compact feedforward sequential memory networks for small-footprint keyword spotting. In INTERSPEECH 2018 – 19th Annual Conference of the International Speech Communication Association, pp. 2663–2667. Cited by: §1.
  • [5] Y. Chen, T. Ko, L. Shang, X. Chen, X. Jiang, and Q. Li (2018) Meta learning for few-shot keyword spotting. In arXiv preprint arXiv: 1812.10233, Cited by: §1.
  • [6] A. Coucke, M. Chlieh, T. Gisselbrecht, D. Leroy, M. Poumeyrol, and T. Lavril (2018) Efficient keyword spotting using dilated convolutions and gating. In arXiv preprint arXiv:1811.07684, Cited by: §1, §3.1.1, §3.2.2, footnote 2.
  • [7] S. Fernández, A. Graves, and J. Schmidhuber (2007) An application of recurrent neural networks to discriminative keyword spotting. In International Conference on Artificial Neural Networks, pp. 220–229. Cited by: §1.
  • [8] A. Graves, S. Fernández, F. Gomez, and J. Schmidhuber (2006) Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In Proceedings of the 23rd International Conference on Machine Learning, pp. 369–376. Cited by: §1, §2.1, §2.1.
  • [9] J. Guo, K. Kumatani, M. Sun, M. Wu, A. Raju, N. Strom, and A. Mandal (2018) Time-delayed bottleneck highway networks using a dft feature for keyword spotting. In ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings, pp. 5489–5493. Cited by: §1.
  • [10] T.J. Hazen, W. Shen, and C. White (2009) Query-by-example spoken term detection using phonetic posteriorgram templates. In 2009 IEEE Workshop on Automatic Speech Recognition & Understanding, pp. 421–426. Cited by: §1, §3.2.1.
  • [11] Y. He, R. Prabhavalkar, K. Rao, W. Li, A. Bakhtin, and I. McGraw (2017) Streaming small-footprint keyword spotting using sequence-to-sequence models. In 2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pp. 474–481. Cited by: §1, §3.2.2, §3.2.2, footnote 3.
  • [12] R. Menon, H. Kamper, J. Quinn, and T. Niesler (2018) Fast asr-free and almost zero-resource keyword spotting using dtw and cnns for humanitarian monitoring. In INTERSPEECH 2018 – 19th Annual Conference of the International Speech Communication Association, pp. 2608–2612. Cited by: §1.
  • [13] M. Müller (2007) Dynamic time warping. Information retrieval for music and motion, pp. 69–84. Cited by: §3.2.1.
  • [14] S. Myer and V.S. Tomar (2018) Efficient keyword spotting using time delay neural networks. In INTERSPEECH 2018 – 19th Annual Conference of the International Speech Communication Association, pp. 1264–1268. Cited by: §1.
  • [15] V. Panayotov, G. Chen, D. Povey, and S. Khudanpur (2015) Librispeech: an asr corpus based on public domain audio books. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5206–5210. Cited by: §3.1.2.
  • [16] L. Pandey and K. Nathwani (2018) LSTM based attentive fusion of spectral and prosodic information for keyword spotting in hindi language. In INTERSPEECH 2018 – 19th Annual Conference of the International Speech Communication Association, pp. 112–116. Cited by: §1.
  • [17] D.B. Paul and J.M. Baker (1992) The design for the wall street journal-based csr corpus. In Proceedings of the workshop on Speech and Natural Language, pp. 357–362. Cited by: §3.1.1.
  • [18] A. Raziel and H. Park (2019) End-to-end streaming keyword spotting. In arXiv preprint arXiv: 1812.02802, Cited by: §1, §3.2.2.
  • [19] C. Shan, J. Zhang, Y. Wang, and L. Xie (2018) Attention-based end-to-end models for small-footprint keyword spotting. In INTERSPEECH 2018 – 19th Annual Conference of the International Speech Communication Association, pp. 2037–2041. Cited by: §1, §3.2.2.
  • [20] R. Tang and J. Lin (2018) Deep residual learning for small-footprint keyword spotting. In ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings, pp. 5484–5488. Cited by: §1.
  • [21] X. Wang, S. Sun, C. Shan, J. Hou, L. Xie, S. Li, and X. Lei (2019) Adversarial examples for improving end-to-end attention-based small-footprint keyword spotting. In ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings, Cited by: §1.
  • [22] Y. Wang, P. Getreuer, T. Hughes, R.F. Lyon, and R.A. Saurous (2017) Trainable frontend for robust and far-field keyword spotting. In ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings, pp. 5670–5674. Cited by: §3.1.2.
  • [23] Y. Zhang and J.R. Glass (2009) Unsupervised spoken keyword spotting via segmental dtw on gaussian posteriorgrams. In 2009 IEEE Workshop on Automatic Speech Recognition & Understanding, pp. 398–403. Cited by: §1, §3.2.1.
  • [24] Y. Zhuang, X. Chang, Y. Qian, and K. Yu (2016) Unrestricted vocabulary keyword spotting using lstm-ctc.. In Interspeech, pp. 938–942. Cited by: §1.