
Speech Augmentation Based Unsupervised Learning for Keyword Spotting

05/28/2022
by   Jian Luo, et al.
NetEase

In this paper, we investigate a speech augmentation based unsupervised learning approach for the keyword spotting (KWS) task. KWS is a useful speech application, yet it depends heavily on labeled data. We design a CNN-Attention architecture for the KWS task: CNN layers focus on local acoustic features, while attention layers model long-time dependencies. To improve the robustness of the KWS model, we also propose an unsupervised learning method. The unsupervised loss is based on the similarity between the original and augmented speech features, as well as on audio reconstruction information. Two speech augmentation methods are explored in the unsupervised learning: speed and intensity. Experiments on the Google Speech Commands V2 Dataset demonstrate that our CNN-Attention model achieves competitive results, and that augmentation based unsupervised learning further improves the classification accuracy of the KWS task. In our experiments, with augmentation based unsupervised learning, our KWS model achieves better performance than other unsupervised methods, such as CPC, APC, and MPC.



I Introduction

Keyword Spotting (KWS) is a useful speech application in real-world scenarios. KWS aims at detecting a relatively small set of pre-defined keywords in an audio stream, and it typically runs on interactive agents. KWS systems usually serve two kinds of applications. First, they detect startup commands, such as “hey Siri” or “OK, Google”, providing explicit cues for interaction. Second, KWS can help detect sensitive words to protect the privacy of the speaker. Highly accurate and robust KWS systems are therefore of great significance to real speech applications [15, 17, 30].

Recently, extensive research on KWS has been published [38, 28, 32]. As a traditional solution, the keyword/filler Hidden Markov Model (HMM) has been widely applied to KWS tasks and remains competitive [33]. In this generative approach, an HMM is trained for each keyword, while another HMM is trained on non-keyword speech segments. At inference, Viterbi decoding is required, which can be computationally expensive depending on the HMM topology. In recent years, deep learning models have gained popularity on the KWS task and show better performance than traditional approaches. Google proposed using Deep Neural Networks (DNN) to predict sub-keyword targets; a posterior processing method generates the final confidence score, and the system outperforms the HMM-based baseline [2]. In contrast, Convolutional Neural Networks (CNN) are more attractive, because DNNs ignore the input topology, while audio features can have strong dependencies in the time and frequency domains [35, 20, 42]. A potential drawback, however, is that CNNs may not model much contextual information. Recurrent Neural Networks (RNN) with Connectionist Temporal Classification (CTC) loss have also been investigated for KWS, but their limitation is that they model the speech features directly, without learning the local structure between successive time and frequency steps [34]. Some works combine CNN and RNN to improve KWS accuracy; for example, Convolutional Recurrent Neural Networks (CRNN) and Gated Convolutional Long Short-Term Memory (LSTM) achieve better performance than CNN or RNN alone [1]. More recently, many researchers have focused on transformer-based models with the self-attention mechanism. As a typical example, Bidirectional Encoder Representations from Transformers (BERT) has proven effective in many Natural Language Processing (NLP) tasks [36, 10, 5], and transformer-based models have also been widely applied to Automatic Speech Recognition (ASR) [12, 18]. In this work, we introduce the transformer into the network architecture for KWS. We believe the transformer encoder has a strong advantage for speech representation, and we establish a CNN-Attention network for the KWS task: the CNN helps the network learn local features, while the self-attention mechanism of the transformer focuses on long-time information.

The above supervised approaches achieve good performance, but they require large labeled datasets. For the KWS task, negative samples are much easier to obtain than positive ones. Especially when the keyword changes, collecting positive target samples takes considerable time, and existing models may not transfer easily to the new keyword. In this paper, we focus on an unsupervised learning approach to alleviate this problem. Unsupervised learning allows the neural network to be trained on unlabeled datasets, so the performance of the downstream task can be improved even when labeled data are limited. Unsupervised learning has seen great success in audio, image, and text tasks [7]. In the speech area, researchers have also proposed unsupervised pre-training algorithms [39, 19, 31]. Contrastive Predictive Coding (CPC) extracts speech representations by predicting future information [22]. Autoregressive Predictive Coding (APC) is another pre-training model, with comparable results on phoneme classification and speaker verification tasks [3]. Masked Predictive Coding (MPC) uses a Masked Language Model (MLM) objective in unsupervised pre-training, enabling the model to incorporate context from both directions [11]. With these methods, large amounts of unlabeled audio data can be used to learn a better audio representation, which is then applied to downstream tasks through fine-tuning. A robust KWS system should handle different styles of speech in real-world applications, and speed and volume are major sources of variation in speaking style. Unlike traditional unsupervised learning, which focuses on a general audio representation, we propose an augmentation based approach that improves KWS performance under different speed and intensity conditions. We design an unsupervised loss based on the distance between the original and augmented speech, together with audio reconstruction information for auxiliary training. Our intuition is that utterances containing the same keyword, spoken at different speeds or volumes, should have similar high-level feature representations for KWS.

This paper investigates unsupervised speech representation methods for the KWS task. Unsupervised learning can exploit large unlabeled audio datasets to improve the performance of the downstream KWS task when labeled data are limited. In addition, speech augmentation based unsupervised representation learning helps the network capture speech information across various speaking styles, yielding more robust performance. In summary, the major contributions of this work are as follows:

  • Propose a CNN-Attention architecture for the keyword spotting task, achieving competitive results on the Google Speech Commands V2 Dataset.

  • Design an unsupervised loss based on the Mean Square Error (MSE) to measure the distance between the original and augmented speech.

  • Define a speech augmentation based unsupervised learning approach, utilizing the similarity between the bottleneck layer features of the original and augmented speech, as well as the audio reconstruction information for auxiliary training.

The rest of the paper is organized as follows. Sec. II highlights related prior work on data augmentation, unsupervised learning, and other methodologies for KWS tasks. Sec. III describes the proposed model architecture and the augmentation based unsupervised learning loss. Sec. IV reports the experimental results compared with other pre-training methods; we also discuss the relationship between pre-training steps and the performance of the downstream KWS task. In Sec. V, we conclude with a summary of the paper and future work.

II Related Work

Data augmentation is a common strategy to enlarge the training set of speech applications, such as Automatic Speech Recognition (ASR) and Keyword Spotting (KWS). The work [9] studied vocal tract length perturbation to improve the performance of ASR systems. The work [14] investigated a speed-perturbation technique that changes the speed of the audio signal. Noisy audio signals were used in [8], corrupting clean training speech with noise to improve the robustness of the speech recognizer. SpecAugment [24] is a spectral-domain augmentation that masks bands of the frequency and/or time axes; it was further explored on large-scale datasets in [25]. WavAugment [13] combines pitch modification, additive noise, and reverberation to increase the performance of Contrastive Predictive Coding (CPC). In this work, we apply speed and volume perturbation in our speech augmentation method.
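The time/frequency masking idea behind SpecAugment can be sketched in a few lines. The numpy function below is an illustrative version only; the mask counts and maximum widths are arbitrary choices for the example, not the settings used in [24]:

```python
import numpy as np

def spec_mask(features, num_freq_masks=1, num_time_masks=1,
              max_f=8, max_t=10, rng=None):
    """Zero out random frequency bands and time spans of a (T, F)
    log-mel feature matrix, in the spirit of SpecAugment."""
    rng = rng or np.random.default_rng(0)
    out = features.copy()
    T, F = out.shape
    for _ in range(num_freq_masks):
        f = rng.integers(0, max_f + 1)            # mask width (may be zero)
        f0 = rng.integers(0, max(1, F - f + 1))   # mask start
        out[:, f0:f0 + f] = 0.0
    for _ in range(num_time_masks):
        t = rng.integers(0, max_t + 1)
        t0 = rng.integers(0, max(1, T - t + 1))
        out[t0:t0 + t, :] = 0.0
    return out
```

Because the masks only zero existing values, the augmented features stay the same shape as the input, which keeps the downstream model unchanged.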

Although supervised learning has been the major approach in the keyword spotting area, current supervised models require large amounts of labeled data. High-quality labeled datasets require substantial effort and are hardly available for less frequently used languages. For this reason, there has recently been a great surge of interest in weakly supervised solutions that use datasets with few human annotations. Noisy student training, a semi-supervised learning method, was proposed for ASR [26] and later used for robust keyword spotting [27]. There has also been related research investigating unsupervised methods for keyword spotting [6, 16, 43]. [6] proposed a self-organizing speech recognizer in which minimal transcriptions are used to train a grapheme-to-sound-unit converter. [16] presented a prototype KWS system that does not need manually transcribed data to train the acoustic model. In [43], the authors proposed an unsupervised learning framework without transcription: a GMM labels keyword samples and test utterances with Gaussian posteriorgrams, then segmental dynamic time warping (SDTW) produces relevance scores, which are ranked to determine the output. The feasibility and effectiveness of these results encourage us to introduce an unsupervised learning framework to the task of keyword spotting.

The Google Speech Commands V2 Dataset is a well-studied benchmark for novel ideas in KWS, and many previous works report experiments on it. [4] introduced a convolutional recurrent network with attention for multiple KWS tasks. MatchboxNet [21] is a deep residual network composed of 1D time-channel separable convolutions, batch-norm, ReLU, and dropout layers. Inspired by [4] and [21], EdgeCRNN [41] was proposed, an edge-computing oriented model with acoustic feature enhancement for keyword spotting. Recently, [37] combined a triplet loss-based embedding with a variant of K-Nearest Neighbor (KNN) classification. We also evaluate our speech augmentation based unsupervised learning method on this dataset and compare it with other unsupervised approaches, including CPC [22], APC [3], and MPC [11].

III Proposed Method

III-A KWS Model Architecture

The keyword spotting task can be described as a sequence classification task: the network maps an input audio sequence X = (x_1, ..., x_T) to a limited set of keyword classes, where T is the number of audio frames and K is the number of classes. Our proposed model architecture for keyword spotting is shown in Fig 1. The network contains five parts: (1) CNN Block, (2) Transformer Block, (3) Feature Selecting Layer, (4) Bottleneck Layer, and (5) Project Layer.

Fig. 1: The architecture of our CNN-Attention model for the keyword spotting task. The network is composed of CNN layers, self-attention layers, a feature selecting layer, a bottleneck layer, and a project layer. In the feature selecting layer, the last few frames are selected. Finally, the project layer maps the features to the predicted keyword classification.

The CNN block consists of several 2D-convolutional layers, handling local variance along the time and spectrum axes:

H_cnn = CNN(X)    (1)

where the CNN block contains N_c convolutional layers. The CNN output is then fed into the transformer block to capture long-time information with the self-attention mechanism:

H_att = Transformer(H_cnn)    (2)

where the transformer block contains N_a self-attention layers. After the transformer block, we design a feature selecting layer to extract keyword information from the sequence H_att:

h_fs = Concat(h_T'-k+1, ..., h_T')    (3)

In the feature selecting layer, we first collect the last k frames of H_att and concatenate them into one feature vector h_fs. After the feature selecting layer, a bottleneck layer and a project layer project the hidden states to the predicted classes:

h_bn = Bottleneck(h_fs)    (4)

ŷ = Softmax(Project(h_bn))    (5)

Finally, the cross-entropy (CE) loss for supervised learning and model fine-tuning is computed from the predicted classes ŷ and the ground-truth classes y:

L_CE = - Σ_{i=1..K} y_i log ŷ_i    (6)
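As an illustration of the head of this architecture, the following numpy sketch implements the feature selecting, bottleneck, and project steps of Eqs. (3)-(5). The layer sizes, the tanh activation, and the number of selected frames k are assumptions for the example, not the paper's configuration:

```python
import numpy as np

def feature_select(h, k=2):
    """Concatenate the last k frames of a (T, D) sequence into one vector."""
    return h[-k:].reshape(-1)                 # shape (k*D,)

def forward_head(h, W_bn, W_proj, k=2):
    """Bottleneck + project head on the transformer output (illustrative)."""
    v = feature_select(h, k)                  # (k*D,)
    b = np.tanh(v @ W_bn)                     # bottleneck feature
    logits = b @ W_proj                       # project to keyword classes
    e = np.exp(logits - logits.max())         # numerically stable softmax
    return b, e / e.sum()
```

Selecting only the last few frames assumes the self-attention layers have already aggregated the utterance-level evidence into the end of the sequence.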

III-B Augmentation Method

Data augmentation is one of the most commonly used methods to improve the robustness and performance of models in speech tasks. In this work, speed and volume based augmentations are investigated in the unsupervised learning of keyword spotting. A given audio sequence x is denoted by its amplitude A(t) at time index t:

x = { A(t), 0 ≤ t ≤ T }    (7)

For speed augmentation, we set a speed ratio α to adjust the speed of x:

x'(t) = x(α · t)    (8)

For volume augmentation, we set an intensity ratio β to change the volume of x:

x'(t) = β · x(t)    (9)

With different ratios α and β, we obtain multiple speech sequence pairs (x, x'), which are used to train the audio representation network with unsupervised learning. We expect that speech utterances at different speeds or volumes should have similar high-level feature representations for KWS tasks.
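The two augmentations can be sketched as follows. Linear interpolation stands in for whichever resampler is actually used in practice, and the ratio values in `make_pairs` are illustrative, not the paper's settings:

```python
import numpy as np

def speed_augment(x, alpha):
    """Resample waveform x by speed ratio alpha (alpha > 1 -> faster/shorter).
    Linear interpolation is a stand-in for a proper resampler."""
    n_out = int(len(x) / alpha)
    t = np.arange(n_out) * alpha              # sample positions x(alpha * t)
    return np.interp(t, np.arange(len(x)), x)

def volume_augment(x, beta):
    """Scale the amplitude by intensity ratio beta."""
    return beta * np.asarray(x, dtype=float)

def make_pairs(x, alphas=(0.9, 1.1), betas=(0.5, 2.0)):
    """Build (original, augmented) pairs for unsupervised training."""
    return [(x, speed_augment(x, a)) for a in alphas] + \
           [(x, volume_augment(x, b)) for b in betas]
```

Each pair feeds the shared-parameter network described in Sec. III-C, with the expectation that both members map to similar bottleneck features.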

Unit Name | Hyperparameters
CNN Block | layers, kernel, stride, channels
Transformer Block | layers, dimension, heads, feedforward size
Feature Selecting Layer | last k frames, dimension
Bottleneck Layer | one FC layer, dimension
Project Layer | one FC layer, dimension, softmax
Reconstruct Layer | one FC layer, dimension, softmax
Factor Ratio | λ1, λ2, λ3
TABLE I: Model Configurations

III-C Unsupervised Learning Loss

The overall architecture of augmentation based unsupervised learning is shown in Fig 2. Similar to other unsupervised methods, the proposed approach consists of two stages: (1) pre-training on unsupervised data, and (2) fine-tuning on supervised KWS data. In the pre-training stage, bottleneck features are learned from the unlabeled speech. In the fine-tuning stage, the extracted bottleneck features are used for KWS prediction.

Fig. 2: The proposed speech augmentation based audio unsupervised learning method. In the pre-training stage, the original and augmented speech in each pair are fed into the network separately, with shared model parameters. The network outputs the average speech feature values and the bottleneck feature. The two bottleneck features are compared with an MSE loss, since the augmented and original speech should produce similar high-level features for keyword spotting.

In the pre-training stage, the paired speech data x and x' are fed into the CNN-Attention model separately, with the same model parameters. Because x' is derived from x, our unsupervised method expects x and x' to produce similar high-level bottleneck features: no matter how fast or how loud a speaker talks, the content of the speech is the same. The optimization of the network therefore needs to reflect the similarity of x and x', and we choose the Mean Square Error (MSE) to measure the distance between their bottleneck outputs.

L_sim = (1/D) Σ_{i=1..D} (b_i - b'_i)^2    (10)

where D is the dimension of the bottleneck feature vector, and b and b' are the bottleneck layer outputs of the original and augmented speech, respectively.

In addition, the designed network has another branch for auxiliary training, which predicts the average feature of the input speech segment. This branch guides the network to learn the intrinsic features of the speech utterance. We first compute the average vector x̄ of the input Fbank features along the time axis. Then another reconstructing layer, attached to the bottleneck layer, reconstructs the average Fbank vector. We again use an MSE loss to measure the similarity between these two audio vectors along the feature dimension:

L_rec = (1/F) Σ_{j=1..F} (x̄_j - r_j)^2    (11)

where F is the dimension of the Fbank feature vector, x̄ is the average vector of the input Fbank features, and r is the output of the reconstructing layer. Similarly, the loss between the augmented average Fbank vector x̄' and its reconstructed feature r' is defined as:

L'_rec = (1/F) Σ_{j=1..F} (x̄'_j - r'_j)^2    (12)

Therefore, the final loss function L_UL of the unsupervised learning (UL) consists of the above three losses L_sim, L_rec, and L'_rec:

L_UL = λ1 · L_sim + λ2 · L_rec + λ3 · L'_rec    (13)

where λ1, λ2, and λ3 are the factor ratios of the loss components.
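Assuming the bottleneck and reconstruct-layer outputs are available as arrays, the combined objective of Eqs. (10)-(13) could be computed as below; the weight values `lam` are placeholders, not the paper's factor ratios:

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two vectors."""
    return np.mean((np.asarray(a) - np.asarray(b)) ** 2)

def unsupervised_loss(b, b_aug, fbank, fbank_aug, recon, recon_aug,
                      lam=(1.0, 0.5, 0.5)):
    """Combine the three unsupervised losses.

    b, b_aug         : bottleneck features of original / augmented speech
    fbank, fbank_aug : (T, F) Fbank features of original / augmented speech
    recon, recon_aug : reconstruct-layer outputs, each of dimension F
    lam              : loss weights (illustrative values)
    """
    l_sim = mse(b, b_aug)                               # similarity, Eq. (10)
    l_rec = mse(fbank.mean(axis=0), recon)              # reconstruction, Eq. (11)
    l_rec_aug = mse(fbank_aug.mean(axis=0), recon_aug)  # augmented branch, Eq. (12)
    return lam[0] * l_sim + lam[1] * l_rec + lam[2] * l_rec_aug
```

Note that when the two bottleneck features coincide and both reconstructions match their targets, the loss is exactly zero, which is the fixed point the pre-training drives toward.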

In the fine-tuning stage, the average feature prediction branch is removed, and a project layer and a softmax layer are added after the bottleneck layer to make the KWS prediction. During fine-tuning, the parameters of the original network can be fixed or updated; in our experiments, updating all parameters improved performance, so we update all parameters in the fine-tuning stage.

IV Experiments

In this section, we evaluate the proposed method on keyword spotting tasks. We implemented our CNN-Attention model with supervised training and compared it with Google's model. We also conducted an ablation study to explore the effect of speed and volume augmentation on unsupervised learning. Moreover, we compared our approach with other unsupervised learning methods, including CPC, APC, and MPC; when implementing these approaches, we used the networks and hyperparameters from their publications, without additional training tricks [11, 3, 22]. Finally, we discuss the impact of different numbers of pre-training steps on the performance and convergence of the downstream KWS task.

IV-A Datasets

We used Google's Speech Commands V2 Dataset [40] to evaluate the proposed models. The dataset contains utterances of about one second, covering a set of short words recorded by thousands of different people, as well as background noise such as pink noise, white noise, and human-made sounds. The KWS task is to discriminate among 12 classes: “yes”, “no”, “up”, “down”, “left”, “right”, “on”, “off”, “stop”, “go”, unknown, or silence. The dataset was split into training, validation, and test sets. We used the real noisy data HuNonspeech (http://web.cse.ohio-state.edu/pnl/corpus/HuNonspeech/) to corrupt the original speech; the Aurora4 tools (http://aurora.hsnr.de/index-2.html) were used to implement this strategy. Each utterance was randomly corrupted by noise types from HuNonspeech at a given Signal-to-Noise Ratio (SNR).

Similar to other unsupervised methods, a large unlabeled corpus, the 100-hour clean subset of Librispeech [23], was also leveraged to pre-train the network by unsupervised learning. First, the long utterances were split into short segments, consistent with the Speech Commands data. Next, the clean segments were mixed with HuNonspeech noise using the Aurora4 tools, with the same corruption mechanism as for Speech Commands.
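A minimal sketch of this preparation pipeline (fixed-length segmentation plus noise mixing at a target SNR) might look like the following; it approximates, rather than reproduces, the Aurora4 tools:

```python
import numpy as np

def split_segments(wav, seg_len):
    """Split a long waveform into fixed-length segments (tail dropped)."""
    n = len(wav) // seg_len
    return [wav[i * seg_len:(i + 1) * seg_len] for i in range(n)]

def mix_at_snr(speech, noise, snr_db):
    """Corrupt speech with additive noise at a target SNR in dB."""
    noise = np.resize(noise, len(speech))      # loop/trim noise to length
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12
    # Scale noise so that p_speech / (scale^2 * p_noise) = 10^(snr_db/10).
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise
```

Segmenting the pre-training corpus to the same length as the fine-tuning utterances keeps the CNN-Attention input dimensions consistent between the two stages.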

IV-B Experimental Setups

Model Name | Supervised Training Data | Dev | Eval
Sainath and Parada (Google) | Speech Commands | - | 84.7
CNN-Attention (ours) | Speech Commands | 86.4 | 85.3
CNN-Attention + volume & speed augment (ours) | Speech Commands | 87.0 | 85.7
TABLE II: Results Comparison of KWS Models, Classification Accuracy (%)
Model Name | Pre-training Data | Fine-tuning Data | Dev | Eval
CNN-Attention + volume pre-training | Speech Commands | Speech Commands | 86.1 | 85.9
CNN-Attention + speed pre-training | Speech Commands | Speech Commands | 87.8 | 86.9
CNN-Attention + volume & speed pre-training | Speech Commands | Speech Commands | 87.9 | 87.2
CNN-Attention + volume pre-training | Librispeech-100 | Speech Commands | 86.3 | 86.0
CNN-Attention + speed pre-training | Librispeech-100 | Speech Commands | 87.9 | 87.9
CNN-Attention + volume & speed pre-training | Librispeech-100 | Speech Commands | 88.2 | 88.1
TABLE III: Ablation Study on the Effect of Speed and Volume Augmentation, Classification Accuracy (%)

The acoustic features were log-mel filterbank features. The detailed hyperparameters of our proposed network are shown in Table I. For training the KWS model, all matrix weights are initialized with random uniform initialization, and the bias parameters are initialized with a constant value. We trained all networks with the Adam optimizer until the loss stopped changing noticeably. The factor ratios λ1, λ2, and λ3 of the losses L_sim, L_rec, and L'_rec were set to fixed values.

To demonstrate the effectiveness of our proposed model, we investigated several other approaches for comparison. For supervised learning, we used Sainath and Parada's model from Google [29] as the baseline; Google has released a TensorFlow implementation of this model. For unsupervised learning, we compared our method with the following pre-training models:

  • Contrastive Predictive Coding (CPC) [22]: Through an unsupervised next-step prediction mechanism, CPC learns representations from high-dimensional signals. The CPC network mainly contains a non-linear encoder and an autoregressive model: an input sequence is embedded into a latent space, producing a context representation, and a density ratio is estimated to maximize the mutual information between future observations and the current context representation.

  • Autoregressive Predictive Coding (APC) [3]: APC also belongs to the family of predictive models; it directly optimizes an L1 loss between the input sequence and the predicted output sequence. APC has proven effective in recent language-model pre-training and speech representation tasks.

  • Masked Predictive Coding (MPC) [11]: Inspired by BERT, MPC uses a Masked Language Model (MLM) structure to perform predictive coding on Transformer-based models. As in BERT, a proportion of the feature frames in each utterance is chosen for masking during pre-training: most of the chosen frames are replaced with zero vectors, some are replaced with frames from random positions, and the rest remain unchanged. An L1 loss is computed between the masked input features and the encoder output at the corresponding positions. Dynamic masking is adopted, where the masking pattern is generated each time a sequence is fed into the model.
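For reference, MPC-style dynamic masking can be sketched as below; the 15% mask rate and the 80/10/10 split follow BERT's convention and are assumptions here, not necessarily the exact proportions used in [11]:

```python
import numpy as np

def mpc_mask(feats, mask_prob=0.15, rng=None):
    """BERT-style dynamic masking of feature frames for MPC pre-training.
    Of the chosen frames, 80% are zeroed, 10% are swapped with a frame
    from a random position, and 10% are kept unchanged (assumed split)."""
    rng = rng or np.random.default_rng(0)
    out = feats.copy()
    T = len(feats)
    chosen = rng.random(T) < mask_prob         # frames selected for masking
    for t in np.where(chosen)[0]:
        r = rng.random()
        if r < 0.8:
            out[t] = 0.0                       # replace with zero vector
        elif r < 0.9:
            out[t] = feats[rng.integers(0, T)] # frame from a random position
        # else: keep the frame unchanged
    return out, chosen
```

The loss is then computed only at the chosen positions, so the model must infer the masked frames from bidirectional context.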

IV-C Results

Model Name | Pre-training Data | Fine-tuning Data | Dev | Eval
Contrastive Predictive Coding (CPC) [22] | Speech Commands | Speech Commands | 87.6 | 86.9
Autoregressive Predictive Coding (APC) [3] | Speech Commands | Speech Commands | 87.2 | 86.5
Masked Predictive Coding (MPC) [11] | Speech Commands | Speech Commands | 87.0 | 86.7
CNN-Attention + volume & speed pre-training (ours) | Speech Commands | Speech Commands | 87.9 | 87.2
Contrastive Predictive Coding (CPC) [22] | Librispeech-100 | Speech Commands | 87.8 | 87.4
Autoregressive Predictive Coding (APC) [3] | Librispeech-100 | Speech Commands | 87.7 | 87.5
Masked Predictive Coding (MPC) [11] | Librispeech-100 | Speech Commands | 87.9 | 87.0
CNN-Attention + volume & speed pre-training (ours) | Librispeech-100 | Speech Commands | 88.2 | 88.1
TABLE IV: Comparison with Other Unsupervised Learning Methods, Classification Accuracy (%)

Table II lists the experimental results of supervised learning on the Speech Commands dataset. We first implemented Google's Sainath and Parada model with the original TensorFlow recipes, achieving an accuracy of 84.7%. Second, our CNN-Attention model, trained with the supervised loss and without any augmented data, achieved higher accuracy than Google's model, which shows that the designed CNN-Attention architecture is effective for the KWS task. Finally, adding speed and volume augmentation yielded a further gain in accuracy. This agrees with existing research showing that augmented data helps improve model performance, and it motivates our augmentation based unsupervised learning method.

To analyze the effect of speed and volume augmentation on unsupervised learning, we conducted an ablation study, with results shown in Table III. The volume pre-training model means that the augmented speech pairs contain only intensity-augmented data, while the speed pre-training model is trained only with speed-augmented pairs. For a broader investigation, we pre-trained the model on two datasets with the unsupervised loss L_UL. The results indicate that speed-augmented unsupervised learning performs better than intensity-based pre-training, and combining volume and speed augmentation achieves better classification accuracy than either single augmentation. In addition, pre-training on the larger dataset (Librispeech-100) yields better performance than the smaller one (Speech Commands). Our proposed augmentation based unsupervised method (Eval in Table III) also improves over simply adding augmentation to supervised training (Eval in Table II), even with the same training data.

After that, we implemented CPC, APC, and MPC and compared them with our method. As shown in Table IV, CPC achieves better performance than APC and MPC, while our augmentation based approach outperforms all of the other unsupervised methods on both pre-training datasets (Speech Commands and Librispeech-100). This comparison demonstrates that the proposed augmentation based unsupervised learning is capable of extracting speech information and is an effective approach for KWS tasks.

IV-D Pre-training Analysis

More pre-training steps usually improve the performance of downstream tasks. To better understand our unsupervised approach, we conducted experiments with several different numbers of pre-training steps. The performance of the different settings is plotted in Fig 3.

Fig. 3: Results comparison with different numbers of pre-training steps. Different numbers of unsupervised pre-training steps lead to different accuracy and fine-tuning convergence. In our experiments, the best setting of pre-training steps gives the highest classification accuracy and the fastest convergence.

We show the supervised fine-tuning curves under these different amounts of pre-training. Our experiments demonstrate that more pre-training steps not only achieve better performance but also make the downstream KWS task converge faster. The setting with the most pre-training steps had the highest classification accuracy and the fastest convergence. It should also be noted that the difference between the two largest settings was very small, meaning that this amount of pre-training is enough to obtain the desired performance.

V Conclusion

This paper investigated an unsupervised learning method for the keyword spotting task. We designed a CNN-Attention architecture and achieved competitive results on the Speech Commands dataset. In addition, we proposed a speech augmentation based unsupervised learning approach for KWS. Our method uses speed and intensity augmentation to build training pairs, and pre-trains the network with a similarity loss between the members of each speech pair together with a speech reconstruction loss. In our experiments, the proposed unsupervised approach further improved model performance and outperformed other unsupervised methods, such as CPC, APC, and MPC. We also found that more pre-training steps help not only performance but also convergence speed. In future work, we are interested in applying the augmentation based unsupervised learning approach to other speech tasks, such as speaker verification and speech recognition.

VI Acknowledgement

This paper is supported by the Key Research and Development Program of Guangdong Province under grant No. 2021B0101400003. The corresponding author is Jianzong Wang from Ping An Technology (Shenzhen) Co., Ltd (jzwang@188.com).

References

  • [1] S. O. Arik, M. Kliegl, R. Child, J. Hestness, A. Gibiansky, C. Fougner, R. Prenger, and A. Coates (2017) Convolutional recurrent neural networks for small-footprint keyword spotting. In Conference of the International Speech Communication Association (INTERSPEECH), Cited by: §I.
  • [2] G. Chen, C. Parada, and G. Heigold (2014) Small-footprint keyword spotting using deep neural networks. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Cited by: §I.
  • [3] Y. Chung, W. Hsu, H. Tang, and J. Glass (2019)

    An unsupervised autoregressive model for speech representation learning

    .
    In Conference of the International Speech Communication Association (INTERSPEECH), Cited by: §I, §II, 2nd item, TABLE IV, §IV.
  • [4] D. C. de Andrade, S. Leo, M. L. D. S. Viana, and C. Bernkopf (2018) A neural attention model for speech command recognition. In arXiv preprint:1808.08929, Cited by: §II.
  • [5] J. Devlin, M. Chang, K. Lee, and K. Toutanova (2018) Bert: pre-training of deep bidirectional transformers for language understanding. In arXiv preprint:1810.04805, Cited by: §I.
  • [6] A. Garcia and H. Gish (2006) Keyword spotting of arbitrary words using minimal speech resources. In IEEE International Conference on Acoustics Speech and Signal Processing Proceedings (ICASSP), Cited by: §II.
  • [7] S. Gidaris, P. Singh, and N. Komodakis (2018) Unsupervised representation learning by predicting image rotations. In arXiv preprint:1803.07728, Cited by: §I.
  • [8] A. Hannun, C. Case, J. Casper, B. Catanzaro, G. Diamos, E. Elsen, R. Prenger, S. Satheesh, S. Sengupta, A. Coates, et al. (2014) Deep speech: scaling up end-to-end speech recognition. In arXiv preprint:1412.5567, Cited by: §II.
  • [9] N. Jaitly and G. E. Hinton (2013) Vocal tract length perturbation (vtlp) improves speech recognition. In ICML Workshop on Deep Learning for Audio, Speech and Language, Cited by: §II.
  • [10] X. Jia, J. Wang, Z. Zhang, N. Cheng, and J. Xiao (2020)

    Large-scale transfer learning for low-resource spoken language understanding

    .
    In IEEE Conference of the International Speech Communication Association (INTERSPEECH), Cited by: §I.
  • [11] D. Jiang, X. Lei, W. Li, N. Luo, Y. Hu, W. Zou, and X. Li (2019) Improving transformer-based speech recognition using unsupervised pre-training. In arXiv preprint:1910.09932, Cited by: §I, §II, 3rd item, TABLE IV, §IV.
  • [12] S. Karita, N. Chen, T. Hayashi, T. Hori, H. Inaguma, Z. Jiang, M. Someki, N. E. Y. Soplin, R. Yamamoto, X. Wang, et al. (2019) A comparative study on transformer vs rnn in speech applications. In IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), Cited by: §I.
  • [13] E. Kharitonov, M. Rivière, G. Synnaeve, L. Wolf, P. Mazaré, M. Douze, and E. Dupoux (2021) Data augmenting contrastive learning of speech representations in the time domain. In 2021 IEEE Spoken Language Technology Workshop (SLT), Cited by: §II.
  • [14] T. Ko, V. Peddinti, D. Povey, and S. Khudanpur (2015) Audio augmentation for speech recognition. In Sixteenth annual conference of the international speech communication association (INTERSPEECH), Cited by: §II.
  • [15] B. Li, T. N. Sainath, A. Narayanan, J. Caroselli, M. Bacchiani, A. Misra, I. Shafran, H. Sak, G. Pundak, K. K. Chin, et al. (2017) Acoustic modeling for google home.. In Conference of the International Speech Communication Association (INTERSPEECH), Cited by: §I.
  • [16] P. Li, J. Liang, and B. Xu (2007) A novel instance matching based unsupervised keyword spotting system. In Second International Conference on Innovative Computing, Information and Control (ICICIC), Cited by: §II.
  • [17] J. Luo, J. Wang, N. Cheng, G. Jiang, and J. Xiao (2021) End-to-end silent speech recognition with acoustic sensing. In IEEE Spoken Language Technology Workshop (SLT), Cited by: §I.
  • [18] J. Luo, J. Wang, N. Cheng, G. Jiang, and J. Xiao (2021) Multi-quartznet: multi-resolution convolution for speech recognition with multi-layer feature fusion. In IEEE Spoken Language Technology Workshop (SLT), Cited by: §I.
  • [19] J. Luo, J. Wang, N. Cheng, and J. Xiao (2021) Dropout regularization for self-supervised learning of transformer encoder speech representation. In IEEE Conference of the International Speech Communication Association (INTERSPEECH), Cited by: §I.
  • [20] J. Luo, J. Wang, N. Cheng, and J. Xiao (2021) Unidirectional memory-self-attention transducer for online speech recognition. In IEEE International Conference on Acoustics Speech and Signal Processing Proceedings (ICASSP), Cited by: §I.
  • [21] S. Majumdar and B. Ginsburg (2020) MatchboxNet: 1d time-channel separable convolutional neural network architecture for speech commands recognition. In arXiv preprint:2004.08531, Cited by: §II.
  • [22] A. van den Oord, Y. Li, and O. Vinyals (2018) Representation learning with contrastive predictive coding. In arXiv preprint:1807.03748, Cited by: §I, §II, 1st item, TABLE IV, §IV.
  • [23] V. Panayotov, G. Chen, D. Povey, and S. Khudanpur (2015) Librispeech: an asr corpus based on public domain audio books. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Cited by: §IV-A.
  • [24] D. S. Park, W. Chan, Y. Zhang, C. Chiu, B. Zoph, E. D. Cubuk, and Q. V. Le (2019) Specaugment: a simple data augmentation method for automatic speech recognition. In arXiv preprint:1904.08779, Cited by: §II.
  • [25] D. S. Park, Y. Zhang, C. Chiu, Y. Chen, B. Li, W. Chan, Q. V. Le, and Y. Wu (2020) Specaugment on large scale datasets. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Cited by: §II.
  • [26] D. S. Park, Y. Zhang, Y. Jia, W. Han, C. Chiu, B. Li, Y. Wu, and Q. V. Le (2020) Improved noisy student training for automatic speech recognition. In arXiv preprint:2005.09629, Cited by: §II.
  • [27] H. Park, P. Zhu, I. L. Moreno, and N. Subrahmanya (2021) Noisy student-teacher training for robust keyword spotting. In arXiv preprint:2106.01604, Cited by: §II.
  • [28] A. Rosenberg, K. Audhkhasi, A. Sethy, B. Ramabhadran, and M. Picheny (2017) End-to-end speech recognition and keyword search on low-resource languages. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Cited by: §I.
  • [29] T. N. Sainath and C. Parada (2015) Convolutional neural networks for small-footprint keyword spotting. In Conference of the International Speech Communication Association (INTERSPEECH), Cited by: §IV-B.
  • [30] J. Schalkwyk, D. Beeferman, F. Beaufays, B. Byrne, C. Chelba, M. Cohen, M. Kamvar, and B. Strope (2010) Your word is my command: google search by voice: a case study. Springer Advances in speech recognition. Cited by: §I.
  • [31] S. Schneider, A. Baevski, R. Collobert, and M. Auli (2019) Wav2vec: unsupervised pre-training for speech recognition. In arXiv preprint:1904.05862, Cited by: §I.
  • [32] C. Shan, J. Zhang, Y. Wang, and L. Xie (2018) Attention-based end-to-end models for small-footprint keyword spotting. In arXiv preprint:1803.10916, Cited by: §I.
  • [33] M. Silaghi (2005) Spotting subsequences matching an hmm using the average observation probability criteria with application to keyword spotting. In The Association for the Advancement of Artificial Intelligence (AAAI), Cited by: §I.
  • [34] M. Sun, A. Raju, G. Tucker, S. Panchapagesan, G. Fu, A. Mandal, S. Matsoukas, N. Strom, and S. Vitaladevuni (2016) Max-pooling loss training of long short-term memory networks for small-footprint keyword spotting. In IEEE Spoken Language Technology Workshop (SLT), Cited by: §I.
  • [35] R. Tang and J. Lin (2018) Deep residual learning for small-footprint keyword spotting. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Cited by: §I.
  • [36] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin (2017) Attention is all you need. In Advances in neural information processing systems (NIPS), Cited by: §I.
  • [37] R. Vygon and N. Mikhaylovskiy (2021) Learning efficient representations for keyword spotting with triplet loss. In arXiv preprint:2101.04792, Cited by: §II.
  • [38] X. Wang, S. Sun, C. Shan, J. Hou, L. Xie, S. Li, and X. Lei (2019) Adversarial examples for improving end-to-end attention-based small-footprint keyword spotting. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Cited by: §I.
  • [39] Y. Wang, S. Venkataramani, and P. Smaragdis (2020) Self-supervised learning for speech enhancement. In Proceedings of the 37th International Conference on Machine Learning (ICML), Cited by: §I.
  • [40] P. Warden (2018) Speech commands: a dataset for limited-vocabulary speech recognition. In arXiv preprint:1804.03209, Cited by: §IV-A.
  • [41] Y. Wei, Z. Gong, S. Yang, K. Ye, and Y. Wen (2021) EdgeCRNN: an edge-computing oriented model of acoustic feature enhancement for keyword spotting. In Journal of Ambient Intelligence and Humanized Computing, Cited by: §II.
  • [42] M. Xu and X. Zhang (2020) Depthwise separable convolutional ResNet with squeeze-and-excitation blocks for small-footprint keyword spotting. In Conference of the International Speech Communication Association (INTERSPEECH), Cited by: §I.
  • [43] Y. Zhang and J. R. Glass (2009) Unsupervised spoken keyword spotting via segmental dtw on gaussian posteriorgrams. In IEEE Workshop on Automatic Speech Recognition & Understanding (ASRU), Cited by: §II.