. This approach has a large memory footprint and high latency, so it is mainly used for keyword search over large databases. Another approach is based on keyword/filler hidden Markov models (HMMs), which incurs a high computational cost and is therefore difficult to apply to on-device applications. This paper focuses on small-footprint KWS, which requires a small memory footprint and low computational power. Such technology can run on low-resource devices and provides a fully hands-free way for users to control intelligent devices.
Recently, deep neural network (DNN) based approaches have yielded significant improvements over conventional methods in small-footprint KWS. DeepKWS regards keyword spotting as a classification problem and trains a DNN to directly predict the subword units of keywords. It achieves significant improvement over the HMM-based methods, with a small footprint and low computational cost. Because a DNN does not consider the local temporal and spectral correlation of speech, Sainath and Parada proposed to replace the DNN by a convolutional neural network (CNN), which results in better performance with a smaller memory footprint than the DNN. However, the receptive field of a CNN is usually limited in size, so it cannot capture enough of the temporal correlation of speech. To overcome this problem, Tang and Lin proposed a residual network (ResNet) based KWS system that uses dilated convolution to enlarge the receptive field exponentially with the depth of the network. However, the ResNet based method still needs several hundred thousand parameters to achieve state-of-the-art performance. To further reduce the memory footprint, a number of recent works applied the time delay neural network (TDNN), attention mechanisms, and the temporal convolutional network (TCN) to KWS, see e.g. [7, 8, 9]. Zhang et al. adapted MobileNet, which was originally designed for image classification, to KWS; MobileNet reduces the number of parameters and the computational cost by a so-called depthwise separable convolution structure [13]. However, that method adopts a conventional convolution structure, which is inefficient at propagating gradients across layers. To summarize, although a number of new architectures have been proposed, they still need many parameters, which does not fully meet the requirements of modern low-resource devices.
In this paper, we propose a depthwise separable convolution based residual network (DS-ResNet), which is a stack of depthwise separable convolution layers with residual connections. This structure not only improves the representation ability but also results in a smaller memory footprint. To further improve the performance of the proposed method, we add a squeeze-and-excitation block over the output of the bottom convolutional layer of DS-ResNet. We compared DS-ResNet with ResNet, TC-ResNet, DS-CNN, DenseNet-BiLSTM, and tdnn-swsa. Experimental results on the Google Speech Commands dataset demonstrate that DS-ResNet outperforms the comparison methods in terms of classification error while using fewer parameters.
2 Algorithm description
2.1 Network structure
As shown in Figure 1, our entire architecture starts with a standard bias-free convolution layer (Conv) with a weight tensor of size h × w × n, where h and w are the height and width of the convolution kernel respectively, and n is the number of filters (i.e., the number of output channels). The model takes the output of the first convolution layer as the input of a squeeze-and-excitation layer (SE), which is used to reweight the output feature maps. The output of the squeeze-and-excitation layer is then fed into a chain of residual blocks, followed by a separate non-residual depthwise separable convolution layer (DS-Conv), which consists of a depthwise convolution layer (depth-Conv) and a pointwise convolution layer (point-Conv). Finally, the output of the model is produced by an average-pooling layer (Avg-pool) followed by a fully-connected softmax layer (Softmax). Additionally, convolution dilation is used to increase the receptive field of the depthwise separable convolution layers.
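As a rough sketch, the order of the stages described above can be written down as follows (the stage names are ours, and the count of seven residual blocks follows the DS-ResNet18 configuration described later; kernel sizes and channel counts are omitted):

```python
# Hypothetical stage list summarizing the forward pass described above.
PIPELINE = (
    ["Conv"]             # standard bias-free convolution
    + ["SE"]             # squeeze-and-excitation reweighting
    + ["ResBlock"] * 7   # residual blocks of depthwise separable convolutions
    + ["DS-Conv"]        # separate non-residual depthwise separable convolution
    + ["Avg-pool"]       # average pooling
    + ["Softmax"]        # fully-connected softmax classifier
)
```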
2.2 Depthwise separable convolution
Depthwise separable convolution treats the channel realm and the spatial realm separately. It factorizes a standard convolution into two simplified steps. The first step is a spatial feature learning step, named depthwise convolution. The second step is a channel combination step, named pointwise convolution. The most attractive property of the depthwise separable convolution is its low computational cost and small number of parameters.
Before describing the depthwise separable convolution, we first take a look at the computational cost of a standard convolution. As shown in Figure 1(a), given input feature maps of size D_F × D_F × M for a certain convolution layer, where M is the number of input channels and D_F is the spatial height and width of the input feature maps, a standard convolution operates over a joint “space-cross-channels realm”: it applies N filters of size D_K × D_K × M to compute the output feature maps, where D_K is the spatial dimension of the convolution filters, M is the number of input channels, and N is the predefined number of filters (i.e., the number of channels of the output feature maps). The computational cost and number of parameters of the standard convolution are D_K · D_K · M · N · D_F · D_F and D_K · D_K · M · N, respectively.
Different from the standard convolution, which filters and combines the input features in one step, the depthwise separable convolution splits filtering and combination into two successive steps. First, the depthwise convolution applies a single D_K × D_K filter to each input channel (see Figure 1(b)). Then, the pointwise convolution applies a 1 × 1 convolution to combine the outputs of the depthwise convolution (see Figure 1(c)). The computational costs and numbers of parameters are D_K · D_K · M · D_F · D_F and D_K · D_K · M for the depthwise convolution, and M · N · D_F · D_F and M · N for the pointwise convolution,
respectively, where we have made the same assumptions as for the standard convolution.
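As an illustration of the two steps, here is a minimal NumPy sketch (stride 1, no padding; the tensor shapes and names are ours, not the paper's implementation):

```python
import numpy as np

def depthwise_separable_conv(x, dw_filters, pw_filters):
    """x: (DF, DF, M) input; dw_filters: (DK, DK, M); pw_filters: (M, N)."""
    DF, _, M = x.shape
    DK = dw_filters.shape[0]
    out = DF - DK + 1
    # depthwise step: one DK x DK filter per input channel
    dw = np.zeros((out, out, M))
    for i in range(out):
        for j in range(out):
            patch = x[i:i + DK, j:j + DK, :]          # (DK, DK, M)
            dw[i, j, :] = np.sum(patch * dw_filters, axis=(0, 1))
    # pointwise step: 1 x 1 convolution combining M channels into N
    return dw @ pw_filters                            # (out, out, N)
```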
To clearly show the advantage of the depthwise separable convolution over the standard convolution, we give the following example. The ratio of the cost of the depthwise separable convolution to that of the standard convolution is 1/N + 1/D_K². When we set D_K = 3 and M = N = 64, the computational cost and model size of the depthwise separable convolution are only about 1/8 of those of the standard convolution.
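As a numeric check of this example, the cost and parameter expressions from above can be evaluated directly (D_F = 32 is an arbitrary choice; the cost ratio does not depend on it):

```python
def conv_costs(DK, M, N, DF):
    """Multiplications and parameters of a standard convolution versus
    its depthwise separable factorization."""
    std_mults = DK * DK * M * N * DF * DF
    std_params = DK * DK * M * N
    sep_mults = DK * DK * M * DF * DF + M * N * DF * DF  # depthwise + pointwise
    sep_params = DK * DK * M + M * N
    return std_mults, std_params, sep_mults, sep_params

# the cost ratio simplifies to 1/N + 1/DK**2, roughly 1/8 for DK = 3, N = 64
std_mults, std_params, sep_mults, sep_params = conv_costs(DK=3, M=64, N=64, DF=32)
```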
2.3 Squeeze-and-excitation block
The squeeze-and-excitation block is an architectural unit that recalibrates the channel-wise feature responses adaptively by modeling the interdependency between the channels. As illustrated in Figure 3, it consists of two successive operations—squeeze and excitation. The squeeze operation compresses the feature maps along the spatial dimensions, extracting the mean of the feature maps for each channel. The excitation operation models the correlation between the channels and then generates a weight for each channel. Finally, the output of the squeeze-and-excitation block is generated by multiplying the input feature maps of the block with the output weights of the excitation operation.
In our implementation, the squeeze operation is an average-pooling layer. The excitation operation consists of two fully-connected layers whose hidden units use rectified linear and sigmoid activations, respectively. The dimension between the two fully-connected layers can be adjusted by a reduction hyperparameter, which we set in this paper as the original squeeze-and-excitation work did.
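A minimal NumPy sketch of the block follows; the weight shapes and the reduction ratio r are illustrative, and in the real network the two fully-connected layers are trained end-to-end with the rest of the model:

```python
import numpy as np

def se_block(x, w1, b1, w2, b2):
    """x: (H, W, C) feature maps.  w1: (C, C//r), w2: (C//r, C)."""
    z = x.mean(axis=(0, 1))                    # squeeze: per-channel spatial mean
    h = np.maximum(z @ w1 + b1, 0.0)           # excitation FC 1 + ReLU bottleneck
    s = 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))   # excitation FC 2 + sigmoid gates
    return x * s                               # reweight each channel
```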
2.4 Model implementation
In this subsection, we describe the model configurations used in the experiments. The first model, named DS-ResNet18, achieves the highest accuracy with a small model size. It consists of 7 residual blocks, each of which contains 2 depthwise separable convolution layers with 64 input and output channels. Because there is an independent depthwise separable convolution layer before the average-pooling layer, the total number of depthwise separable convolution layers is 15. The dilation of each depthwise separable convolution layer is listed, together with the other settings, in Table 1. DS-ResNet18 has roughly 72K parameters and needs 285M multiplications to generate an output from an input time-frequency spectrum.
The most efficient way to reduce the model footprint is to use fewer filters in each convolution layer. Here we reduced the number of input and output channels of each depthwise separable convolution layer. To shrink the model further, we also reduced the number of depthwise separable convolution layers to 11. When the number of depthwise separable convolution layers is less than 13, the receptive field of the entire network cannot cover the entire input. To cover the entire input with a large enough receptive field, we added an average-pooling layer after the squeeze-and-excitation layer. The resulting model, named DS-ResNet14, has roughly 15.2K parameters and needs 15.7M multiplications to generate an output. The details of the model are listed in Table 2.
The smallest model we implemented contains 8 convolution layers. When the number of convolution layers is less than 10, the residual connections no longer seem necessary, so we removed them. To keep the receptive field wide enough, we added an average-pooling layer after the squeeze-and-excitation layer. This compact model, named DS-ResNet10, consists of 7 depthwise separable convolution layers without residual connections. It has about 10K parameters and needs 5.8M multiplications to generate an output. The details of the model are listed in Table 3.
3 Experiments
3.1 Experimental setup
We evaluated the proposed models on Google's Speech Commands Dataset version 1. The dataset consists of 64721 one-second recordings of 30 words spoken by thousands of different speakers, as well as background noise samples such as pink noise, white noise, and human-made sounds. Among the 30 words, 10 (“yes”, “no”, “up”, “down”, “left”, “right”, “on”, “off”, “stop”, “go”) were used as keywords, and the remaining 20 words were used as fillers, labeled “unknown”.
We followed the common protocol of adding noise and a random time shift to each segment. Then, we extracted 40-dimensional Mel-frequency cepstral coefficient features from each frame, with a frame length of 25 ms and a frame shift of 10 ms. We used stochastic gradient descent with a momentum of 0.9 as the optimizer of the proposed networks, and added weight decay for regularization. The batch size was set to 100. All models were trained from scratch for roughly 30000 steps. The initial learning rate was set to 0.1 and multiplied by 0.1 every 10000 steps. The network was evaluated on the validation set every 1000 steps, and the model that achieved the highest validation accuracy was used as the final model. We ran all experiments five times independently with different random seeds and report the average performance.
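The learning rate schedule just described amounts to a simple step decay (the function name is ours):

```python
def learning_rate(step, base=0.1, factor=0.1, interval=10000):
    """Step-decay schedule: start at `base`, multiply by `factor`
    every `interval` training steps."""
    return base * factor ** (step // interval)
```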
3.2 Results of the first experiment
In the first experiment, we followed the experimental setup of previous work. Specifically, we additionally added some silent segments consisting of random noise only, and assigned them the keyword “silence”. We randomly selected a number of segments labeled “unknown”, keeping the ratios of “silence” and “unknown” each at about 10% of the total segments. According to the SHA1-hashed file names, the audio files were split into three parts for training, validation, and test, which contain roughly 22000, 2700, and 2700 segments, respectively.
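The point of a SHA1-based split is that the assignment depends only on the file name, so it is stable across runs and implementations. A sketch of the idea (the exact hashing and bucket boundaries used by the dataset tools may differ):

```python
import hashlib

def assign_split(filename, val_pct=10, test_pct=10):
    """Deterministically map a file name to a partition by hashing it.
    The 10%/10% bucket boundaries here are illustrative."""
    h = int(hashlib.sha1(filename.encode("utf-8")).hexdigest(), 16) % 100
    if h < val_pct:
        return "validation"
    if h < val_pct + test_pct:
        return "test"
    return "training"
```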
We compared the proposed DS-ResNet with ResNet, TC-ResNet, and DS-CNN. The three ResNet models are denoted res15, res15-narrow, and res8-narrow; the three TC-ResNet models are denoted TC-ResNet14-1.5, TC-ResNet14, and TC-ResNet8; and the three DS-CNN models are denoted DS-CNN(L), DS-CNN(M), and DS-CNN(S). See [6, 9, 10] for descriptions of these comparison models. All comparison methods followed the same settings as in [6, 9, 10].
In Table 4, “±” denotes the 95% confidence interval.
Figure 4 shows the comparison results in terms of error rate (vertical axis) and number of model parameters (horizontal axis). From the figure, we see that the proposed DS-ResNet yields a better performance curve than the comparison methods.
Table 4 lists the comparison results in terms of error rate, number of model parameters, and number of multiplications per inference pass. From the table, we see that DS-ResNet18 achieves a relative error rate reduction of 21.7% over res15, with only about 1/3 of the latter's parameters. With roughly the same number of parameters, DS-ResNet14 achieves a relative error rate reduction of 58.4% over res8-narrow. Finally, DS-ResNet10 reaches an error rate of 4.76% with only 10K parameters.
3.2.1 Effect of the squeeze-and-excitation block
To investigate the effect of the squeeze-and-excitation block, we built three additional variants of DS-ResNet18. The first, named DS-ResNet18-n, does not use the squeeze-and-excitation block. The second, named DS-ResNet18-d, adds a squeeze-and-excitation block after each depthwise convolution layer. The third, named DS-ResNet18-p, adds a squeeze-and-excitation block after each pointwise convolution layer.
Table 5 lists the effect of the squeeze-and-excitation block on performance. From the table, we see that DS-ResNet18 outperforms DS-ResNet18-n, which demonstrates the effectiveness of the block. However, adding more squeeze-and-excitation blocks, as in DS-ResNet18-d and DS-ResNet18-p, does not lead to further improvement.
3.3 Results of the second experiment
To further investigate the effectiveness of the proposed method, we used the standard configuration of the dataset [15, 16]: 51088 utterances for training, 6798 utterances for validation, and 6835 utterances for testing. We compared with DenseNet-BiLSTM and tdnn-swsa. All comparison methods followed the same settings as in [15, 16].
Table 6 lists the comparison results. From the table, we see that DS-ResNet10 is competitive with tdnn-swsa in the low-resource condition. If the restriction on model size is slightly relaxed, DS-ResNet14 achieves a relative error rate reduction of 32.2% over tdnn-swsa. Finally, DS-ResNet18 achieves performance similar to DenseNet-BiLSTM with only about 1/3 of the latter's parameters.
4 Conclusions
In this paper, we have proposed a depthwise separable convolution based ResNet (DS-ResNet) for the small-footprint keyword spotting problem, which contains two novel components. The first combines the depthwise separable convolution with ResNet, which significantly reduces the number of parameters without performance degradation. The second applies a squeeze-and-excitation block to the output of the first convolution layer, which further improves the performance without dramatically increasing the number of parameters. We compared DS-ResNet with five reference methods on two settings of the publicly available Google Speech Commands dataset. Experimental results show that the proposed DS-ResNet achieves state-of-the-art performance in various experimental settings.
References
-  D. R. Miller, M. Kleber, C.-L. Kao, O. Kimball, T. Colthurst, S. A. Lowe, R. M. Schwartz, and H. Gish, “Rapid and accurate spoken term detection,” in Eighth Annual Conference of the International Speech Communication Association, 2007.
-  Y. Bai, J. Yi, H. Ni, Z. Wen, B. Liu, Y. Li, and J. Tao, “End-to-end keywords spotting based on connectionist temporal classification for mandarin,” in 2016 10th International Symposium on Chinese Spoken Language Processing (ISCSLP). IEEE, 2016, pp. 1–5.
-  M.-C. Silaghi, “Spotting subsequences matching an HMM using the average observation probability criteria with application to keyword spotting,” in AAAI, 2005, pp. 1118–1123.
-  G. Chen, C. Parada, and G. Heigold, “Small-footprint keyword spotting using deep neural networks,” in 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2014, pp. 4087–4091.
-  T. N. Sainath and C. Parada, “Convolutional neural networks for small-footprint keyword spotting,” in Sixteenth Annual Conference of the International Speech Communication Association, 2015.
-  R. Tang and J. Lin, “Deep residual learning for small-footprint keyword spotting,” in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018, pp. 5484–5488.
-  M. Sun, D. Snyder, Y. Gao, V. K. Nagaraja, M. Rodehorst, S. Panchapagesan, N. Strom, S. Matsoukas, and S. Vitaladevuni, “Compressed time delay neural network for small-footprint keyword spotting.” in INTERSPEECH, 2017, pp. 3607–3611.
-  C. Shan, J. Zhang, Y. Wang, and L. Xie, “Attention-based end-to-end models for small-footprint keyword spotting,” Proc. Interspeech 2018, pp. 2037–2041, 2018.
-  S. Choi, S. Seo, B. Shin, H. Byun, M. Kersner, B. Kim, D. Kim, and S. Ha, “Temporal convolution for real-time keyword spotting on mobile devices,” Proc. Interspeech 2019, pp. 3372–3376, 2019.
-  Y. Zhang, N. Suda, L. Lai, and V. Chandra, “Hello edge: Keyword spotting on microcontrollers,” arXiv preprint arXiv:1711.07128, 2017.
-  A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam, “Mobilenets: Efficient convolutional neural networks for mobile vision applications,” arXiv preprint arXiv:1704.04861, 2017.
-  L. Sifre and S. Mallat, “Rigid-motion scattering for image classification,” Ph.D. thesis, 2014.
-  M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, “Mobilenetv2: Inverted residuals and linear bottlenecks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 4510–4520.
-  J. Hu, L. Shen, and G. Sun, “Squeeze-and-excitation networks,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 7132–7141.
-  M. Zeng and N. Xiao, “Effective combination of densenet and bilstm for keyword spotting,” IEEE Access, vol. 7, 2019.
-  Y. Bai, J. Yi, J. Tao, Z. Wen, Z. Tian, C. Zhao, and C. Fan, “A time delay neural network with shared weight self-attention for small-footprint keyword spotting,” Proc. Interspeech 2019, pp. 2190–2194, 2019.
-  P. Warden, “Launching the speech commands dataset,” Google Research Blog, 2017.