Language identification (LID) can be defined as an utterance-level paralinguistic speech attribute classification task, in contrast with automatic speech recognition, which is a “sequence-to-sequence” tagging task. Since there is no constraint on the lexicon, the training utterances and testing segments may have completely different content. The goal, therefore, is to find a robust and duration-invariant utterance-level vector representation describing the distribution of local features.
In recent decades, dictionary learning procedures have been widely used to obtain such an utterance-level vector representation. A dictionary, which contains several temporally orderless center components (or units, words), can encode a variable-length input sequence into a single utterance-level vector representation. The vector quantization (VQ) model is one of the simplest text-independent dictionary models. It was introduced to speaker recognition in the 1980s [2]. The average quantization distortion is aggregated from the frame-level residuals towards the K-means clustered codebook. The Gaussian Mixture Model (GMM) can be considered an extension of the VQ model in which the posterior assignments are soft [3, 4]. Once we have a trained GMM, we can simply average the frame-level likelihoods to generate an encoded utterance-level likelihood score. We can also go further and accumulate the zeroth and first order Baum-Welch statistics, encoding them into a high-dimensional GMM supervector [5]. The VQ codebook and GMM are unsupervised, and their components have no exact physical meaning. Another way to learn the dictionary is through phonetically-aware supervised training [6, 7]
. In this method, a deep neural network (DNN) based acoustic model is trained. Each component in the dictionary physically represents a phoneme (or senone), and the statistics are accumulated through senone posteriors, as is done in the recently popular DNN i-vector approach [8, 9, 10]. A phonotactic tokenizer can be considered a dictionary performing hard assignment with the top-1 score. Once we have a trained tokenizer, a bag-of-words (BoW) or N-gram model is usually used to form the encoded representation [11, 12].
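As a toy illustration of this encoding step, the following sketch (the function name and the 2-symbol token inventory are ours, purely for illustration) turns a tokenizer's output into a normalized N-gram histogram:

```python
from collections import Counter
from itertools import product

def ngram_histogram(tokens, vocab, n=2):
    """Encode a variable-length token sequence into a fixed-length,
    normalized n-gram count vector (bag-of-words when n=1)."""
    grams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    counts = Counter(grams)
    total = sum(counts.values()) or 1
    # One bin per possible n-gram over the tokenizer's vocabulary
    return [counts[g] / total for g in product(vocab, repeat=n)]

# Toy phone sequence from a hypothetical tokenizer
vec = ngram_histogram(["a", "b", "a", "a"], vocab=["a", "b"], n=2)
```

Note that the output length depends only on the vocabulary size and n, not on the utterance duration, which is exactly the property the dictionary methods above provide.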
These existing approaches have the advantage of accepting variable-length input and producing an utterance-level encoded representation. However, when we move to a modern end-to-end learning pipeline, e.g. a neural network, especially one with fully-connected (FC) layers, a fixed-length input is usually required. To feed the network, as is done in [13, 14, 15, 16], the original input feature sequence has to be resized or cropped into multiple small fixed-size segments at the frame level. This may be theoretically and practically not ideal for recognizing language, speaker, or other paralinguistic information, which requires a time-invariant representation of the entire utterance of arbitrary and potentially long duration.
To deal with this issue, both [17] and [18] recently adopted a similar temporal average pooling (TAP) layer in their neural network architectures. With the TAP layer, the neural network can be trained on input segments of random duration. In the testing stage, whole speech segments of arbitrary duration can be fed into the neural network.
Compared with simple TAP, conventional dictionary learning can learn a finer global histogram that describes the feature distribution better, and it can accumulate higher order statistics. In the computer vision community, especially in image scene classification, texture recognition, and action recognition tasks, modern convolutional neural networks (CNNs) are usually combined with conventional dictionary learning methods to get a better encoding representation. For example, NetVLAD [19], NetFV [20], Bilinear Pooling [21], and Deep TEN [22] have been proposed and achieved great success.
This motivates us to implement the conventional GMM and supervector mechanisms in our end-to-end LID neural network. As the major contribution of this paper, we introduce a novel learnable dictionary encoding (LDE) layer, which combines the entire dictionary learning and vector encoding pipeline into a single layer for an end-to-end deep CNN. The LDE layer imitates the mechanism of the conventional GMM and GMM supervector, but is learned directly through the loss function. This representation is orderless, which makes it suitable for LID and many other text-independent paralinguistic speech attribute recognition tasks. The LDE layer acts as a smart pooling layer integrated on top of the convolutional layers, accepting variable-length inputs and producing an utterance-level vector representation. By allowing variable-length inputs, the LDE layer makes the deep learning framework more flexible for training utterances of arbitrary duration. In this sense, it is in line with the classical GMM i-vector method both theoretically and practically.
2.1 GMM Supervector
In the conventional GMM supervector approach [5], all frame-level features in the training dataset are pooled together to estimate a universal background model (UBM). Given a $C$-component GMM UBM $\lambda = \{w_c, \mu_c, \Sigma_c\}$ and an utterance with a frame feature sequence $\{x_1, x_2, \ldots, x_L\}$, the zeroth order and centered first order Baum-Welch statistics on the UBM are calculated as follows:

N_c = \sum_{t=1}^{L} P(c \mid x_t)

\tilde{F}_c = \frac{1}{N_c} \sum_{t=1}^{L} P(c \mid x_t)\,(x_t - \mu_c)

where $c = 1, 2, \ldots, C$ is the GMM component index and $P(c \mid x_t)$ is the occupancy probability for $x_t$ on $\lambda$. $(x_t - \mu_c)$ denotes the residual between frame feature $x_t$ and the mean $\mu_c$ of the $c$-th GMM component.
The corresponding centered mean supervector $s$ is generated by concatenating all the $\tilde{F}_c$ together:

s = \left[\tilde{F}_1^{\top}, \tilde{F}_2^{\top}, \ldots, \tilde{F}_C^{\top}\right]^{\top}
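For concreteness, the statistics and supervector above can be sketched in NumPy as follows; this is a diagonal-covariance illustration of our own, not the UBM implementation used in the experiments:

```python
import numpy as np

def gmm_supervector(X, weights, means, covs):
    """Zeroth / centered first order Baum-Welch statistics -> mean supervector.

    X: (L, D) frame features; weights: (C,) mixture weights;
    means: (C, D) component means; covs: (C, D) diagonal covariances.
    """
    diff = X[:, None, :] - means[None, :, :]                    # (L, C, D)
    # log N(x_t | mu_c, Sigma_c) for every frame/component pair
    log_gauss = -0.5 * np.sum(diff ** 2 / covs + np.log(2 * np.pi * covs), axis=2)
    log_post = np.log(weights) + log_gauss                      # (L, C)
    log_post -= log_post.max(axis=1, keepdims=True)             # numerical stability
    post = np.exp(log_post)
    post /= post.sum(axis=1, keepdims=True)                     # occupancy P(c | x_t)
    N = post.sum(axis=0)                                        # zeroth order, (C,)
    F = (post.T @ X - N[:, None] * means) / N[:, None]          # centered first order
    return np.concatenate(F)                                    # supervector, (C*D,)
```

The returned vector has fixed dimension C*D regardless of the number of frames L, which is the property the LDE layer below imitates.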
2.2 LDE layer
Motivated by the GMM supervector encoding procedure, the proposed LDE layer has a similar input-output structure. As demonstrated in Fig. 1, given an input temporally ordered feature sequence of shape $D \times L$ (where $D$ denotes the feature coefficient dimension and $L$ denotes the temporal duration length), the LDE layer aggregates the features over time. More specifically, it transforms them into an utterance-level, temporally orderless vector representation that is independent of the length $L$.
Different from conventional approaches, we combine the dictionary learning and vector encoding into a single LDE layer on top of the front-end CNN, as shown in Fig. 2. The LDE layer simultaneously learns the encoding parameters along with an inherent dictionary in a fully supervised manner. The inherent dictionary is learned from the distribution of the descriptors by passing the gradient through assignment weights. During the training process, the updating of extracted convolutional features can also benefit from the encoding representations.
The LDE layer is a directed acyclic graph and all of its components are differentiable with respect to the input. Fig. 3 illustrates the forward diagram of the LDE layer. Here, we introduce two groups of learnable parameters. One is the dictionary component centers, denoted as $\mu = \{\mu_1, \mu_2, \ldots, \mu_C\}$. The other is the assigning weights, designed to imitate the GMM occupancy probabilities, denoted as $w$.
Consider assigning weights from the features to the dictionary components. Hard-assignment provides a binary weight for each feature $x_t$, which corresponds to the nearest dictionary component. The element of the assigning vector is given by

w_{tc} = \delta\!\left(\|x_t - \mu_c\|^2 = \min_{k} \|x_t - \mu_k\|^2\right),

where $\delta(\cdot)$ is the indicator function (outputs 0 or 1). Hard-assignment does not consider the dictionary component ambiguity and also makes the model non-differentiable. Soft-weight assignment addresses this issue by assigning the feature to every dictionary component. The non-negative assigning weight is given by a softmax function,

w_{tc} = \frac{\exp(-s\,\|x_t - \mu_c\|^2)}{\sum_{k=1}^{C} \exp(-s\,\|x_t - \mu_k\|^2)},
where $s$ is the smoothing factor for the assignment. Soft-assignment assumes that different clusters have equal scales. Inspired by the GMM, we further allow the smoothing factor $s_c$ for each dictionary center $\mu_c$ to be learnable:

w_{tc} = \frac{\exp(-s_c\,\|x_t - \mu_c\|^2)}{\sum_{k=1}^{C} \exp(-s_k\,\|x_t - \mu_k\|^2)},
which provides a finer modeling of the feature distributions.
Given an $L$-frame feature sequence $\{x_1, \ldots, x_L\}$ and a learned dictionary with centers $\mu = \{\mu_1, \ldots, \mu_C\}$, each frame feature $x_t$ can be assigned a weight $w_{tc}$ to each component $\mu_c$, and the corresponding residual vector is denoted by $r_{tc} = x_t - \mu_c$, where $t = 1, 2, \ldots, L$ and $c = 1, 2, \ldots, C$. Given the assignments and the residual vectors, similar to the conventional GMM supervector, the residual encoding model applies an aggregation operation for every dictionary component center $\mu_c$:

e_c = \frac{\sum_{t=1}^{L} w_{tc}\, r_{tc}}{\sum_{t=1}^{L} w_{tc}}
With this normalization, it is complicated to compute the explicit expression for the gradients of the loss with respect to the layer input. In order to facilitate the derivation, we simplify the aggregation as

e_c = \frac{1}{L} \sum_{t=1}^{L} w_{tc}\, r_{tc}.
The LDE layer concatenates the aggregated residual vectors weighted by the assignments. The resulting encoder outputs a fixed-dimensional representation $E = \{e_1, e_2, \ldots, e_C\}$ (independent of the sequence length $L$). As is typical in conventional GMM supervector/i-vector systems, the resulting vectors are normalized using length normalization [24].
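To make the data flow concrete, here is an inference-only NumPy sketch of the LDE forward pass under the simplified aggregation. Function and parameter names are ours, and this is an illustration rather than the paper's trained implementation:

```python
import numpy as np

def lde_forward(X, mu, s):
    """Inference-only LDE forward pass (illustrative sketch).

    X:  (L, D) frame-level features from the front-end CNN
    mu: (C, D) learnable dictionary component centers
    s:  (C,)   learnable per-component smoothing factors
    """
    r = X[:, None, :] - mu[None, :, :]                 # residuals r_tc, (L, C, D)
    logits = -s[None, :] * np.sum(r ** 2, axis=2)      # -s_c * ||x_t - mu_c||^2
    logits -= logits.max(axis=1, keepdims=True)        # stable softmax over components
    w = np.exp(logits)
    w /= w.sum(axis=1, keepdims=True)                  # assigning weights w_tc
    e = np.einsum('tc,tcd->cd', w, r) / X.shape[0]     # simplified aggregation e_c
    e = np.concatenate(e)                              # fixed-length (C*D,) vector
    return e / (np.linalg.norm(e) + 1e-12)             # length normalization
```

In the actual layer, mu and s would be parameters updated by backpropagation through these same operations; the sketch only shows why the output dimension is independent of L.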
2.3 Relation to traditional dictionary learning and TAP layer
A dictionary is usually learned from the distribution of the descriptors in an unsupervised manner. K-means learns the dictionary using hard-assignment grouping. GMM is a probabilistic version of K-means, which allows a finer modeling of the feature distributions: each cluster is modeled by a Gaussian component with its own mean, variance, and mixture weight. The LDE layer makes the inherent dictionary differentiable with respect to the loss function and learns the dictionary in a supervised manner. To see the relationship of LDE to K-means, consider Fig. 3 with the residual vectors omitted and the smoothing factor $s \to \infty$. With these modifications, the LDE layer acts like K-means. The LDE layer can also be regarded as a simplified version of GMM that allows different scaling (smoothing) of the clusters.
Letting $C = 1$ and fixing $\mu_1 = 0$, the LDE layer simplifies to a TAP layer.
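Writing out the simplified aggregation under these settings makes the reduction explicit: with a single component the softmax weight is $w_{t1} = 1$ for every frame, so

```latex
e_1 = \frac{1}{L}\sum_{t=1}^{L} w_{t1}\, r_{t1}
    = \frac{1}{L}\sum_{t=1}^{L} (x_t - 0)
    = \frac{1}{L}\sum_{t=1}^{L} x_t ,
```

which is exactly the temporal average computed by TAP.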
3.1 Data description
We conducted experiments on the 2007 NIST Language Recognition Evaluation (LRE). Our training corpus includes the CallFriend datasets, the LRE 2003, LRE 2005, and SRE 2008 datasets, and the development data for LRE07, for a total of about 37000 utterances.
The task of interest is closed-set language detection. There are 14 target languages in the testing corpus, which includes 7530 utterances split among three nominal durations: 30, 10, and 3 seconds.
3.2 GMM i-vector system
For better comparison of results, we built a reference GMM i-vector system based on the Kaldi toolkit [25]. Raw audio is converted to 56-dimensional shifted delta coefficient (SDC) features in the 7-1-3-7 configuration, and a frame-level energy-based voice activity detection (VAD) selects features corresponding to speech frames. All utterances are split into short segments no more than 120 seconds long. A 2048-component full-covariance GMM UBM is trained, along with a 600-dimensional i-vector extractor, followed by length normalization and multi-class logistic regression.
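As a rough illustration of the 7-1-3-7 SDC configuration, the following sketch assumes the 56 dimensions are the 7 static cepstra plus k = 7 shifted delta blocks of 7 coefficients each; edge handling here is a simplification of our own and may differ from the Kaldi recipe:

```python
import numpy as np

def sdc(cep, d=1, P=3, k=7):
    """Shifted delta coefficients in an N-d-P-k configuration (sketch).

    cep: (T, N) static cepstra. Each output frame stacks the N static
    coefficients with k delta blocks taken at shifts of P frames, giving
    N * (1 + k) dimensions (7-1-3-7 -> 56). Edge frames are clamped.
    """
    T, N = cep.shape
    clamp = lambda t: min(max(t, 0), T - 1)
    out = np.empty((T, N * (1 + k)))
    for t in range(T):
        blocks = [cep[t]]
        for i in range(k):
            # delta over a window of +-d frames, shifted by i*P
            blocks.append(cep[clamp(t + i * P + d)] - cep[clamp(t + i * P - d)])
        out[t] = np.concatenate(blocks)
    return out
```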
3.3 End-to-end system
Audio is converted to 64-dimensional log mel-filterbank coefficients with a frame length of 25 ms, mean-normalized over a sliding window of up to 3 seconds. The same VAD processing as in the GMM i-vector baseline system is used here. To improve data loading efficiency, all utterances are split into short segments no more than 60 seconds long, according to the VAD flags.
The receptive field size of a unit can be increased by stacking more layers to make the network deeper or by sub-sampling. Modern deep CNN architectures like Residual Networks [26] use a combination of these techniques. Therefore, in order to get a higher-level abstract representation better suited for utterances of long duration, we design a deep CNN based on the well-known ResNet-34 architecture, as described in Table 2.
For the CNN-TAP system, a simple average pooling layer followed by an FC layer is built on top of the front-end CNN. For the CNN-LDE system, the average pooling layer is replaced with an LDE layer.
The network is trained using a cross entropy loss with mini-batches whose size varies from 96 to 512 depending on the model parameters. The network is trained for 90 epochs using stochastic gradient descent with momentum 0.9 and weight decay 1e-4. We start with a learning rate of 0.1 and divide it by 10 and 100 at the 60th and 80th epochs. Because we have no separate validation set, we only use the model after the last optimization step, even though some intermediate checkpoints might achieve better performance. For each training step, an integer L within a fixed interval is randomly generated, and each utterance in the mini-batch is cropped or extended to L frames. The training loss of our end-to-end CNN-LDE neural network is shown in Fig. 4. It demonstrates that our neural network with the LDE layer is trainable and the loss converges to a small value.
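The per-step crop-or-extend operation can be sketched as follows; this is a minimal illustration with our own names (the actual data loader may pad or wrap differently):

```python
import random

def crop_or_extend(frames, target_len):
    """Force a frame sequence (a list of feature frames) to target_len frames:
    random-crop when too long, wrap around (repeat) when too short."""
    if len(frames) >= target_len:
        start = random.randint(0, len(frames) - target_len)
        return frames[start:start + target_len]
    reps = (target_len + len(frames) - 1) // len(frames)
    return (frames * reps)[:target_len]
```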
In the testing stage, all of the 3s, 10s, and 30s duration data is tested on the same model. Because the duration length is arbitrary, we feed the testing speech utterances into the trained neural network one by one.
In order to get the system fusion results of ID8 in Table 1, we randomly crop additional training data corresponding to the separate 30s, 10s, and 3s duration tasks. The score-level system fusion weights are all trained on this data.
Table 1 shows the performance on the 2007 NIST LRE closed-set task. The performance is reported in average detection cost and equal error rate (EER). Both the CNN-TAP and CNN-LDE systems achieve significant performance improvements compared with the conventional GMM i-vector system.
For our purpose of exploring encoding methods for end-to-end neural networks, we focus the comparison on systems ID2 and ID3-ID7. The CNN-LDE system outperforms the CNN-TAP system for all numbers of dictionary components. As the number of dictionary components increases from 16 to 64, the performance improves consistently. However, once the number of dictionary components exceeds 64, the performance decreases, perhaps because of overfitting.
Compared with CNN-TAP, the best CNN-LDE-64 system achieves a significant performance improvement, especially in terms of EER. Moreover, their score-level fusion further improves system performance significantly.
In this paper, we imitate the GMM supervector encoding procedure and introduce an LDE layer for end-to-end LID neural networks. The LDE layer acts as a smart pooling layer integrated on top of the convolutional layers, accepting inputs of arbitrary length and producing a fixed-length representation. Unlike simple TAP, it relies on a learnable dictionary and can accumulate more discriminative statistics. The experimental results show the superiority and complementarity of LDE compared with TAP.
-  T. Kinnunen and H. Li, “An overview of text-independent speaker recognition: From features to supervectors,” Speech Communication, vol. 52, no. 1, pp. 12–40, 2010.
-  F. K. Soong, A. E. Rosenberg, B.-H. Juang, and L. R. Rabiner, “Report: A vector quantization approach to speaker recognition,” AT&T Technical Journal, vol. 66, no. 2, pp. 387–390, 1985.
-  D.A. Reynolds and R.C. Rose, “Robust text-independent speaker identification using gaussian mixture speaker models,” IEEE Transactions on Speech & Audio Processing, vol. 3, no. 1, pp. 72–83, 1995.
-  D.A. Reynolds, T.F. Quatieri, and R.B. Dunn, “Speaker verification using adapted Gaussian mixture models,” in Digital Signal Processing, 2000, pp. 19–41.
-  W.M. Campbell, D.E. Sturim, and D.A. Reynolds, “Support vector machines using GMM supervectors for speaker verification,” IEEE Signal Processing Letters, vol. 13, no. 5, pp. 308–311, 2006.
-  Y. Lei, N. Scheffer, L. Ferrer, and M. McLaren, “A novel scheme for speaker recognition using a phonetically-aware deep neural network,” in ICASSP 2014.
-  M. Li and W. Liu, “Speaker verification and spoken language identification using a generalized i-vector framework with phonetic tokenizations and tandem features,” in INTERSPEECH 2014.
-  M. McLaren, Y. Lei, and L. Ferrer, “Advances in deep neural network approaches to speaker recognition,” in ICASSP 2015, pp. 4814–4818.
-  F. Richardson, D. Reynolds, and N. Dehak, “Deep neural network approaches to speaker and language recognition,” IEEE Signal Processing Letters, vol. 22, no. 10, pp. 1671–1675, 2015.
-  D. Snyder, D. Garcia-Romero, and D. Povey, “Time delay deep neural network-based universal background models for speaker recognition,” in ASRU 2016, pp. 92–97.
-  G. Gelly, J. L. Gauvain, V. B. Le, and A. Messaoudi, “A divide-and-conquer approach for language identification based on recurrent neural networks,” in INTERSPEECH 2016, pp. 3231–3235.
-  M. Li, L. Liu, W. Cai, and W. Liu, “Generalized i-vector representation with phonetic tokenizations and tandem features for both text independent and text dependent speaker verification,” Journal of Signal Processing Systems, vol. 82, no. 2, pp. 207–215, 2016.
-  I. Lopez-Moreno, J. Gonzalez-Dominguez, O. Plchot, D. Martinez, J. Gonzalez-Rodriguez, and P. Moreno, “Automatic language identification using deep neural networks,” in ICASSP 2014.
-  J. Gonzalez-Dominguez, I. Lopez-Moreno, H. Sak, J. Gonzalez-Rodriguez, and P. J. Moreno, “Automatic language identification using long short-term memory recurrent neural networks,” in INTERSPEECH 2014.
-  R. Li, S. Mallidi, L. Burget, O. Plchot, and N. Dehak, “Exploiting hidden-layer responses of deep neural networks for language recognition,” in INTERSPEECH, 2016.
-  M. Tkachenko, A. Yamshinin, N. Lyubimov, M. Kotov, and M. Nastasenko, “Language identification using time delay neural network d-vector on short utterances,” 2016.
-  C. Li, X. Ma, B. Jiang, X. Li, X. Zhang, X. Liu, Y. Cao, A. Kannan, and Z. Zhu, “Deep Speaker: an end-to-end neural speaker embedding system,” arXiv preprint, 2017.
-  D. Snyder, P. Ghahremani, D. Povey, D. Garcia-Romero, Y. Carmiel, and S. Khudanpur, “Deep neural network-based speaker embeddings for end-to-end speaker verification,” in SLT 2017, pp. 165–170.
-  R. Arandjelovic, P. Gronat, A. Torii, T. Pajdla, and J. Sivic, “Netvlad: Cnn architecture for weakly supervised place recognition,” in CVPR 2016, 2016, pp. 5297–5307.
-  K. Simonyan, A. Vedaldi, and A. Zisserman, “Deep Fisher networks for large-scale image classification,” in NIPS 2013, pp. 163–171.
-  T. Lin, A. Roychowdhury, and S. Maji, “Bilinear CNNs for fine-grained visual recognition,” in ICCV, pp. 1449–1457.
-  H. Zhang, J. Xue, and K. Dana, “Deep ten: Texture encoding network,” in CVPR 2017.
-  N. Dehak, P. Kenny, R. Dehak, P. Dumouchel, and P. Ouellet, “Front-end factor analysis for speaker verification,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 19, no. 4, pp. 788–798, 2011.
-  Daniel Garcia-Romero and Carol Y Espy-Wilson, “Analysis of i-vector length normalization in speaker recognition systems.,” in INTERSPEECH 2011, pp. 249–252.
-  D. Povey, A. Ghoshal, et al., “The Kaldi speech recognition toolkit,” in ASRU 2011.
-  K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in CVPR 2016, 2016, pp. 770–778.