Speech recognition technologies are highly successful today, but achieving good accuracy in real-world applications still requires machines to learn from huge quantities of annotated data. This makes developing speech technologies for a new language challenging. For low-resourced languages, collecting large quantities of audio data is difficult, and having them annotated is often prohibitively hard. Yet more than 95% of the world's languages are low-resourced, and many of them lack linguistic analysis or even written forms. Compared to annotating audio data, obtaining unannotated audio data of reasonable size is relatively achievable. If machines could automatically learn the acoustic patterns for linguistic units (words, syllables, phonemes, etc.) within the speech signals of an unannotated speech data set of reasonable size, recognition models for those units could be constructed, and speech recognition could become possible for a new language in a new environment with minimum supervision. Imagine a Hokkien-speaking family obtaining an intelligent device at home: the device does not know Hokkien at all in the beginning, but by hearing people speak Hokkien for some time, it may automatically learn the language. The goal of this paper is one step forward towards this vision.
With the above long-term goal in mind, it is highly desirable to embed audio signal segments of variable length (probably corresponding to linguistic units such as words, syllables or phonemes) into vectors of fixed dimensionality, serving as latent representations for the signal segments [2, 3, 4, 5, 6, 7, 8, 9, 10, 11]. This naturally makes all subsequent processing easier, such as computation, clustering, modeling, classification, indexing, etc. Good application examples include speaker identification, emotion classification, and spoken term detection (STD) [14, 15, 16, 17, 18, 19]. In these applications, standard processing mechanisms can easily be performed over such vector representations of the audio segments, achieving the purpose much more efficiently than processing over the signal segments of variable length [5, 6, 20, 21, 22].
Audio Word2Vec was proposed and can be trained in an unsupervised way with a sequence-to-sequence autoencoder, with the embeddings for the input audio segments extracted from the bottleneck layer [4, 15]. It has been shown that the vector representations obtained in this way carry phonetic information about the audio segments. It was then further shown that dividing utterances into audio segments and embedding them as sequences of vectors can be jointly learned in an unsupervised way in Segmental Audio Word2Vec. Such unsupervised approaches for audio segment embedding are attractive because no annotation is needed. However, each linguistic unit (word, syllable, phoneme) corresponds to an unlimited number of audio realizations, each with its own vector representation, and the spread of these vector representations inevitably leads to confusion, especially when no human labels are available. For example, although the embeddings for realizations of the word "brother" are very close to each other in the vector space, so are those for the word "bother", and the spread of these two groups causes some confusion. A Siamese convolutional neural network [24, 25, 26, 27] was trained using side information to obtain embeddings for which same-word pairs were closer and different-word pairs were better separated. However, human annotation is required under this supervised learning scenario.
Siamese networks learning from same-word and different-word pairs can be useful in learning better audio embeddings for linguistic units, which are discrete, but labeled data are needed. In this paper, inspired by the concept of Siamese networks, we propose a set of approaches to learn better audio embeddings based on the adjacency relationships among data points. This includes identifying positive and negative pairs from unlabeled data for Siamese-style training, disentangling acoustic factors such as speaker characteristics from the audio embedding, and handling the unbalanced data distribution. All of this can be done in an unsupervised way, and very encouraging results were observed in the initial experiments.
2 Proposed Approach
Because the goal of improving audio embeddings is challenging, in this initial effort we slightly simplify the task by assuming all training utterances have been properly segmented into spoken linguistic units (words, syllables, phonemes). Many approaches for automatically segmenting utterances have been developed, and joint training of automatic segmentation and audio embedding has been reported before, so such an assumption is reasonable here.
Below we denote the audio corpus as X = {x_1, x_2, ..., x_M}, which consists of M spoken linguistic units, each represented as an acoustic feature sequence x_i = (x_{i,1}, x_{i,2}, ..., x_{i,T_i}) of length T_i. In the subsections below, we try to perform Siamese-style training in an unsupervised way over these audio segments.
2.1 Siamese Networks Considered for Unlabeled Audio Data
The overview of the proposed approach is shown in Fig. 1. Siamese networks are typically trained on a collection of positive and negative pairs of data points, so that positive pairs end up closer together and negative pairs farther apart. We wish to use this concept to improve the audio embeddings considered here.
With labeled data, pairs with the same label are considered positive, and negative otherwise. Here we consider unlabeled data sets. One way to obtain such pairs is to learn them directly from Euclidean proximity, e.g., by "labeling" a pair of points (x_i, x_j) positive if the distance between their embeddings f(x_i) and f(x_j) is small, or if one is among the k nearest neighbors of the other over the whole data set, and negative otherwise. Such a Siamese network can then be trained by minimizing the contrastive loss,

L = \sum_{(i,j) \in P} \| f(x_i) - f(x_j) \|^2 + \sum_{(i,j) \in N} \max(0, \lambda - \| f(x_i) - f(x_j) \|)^2,   (1)

where f is the embedding network, \lambda is a margin, and P and N denote the positive and negative pair sets.
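As a concrete illustration, such a contrastive loss can be sketched as follows (a minimal NumPy sketch with hypothetical function and variable names, not the implementation used in the experiments):

```python
import numpy as np

def contrastive_loss(emb, pos_pairs, neg_pairs, margin=1.0):
    """Contrastive loss: pull positive pairs together, push negative
    pairs apart until they are separated by at least the margin."""
    loss = 0.0
    for i, j in pos_pairs:
        # positive pairs contribute their squared distance
        loss += np.sum((emb[i] - emb[j]) ** 2)
    for i, j in neg_pairs:
        # negative pairs contribute only if closer than the margin
        d = np.linalg.norm(emb[i] - emb[j])
        loss += max(0.0, margin - d) ** 2
    return loss
```

Minimizing this loss with respect to the network producing the embeddings pulls positive pairs together while pushing negative pairs beyond the margin.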
There are basic problems with applying the above concept in the scenario considered here: (a) we cannot define positive or negative pairs from the raw data, due to the variable length of the audio segments and the disturbance caused by speaker characteristics, and (b) it is time-consuming to find the nearest neighbors of each point, since a large number of data points is required for training audio embeddings. These problems are addressed below.
2.2 Phonetic Embedding with Speaker Characteristics Disentangled
Audio embedding represents each audio segment for a linguistic unit (a word, syllable or phoneme) as a vector of fixed dimensionality. This partly solves the first problem mentioned above, i.e., that the audio segments have variable lengths. Even with the fixed dimensionality, we note that a linguistic unit with a given phonetic content corresponds to an infinite number of audio realizations with varying acoustic factors such as speaker characteristics, microphone characteristics, background noise, etc. For simplicity, all the latter acoustic factors are jointly referred to as speaker characteristics here; they obviously disturb the goal of embedding signals for the same linguistic unit into vectors very close to each other. This is why we wish to disentangle such factors.
As shown in the middle of Figure 2, following exactly the prior work, a sequence of acoustic features x_i is fed into a phonetic encoder Ep and a speaker encoder Es to obtain a phonetic vector vp (in orange) and a speaker vector vs (in green). The phonetic vector vp and speaker vector vs are then used together by the decoder De to reconstruct the acoustic features. This phonetic vector vp is used as the phonetic embedding, i.e., the audio embedding considered here, carrying primarily the phonetic information in the signal. The two encoders Ep, Es and the decoder De are jointly learned by minimizing the reconstruction loss.
The training of the speaker encoder Es requires speaker information for the audio segments. Assume the audio segment x_i is uttered by speaker s_i. When speaker information is not available, we can simply assume that audio segments from the same utterance are produced by the same speaker. As shown in the lower part of Figure 2, Es is learned to minimize a contrastive loss: if x_i and x_j are uttered by the same speaker (s_i = s_j), we want their speaker embeddings to be as close as possible, but if s_i ≠ s_j, we want the distance between their speaker embeddings to be larger than a threshold.
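The utterance-based pairing described above can be sketched as follows (a simplified example with hypothetical names; the actual model trains the speaker encoder on such pairs with a contrastive loss):

```python
def speaker_pairs(utt_ids):
    """Build same-speaker / different-speaker pairs from utterance IDs
    only, assuming segments of one utterance share a speaker."""
    same, diff = [], []
    n = len(utt_ids)
    for i in range(n):
        for j in range(i + 1, n):
            # segments from the same utterance are assumed same-speaker
            (same if utt_ids[i] == utt_ids[j] else diff).append((i, j))
    return same, diff
```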
As shown in the upper right corner of Figure 2, a speaker discriminator D takes two phonetic vectors as input and tries to tell whether the two vectors come from the same speaker. The learning target of the phonetic encoder Ep is to "fool" this speaker discriminator, keeping it from discriminating the speaker identity correctly. In this way, only the phonetic information is kept in the phonetic vector vp, while the speaker characteristics are encoded in the speaker vector vs.
2.3 Identify Positive and Negative Pairs within each Mini-Batch
Finding the nearest neighbors of each data point over the whole corpus is costly, with time complexity O(n^2), where n is the corpus size. To reduce the time and computing costs, we instead create a k-nearest-neighbor graph among the data points within each mini-batch, and use it to approximate the distribution of the full data set. In this way, the time complexity is reduced to O(nb), where b is the mini-batch size.
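The per-batch graph construction can be sketched as follows (a NumPy sketch costing O(b^2) per mini-batch; names are illustrative):

```python
import numpy as np

def batch_knn(emb, k):
    """Indices of the k nearest neighbors of each point, computed
    only within one mini-batch of embeddings (O(b^2) per batch)."""
    # pairwise squared Euclidean distances within the batch
    d2 = np.sum((emb[:, None, :] - emb[None, :, :]) ** 2, axis=-1)
    np.fill_diagonal(d2, np.inf)          # a point is not its own neighbor
    return np.argsort(d2, axis=1)[:, :k]  # k closest indices per point
```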
2.4 Siamese Style Training
We can simply apply the Siamese-style loss function as an extra requirement in training the phonetic embedding in subsection 2.2, i.e., the Siamese requirement is jointly trained:

L_{joint} = L_{recon} + \sum_{(i,j) \in P'} \| vp_i - vp_j \|^2 + \sum_{(i,j) \in N'} \max(0, \lambda - \| vp_i - vp_j \|)^2,   (2)

where L_{recon} is the reconstruction loss of subsection 2.2, P' and N' are the positive and negative pairs selected in each mini-batch, and vp_i are the phonetic embeddings obtained in this way.
On the other hand, we can also pretrain the audio embeddings with speaker characteristics disentangled as in subsection 2.2, and then, on top of the obtained phonetic embeddings, train another Siamese model g to further transform them into a new space where similar points are more compactly clustered, which we refer to as adjacency-based clustering. The training loss function for this extra model is:

L_{siamese} = \sum_{(i,j) \in P'} \| g(vp_i) - g(vp_j) \|^2 + \sum_{(i,j) \in N'} \max(0, \lambda - \| g(vp_i) - g(vp_j) \|)^2,   (3)

where P' and N' are the positive and negative pairs selected in each mini-batch, and the phonetic embeddings vp_i obtained in subsection 2.2 are transformed into the new embeddings g(vp_i).
2.5 Dealing with Unbalanced Data
The distribution of linguistic units is unbalanced. For low-frequency units, we may not be able to find more than two audio segments in a mini-batch; when we create a k-nearest-neighbor graph within the mini-batch, audio segments for such units would be forced to reduce their distance to audio segments corresponding to different linguistic units. On the other hand, for high-frequency units with more than k corresponding audio segments in a mini-batch, such audio segments would be separated into two or more clusters.
Therefore, instead of creating a k-nearest-neighbor graph for each mini-batch, we create a fully-connected graph, label the m pairs of data points with the shortest distances between them as positive pairs, and randomly select other pairs of data points as negative pairs. In this way, the probability that the data points of a positive pair correspond to the same linguistic unit may be higher.
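The fully-connected-graph variant can be sketched as follows (an illustrative sketch; `m` plays the role of the number of top shortest-distance pairs, and the negative-sampling details are assumptions):

```python
import numpy as np

def select_pairs(emb, m, seed=0):
    """Label the m closest pairs in a mini-batch as positive and sample
    m of the remaining pairs uniformly at random as negative."""
    rng = np.random.default_rng(seed)
    b = len(emb)
    pairs = [(i, j) for i in range(b) for j in range(i + 1, b)]
    dists = [np.linalg.norm(emb[i] - emb[j]) for i, j in pairs]
    order = np.argsort(dists)
    pos = [pairs[t] for t in order[:m]]          # top-m shortest distances
    rest = [pairs[t] for t in order[m:]]
    picks = rng.choice(len(rest), size=min(m, len(rest)), replace=False)
    neg = [rest[t] for t in picks]               # random negatives
    return pos, neg
```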
3 Experimental Setup
3.1 Dataset

We used LibriSpeech as the audio corpus in the experiments; it is a corpus of read English speech derived from audiobooks, containing 1000 hours of speech sampled at 16 kHz and uttered by 2484 speakers. We randomly sampled 100 speakers from the "clean" and "others" sets, with about 40 hours of speech for training and another 40 hours for testing, and 39-dim MFCCs were extracted as the acoustic features. The audio signals were segmented into three levels of linguistic units: words, syllables, and phonemes.
3.2 Model Implementation
The phonetic encoder Ep, speaker encoder Es and decoder De were either 2-layer bidirectional GRUs or 3-layer CNNs with dense layers. The size of the embedding vectors was 256. The speaker discriminator D was a fully-connected feedforward network with 2 hidden layers of size 128. The value of the margin λ in Eqs. (2) and (3) was set to 1.
4 Experimental Results
In the following subsections, we evaluate four kinds of audio embeddings: (a) Audio Word2Vec, which is simply an autoencoder; (b) Audio Word2Vec with speaker characteristics disentangled as in subsection 2.2 [30, 28]; (c) the proposed approach as in Eq. (2); and (d) the proposed approach as in Eq. (3).
4.1 Analysis of Embedding Characteristics
We first compared the averaged cosine similarity of intra- and inter-class pairs for the three different levels of linguistic units. Intra-class pairs were evaluated between audio segments corresponding to the same linguistic unit, while inter-class pairs were evaluated between segments belonging to different units. In addition to the four kinds of audio embeddings mentioned above, we also provide two baselines: the audio embedding obtained in subsection 2.2 with minimizing the overall L1 loss as an extra training requirement ((b)+L1), and the audio embedding obtained in subsection 2.2 with minimizing the overall L2 loss as an extra training requirement ((b)+L2).
The results are listed in Table 1. It can be clearly seen that row (d), the proposed approach of Eq. (3), gave the highest intra-class average cosine similarity and the maximum difference between intra- and inter-class average cosine similarity for all three linguistic units, which indicates that this approach offered better-clustered audio embeddings. In other words, embeddings corresponding to the same linguistic unit were more compactly distributed, even without extra annotation.
4.2 Analysis of Unsupervised Clustering
In this experiment, we used k-means, an unsupervised clustering method, for the first analysis. All three levels of linguistic units were tested for comparison, so the number of labels was fixed to 70, the total number of phoneme classes. For words and syllables, the top 70 most frequent units were selected as labels for the experiments.
Given the clustering results, we can construct a confusion matrix C of size L × K, where L is the number of linguistic units tested (70), K is the number of clusters (tested up to 280), and C(l, k) is the count of data points having label l but assigned to cluster k. This count is first normalized over each label,

\hat{C}(l, k) = C(l, k) / \sum_{k'} C(l, k').

For each label l we obtain the cluster k*(l) yielding the highest \hat{C}(l, k) and assume it is the cluster for label l,

k*(l) = \arg\max_k \hat{C}(l, k).

The total accuracy is then evaluated by averaging over all labels l,

Accuracy = (1/L) \sum_l \hat{C}(l, k*(l)).
Higher total accuracy would be obtained if all the data points of the same label were in the same cluster.
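The evaluation described above can be sketched as follows (a sketch of the confusion-matrix accuracy, with the per-label average taken as the total accuracy; names are illustrative):

```python
import numpy as np

def clustering_accuracy(labels, clusters, n_labels, n_clusters):
    """Map each label to its most frequent cluster and average the
    normalized counts, as in the total-accuracy metric described above."""
    C = np.zeros((n_labels, n_clusters))
    for l, k in zip(labels, clusters):
        C[l, k] += 1                            # confusion counts
    C_hat = C / C.sum(axis=1, keepdims=True)    # normalize per label
    return C_hat.max(axis=1).mean()             # best cluster per label
```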
The results for the three levels of linguistic units are similar, so only the clustering results for words are shown in Fig. 3 for clarity. It is known that the clustering performance of k-means depends on the number of clusters K, so various values of K were tested. First of all, we found that feature disentanglement improved the total accuracy (curve (b) vs. curve (a)), and the proposed Siamese-style training in Eq. (3) gave further improvement (curve (d) vs. curve (a)). As shown in Fig. 3, curve (d), the proposed approach of Eq. (3), gave the highest total accuracy at nearly all values of K, which shows that this approach greatly improved the performance of unsupervised clustering.
4.3 Analysis of Spoken Term Detection
[Table 2: Spoken term detection MAP for various values of top k, for embeddings (a)-(d), with the differences (d)-(a) and (d)-(b).]
We used the 960 hours of “clean” and “other” parts of LibriSpeech data set as the target archive for detection, which consisted of 1478 audio books with 5466 chapters. Each chapter included 1 to 204 utterances or 5 to 6529 spoken words. In our experiments, 80 queries were chosen from the words used in these 960 hours of speech with the top 80 TF-IDF scores, and the chapters were taken as the spoken documents to be retrieved. The audio realization of each query was randomly sampled from LibriSpeech data set, and our goal was to retrieve documents containing those queries (words, not necessarily the exact audio realizations). We used mean average precision (MAP) as the evaluation metric for the spoken term detection test.
For each query Q and each document d, the relevance score of d with respect to Q, S(Q, d), is defined as follows:

S(Q, d) = (1/k) \sum_{w \in T(Q, d, k)} sim(v_Q, v_w),

where v_w is the audio embedding of a spoken word w, sim(·, ·) represents cosine similarity, T(Q, d, k) is the set of the top k spoken words in d with the highest cosine similarity values sim(v_Q, v_w), and k is a parameter. In other words, the documents were ranked by the average of the top k cosine similarities between the spoken words in d and the query Q.
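The document scoring can be sketched as follows (a minimal sketch of the top-k average cosine similarity; names are illustrative):

```python
import numpy as np

def relevance(query_emb, doc_embs, k):
    """Average of the top-k cosine similarities between the query
    embedding and the embeddings of the spoken words in a document."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    sims = sorted((cos(query_emb, w) for w in doc_embs), reverse=True)
    k = min(k, len(sims))
    return sum(sims[:k]) / k
```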
The results are listed in Table 2. As can be seen, column (d), the proposed approach of Eq. (3), offered better detection performance than all other kinds of audio embeddings at all values of k. Because the queries are high-frequency terms that usually appear multiple times in the documents, the detection performance of all kinds of audio embeddings gradually improved as k increased over the range tested. The rightmost two columns list the differences between the proposed approach of Eq. (3) (column (d)) and columns (a) and (b); the proposed method gained large improvements. As shown in the table, k = 40 gave the maximum difference between columns (d) and (b). This is probably related to the fact that the number of utterances in a document is roughly 40 on average: a larger difference was achieved as k increased from 1 to 40, but a smaller one as k increased beyond 40.
5 Conclusions and Future Work
In this paper we propose a framework to embed audio segments into better-clustered vector representations of fixed dimensionality, including identifying positive and negative pairs from unlabeled data for Siamese-style training, disentangling acoustic factors such as speaker characteristics from the audio embedding, and handling the unbalanced data distribution. The proposed methods gave clear improvements in both clustering analysis and spoken term detection. In future work, we will focus on distilling only the linguistic information from audio segments.
-  Aren Jansen et al., “A summary of the 2012 JHU CLSP workshop on zero resource speech technologies and models of early language acquisition,” in ICASSP, 2013.
-  Wanjia He, Weiran Wang, and Karen Livescu, “Multi-view recurrent neural acoustic word embeddings,” arXiv preprint arXiv:1611.04496, 2016.
-  Shane Settle and Karen Livescu, “Discriminative acoustic word embeddings: Recurrent neural network-based approaches,” arXiv preprint arXiv:1611.02550, 2016.
-  Yu-An Chung, Chao-Chung Wu, Chia-Hao Shen, Hung-Yi Lee, and Lin-Shan Lee, “Audio word2vec: Unsupervised learning of audio segment representations using sequence-to-sequence autoencoder,” arXiv preprint arXiv:1603.00982, 2016.
-  Samy Bengio and Georg Heigold, “Word embeddings for speech recognition,” in Fifteenth Annual Conference of the International Speech Communication Association, 2014.
-  Keith Levin, Katharine Henry, Aren Jansen, and Karen Livescu, “Fixed-dimensional acoustic embeddings of variable-length segments in low-resource settings,” in Automatic Speech Recognition and Understanding (ASRU), 2013 IEEE Workshop on. IEEE, 2013, pp. 410–415.
-  Shane Settle, Keith Levin, Herman Kamper, and Karen Livescu, “Query-by-example search with discriminative neural acoustic word embeddings,” arXiv preprint arXiv:1706.03818, 2017.
-  Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio, “Learning phrase representations using rnn encoder-decoder for statistical machine translation,” arXiv preprint arXiv:1406.1078, 2014.
-  Wei-Ning Hsu, Yu Zhang, and James Glass, “Unsupervised learning of disentangled and interpretable representations from sequential data,” in Advances in Neural Information Processing Systems, 2017, pp. 1876–1887.
-  Aren Jansen, Manoj Plakal, Ratheet Pandya, Daniel PW Ellis, Shawn Hershey, Jiayang Liu, R Channing Moore, and Rif A Saurous, “Unsupervised learning of semantic audio representations,” arXiv preprint arXiv:1711.02209, 2017.
-  Shigeki Karita, Shinji Watanabe, Tomoharu Iwata, Atsunori Ogawa, and Marc Delcroix, “Semi-supervised end-to-end speech recognition,” Proc. Interspeech 2018, pp. 2–6, 2018.
-  Najim Dehak, Reda Dehak, Patrick Kenny, Niko Brümmer, Pierre Ouellet, and Pierre Dumouchel, “Support vector machines versus fast scoring in the low-dimensional total variability space for speaker verification,” in Tenth Annual Conference of the International Speech Communication Association, 2009.
-  Björn Schuller, Stefan Steidl, and Anton Batliner, “The interspeech 2009 emotion challenge,” in Tenth Annual Conference of the International Speech Communication Association, 2009.
-  Hung-yi Lee and Lin-shan Lee, “Enhanced spoken term detection using support vector machines and weighted pseudo examples,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 21, no. 6, pp. 1272–1284, 2013.
-  I-Fan Chen and Chin-Hui Lee, “A hybrid hmm/dnn approach to keyword spotting of short words.,” in INTERSPEECH, 2013, pp. 1574–1578.
-  Atta Norouzian, Aren Jansen, Richard C Rose, and Samuel Thomas, “Exploiting discriminative point process models for spoken term detection,” in Thirteenth Annual Conference of the International Speech Communication Association, 2012.
-  Dhananjay Ram, Lesly Miculicich, and Hervé Bourlard, “Cnn based query by example spoken term detection,” Proc. Interspeech 2018, pp. 92–96, 2018.
-  Yougen Yuan, Cheung-Chi Leung, Lei Xie, Hongjie Chen, Bin Ma, and Haizhou Li, “Learning acoustic word embeddings with temporal context for query-by-example speech search,” arXiv preprint arXiv:1806.03621, 2018.
-  Ravi Shankar, CM Vikram, and SRM Prasanna, “Spoken keyword detection using joint dtw-cnn.”
-  Herman Kamper, Weiran Wang, and Karen Livescu, “Deep convolutional acoustic word embeddings using word-pair side information,” in Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on. IEEE, 2016, pp. 4950–4954.
-  Keith Levin, Aren Jansen, and Benjamin Van Durme, “Segmental acoustic indexing for zero resource keyword search,” in Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on. IEEE, 2015, pp. 5828–5832.
-  Guoguo Chen, Carolina Parada, and Tara N Sainath, “Query-by-example keyword spotting using long short-term memory networks,” in Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on. IEEE, 2015, pp. 5236–5240.
-  Yu-Hsuan Wang, Hung-yi Lee, and Lin-shan Lee, “Segmental audio word2vec: Representing utterances as sequences of vectors with applications in spoken term detection,” in Acoustics, Speech and Signal Processing (ICASSP), 2018 IEEE International Conference. IEEE, 2018.
-  Raia Hadsell, Sumit Chopra, and Yann LeCun, “Dimensionality reduction by learning an invariant mapping,” in Computer Vision and Pattern Recognition (CVPR), 2006 IEEE Computer Society Conference on. IEEE, 2006, pp. 1735–1742.
-  Jonas Mueller and Aditya Thyagarajan, “Siamese recurrent architectures for learning sentence similarity.,” in AAAI, 2016, vol. 16, pp. 2786–2792.
-  Wen-tau Yih, Kristina Toutanova, John C Platt, and Christopher Meek, “Learning discriminative projections for text similarity measures,” in Proceedings of the Fifteenth Conference on Computational Natural Language Learning. Association for Computational Linguistics, 2011, pp. 247–256.
-  Uri Shaham and Roy R Lederman, “Learning by coincidence: Siamese networks and common variable learning,” Pattern Recognition, vol. 74, pp. 52–63, 2018.
-  Yi-Chen Chen, Sung-Feng Huang, Chia-Hao Shen, Hung-yi Lee, and Lin-shan Lee, “Phonetic-and-semantic embedding of spoken words with applications in spoken content retrieval,” arXiv preprint arXiv:1807.08089, 2018.
-  Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur, “Librispeech: an asr corpus based on public domain audio books,” in Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on. IEEE, 2015, pp. 5206–5210.
-  Yi-Chen Chen, Chia-Hao Shen, Sung-Feng Huang, and Hung-yi Lee, “Towards unsupervised automatic speech recognition trained by unaligned speech and text only,” arXiv preprint arXiv:1803.10952, 2018.