Unsupervised Speech Recognition via Segmental Empirical Output Distribution Matching

12/23/2018 · by Chih-Kuan Yeh, et al. · Tencent

We consider the problem of training speech recognition systems without using any labeled data, under the assumption that the learner can only access the input utterances and a phoneme language model estimated from a non-overlapping corpus. We propose a fully unsupervised learning algorithm that alternates between solving two sub-problems: (i) learning a phoneme classifier for a given set of phoneme segmentation boundaries, and (ii) refining the phoneme boundaries based on a given classifier. To solve the first sub-problem, we introduce a novel unsupervised cost function named Segmental Empirical Output Distribution Matching, which generalizes the work in (Liu et al., 2017) to segmental structures. For the second sub-problem, we develop an approximate MAP approach to refining the boundaries obtained from Wang et al. (2017). Experimental results on the TIMIT dataset demonstrate the success of this fully unsupervised phoneme recognition system, which achieves a phone error rate (PER) of 41.6%. Although this is still far from that of state-of-the-art supervised systems, we show that with oracle boundaries and a matching language model, the PER can be improved to 32.5%, approaching the supervised system of the same model architecture and demonstrating the great potential of the proposed method.


1 Introduction

Over the past few years, the performance of automatic speech recognition (ASR) has improved greatly, and in certain scenarios the recognition accuracy is on par with human performance (Xiong et al., 2016). Most state-of-the-art ASR systems are constructed by training deep neural networks on large-scale labeled data with supervised learning (Hinton et al., 2012; Dahl et al., 2012; Xiong et al., 2016; Graves et al., 2013, 2006; Graves, 2012); that is, they rely on a large amount of human-labeled data to train the recognition model. In this paper, we work towards the grand mission of training speech recognition models without any human-annotated data. Such an approach could save a huge amount of human labeling cost when developing ASR systems by leveraging massive unlabeled speech data. It is especially valuable when developing ASR systems for low-resource languages, where labeled data are more expensive to obtain.

Specifically, we consider the phoneme recognition problem, for which we learn a sequential classifier that maps speech waveforms into a sequence of phonemes. In our unsupervised learning setting, the learning algorithm can only access (i) the input speech acoustic features, and (ii) a pretrained phoneme language model (LM). There is no human supervision provided to the algorithm at any level; that is, we do not provide any (frame-level) label for the input samples, nor do we provide any (sentence-level) transcription for the input utterances. The language model could be trained from a separate (text) corpus in an unsupervised manner with the help of a pre-defined lexicon.¹

¹ A lexicon in ASR is a pre-defined dictionary that maps word sequences into phoneme sequences.

There have been some recent successes in developing fully unsupervised methods for neural machine translation (Artetxe et al., 2018; Lample et al., 2018) and sequence classification (Liu et al., 2017). However, unlike these problems, speech recognition has a segmental structure that poses unique challenges for developing unsupervised learning algorithms. First, each phoneme generally consists of a segment of consecutive input samples (frames) that are associated with the same phoneme label. Second, the lengths and boundaries of these segments are usually unknown a priori. For these reasons, we cannot directly apply the previous techniques to develop unsupervised ASR algorithms. To address the first challenge, we develop a novel unsupervised learning cost function for ASR systems by extending the Empirical Output Distribution Matching (Empirical-ODM) cost in (Liu et al., 2017) to segmental structures. The key ideas of our Segmental Empirical-ODM are: (i) the distribution of the predicted outputs across consecutive segments should match the phoneme language model, and (ii) the predicted outputs within each segment should be equal to each other, as they belong to the same phoneme. This cost function allows us to learn the classifier without labeled data for a given set of phoneme segmentation boundaries. To address the second challenge, we develop a novel unsupervised approach to estimating (and refining) the segmentation boundaries using the current classification model. Our algorithm alternates between these two steps of learning the classifier and estimating the boundaries, so that each successively improves the other. Therefore, unlike the previous work in (Liu et al., 2018), which relies on an oracle or forced-alignment methods to obtain the phoneme segmentation boundaries, our method is fully unsupervised in both segmentation and classification. Furthermore, we also adapt the semi-supervised HMM learning technique (Zavaliagkos et al., 1998; Kemp and Waibel, 1999; Nallasamy et al., 2012) to our unsupervised setting to further improve the performance.
In our experiments on the TIMIT phoneme recognition task, our unsupervised learning method achieves a promising phone error rate (PER) of 41.6%. To the best of our knowledge, this is the first empirical success of fully unsupervised speech recognition that does not use any oracle segmentation or labels. Furthermore, when the oracle phoneme segmentation boundaries are given (similar to the setting in Liu et al. (2018)), our method achieves a PER of 32.5% with a matching language model, which approaches supervised learning with the same model architecture, demonstrating the great potential of our method.

2 Fully Unsupervised Speech Recognition

2.1 Problem formulation

We consider the unsupervised phoneme recognition problem. Specifically, for a given sequence of input feature vectors $X = (x_1, \ldots, x_T)$, we want to map it into a sequence of phonemes $Y = (y_1, \ldots, y_L)$, where $x_t$ is a $d$-dimensional input acoustic feature vector (e.g., mel-frequency cepstral coefficients (MFCC)), $y_i \in \mathcal{C}$ is a categorical variable representing the phoneme class, $\mathcal{C}$ denotes the set of phonemes, $T$ is the length of the input sequence, and $L$ is the length of the output sequence. Note that the length of the input sequence is usually much larger than that of the output sequence.² This is because speech data have a special segmental structure where a segment of consecutive input frames is associated with one phoneme class, as shown in Figure 1. Furthermore, the length and boundaries of each phoneme segment generally vary and are unknown a priori. We introduce a binary variable $b_t \in \{0, 1\}$ to characterize the segment boundaries: $b_t = 1$ denotes the start of a new phoneme segment (see Figure 1). Let $z_t$ be the frame-wise phoneme label indicating the phoneme class that the $t$-th input frame belongs to. In this work, we focus on learning a framewise phoneme classifier that maps the input sequence $X$ into its frame-wise label sequence $Z = (z_1, \ldots, z_T)$. Once this is done, we could use a standard speech decoder to obtain the desired phoneme sequence $Y$ from $Z$. We model the framewise phoneme classifier $p_\theta(z_t \mid \bar{x}_t)$ (i.e., the posterior probability of the frame label $z_t$ given the input $\bar{x}_t$) by a context-dependent DNN (Dahl et al., 2012), where $\theta$ denotes the model parameter and the input feature vector $\bar{x}_t$ is a concatenation of the acoustic feature vectors within a context window around time $t$. We may also use other model architectures such as recurrent neural networks (RNN), which are left as future work. The objective of our unsupervised learning algorithm is to learn the model parameter $\theta$ from: (i) a training set $\mathcal{D}$ of input sequences, and (ii) a pretrained phoneme language model $P_{\mathrm{LM}}$. Note that the language model could be trained from a separate (text) corpus in an unsupervised manner so that there is no supervision at any level.

² We use notation $t$ to index input frames and notation $i$ to index segments (or phonemes).
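As a concrete illustration of the framewise classifier described above, the following sketch builds the context-window input $\bar{x}_t$ and computes softmax posteriors over phoneme classes. The linear-softmax model, the window size, and all function names are illustrative stand-ins (the paper uses a context-dependent DNN), so this should be read as a minimal sketch rather than the actual model.

```python
import numpy as np

def stack_context(X, w):
    """Concatenate each frame with w frames on each side (edge-padded):
    X of shape (T, d) -> (T, (2*w + 1) * d)."""
    T, d = X.shape
    padded = np.pad(X, ((w, w), (0, 0)), mode="edge")
    return np.stack([padded[t:t + 2 * w + 1].reshape(-1) for t in range(T)])

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def classifier_posteriors(X, W, b, w=3):
    """Frame posteriors p(z_t | context window around t) for a toy
    linear-softmax model standing in for the context-dependent DNN."""
    return softmax(stack_context(X, w) @ W + b)
```

Each row of the returned matrix is a distribution over the phoneme classes for one frame, which is the quantity the unsupervised cost functions below operate on.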

There are two main challenges in unsupervised speech recognition: (i) how to learn the classifier $p_\theta$ from the input data $\mathcal{D}$ and the language model $P_{\mathrm{LM}}$ for a given set of segmentation boundaries, and (ii) how to estimate the segmentation boundaries in an unsupervised manner. Unlike text, which can be broken into word units relatively easily, speech inputs are continuous, and it is therefore difficult to obtain the phoneme boundaries. This challenge is unique to unsupervised speech recognition and does not appear in, e.g., unsupervised machine translation. In the following sections, we address these challenges by developing a new unsupervised learning cost function that extends the Empirical-ODM cost in (Liu et al., 2017) to segmental structures. Furthermore, we develop a maximum a posteriori (MAP) estimator to refine the segmentation boundaries based on the current classifier. Our algorithm alternates between these two steps, and after the iterations complete, we employ an unsupervised HMM training technique to further boost the results.

2.2 Unsupervised frame classification with given segmentation boundaries

Figure 1: Segmental structure of speech data. Circles of the same color denote the same frame label in each segment. The shaded input vectors represent the vectors sampled to compute the inter-segment cost (1).

In this section, we develop an unsupervised algorithm to learn the classification model with a given set of segmentation boundaries. To this end, we define a new unsupervised learning cost function that exploits the segmental structure of the problem. Specifically, our new cost function is based on the following two observations: (i) the distribution of the predicted outputs across consecutive segments should match the phoneme language model $P_{\mathrm{LM}}$, and (ii) the predicted outputs within each segment should be equal to each other, as they belong to the same phoneme. Accordingly, our unsupervised cost function consists of two parts, characterizing the above inter-segment and intra-segment distributions, respectively.

We first define the cost function associated with the inter-segment distribution. Before that, we introduce the following terms and notation, which are also illustrated in Figure 1. To simplify notation, we assume that all the utterances in $\mathcal{D}$ are concatenated into one long sequence. Let there be a total of $S$ segments in the entire training set, and let $\mathcal{T}_i$ be the set of all time indexes in the $i$-th segment. We use $\tau = (\tau_1, \ldots, \tau_S)$ to denote a sequence of time indexes sampled from the segments, one per segment, i.e., $\tau_i \in \mathcal{T}_i$. Without loss of generality, we consider $N$-gram phoneme language models throughout this work and define $P_{\mathrm{LM}}(c_{1:N})$, where $c_{1:N} = (c_1, \ldots, c_N)$ denotes a particular $N$-gram. Furthermore, for each segment index $i \geq N$, the sampled frames $\bar{x}_{\tau_{i-N+1}}, \ldots, \bar{x}_{\tau_i}$ form a length-$N$ contiguous subsequence of segments that ends at segment $i$. Then, the cost function that characterizes the inter-segment output distribution match is defined as:

$$\mathcal{J}_{\mathrm{ODM}}(\theta) = -\sum_{c_{1:N}} P_{\mathrm{LM}}(c_{1:N}) \ln \bar{P}_\theta(c_{1:N}), \qquad (1)$$

where $\bar{P}_\theta(c_{1:N})$ is defined as the inter-segment output distribution with

$$\bar{P}_\theta(c_{1:N}) = \frac{1}{S-N+1} \sum_{i=N}^{S} \prod_{k=1}^{N} p_\theta(z = c_k \mid \bar{x}_{\tau_{i-N+k}}).$$

The cost function (1) generalizes the Empirical-ODM cost in Liu et al. (2017) to segmental structures, and it degenerates into the original Empirical-ODM cost when there is only one frame in each segment. The probability $\bar{P}_\theta$ characterizes the empirical $N$-gram frequency of the predicted outputs across consecutive segments, and the cost function measures the cross-entropy between the pretrained $N$-gram LM and $\bar{P}_\theta$. This form of cost function enjoys several properties that are suitable for unsupervised learning of sequence classifiers; the reader is referred to Liu et al. (2017) for a more detailed discussion.
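To make the inter-segment cost concrete, the following is a deliberately brute-force sketch: it computes the cross-entropy between an $N$-gram LM and the soft empirical $N$-gram distribution of the predictions at one sampled frame per segment. It enumerates all $C^N$ possible $N$-grams, which is only feasible for toy alphabets; the function name and the dictionary LM format are our own assumptions, not the paper's implementation.

```python
import numpy as np

def inter_segment_odm(posteriors, sampled_idx, lm, N=2, eps=1e-12):
    """Cross-entropy between an N-gram LM and the soft empirical N-gram
    distribution of the predictions at one sampled frame per segment.

    posteriors: (T, C) array of frame posteriors p_theta(z | x_t)
    sampled_idx: one sampled frame index per segment, in segment order
    lm: dict mapping N-gram tuples to their LM probability
    """
    P = posteriors[np.asarray(sampled_idx)]  # (S, C): one row per segment
    S, C = P.shape
    counts = {}
    # soft count of every possible N-gram across consecutive segments
    for i in range(N - 1, S):
        for gram in np.ndindex(*([C] * N)):
            p = 1.0
            for k, c in enumerate(gram):
                p *= P[i - N + 1 + k, c]
            counts[gram] = counts.get(gram, 0.0) + p
    total = sum(counts.values())
    return -sum(p_lm * np.log(counts.get(g, 0.0) / total + eps)
                for g, p_lm in lm.items())
```

When the classifier's sampled predictions match the LM's $N$-gram statistics, the loss approaches the LM's entropy; in practice this sum would be computed with batched tensor operations rather than explicit enumeration.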

Next, we define the cost function that characterizes the intra-segment distribution matching as:

$$\mathcal{J}_{\mathrm{FS}}(\theta) = \sum_{t:\, b_{t+1} = 0} \big\| p_\theta(\cdot \mid \bar{x}_t) - p_\theta(\cdot \mid \bar{x}_{t+1}) \big\|_2^2, \qquad (2)$$

where the subscript "FS" stands for frame-wise smoothness and the sum runs over pairs of adjacent frames that belong to the same segment. The cost (2) encourages the predictions for adjacent frames within the same segment to be similar. It captures the strong intra-segment temporal structure of speech signals and complements the cost (1). Our final unsupervised cost function combines the inter-segment and intra-segment distribution matching via:

$$\mathcal{J}(\theta) = \mathcal{J}_{\mathrm{ODM}}(\theta) + \lambda\, \mathcal{J}_{\mathrm{FS}}(\theta), \qquad (3)$$

where $\lambda$ is a parameter controlling the trade-off between the two parts. We call the cost function $\mathcal{J}(\theta)$ the Segmental Empirical-ODM, as it captures the segmental structure through the inter-segment and intra-segment terms. To optimize this cost function, we sample a sequence of frame indexes $\tau$ at the beginning of each epoch and apply stochastic gradient descent (SGD) with momentum to update $\theta$. Note that in $\mathcal{J}_{\mathrm{ODM}}(\theta)$, the empirical average over all segments in $\bar{P}_\theta$ appears inside the logarithm. This makes stochastic gradient descent intrinsically biased if we also approximate this empirical average with a mini-batch average. To alleviate this effect, we use a large mini-batch size when estimating the stochastic gradients.
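The frame-wise smoothness term can be sketched as follows, assuming boundaries are encoded as a 0/1 array in which 1 marks a segment start; the combined objective would then be `j_odm + lam * frame_smoothness(...)` as in (3). The mean-squared-distance form here is one natural choice for the within-segment penalty and stands in for the paper's exact formulation.

```python
import numpy as np

def frame_smoothness(posteriors, boundaries):
    """Mean squared distance between the predictions of adjacent frames that
    lie in the same segment (boundaries[t] == 1 marks a segment start)."""
    boundaries = np.asarray(boundaries)
    same_seg = boundaries[1:] == 0  # frame t-1 and t share a segment
    if not same_seg.any():
        return 0.0
    diff = posteriors[1:] - posteriors[:-1]
    return float((diff[same_seg] ** 2).sum(axis=1).mean())
```

The penalty is zero exactly when the classifier outputs identical distributions within every segment, which is the intra-segment consistency that observation (ii) asks for.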

Note that our method directly optimizes a classifier that takes the raw acoustic feature vectors (e.g., MFCC features) and maps them into the output space. This differs from the previous work (Liu et al., 2018), which first performs clustering in the acoustic space and then maps the clusters into the output space using adversarial training; its performance is therefore upper-bounded by the purity of the initial clusters, since input frames of different phonemes may be mapped into the same cluster. In contrast, our algorithm is trained end-to-end without a separate clustering algorithm, which enables it to outperform the cluster-purity upper bound, as shown in the experiment section.

2.3 Segmentation boundary refinement using the classification model

In this section, we develop an approach to refining the estimated segmentation boundaries using a learned framewise phoneme classifier $p_\theta$. More formally, for each input utterance, we would like to infer the corresponding boundary sequence $B = (b_1, \ldots, b_T)$. We propose a simple yet effective MAP estimation strategy by recognizing the fact that the boundaries are determined by the frame-label sequence: $b_t = 1$ if and only if $z_t \neq z_{t-1}$. Therefore, we can perform a MAP estimate of the frame-label sequence $Z$ and then read off the boundaries from the estimate $\hat{Z}$. The MAP estimator of $Z$ can be expressed as (see Appendix A):

$$\hat{Z} = \arg\max_{Z} \sum_{t=1}^{T} \Big[ \ln p(z_t \mid z_{1:t-1}) + \ln p_\theta(z_t \mid \bar{x}_t) - \ln p(z_t) \Big]. \qquad (4)$$

Note that $p(z_t \mid z_{1:t-1})$ is the transition probability of the frame labels. Assuming that frame $t-1$ belongs to the $i$-th segment, we can express this transition probability as:

$$p(z_t \mid z_{1:t-1}) = \big(1 - p(b_t = 1)\big)\, \mathbb{1}[z_t = z_{t-1}] + p(b_t = 1)\, P_{\mathrm{LM}}(z_t \mid y_{1:i}), \qquad (5)$$

where $y_{1:i}$ denotes the previous phonemes that the sequence has traversed. The first term in (5) characterizes the probability that $z_t$ stays in the same phoneme segment as $z_{t-1}$, and the second term defines the probability that $z_t$ belongs to a new phoneme segment. Note that $P_{\mathrm{LM}}(z_t \mid y_{1:i})$ in (5) can be obtained from the phoneme language model, and the frame posterior $p_\theta(z_t \mid \bar{x}_t)$ is given by the classifier. It remains to estimate the boundary probability $p(b_t = 1)$. To obtain it, we leverage the work of Wang et al. (2017), which shows that the temporal structure of the gate signals in a gated RNN (GRNN) autoencoder is highly correlated with phoneme boundaries. Therefore, we apply a softmax function to the temporal gate signal to obtain $p(b_t = 1)$. With all this information, we substitute (5) into (4) and perform a beam search to solve (4) for an approximate MAP estimate $\hat{Z}$, from which we obtain the refined boundaries $\hat{b}_t = \mathbb{1}[\hat{z}_t \neq \hat{z}_{t-1}]$.
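The refinement step can be sketched as a beam search over frame-label sequences, scoring each extension with the frame posterior and the stay-or-switch transition of (5). For simplicity, this sketch uses a bigram LM for the new-segment term and omits the label-prior correction of (4); the names and the bigram simplification are our own assumptions.

```python
import numpy as np

def refine_labels(log_post, p_boundary, lm_bigram, beam=8):
    """Approximate MAP frame labels (and boundaries) via beam search.

    log_post: (T, C) array of log p_theta(z_t | x_t)
    p_boundary: (T,) probability that frame t starts a new segment
                (e.g., a softmax of the GRNN gate signal)
    lm_bigram: (C, C) array with lm_bigram[a, b] = P_LM(b | a)
    """
    T, C = log_post.shape
    hyps = sorted(((log_post[0, c], (c,)) for c in range(C)),
                  reverse=True)[:beam]
    for t in range(1, T):
        new = []
        for score, labels in hyps:
            prev = labels[-1]
            for c in range(C):
                # transition of eq. (5): stay in the current segment,
                # or open a new one weighted by the (bigram) LM
                stay = (1.0 - p_boundary[t]) * (c == prev)
                switch = p_boundary[t] * lm_bigram[prev, c]
                trans = stay + switch
                if trans <= 0:
                    continue
                new.append((score + np.log(trans) + log_post[t, c],
                            labels + (c,)))
        hyps = sorted(new, reverse=True)[:beam]
    labels = np.array(hyps[0][1])
    boundaries = np.concatenate([[1], (labels[1:] != labels[:-1]).astype(int)])
    return labels, boundaries
```

Reading the boundaries off the decoded labels is exactly the $\hat{b}_t = \mathbb{1}[\hat{z}_t \neq \hat{z}_{t-1}]$ step described above.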

2.4 Alternating Training Algorithm

Our unsupervised learning algorithm alternates between the above two steps: estimating $\theta$ for a given set of boundaries, and refining the boundaries for a given $\theta$. The overall procedure is summarized in Algorithm 1. We initialize the algorithm by thresholding the temporal update gate activation in Wang et al. (2017) to obtain an initial rough estimate of the boundaries. After the training converges,³ we could apply the unsupervised HMM training technique discussed in Section 3 to further boost the performance. Note that although the training process requires boundary estimation, it is not necessary at the testing stage, because the learned classifier $p_\theta$ can be used in standard speech decoders just as supervised models are.

³ We observe in our experiments that two iterations are sufficient for convergence.

Input: Phoneme language model $P_{\mathrm{LM}}$, training data $\mathcal{D}$, initial boundaries obtained using the technique proposed in Wang et al. (2017).
Output: Model parameter $\theta$
1 Initialize the parameters $\theta$.
2 while not converged do
3        Given the set of boundaries, obtain a new $\theta$ by optimizing (3).
4        Given the model parameter $\theta$, obtain a new estimate of the boundaries by optimizing (4).
5 end while
Algorithm 1 Training Algorithm
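Algorithm 1 reduces to a small generic driver that alternates the two sub-routines; here `fit_classifier` and `refine_boundaries` are placeholders for the optimization of (3) and (4) described above.

```python
def alternate_training(fit_classifier, refine_boundaries, init_boundaries,
                       n_iters=2):
    """Algorithm 1 as a generic driver: re-fit the classifier for the fixed
    boundaries (optimizing (3)), then re-estimate the boundaries for the
    fixed classifier (optimizing (4)), and repeat."""
    boundaries = init_boundaries
    model = None
    for _ in range(n_iters):
        model = fit_classifier(boundaries)      # step 3: optimize (3)
        boundaries = refine_boundaries(model)   # step 4: optimize (4)
    return model, boundaries
```

The default of two iterations mirrors the observation (footnote 3 above) that two rounds suffice for convergence in the experiments.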

2.5 Unsupervised Model Selection

Figure 2: Self-validation metric. (a) The learning curves of the self-validation loss and the validation FER. (b) The self-validation loss and the validation FER for different values of $\lambda$ in (3).

Since there are no labeled data during the training process, we need an unsupervised self-validation metric to perform model selection. We propose to use the value of the loss function (1) on a held-out validation set (containing only input features) for model selection. This self-validation loss gives us an estimate of which model configuration is better, and is used to (a) determine when to stop training and (b) select the best hyper-parameters. To validate the effectiveness of our self-validation loss, we show the learning curves of the self-validation loss and the validation frame error rate (FER) in Figure 2. We observe that the self-validation loss aligns well with the true validation error. Furthermore, in Figure 2 we plot the self-validation loss and the validation FER for different values of $\lambda$ in (3), which shows that the two metrics are highly correlated. These results demonstrate that the self-validation loss can be effectively used to select a good model.

3 Unsupervised HMM Training

To further improve the performance of the proposed unsupervised speech recognition system, we explore the semi-supervised hidden Markov model (HMM) training strategy (Zavaliagkos et al., 1998; Kemp and Waibel, 1999) that has commonly been used in speech recognition. Semi-supervised HMM training is an effective technique in which a seed model trained on a relatively small amount of labeled speech data provides labels for a larger amount of non-transcribed speech data for iterative model refinement. A major difference of the HMM training strategy used in this work compared to the semi-supervised setting is that we use the transcripts generated by our unsupervised speech recognition system (i.e., predicted labels for the 3696 TIMIT training utterances) to bootstrap the training of the HMM-based models. Therefore, the training of the HMM models in this work does not require any manually provided supervision. The training of the HMM-based speech recognition models follows the standard recipes in the Kaldi speech recognition toolkit (Povey et al., 2011). We experimented with monophone and triphone models with MFCC features as input, as well as the more advanced speaker adaptive training (SAT) (Matsoukas et al., 1997) approach with feature-space maximum likelihood linear regression (fMLLR) features (Gales et al.) as input.

4 Experiments

4.1 Experiment setting

We perform experiments on the TIMIT dataset, which contains 6300 prompted English speech sentences. The preparation of the training and test sets follows the standard protocol of the TIMIT dataset. The phoneme transcriptions of these utterances are manually segmented and labeled with 61 distinct phoneme classes. These phoneme labels are mapped to 39 phoneme classes for scoring the phone error rate (Lee and Hon, 1989). We use 39-dimensional feature vectors consisting of 13 mel-frequency cepstral coefficients (MFCCs) plus their delta and acceleration features, extracted with a 25 ms Hamming window at a 10 ms interval. The classifier is modeled by a fully connected neural network with one hidden layer of ReLU units. The input to the neural network is a concatenation of the frames within a context window. We follow the default hyper-parameters in Wang et al. (2017) to estimate the phoneme boundaries, which are used to initialize our algorithm. The optimization of (3) is performed with momentum SGD with a fixed schedule of increasing batch size from 5000 to 20000, and $\lambda$ in (3) is chosen by the self-validation procedure of Section 2.5. We use both frame error rate (FER) and phone error rate (PER) as our evaluation metrics. Details of the experiment setting and other hyper-parameters can be found in Appendix B.

4.2 Baseline Methods

Adversarial Mapping

The first baseline we consider is the work by Liu et al. (2018), which learns an unsupervised embedding with a sequence-to-sequence autoencoder followed by k-means clustering. Each cluster is then mapped to a phoneme by adversarial training between the cluster sequences and the phoneme sequences. The phoneme boundaries are given by a supervised oracle.

Cluster Purity

The accuracy of Adversarial Mapping (Liu et al., 2018) is upper-bounded by the cluster purity, i.e., the frame accuracy obtained when assigning all the frames in each cluster to its most frequent phoneme. It is a supervised baseline, since it relies on the phoneme labels. We show the cluster purity for 1000 clusters, which is the largest number of clusters used by Liu et al. (2018).

Supervised Neural Network

We train a supervised neural network with the same architecture as our unsupervised model with standard cross-entropy loss.

Supervised RNN Transducer

This is one of the state-of-the-art supervised methods, which learns a BiLSTM RNN Transducer with supervised learning (Graves et al., 2013).

Language Model Matching Non-Matching
Evaluation Metric FER* FER PER FER PER
Supervised Methods
RNN Transducer (Graves et al., 2013)
Supervised Neural Network
Cluster Purity (1000) (Liu et al., 2018)
Unsupervised Methods
Adversarial Mapping (Liu et al., 2018)
Our Model
Table 1: Phoneme classification results when phoneme boundaries are given by a supervised oracle.

4.3 Experiment Results

Unsupervised speech recognition with oracle boundary

In our proposed unsupervised learning algorithm, we use the cost function (3) to train the classifier, which differs from the cross-entropy cost in supervised learning. To examine the effect of replacing cross entropy with this new unsupervised cost function, we first conduct experiments with the oracle phoneme boundaries. This setting also allows us to compare our method to the one in Liu et al. (2018), which assumes oracle phoneme boundaries. Specifically, we consider two settings. In the first setting, we follow the standard TIMIT partition to divide the training data into a training set and a validation set of 3696 and 400 utterances, respectively. We use the phoneme transcriptions of the 3696 utterances to train our language model and call this setting the "matching language model". We then use the learned $P_{\mathrm{LM}}$ together with the input utterances to train our model by minimizing (3). In the second setting, we divide the data into disjoint parts, train our language model on the phoneme transcriptions of one part, and use the learned $P_{\mathrm{LM}}$ with the input utterances of the other part to train our model by minimizing (3). In this setting, the training corpus for $P_{\mathrm{LM}}$ does not overlap with the input training utterances, and we call it the "non-matching language model". Note that both settings are unsupervised, since we do not use any phoneme label in our training process; the only difference is the source of the language model. The results of our algorithm and the baselines are summarized in Table 1. Although we are still far from the state of the art, in the matching-LM setting the performance of our algorithm approaches that of the supervised system with the same model architecture. This is an encouraging result, showing that replacing the supervised cross-entropy loss with our unsupervised learning cost does not degrade the performance much.

On the other hand, when we use the non-matching LM, the gap becomes larger. We attribute this to the discrepancy of the output distributions between the two sets and the reduced training corpus for $P_{\mathrm{LM}}$. We believe such a discrepancy could be alleviated by using a large-scale dataset. Other than the standard FER and PER, we additionally report FER*, where the starting and ending silences are removed, following the setting in Liu et al. (2018). We observe that our approach significantly outperforms the unsupervised Adversarial Mapping method in Liu et al. (2018), and even outperforms the Cluster Purity (the supervised upper bound in Liu et al. (2018)). This result is not surprising, since the clustering does not exploit the output distribution and may group inputs of different phonemes into the same cluster.

Language Model Matching Non-Matching
Evaluation Metric FER PER FER PER
Supervised Methods
RNN Transducer (Graves et al., 2013)
Supervised Neural Network
Unsupervised Methods
Our Model: 1st iteration
Our Model: 2nd iteration
Our Model: 2nd iteration + HMM (mono)
Our Model: 2nd iteration + HMM (tri)
Our Model: 2nd iteration + HMM (tri + SAT)
Table 2: Results for fully unsupervised phoneme classification.

Fully unsupervised speech recognition

We now consider the fully unsupervised setting, where only the input speech features and a language model are given. The phoneme boundaries are not given and have to be estimated in an unsupervised manner using Algorithm 1. We show the quality of the learned model after each iteration of the learning process in Table 2. We observe that the iterative process improves the results by a large margin, especially in the non-matching-LM case, significantly lowering both the FER and the PER. This demonstrates that our boundary-refining process produces a better set of boundaries, which greatly improves the output distribution matching. Moreover, we also report the results of unsupervised HMM training, which further improves the PER. In the matching-LM setting, HMM training with monophone, triphone, and speaker adaptive training (SAT) models improves the PER by similar amounts. In the non-matching-LM setting, HMM training significantly improves the PER, and SAT yields an additional improvement. Overall, our best hybrid system achieves a PER of 41.6%, leaving only a modest gap to the supervised system of the same architecture.

Evaluation Metric Recall Precision F-score R-value
Dusan and Rabiner (2006) 75.2 66.8 70.8
Qiao et al. (2008) 77.5 76.3 76.9
Lee and Glass (2012) 76.2 76.4 76.3
Rasanen (2014) 74.0 70.0 73.0 76.0
Hoang and Wang (2015) 78.2 81.1
Michel et al. (2016) 74.8 81.9 78.2 80.1
Wang et al. (2017) 78.2 82.2 80.1 82.6
Ours refined boundaries 80.9 84.3 82.6 84.8
Table 3: Results for unsupervised phoneme boundary segmentation.

Unsupervised Phoneme Segmentation

To understand how much our proposed boundary refinement method in Section 2.3 improves the segmentation quality, we follow the setting of previous works and report in Table 3 the recall, precision, F-score, and R-value with a 20-ms tolerance window on TIMIT's training set (Scharenborg et al., 2010; Versteegh et al., 2016; Rasanen, 2014). We compare our results (obtained with the matching LM) with several unsupervised phoneme segmentation methods (Dusan and Rabiner, 2006; Qiao et al., 2008; Lee and Glass, 2012; Rasanen, 2014; Hoang and Wang, 2015; Michel et al., 2016; Wang et al., 2017). Our refined segmentation significantly improves over the initial boundaries generated by Wang et al. (2017) and also outperforms the other baselines. This result also confirms the contribution of boundary refinement to the much improved phoneme recognition performance of the 2nd iteration relative to the 1st iteration (see "Our Model: 2nd iteration" in Table 2). However, we emphasize that our method is designed for unsupervised speech recognition rather than unsupervised phoneme segmentation: estimating the segmentation boundaries only serves as an auxiliary task that enables the unsupervised learning of the recognition model. In the testing stage, there is no need to estimate the segmentation boundaries; instead, our trained model can be used directly with a speech decoder, just as any supervised recognition model would be.
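For reference, the boundary metrics of Table 3 can be computed as sketched below, using a greedy one-to-one matching under the tolerance window. The greedy matching is a simplification of the official evaluation tooling, and the R-value is computed from the hit rate (recall) and the over-segmentation rate as commonly defined for this metric.

```python
import math

def boundary_scores(ref, hyp, tol=0.02):
    """Boundary detection scores with a +/- tol (seconds) tolerance window.

    ref, hyp: lists of boundary times in seconds. Each reference boundary
    may match at most one unused hypothesized boundary within tol seconds.
    Returns (precision, recall, F-score, R-value)."""
    ref, hyp = sorted(ref), sorted(hyp)
    used = [False] * len(hyp)
    hits = 0
    for r in ref:
        for j, h in enumerate(hyp):
            if not used[j] and abs(h - r) <= tol:
                used[j] = True
                hits += 1
                break
    prec = hits / len(hyp) if hyp else 0.0
    rec = hits / len(ref) if ref else 0.0
    f = 2 * prec * rec / (prec + rec) if prec + rec > 0 else 0.0
    # R-value from the hit rate (recall) and the over-segmentation rate
    os_rate = len(hyp) / len(ref) - 1 if ref else 0.0
    r1 = math.sqrt((1 - rec) ** 2 + os_rate ** 2)
    r2 = (-os_rate + rec - 1) / math.sqrt(2)
    rval = 1 - (abs(r1) + abs(r2)) / 2
    return prec, rec, f, rval
```

A perfect segmentation scores 1.0 on all four metrics, and over-segmenting inflates recall while the R-value penalizes it, which is why Table 3 reports both.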

Further analysis

We include some further experiments and analysis in Appendix C, where we show the importance of the frame smoothness term in (3). We also compare the performance of our unsupervised algorithm to supervised learning with different amounts of labeled training data.

5 Related Work

Unsupervised sequence-to-sequence learning

Recently, unsupervised sequence-to-sequence learning has achieved great success on several problems. Liu et al. (2017) showed that it is possible to learn a sequence classifier without any labeled data by exploiting the sequential structure of the output with an unsupervised cost function named Empirical-ODM. Artetxe et al. (2018) and Lample et al. (2018) showed that unsupervised neural machine translation (uNMT) systems can be built by exploiting cross-lingual alignments and an adversarial structure, without any form of parallel information. These successes in unsupervised sequence-to-sequence learning across various applications shed light on building our fully unsupervised speech recognition system. In particular, our work extends the Empirical-ODM of Liu et al. (2017) to problems with segmental structures.

Unsupervised speech segmentation

One line of unsupervised segmentation methods designs robust acoustic features that are likely to remain stable within a phoneme and to change at phoneme boundaries (Esposito and Aversano, 2005; Hoang and Wang, 2015; Khanagha et al., 2014; Rasanen et al., 2011; Michel et al., 2016; Wang et al., 2017). Another line of research uses a simpler segmentation method as an initialization, and jointly trains the segmentation and acoustic models for phonemes or words (Kamper et al., 2015; Glass, 2003; Siu et al., 2014; Lee and Glass, 2012). Qiao et al. (2008) use dynamic programming to derive the optimal segmentation, but require the number of segments to be known and are therefore not fully unsupervised. In Wang et al. (2017), the authors use the update gate of a GRNN autoencoder to discover the phoneme boundaries.

Unsupervised spoken term discovery

Recently, the discovery of acoustic tokens, including subword and word units, has become a popular research topic (Dunbar et al., 2017; Versteegh et al., 2016; Burget et al.). The term "spoken term discovery" covers lexicon discovery, word segmentation, and subword matching (Dunbar et al., 2017). The standard approaches segment audio signals that are acoustically similar and cluster the obtained segments (Lee and Glass, 2012; Glass, 2012; Park and Glass, 2008; Driesen et al., 2012). Walter et al. (2013) use the discovered unit index sequence as the transcription for acoustic model training, similar to the HMM training in Section 3. Kamper et al. (2017) iterate between the clustering and segmentation steps. Ondel et al. (2016) improve upon previous methods by replacing Gibbs sampling with variational inference, and Ondel et al. (2017) further improve the results by including a bigram language model. The effectiveness of these approaches has been demonstrated on query-by-example spoken term detection or by computing the normalized mutual information between the self-discovered units and the actual labels. Overall, these methods differ from ours in that they segment and cluster the raw speech signals into self-discovered units but do not recognize them as phoneme or word labels directly. More recently, Chung et al. (2018) showed that unsupervised spoken word classification is possible by using adversarial cross-modal alignment similar to that in uNMT systems.

Unsupervised speech recognition with oracle segmentation

There have been several attempts (Liu et al., 2018; Chen et al., 2018) at building an unsupervised speech recognition model inspired by the success of uNMT. These methods first learn an embedding from the acoustic data and then map the clustered embeddings to the output space by either adversarial training or iterative mapping. In contrast, our approach learns a neural network model that directly maps the raw acoustic features into the output space by optimizing the Segmental Empirical-ODM cost, and it outperforms the upper bound of the cluster-based approaches above. Furthermore, all the methods in Liu et al. (2018) and Chen et al. (2018) assume that the phoneme boundaries are given by a supervised oracle. In contrast, our method iteratively estimates the boundaries without any labeled data, making it fully unsupervised.

6 Conclusion

We have developed a fully unsupervised learning algorithm for phoneme recognition. The algorithm alternates between two steps: (i) learning a phoneme classifier for a given set of phoneme segmentation boundaries, and (ii) refining the phoneme boundaries based on a given classifier. For the first step, we developed a novel unsupervised cost function named Segmental Empirical-ODM by generalizing the work of Liu et al. (2017) to segmental structures. For the second step, we developed an approximate MAP approach to refining the boundaries obtained from Wang et al. (2017). Our experimental results on the TIMIT phoneme recognition task demonstrate the success of a fully unsupervised phoneme recognition system. Although the fully unsupervised system still lags far behind state-of-the-art supervised methods (e.g., a supervised RNN transducer), we show that with oracle boundaries the performance of our algorithm can approach that of a supervised system with the same model architecture. This demonstrates the potential of our method if, in future work, we can further improve the accuracy of boundary estimation. We further note that the techniques proposed in this paper, although evaluated on speech recognition, can be applied to other sequence recognition problems where the source and target sequences have different lengths and labels are unavailable or expensive to obtain.
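The two-step alternation can be sketched as follows. Note that `train_classifier` and `refine_boundaries` are hypothetical stand-ins for the Segmental Empirical-ODM training and the approximate MAP refinement; only the control flow is taken from the paper.

```python
def unsupervised_asr(utterances, initial_boundaries, train_classifier,
                     refine_boundaries, num_iterations=3):
    """Alternate between (i) training a phoneme classifier for the
    current boundaries and (ii) refining the boundaries with it.

    train_classifier and refine_boundaries are placeholders for the
    Segmental Empirical-ODM optimization and the approximate MAP step.
    """
    boundaries = initial_boundaries
    classifier = None
    for _ in range(num_iterations):
        # Step (i): fit the classifier given the current segmentation.
        classifier = train_classifier(utterances, boundaries)
        # Step (ii): re-estimate the boundaries given the classifier.
        boundaries = refine_boundaries(utterances, classifier, boundaries)
    return classifier, boundaries
```

In practice the loop would terminate when the boundaries stop changing rather than after a fixed number of iterations.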

References

  • Liu et al. (2017) Yu Liu, Jianshu Chen, and Li Deng. Unsupervised sequence classification using sequential output statistics. In Advances in Neural Information Processing Systems, pages 3550–3559, 2017.
  • Wang et al. (2017) Yu-Hsuan Wang, Cheng-Tao Chung, and Hung-yi Lee. Gate activation signal analysis for gated recurrent neural networks and its correlation with phoneme boundaries. arXiv preprint arXiv:1703.07588, 2017.
  • Xiong et al. (2016) Wayne Xiong, Jasha Droppo, Xuedong Huang, Frank Seide, Mike Seltzer, Andreas Stolcke, Dong Yu, and Geoffrey Zweig. Achieving human parity in conversational speech recognition. arXiv preprint arXiv:1610.05256, 2016.
  • Hinton et al. (2012) Geoffrey Hinton, Li Deng, Dong Yu, George E Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal processing magazine, 29(6):82–97, 2012.
  • Dahl et al. (2012) George E Dahl, Dong Yu, Li Deng, and Alex Acero. Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition. IEEE Transactions on audio, speech, and language processing, 20(1):30–42, 2012.
  • Graves et al. (2013) Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recurrent neural networks. In Acoustics, speech and signal processing (icassp), 2013 ieee international conference on, pages 6645–6649. IEEE, 2013.
  • Graves et al. (2006) Alex Graves, Santiago Fernández, Faustino Gomez, and Jürgen Schmidhuber. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In Proceedings of the 23rd international conference on Machine learning, pages 369–376. ACM, 2006.
  • Graves (2012) Alex Graves. Sequence transduction with recurrent neural networks. arXiv preprint arXiv:1211.3711, 2012.
  • Artetxe et al. (2018) Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. Unsupervised neural machine translation. In Proc. ICLR, 2018.
  • Lample et al. (2018) Guillaume Lample, Ludovic Denoyer, and Marc’Aurelio Ranzato. Unsupervised machine translation using monolingual corpora only. In Proc. ICLR, 2018.
  • Liu et al. (2018) Da-Rong Liu, Kuan-Yu Chen, Hung-yi Lee, and Lin-Shan Lee. Completely unsupervised phoneme recognition by adversarially learning mapping relationships from audio embeddings. In Interspeech 2018, 19th Annual Conference of the International Speech Communication Association, Hyderabad, India, 2-6 September 2018., pages 3748–3752, 2018.
  • Zavaliagkos et al. (1998) George Zavaliagkos, Man-Hung Siu, Thomas Colthurst, and Jayadev Billa. Using untranscribed training data to improve performance. In Fifth International Conference on Spoken Language Processing, 1998.
  • Kemp and Waibel (1999) Thomas Kemp and Alex Waibel. Unsupervised training of a speech recognizer: Recent experiments. In Sixth European Conference on Speech Communication and Technology, 1999.
  • Nallasamy et al. (2012) Udhyakumar Nallasamy, Florian Metze, and Tanja Schultz. Active learning for accent adaptation in automatic speech recognition. In Spoken Language Technology Workshop (SLT), 2012 IEEE, pages 360–365. IEEE, 2012.
  • Povey et al. (2011) Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Hannemann, Petr Motlicek, Yanmin Qian, Petr Schwarz, et al. The kaldi speech recognition toolkit. In IEEE 2011 workshop on automatic speech recognition and understanding, number EPFL-CONF-192584. IEEE Signal Processing Society, 2011.
  • Matsoukas et al. (1997) Spyros Matsoukas, Rich Schwartz, Hubert Jin, and Long Nguyen. Practical implementations of speaker-adaptive training. In DARPA Speech Recognition Workshop. Citeseer, 1997.
  • Gales (1998) Mark JF Gales. Maximum likelihood linear transformations for HMM-based speech recognition. Computer Speech & Language, 12(2):75–98, 1998.

  • Lee and Hon (1989) K-F Lee and H-W Hon. Speaker-independent phone recognition using hidden markov models. IEEE Transactions on Acoustics, Speech, and Signal Processing, 37(11):1641–1648, 1989.
  • Dusan and Rabiner (2006) Sorin Dusan and Lawrence Rabiner. On the relation between maximum spectral transition positions and phone boundaries. In Ninth International Conference on Spoken Language Processing, 2006.
  • Qiao et al. (2008) Yu Qiao, Naoya Shimomura, and Nobuaki Minematsu. Unsupervised optimal phoneme segmentation: Objectives, algorithm and comparisons. In Acoustics, Speech and Signal Processing, 2008. ICASSP 2008. IEEE International Conference on, pages 3989–3992. IEEE, 2008.
  • Lee and Glass (2012) Chia-ying Lee and James Glass. A nonparametric bayesian approach to acoustic model discovery. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1, pages 40–49. Association for Computational Linguistics, 2012.
  • Rasanen (2014) Okko Rasanen. Basic cuts revisited: Temporal segmentation of speech into phone-like units with statistical learning at a pre-linguistic level. In Proceedings of the Annual Meeting of the Cognitive Science Society, volume 36, 2014.
  • Hoang and Wang (2015) Dac-Thang Hoang and Hsiao-Chuan Wang. Blind phone segmentation based on spectral change detection using legendre polynomial approximation. The Journal of the Acoustical Society of America, 137(2):797–805, 2015.
  • Michel et al. (2016) Paul Michel, Okko Räsänen, Roland Thiolliere, and Emmanuel Dupoux. Blind phoneme segmentation with temporal prediction errors. arXiv preprint arXiv:1608.00508, 2016.
  • Scharenborg et al. (2010) Odette Scharenborg, Vincent Wan, and Mirjam Ernestus. Unsupervised speech segmentation: An analysis of the hypothesized phone boundaries. The Journal of the Acoustical Society of America, 127(2):1084–1095, 2010.
  • Versteegh et al. (2016) Maarten Versteegh, Xavier Anguera, Aren Jansen, and Emmanuel Dupoux. The zero resource speech challenge 2015: Proposed approaches and results. Procedia Computer Science, 81:67–72, 2016.
  • Esposito and Aversano (2005) Anna Esposito and Guido Aversano. Text independent methods for speech segmentation. In Nonlinear Speech Modeling and Applications, pages 261–290. Springer, 2005.
  • Khanagha et al. (2014) Vahid Khanagha, Khalid Daoudi, Oriol Pont, and Hussein Yahia. Phonetic segmentation of speech signal using local singularity analysis. Digital Signal Processing, 35:86–94, 2014.
  • Rasanen et al. (2011) Okko Rasanen, Unto Laine, and Toomas Altosaar. Blind segmentation of speech using non-linear filtering methods. In Speech Technologies. InTech, 2011.
  • Kamper et al. (2015) Herman Kamper, Aren Jansen, and Sharon Goldwater. Fully unsupervised small-vocabulary speech recognition using a segmental bayesian model. In Sixteenth Annual Conference of the International Speech Communication Association, 2015.
  • Glass (2003) James R Glass. A probabilistic framework for segment-based speech recognition. Computer Speech & Language, 17(2-3):137–152, 2003.
  • Siu et al. (2014) Man-hung Siu, Herbert Gish, Arthur Chan, William Belfield, and Steve Lowe. Unsupervised training of an hmm-based self-organizing unit recognizer with applications to topic classification and keyword discovery. Computer Speech & Language, 28(1):210–223, 2014.
  • Dunbar et al. (2017) Ewan Dunbar, Xuan Nga Cao, Juan Benjumea, Julien Karadayi, Mathieu Bernard, Laurent Besacier, Xavier Anguera, and Emmanuel Dupoux. The zero resource speech challenge 2017. In Automatic Speech Recognition and Understanding Workshop (ASRU), 2017 IEEE, pages 323–330. IEEE, 2017.
  • Burget et al. (2016) Lukáš Burget, Sanjeev Khudanpur, Najim Dehak, Jan Trmal, Reinhold Haeb-Umbach, Graham Neubig, Shinji Watanabe, Daichi Mochihashi, Takahiro Shinozaki, Ming Sun, et al. Building speech recognition systems from untranscribed data: Report from JHU workshop 2016.
  • Glass (2012) James Glass. Towards unsupervised speech processing. In Information Science, Signal Processing and their Applications (ISSPA), 2012 11th International Conference on, pages 1–4. IEEE, 2012.
  • Park and Glass (2008) Alex S Park and James R Glass. Unsupervised pattern discovery in speech. IEEE Transactions on Audio, Speech, and Language Processing, 16(1):186–197, 2008.
  • Driesen et al. (2012) Joris Driesen et al. Fast word acquisition in an nmf-based learning framework. In Acoustics, Speech and Signal Processing (ICASSP), 2012 IEEE International Conference on, pages 5137–5140. IEEE, 2012.
  • Walter et al. (2013) Oliver Walter, Timo Korthals, Reinhold Haeb-Umbach, and Bhiksha Raj. A hierarchical system for word discovery exploiting dtw-based initialization. In Automatic Speech Recognition and Understanding (ASRU), 2013 IEEE Workshop on, pages 386–391. IEEE, 2013.
  • Kamper et al. (2017) Herman Kamper, Karen Livescu, and Sharon Goldwater. An embedded segmental k-means model for unsupervised segmentation and clustering of speech. In Automatic Speech Recognition and Understanding Workshop (ASRU), 2017 IEEE, pages 719–726. IEEE, 2017.
  • Ondel et al. (2016) Lucas Ondel, Lukáš Burget, and Jan Černockỳ. Variational inference for acoustic unit discovery. Procedia Computer Science, 81:80–86, 2016.
  • Ondel et al. (2017) Lucas Ondel, Lukaš Burget, Jan Černockỳ, and Santosh Kesiraju. Bayesian phonotactic language model for acoustic unit discovery. In Acoustics, Speech and Signal Processing (ICASSP), 2017 IEEE International Conference on, pages 5750–5754. IEEE, 2017.
  • Chung et al. (2018) Yu-An Chung, Wei-Hung Weng, Schrasing Tong, and James Glass. Unsupervised cross-modal alignment of speech and text embedding spaces. arXiv preprint arXiv:1805.07467, 2018.
  • Chen et al. (2018) Yi-Chen Chen, Chia-Hao Shen, Sung-Feng Huang, and Hung-yi Lee. Towards unsupervised automatic speech recognition trained by unaligned speech and text only. arXiv preprint arXiv:1803.10952, 2018.

Supplementary Material

Appendix A Derivation of the MAP estimate for the segmentation boundaries

In this appendix, we derive the MAP estimate of the segmentation boundaries $b$ given an input utterance sequence $x$. Specifically, we have

$$\hat{b} = \arg\max_{b} p(b \mid x) = \arg\max_{b} \frac{p(x \mid b)\, p(b)}{p(x)} \overset{(a)}{\approx} \arg\max_{b} \Big[\prod_{i} p\big(x^{(i)} \mid b\big)\Big] \frac{p(b)}{p(x)} \overset{(b)}{=} \arg\max_{b} \Big[\prod_{i} p\big(x^{(i)} \mid b\big)\Big]\, p(b), \tag{6}$$

where $x^{(i)}$ denotes the $i$-th segment of $x$ under the boundaries $b$; in step (a) we approximate the likelihood $p(x \mid b)$ by its factored form over segments, and in step (b) we drop the constant term $p(x)$, which is independent of $b$.
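As an illustration of how such a MAP estimate over boundaries could be computed, here is a minimal dynamic-programming sketch. The per-frame classifier log-probabilities, the segment-length prior, and the segment scoring rule are all illustrative assumptions, not the paper's exact procedure.

```python
def map_boundaries(frame_logp, max_seg_len, seg_len_logprior):
    """Approximate MAP segmentation by dynamic programming (a sketch).

    frame_logp[t][k]  : classifier log-prob of phoneme k at frame t.
    seg_len_logprior[l]: log-prior of a segment of length l (stands in
                         for the boundary prior p(b)).
    A segment's likelihood is scored by its best single phoneme label
    summed over its frames; the DP maximizes the total over all
    segmentations of the utterance.
    """
    T = len(frame_logp)
    K = len(frame_logp[0])
    NEG = float("-inf")
    best = [NEG] * (T + 1)   # best[t]: best score of segmenting frames [0, t)
    back = [0] * (T + 1)     # back[t]: start of the last segment ending at t
    best[0] = 0.0
    for t in range(1, T + 1):
        for l in range(1, min(max_seg_len, t) + 1):
            s = t - l
            if best[s] == NEG:
                continue
            # Best single-label log-likelihood of the segment [s, t).
            seg = max(sum(frame_logp[u][k] for u in range(s, t))
                      for k in range(K))
            cand = best[s] + seg + seg_len_logprior[l]
            if cand > best[t]:
                best[t] = cand
                back[t] = s
    # Recover the boundary positions by backtracking.
    bounds = [T]
    while bounds[-1] > 0:
        bounds.append(back[bounds[-1]])
    return list(reversed(bounds))
```

For example, with frames whose scores favor one phoneme in the first half and another in the second half, the DP places a boundary at the midpoint.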

Appendix B Detailed Experiment setting

We perform experiments on the TIMIT dataset, which contains 6300 prompted English speech utterances. The phoneme transcriptions of these utterances are manually segmented and labeled with a lexicon of 61 distinct phoneme classes. We collapse the 61 phoneme classes into 48 phone classes and train the language model on the validation set, which is later used to train our main algorithm. For scoring the phone error rate, these 48 phoneme classes are mapped to 39 phoneme classes [Lee and Hon, 1989].
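The 48-to-39 collapse for scoring is a simple table lookup. The fragment below shows only a few entries of the standard Lee and Hon (1989) mapping (e.g., ao folds into aa, ix into ih); the full table has 48 entries.

```python
# Illustrative fragment of the 48-to-39 phone mapping used for scoring
# (Lee and Hon, 1989); only a handful of the 48 entries are shown.
MAP_48_TO_39 = {
    "ao": "aa",  # ao folds into aa
    "ax": "ah",  # ax folds into ah
    "ix": "ih",  # ix folds into ih
    "el": "l",   # el folds into l
    "zh": "sh",  # zh folds into sh
}

def map_for_scoring(phones, table=MAP_48_TO_39):
    """Collapse a phone sequence to the 39-class set before computing PER.

    Phones absent from the table map to themselves.
    """
    return [table.get(p, p) for p in phones]
```

Both the reference and the hypothesis sequences are collapsed this way before the error rate is computed.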

The 39-dimensional feature vectors consist of 13 mel-frequency cepstral coefficients (MFCCs) together with their delta and acceleration features, extracted with a 25 ms Hamming window at a 10 ms interval. The classifier is modeled by a fully connected neural network with one hidden layer of ReLU units. The input to the neural network is a concatenation of frames within a fixed-size context window, where the first or last frame is repeated whenever the window extends past the start or end of the utterance.
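The edge-repeating context window described above can be sketched in NumPy as follows; the window size `w` is a hyperparameter, and the feature extraction itself (MFCCs plus deltas) is assumed to have happened upstream.

```python
import numpy as np

def stack_context(features, w):
    """Concatenate each frame with its w left and w right neighbours.

    features: (T, d) array of per-frame acoustic features.
    The first/last frame is repeated at the utterance edges, as the
    text describes, so every frame gets a full (2*w + 1)-frame window.
    Returns a (T, (2*w + 1) * d) array.
    """
    T, d = features.shape
    padded = np.concatenate([np.repeat(features[:1], w, axis=0),
                             features,
                             np.repeat(features[-1:], w, axis=0)], axis=0)
    return np.stack([padded[t:t + 2 * w + 1].reshape(-1) for t in range(T)])
```

For 39-dimensional features and a window of w frames on each side, the classifier input has (2*w + 1) * 39 dimensions.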

The optimization of (3) is performed with momentum SGD (momentum 0.9), with learning rate decay and a fixed schedule of increasing batch size and decreasing softmax temperature. The schedule is as follows: the first 200 epochs with batch size 500 and temperature 1.0; the next 300 epochs with batch size 5000 and temperature 0.9; followed by 300 epochs with batch size 10000 and temperature 0.8; and finally 300 epochs with batch size 20000 and temperature 0.7. Whenever the batch size is increased, we reset the learning rate to its initial value. The schedule was determined by self-validation and was not extensively tuned during the experiments.
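The fixed schedule can be written as a small lookup; the stage values below are copied from the text, while the optimizer integration (how the learning-rate reset is applied) is left out.

```python
# Training schedule from the text: (epochs, batch_size, softmax_temperature).
SCHEDULE = [
    (200, 500, 1.0),
    (300, 5000, 0.9),
    (300, 10000, 0.8),
    (300, 20000, 0.7),
]

def schedule_at(epoch, schedule=SCHEDULE):
    """Return (batch_size, temperature, lr_reset) for a given epoch.

    lr_reset is True on the first epoch of every stage after the first,
    i.e., whenever the batch size is increased, the learning rate is
    reset to its initial value.
    """
    start = 0
    for epochs, bs, temp in schedule:
        if epoch < start + epochs:
            return bs, temp, (epoch == start and start > 0)
        start += epochs
    raise ValueError("epoch beyond the end of the schedule")
```

The total schedule spans 1100 epochs across the four stages.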

In our experiments, we chose N = 5 for the N-gram in (1), and for computational reasons we only consider the 10000 most frequent 5-grams. We do not observe any noticeable performance drop from this truncation. In the 5-gram language model, 69553 of all possible 5-grams are non-zero, and the top 10000 5-grams account for almost half of the total probability mass. To sample frames within a segment, we use a standard truncated normal distribution, with some necessary scaling and rounding. The truncation ensures that the sampled frames lie within the correct segment. This distribution can also be replaced by a uniform distribution.
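A minimal sketch of drawing one frame index from a segment with a truncated normal; the choice of mean (segment midpoint) and standard deviation below is an illustrative assumption, and rejection sampling plays the role of the truncation.

```python
import random

def sample_frame(start, end, rng=random):
    """Sample a frame index from [start, end) via a truncated normal.

    The mean is placed at the segment midpoint and the standard
    deviation at a quarter of the segment length (both assumptions for
    illustration). Rejection sampling implements the truncation, so the
    returned frame is guaranteed to lie inside the segment. A uniform
    draw over [start, end) would be a drop-in replacement.
    """
    mu = (start + end - 1) / 2.0
    sigma = max((end - start) / 4.0, 1e-6)
    while True:
        x = int(round(rng.gauss(mu, sigma)))  # scaling and rounding
        if start <= x < end:
            return x
```

For very short segments the rejection loop still terminates quickly, since most of the probability mass lies inside the segment.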

We randomly sample 10000 continuous frames to optimize (2); a new sample is drawn for every batch. The coefficient of the frame smoothness term in (3) is a fixed hyperparameter. We use both the frame error rate (FER) and the phone error rate (PER) as evaluation metrics. All reported PER results are obtained with a Kaldi decoder that combines the per-frame softmax outputs with the language model; the relative weight between the two is set to 1 and kept fixed in all unsupervised settings.
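PER is the edit (Levenshtein) distance between the reference and hypothesized phone sequences, normalized by the reference length; a minimal implementation (not the Kaldi scoring pipeline used in the experiments):

```python
def phone_error_rate(ref, hyp):
    """Length-normalized edit distance between phone sequences.

    Counts substitutions, insertions, and deletions with unit cost and
    divides by the reference length, i.e., the standard PER.
    """
    m, n = len(ref), len(hyp)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i  # delete all remaining reference phones
    for j in range(n + 1):
        d[0][j] = j  # insert all remaining hypothesis phones
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[m][n] / m
```

FER, by contrast, is simply the fraction of frames whose predicted label disagrees with the reference, with no alignment involved.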

Appendix C Additional experiments and analysis

First, we examine the importance of the frame smoothness term in (3) in the fully unsupervised setting. In Figure 3, we show the FER of our model after the first iteration of Algorithm 1 for different values of its coefficient. When the coefficient is of the right order of magnitude, the result does not differ much from the best result; when it is set to zero, however, the performance degrades significantly. This confirms the importance of incorporating the temporal structure of speech data into the cost function, as discussed in Section 2.2. Second, we study another important question about our unsupervised learning method: how much labeled data is it equivalent to? In Figure 3, we show the performance of the supervised neural network trained with different amounts of training data, where the x-axis is the percentage of the original labeled set used to train the model. We observe that with oracle boundaries and a matching LM, our algorithm is equivalent to supervised learning with a fraction of the labeled data. With unsupervised boundary estimation, there is still a large performance gap. It is therefore critical to improve boundary estimation in future work.

Figure 3: Further analysis of our algorithm. (a) Influence of the smoothness coefficient after the first iteration of fully unsupervised learning with a matching language model. (b) Equivalent amount of labeled data.