Kaggle Competitions - TensorFlow Speech Recognition Challenge
Convolutional Neural Networks (CNNs) are effective models for reducing spectral variations and modeling spectral correlations in acoustic features for automatic speech recognition (ASR). Hybrid speech recognition systems incorporating CNNs with Hidden Markov Models/Gaussian Mixture Models (HMMs/GMMs) have achieved the state-of-the-art in various benchmarks. Meanwhile, Connectionist Temporal Classification (CTC) with Recurrent Neural Networks (RNNs), which is proposed for labeling unsegmented sequences, makes it feasible to train an end-to-end speech recognition system instead of hybrid settings. However, RNNs are computationally expensive and sometimes difficult to train. In this paper, inspired by the advantages of both CNNs and the CTC approach, we propose an end-to-end speech framework for sequence labeling, by combining hierarchical CNNs with CTC directly without recurrent connections. By evaluating the approach on the TIMIT phoneme recognition task, we show that the proposed model is not only computationally efficient, but also competitive with the existing baseline systems. Moreover, we argue that CNNs have the capability to model temporal correlations with appropriate context information.
Speech Recognition using Connectionist Temporal Classification
Recently, Convolutional Neural Networks (CNNs) have achieved great success in acoustic modeling [2, 3, 4]. In the context of Automatic Speech Recognition, CNNs are usually combined with HMMs/GMMs [5, 6], like regular Deep Neural Networks (DNNs), which results in a hybrid system [2, 3, 4]. In the typical hybrid system, the neural net is trained to predict frame-level targets obtained from a forced alignment generated by an HMM/GMM system. The temporal modeling and decoding operations are still handled by an HMM but the posterior state predictions are generated using the neural network.
This hybrid approach is problematic in that training the different modules separately with different criteria may not be optimal for solving the final task. As a consequence, it often requires additional hyperparameter tuning for each training stage, which can be laborious and time-consuming. These issues have motivated a recent surge of interest in training 'end-to-end' systems [7, 8, 9]. End-to-end neural systems for speech recognition typically replace the HMM with a neural network that provides a distribution over sequences directly. Two popular neural network sequence models are Connectionist Temporal Classification (CTC) and recurrent models for sequence generation [8, 11].
To the best of our knowledge, all end-to-end neural speech recognition systems employ recurrent neural networks in at least some part of the processing pipeline. The most successful recurrent architecture used in this context is the Long Short-Term Memory (LSTM) [12, 13, 14, 15]. For example, a model with multiple layers of bi-directional LSTMs and CTC on top, pre-trained with transducer networks [12, 13], obtained state-of-the-art results on the TIMIT dataset. After these successes on phoneme recognition, similar systems were proposed in which multiple layers of RNNs were combined with CTC to perform large-vocabulary continuous speech recognition [7, 16]. RNNs thus seem to have become the default choice for end-to-end models, while hybrid systems still tend to rely on feed-forward architectures.
While the results of these RNN-based end-to-end systems are impressive, there are two important downsides to using RNNs/LSTMs: (1) training can be very slow due to the iterative multiplications over time when the input sequence is long; (2) training is sometimes tricky due to the well-known problem of vanishing/exploding gradients [17, 18]. Although various approaches have been proposed to address these issues, such as data/model parallelization across multiple GPUs [7, 19] and careful initialization of recurrent connections, those models still suffer from computationally intensive and otherwise demanding training procedures.
Inspired by the strengths of both CNNs and CTC, we propose an end-to-end speech framework in which we combine CNNs with CTC without intermediate recurrent layers. We present experiments on the TIMIT dataset and show that such a system is able to obtain results comparable to those obtained with multiple layers of LSTMs. The only previous attempt to combine CNNs with CTC that we know of led to results far from the state-of-the-art. It is not straightforward to incorporate CNNs into an end-to-end setting, since the task may require the model to capture long-term dependencies. While RNNs can learn these kinds of dependencies and have been combined with CTC for this very reason, it was not known whether CNNs were able to learn the required temporal relationships.
In this paper, we argue that in a CNN of sufficient depth, the higher-layer features are capable of capturing temporal dependencies with suitable context information. Using small filter sizes along the spectrogram frequency axis, the model is able to learn fine-grained localized features, while multiple stacked convolutional layers help to learn diverse features on different time/frequency scales and provide the required non-linear modeling capabilities.
Unlike the time windows applied in DNN systems [2, 3, 4], temporal modeling is deployed within the convolutional layers, where we perform a 2D convolution similar to vision tasks, and multiple convolutional layers are stacked to provide a relatively large context window for each output prediction of the highest layer. The convolutional layers are followed by multiple fully connected layers and, finally, CTC is added on top of the model. Following the suggestion from prior work, we only perform pooling along the frequency axis, on the first convolutional layer. Specifically, we evaluate our model on phoneme recognition for the TIMIT dataset.
Most CNN models [2, 3, 4] in the speech domain have large filters and use limited weight sharing, which splits the features into limited frequency bands and performs the convolution separately within each band; the convolution is usually applied in no more than 3 layers. In this section, we describe our CNN acoustic model, whose architecture is different from the above. The complete CNN comprises stacked convolutional and pooling layers, on top of which sit multiple fully-connected layers.
Since CNNs are adept at modeling local structures in the inputs, we use log mel-filter-bank (plus energy term) coefficients with deltas and delta-deltas which preserve the local correlations of the spectrogram.
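As a concrete sketch, the deltas and delta-deltas can be computed with the standard regression formula over a small symmetric window (the window width of 2 and the helper names below are illustrative assumptions, not values taken from this text):

```python
import numpy as np

def deltas(x, width=2):
    """Delta coefficients via the standard regression formula.
    x: (time, dim) feature matrix; `width` is the half-window size
    (2 is a common default, assumed here)."""
    T = x.shape[0]
    denom = 2 * sum(w * w for w in range(1, width + 1))
    p = np.pad(x, ((width, width), (0, 0)), mode="edge")
    d = np.zeros_like(x)
    for w in range(1, width + 1):
        d += w * (p[width + w : width + w + T] - p[width - w : width - w + T])
    return d / denom

def add_deltas(x):
    """Stack static, delta, and delta-delta features along the feature
    axis: 41 base dims (40 mel bands + energy) -> 123 dims."""
    d = deltas(x)
    return np.concatenate([x, d, deltas(d)], axis=1)
```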
As shown in Figure 1, given a sequence of acoustic feature values $X$ with $c$ channels, frequency bandwidth $b$, and time length $f$, the convolutional layer convolves $X$ with $n$ filters $\{W_k\}_{k=1}^{n}$, where each $W_k$ is a tensor whose width along the frequency axis is $w$ and whose length along the frame axis is $l$. The resulting pre-activation feature maps form a 3D tensor $H$, in which each feature map $H_k$ is computed as follows:

$$H_k = W_k * X + b_k, \quad k = 1, \dots, n.$$

The symbol $*$ denotes the convolution operation and $b_k$ is a bias parameter. Two points are worth mentioning: (1) the sequence length of $H_k$ after convolution is guaranteed to equal the input $X$'s sequence length for all convolution operations in our model; (2) we do not use limited weight sharing, which splits the frequency bands into groups of limited bandwidth and performs the convolution within each group separately. Instead, we perform the convolution not only along the frequency axis but also along the time axis, which results in the simple 2D convolution commonly used in computer vision.
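A minimal NumPy sketch of this 2D convolution for a single input map and a single filter; the zero-padding along time (so that the output sequence length matches the input's) and all names are our own illustrative choices:

```python
import numpy as np

def conv2d_same_time(x, w, b=0.0):
    """2D convolution (deep-learning convention, i.e. cross-correlation)
    of one feature map `x` (freq, time) with one filter `w` (fw, fl).
    The time axis is zero-padded so the output keeps the input's
    sequence length; the frequency axis uses 'valid' mode."""
    fw, fl = w.shape
    pad = fl // 2                      # assumes an odd filter length in time
    xp = np.pad(x, ((0, 0), (pad, pad)))
    F_out, T = x.shape[0] - fw + 1, x.shape[1]
    out = np.full((F_out, T), float(b))
    for i in range(F_out):
        for t in range(T):
            out[i, t] += np.sum(xp[i:i + fw, t:t + fl] * w)
    return out
```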
The pre-activation feature maps are passed through non-linear activation functions. We introduce three activation functions below and illustrate their use in the convolutional layer as an example; note that all the operations below are element-wise.
The Parametric Rectified Linear Unit (PReLU) is an extension of the ReLU in which the output for negative inputs is a linear function of the input with slope $\alpha$. PReLU is formalized as:

$$\mathrm{PReLU}(x) = \begin{cases} x, & x > 0 \\ \alpha x, & x \le 0 \end{cases}$$

The extra parameter $\alpha$ is usually initialized to 0.1 and can be trained using backpropagation.
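A one-line element-wise sketch of PReLU, using the 0.1 initialization mentioned above:

```python
import numpy as np

def prelu(x, alpha=0.1):
    """Parametric ReLU: identity for positive inputs, slope `alpha`
    (initialized to 0.1 and learned by backpropagation) otherwise."""
    return np.where(x > 0, x, alpha * x)
```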
Another type of activation function which has been shown to improve results for speech recognition [16, 24, 25, 26] is the maxout function. Taking the number of piece-wise linear functions to be two as an example, the maxout output at position $(i, j)$ of the $k$-th feature map is:

$$h_{k}(i, j) = \max\big(z_{k,1}(i, j),\; z_{k,2}(i, j)\big),$$

where $z_{k,1}$ and $z_{k,2}$ are two linear feature-map candidates obtained after the convolution, computed from the input of the convolutional layer with two separate sets of filters:

$$z_{k,m} = W_{k,m} * X + b_{k,m}, \quad m \in \{1, 2\}.$$

Figure 2 depicts the ReLU, PReLU, and Maxout activation functions.
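Maxout with k pieces is simply an element-wise maximum over k candidate pre-activation maps; a minimal sketch (the stacking convention is an assumption of ours):

```python
import numpy as np

def maxout(z):
    """Maxout activation: z has shape (k, ...), holding the k candidate
    linear feature maps produced by k separate sets of filters; returns
    their element-wise maximum."""
    return np.max(z, axis=0)
```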
After the element-wise non-linearities, the features pass through a max-pooling layer which outputs the maximum over groups of adjacent units. We pool only along the frequency axis, since this helps to reduce spectral variations within the same speaker and between different speakers, while pooling in time has been shown to be less helpful. Specifically, suppose that the $k$-th feature maps before and after pooling are $h_k$ and $p_k$; then the pooled value at frequency index $i$ and time index $t$ is computed by:

$$p_k(i, t) = \max_{j = 1, \dots, r} h_k\big((i - 1)\, s + j,\; t\big),$$

where $s$ is the step size and $r$ is the pooling size, and all the values inside the max share the same time index $t$. Consequently, the feature maps after pooling have the same sequence lengths as the ones before pooling. As shown in Figure 3, we follow the suggestion that max pooling is performed only once, after the first convolutional layer. Our intuition is that as more pooling layers are applied, units in higher layers become less discriminative with respect to variations in the input features.
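The frequency-only pooling can be sketched as follows; note that the time axis passes through untouched, which keeps the sequence length compatible with CTC (the default size/step values below are illustrative, not the paper's):

```python
import numpy as np

def pool_freq(h, size=3, step=3):
    """Max-pool a feature map h (freq, time) along frequency only;
    the time axis is left unchanged."""
    F, T = h.shape
    n = (F - size) // step + 1
    return np.stack([h[i * step : i * step + size].max(axis=0)
                     for i in range(n)])
```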
Consider any sequence-to-sequence mapping task in which $X$ is the input sequence and $Z$ is the target sequence. In the case of speech recognition, $X$ is the acoustic signal and $Z$ is a sequence of symbols. In order to train the neural acoustic model, the likelihood $p(Z \mid X)$ must be maximized for each input-output pair.
One way to provide a distribution over variable-length output sequences given some much longer input sequence is to introduce a many-to-one mapping of latent variable sequences to shorter sequences that serve as the final predictions. The probability of some sequence $Z$ can then be defined as the sum of the probabilities of all the latent sequences that map to that sequence. Connectionist Temporal Classification (CTC) specifies a distribution over latent sequences by applying a softmax function to the output of the network at every time step, which provides a probability for emitting each label from the alphabet of output symbols at that time step. An extra blank output class '-' is introduced to the alphabet for the latent sequences to represent the probability of not outputting a symbol at a particular time step. Each latent sequence $\pi$ sampled from this distribution can then be transformed into an output sequence using the many-to-one mapping function $\mathcal{B}$, which first merges repetitions of consecutive non-blank labels into a single label and subsequently removes the blank labels, as shown in Equation 7:

$$\mathcal{B}(\pi) = Z, \qquad \text{e.g. } \mathcal{B}(\text{-aa-b-}) = \text{ab}.$$

Therefore, the final output sequence probability is a summation over all possible latent sequences $\pi$ that yield $Z$ after applying the function $\mathcal{B}$:

$$p(Z \mid X) = \sum_{\pi \in \mathcal{B}^{-1}(Z)} p(\pi \mid X).$$
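The mapping and the summation can be made concrete with a brute-force sketch (exponential in the number of frames, so usable only as a sanity check on toy inputs; the blank index 0 is a convention we chose):

```python
import itertools
import numpy as np

BLANK = 0  # index of the blank symbol (a convention chosen here)

def collapse(path):
    """The many-to-one map B: merge repeated labels, then drop blanks."""
    out, prev = [], None
    for s in path:
        if s != prev and s != BLANK:
            out.append(s)
        prev = s
    return tuple(out)

def ctc_prob_bruteforce(probs, target):
    """p(Z|X): sum the probability of every latent path that collapses
    to `target`. probs: (T, K) per-frame softmax outputs."""
    T, K = probs.shape
    total = 0.0
    for path in itertools.product(range(K), repeat=T):
        if collapse(path) == tuple(target):
            total += float(np.prod([probs[t, s] for t, s in enumerate(path)]))
    return total
```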
A dynamic programming algorithm similar to the forward algorithm for HMMs is used to compute the sum in Equation 8 efficiently. The intermediate values of this dynamic program can also be used to compute the gradient of $p(Z \mid X)$ with respect to the neural network outputs efficiently.
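A sketch of that dynamic program, i.e. the standard CTC forward recursion over the blank-extended label sequence (the blank index 0 is our convention):

```python
import numpy as np

BLANK = 0  # blank symbol index (convention chosen here)

def ctc_forward(probs, target):
    """Forward-algorithm evaluation of p(Z|X).
    probs: (T, K) per-frame softmax outputs; target: labels without
    blanks. Runs over the blank-extended sequence [-, z1, -, z2, ..., -]."""
    ext = [BLANK]
    for z in target:
        ext += [z, BLANK]
    T, S = probs.shape[0], len(ext)
    alpha = np.zeros((T, S))
    alpha[0, 0] = probs[0, ext[0]]
    if S > 1:
        alpha[0, 1] = probs[0, ext[1]]
    for t in range(1, T):
        for s in range(S):
            a = alpha[t - 1, s]
            if s > 0:
                a += alpha[t - 1, s - 1]
            # a skip is allowed between two different non-blank labels
            if s > 1 and ext[s] != BLANK and ext[s] != ext[s - 2]:
                a += alpha[t - 1, s - 2]
            alpha[t, s] = a * probs[t, ext[s]]
    total = alpha[T - 1, S - 1]
    if S > 1:
        total += alpha[T - 1, S - 2]
    return total
```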
To generate predictions from a trained model using CTC, we use the best path decoding algorithm. Since the model assumes that the latent symbols are independent given the network outputs in the framewise case, the latent sequence with the highest probability is obtained simply by emitting the most probable label at each time step. The predicted sequence is then given by applying $\mathcal{B}$ to that latent sequence prediction:

$$\hat{Z} = \mathcal{B}(\pi^{\ast}), \qquad \pi^{\ast}_t = \arg\max_{k} \, p(k, t \mid X),$$

in which $\pi^{\ast}$ is the concatenation of the most probable output at each time step. Note that this is not necessarily the output sequence with the highest probability. Finding that sequence is generally intractable and requires an approximate search procedure such as beam search.
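Best path decoding is then a per-frame arg max followed by the collapse step (blank index 0 is again our own convention):

```python
import numpy as np

BLANK = 0  # blank symbol index (convention chosen here)

def best_path_decode(probs):
    """Greedy CTC decoding: arg-max label per frame, then merge repeats
    and drop blanks. Returns the labelling of the most probable *path*,
    which is not necessarily the most probable label sequence."""
    out, prev = [], None
    for s in probs.argmax(axis=1):
        if s != prev and s != BLANK:
            out.append(int(s))
        prev = s
    return out
```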
In this section, we evaluate the proposed model on phoneme recognition for the TIMIT dataset. The model architecture is shown in Figure 3.
We evaluate our models on the TIMIT corpus, using the standard 462-speaker training set with all SA records removed. The 50-speaker development set is used for early stopping. The evaluation is performed on the core test set (192 sentences). The raw audio is transformed into 40-dimensional log mel-filter-bank (plus energy term) coefficients with deltas and delta-deltas, which results in 123-dimensional features. Each dimension is normalized to have zero mean and unit variance over the training set. We use 61 phone labels plus a blank label for training, and the output is then mapped to 39 phonemes for scoring.
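The per-dimension normalization step can be sketched as follows (training-set statistics only, as specified above; the epsilon guard is our addition):

```python
import numpy as np

def normalize(train_feats, feats):
    """Normalize each feature dimension to zero mean / unit variance
    using statistics computed on the training set only."""
    mu = train_feats.mean(axis=0)
    sigma = train_feats.std(axis=0) + 1e-8  # avoid division by zero
    return (feats - mu) / sigma
```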
Our best model consists of 10 convolutional layers and 3 fully-connected hidden layers. Unlike the other layers, the first convolutional layer is followed by a pooling layer, as described in section 2; pooling is applied only over the frequency axis. The same filter size is used across the layers. The model has 128 feature maps in each of the first four convolutional layers and 256 feature maps in the remaining six. Each fully-connected layer has 1024 units. Maxout with 2 piece-wise linear functions is used as the activation function. Some other architectures were also evaluated for comparison; see section 4.4 for details.
To optimize the model, we use Adam; stochastic gradient descent is then used for fine-tuning. A batch size of 20 is used during training. Initial weight values were drawn uniformly from a symmetric interval. Dropout is added across the layers, except for the input and output layers. An L2-norm penalty is applied during the fine-tuning stage. At test time, simple best path decoding (at the CTC frame level) is used to obtain the predicted sequences.
Our model achieves a phoneme error rate on the core test set that is slightly better than both the LSTM baseline model and the transducer model with an explicit RNN language model. The details are presented in Table 1. Notice that the CNN model takes much less time to train than the LSTM model with roughly the same number of parameters. In our setup on TIMIT, we obtain faster training with the CNN model without deliberately optimizing the implementation. We expect the gain in computational efficiency to be more dramatic on larger datasets.
To further investigate the different structural aspects of our model, we disentangle the analysis into three sub-experiments considering the number of convolutional layers, the filter sizes, and the activation functions, as shown in Table 1. It turns out that the model benefits from (1) more layers, which provide more non-linearities and larger input receptive fields for units in the top layers; (2) reasonably large context windows, which help the model capture the spatial/temporal relations of input sequences at reasonable time scales; and (3) the Maxout unit, which has more functional freedom compared to ReLU and parametric ReLU.
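The receptive-field point can be made concrete: with stride-1 convolutions, each layer of temporal filter length l adds l - 1 frames of context, so depth widens the context window linearly. The filter length in the example below is purely illustrative, not the paper's value:

```python
def receptive_field(num_layers, filter_len):
    """Temporal receptive field of one output unit after `num_layers`
    stacked stride-1 convolutions of length `filter_len` along time."""
    return 1 + num_layers * (filter_len - 1)
```

For instance, ten stacked layers with a hypothetical filter length of 5 give each top-layer unit a 41-frame context window.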
[Table 1: comparison of models — columns: Model, number of parameters (NP), Dev PER, Test PER.]
Our results showed that convolutional architectures with the CTC cost can achieve results comparable to the state-of-the-art by adopting the following methodology: (1) using a significantly deeper architecture, which yields a more non-linear function and wider receptive fields along both the frequency and temporal axes; (2) using maxout non-linearities in order to make the optimization easier; and (3) careful model regularization, which yields better generalization at test time, especially for small datasets such as TIMIT where over-fitting happens easily.
We conjecture that the convolutional CTC model might be easier to train on phoneme-level sequences rather than the character-level. Our intuition is that the local structures within the phonemes are more robust and can easily be captured by the model. Additionally, phoneme-level training might not require the modeling of many long-term dependencies in comparison with character-level training. As a result, for a convolutional model, learning the phonemes structure seems to be easier, but empirical research needs to be done to investigate if this is indeed the case.
Finally, an important point that favors convolutional over recurrent architectures is the training speed. In a CNN, the training time can be rendered virtually independent of the length of the input sequence due to the parallel nature of convolutional models and the highly optimized CNN libraries available. Computations in a recurrent model are sequential and cannot be easily parallelized. The training time for RNNs increases at least linearly with the length of the input sequence.
In this work, we present a CNN-based end-to-end speech recognition framework that does without the recurrent neural networks widely used in speech recognition tasks. We show promising results on the TIMIT dataset and conclude that the model has the capacity to learn the temporal relations required for it to be integrated with CTC. We have already observed a gain in computational efficiency on TIMIT; training the model on large-vocabulary datasets and integrating it with a language model will be part of our future work. Another interesting direction is to apply Batch Normalization to the current model.
The experiments were conducted using Theano, Blocks, and Fuel. The authors would like to acknowledge the funding support from Samsung, NSERC, Calcul Quebec, Compute Canada, the Canada Research Chairs and CIFAR. The authors would like to thank Dmitriy Serdyuk, Dzmitry Bahdanau, Arnaud Bergeron, and Pascal Lamblin for their helpful comments.
A.-r. Mohamed, G. E. Dahl, and G. Hinton, "Acoustic modeling using deep belief networks," IEEE Transactions on Audio, Speech, and Language Processing, vol. 20, no. 1, pp. 14–22, 2012.
A. Graves, S. Fernández, F. Gomez, and J. Schmidhuber, "Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks," in Proceedings of the 23rd International Conference on Machine Learning. ACM, 2006, pp. 369–376.
X. Glorot, A. Bordes, and Y. Bengio, "Deep sparse rectifier neural networks," in International Conference on Artificial Intelligence and Statistics, 2011, pp. 315–323.
K. He, X. Zhang, S. Ren, and J. Sun, "Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification," in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 1026–1034.