In recent automatic speech recognition research, two types of neural models have become prominent: recurrent neural network (RNN) encoder-decoders (“sequence-to-sequence” models) [1, 2, 3] and connectionist temporal classification (CTC) models [4, 5, 6, 7, 8]. Both types of models perform well, but CTC-based models are more common in large state-of-the-art systems. Among their advantages, CTC models are typically faster to train than encoder-decoders, because they lack the RNN-based decoder.
Most CTC-based models are based on variants of recurrent Long Short-Term Memory (LSTM) networks, sometimes including convolutional or fully connected layers in addition to the recurrent ones. More recently, a few purely convolutional approaches to CTC [9, 10] have been demonstrated to match or outperform LSTM counterparts. Purely convolutional networks have the advantage that they can be trained much faster, since all frames can be processed in parallel, whereas in recurrent networks the frames within an utterance cannot be naturally distributed across multiple processors.
We take a further step toward all-convolutional CTC models by exploring a variety of convolutional architectures trained with the CTC loss function and evaluating on conversational telephone speech (prior work evaluated on TIMIT, Wall Street Journal, and a corporate data set [9, 10]). Previous work with convolutional CTC models has mainly considered 2-D convolutional layers. Here we study 1-D convolutions, which are more efficient and perform similarly. 1-D convolutions are similar to time-delay neural networks (TDNNs), which have traditionally been used with HMMs [11, 12].
While the ideas should apply to any CTC-based model and task, here we consider the task of lexicon-free conversational speech recognition using character-based models. We find that our best convolutional models are close to, but not quite matching, the best LSTM-based ones. However, the CNNs can be trained much faster, so that given a fixed training time budget (within a wide range), convolutional models typically outperform recurrent ones. Our trained CNN models also convert speech to text much faster than their trained recurrent counterparts. As the research community considers increasingly large tasks, such as whole-word CTC models [13, 14], computational efficiency is often a concern, especially with limited hardware resources. The efficiency of CNNs makes them an attractive option in these settings.
2 Model Architecture
CTC is an approach to sequence labeling that uses a neural "encoder", which maps from an input sequence of frame features $x = (x_1, \ldots, x_T)$ to a sequence of hidden state vectors $h = (h_1, \ldots, h_T)$, followed by a softmax to produce posterior probabilities of frame-level labels (referred to as "CTC labels") $p(\pi_t = k \mid x)$ for each label $k$. The posterior probability of a complete frame-level label sequence $\pi$ is taken to be the product of the frame posteriors:

$$p(\pi \mid x) = \prod_{t=1}^{T} p(\pi_t \mid x).$$

The CTC label set consists of all of the possible true output labels (in our case, characters) plus a "blank" symbol $\varnothing$. Given a CTC label sequence, the hypothesized final label sequence is given by collapsing consecutive identical frame CTC labels and removing blanks. We use $\kappa(\cdot)$ to denote this collapsing function. All of the model parameters are learned jointly using the CTC loss function, the log posterior probability of the training label sequence $y^*$ given input sequence $x$:

$$\log p(y^* \mid x) = \log \sum_{\pi : \kappa(\pi) = y^*} p(\pi \mid x).$$

Model parameters are learned using gradient descent; the gradient can be computed via a forward-backward technique [4].
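To make the formulation concrete, the following is a minimal pure-Python sketch (our own illustration, not the paper's implementation) of the collapsing function $\kappa$ and the standard CTC forward recursion for computing $p(y \mid x)$ from per-frame posteriors. Label 0 is used as the blank; a brute-force path sum is included to show the two quantities agree on small inputs:

```python
import itertools

BLANK = 0

def collapse(path):
    """CTC collapsing function: merge consecutive repeats, then drop blanks."""
    out, prev = [], None
    for p in path:
        if p != prev and p != BLANK:
            out.append(p)
        prev = p
    return tuple(out)

def ctc_forward(probs, target):
    """p(target | x) via the CTC forward recursion over blank-extended labels.

    probs[t][k] = posterior of CTC label k at frame t.
    """
    ext = [BLANK]
    for c in target:
        ext += [c, BLANK]
    S = len(ext)
    alpha = [0.0] * S
    alpha[0] = probs[0][ext[0]]
    if S > 1:
        alpha[1] = probs[0][ext[1]]
    for t in range(1, len(probs)):
        prev_a, alpha = alpha, [0.0] * S
        for s in range(S):
            a = prev_a[s] + (prev_a[s - 1] if s > 0 else 0.0)
            # Skip transition is allowed only between distinct non-blank labels.
            if s > 1 and ext[s] != BLANK and ext[s] != ext[s - 2]:
                a += prev_a[s - 2]
            alpha[s] = a * probs[t][ext[s]]
    return alpha[-1] + (alpha[-2] if S > 1 else 0.0)

def brute_force(probs, target):
    """Sum p(pi | x) over every frame labeling pi that collapses to target."""
    total = 0.0
    for pi in itertools.product(range(len(probs[0])), repeat=len(probs)):
        if collapse(pi) == tuple(target):
            p = 1.0
            for t, k in enumerate(pi):
                p *= probs[t][k]
            total += p
    return total
```

In a real system the recursion is carried out in log space for numerical stability; the probability-space version above is only for exposition.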
Our CTC models operate at the character level. We use the special blank symbol $\varnothing$ along with a vocabulary of 45 characters which appear in the raw SWB corpus (26 letters, 10 digits, space, &, ', -, [laughter], [vocalized-noise], [noise], / and _). These transcriptions were inherited from a Switchboard Kaldi [15] setup without text normalization. We remove punctuation and noise tokens during post-processing. Decoding with CTC models can be done in a number of ways, depending on whether one uses a lexicon and/or a word- or character-level language model (LM). Here we focus on two simple cases: greedy decoding with no language model, and beam-search decoding with an $n$-gram character LM.
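The label inventory described above can be written down directly. The sketch below is our own illustration; the exact ordering of symbols in the label set is an assumption:

```python
import string

# 26 letters + 10 digits + 9 special symbols = 45 characters from the raw SWB transcripts.
SPECIALS = [" ", "&", "'", "-", "[laughter]", "[vocalized-noise]", "[noise]", "/", "_"]
VOCAB = list(string.ascii_lowercase) + list(string.digits) + SPECIALS

# The CTC output layer predicts the blank symbol plus the 45 characters.
CTC_LABELS = ["<blank>"] + VOCAB
```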
To decode without a language model, we take the most likely CTC output label at each frame and collapse the resulting frame label sequence to the corresponding character sequence. We also consider decoding with an $n$-gram language model using a beam search procedure. We decode with the objective

$$\hat{\pi} = \arg\max_{\pi} \; \log p(\pi \mid x) + \alpha \log p_{\mathrm{LM}}(\kappa(\pi)) + \beta \, |\kappa(\pi)|,$$

where $p_{\mathrm{LM}}$ is the language model, $|\kappa(\pi)|$ denotes the length of the collapsed sequence $\kappa(\pi)$, and $\alpha$, $\beta$ are tunable parameters. The final decoded output is $\kappa(\hat{\pi})$. Our beam-search method is the algorithm described in [5].¹

¹We account for <s> and </s> tokens during beam-search decoding (not explicitly mentioned in the beam search algorithm in [5]).
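As an illustration of the no-LM case, here is a minimal sketch of greedy decoding: take the argmax CTC label at each frame, collapse repeats, and drop blanks. The convention that blank is index 0 and characters occupy indices 1 onward is our assumption:

```python
def greedy_decode(frame_posteriors, alphabet, blank=0):
    """Greedy CTC decoding: argmax per frame, then collapse repeats and drop blanks."""
    best = [max(range(len(p)), key=p.__getitem__) for p in frame_posteriors]
    chars, prev = [], None
    for lab in best:
        if lab != prev and lab != blank:
            chars.append(alphabet[lab - 1])  # non-blank labels index the alphabet from 1
        prev = lab
    return "".join(chars)
```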
We refer to the neural network that maps from the input $x$ to state vectors $h$ as an encoder. We consider both a typical recurrent LSTM encoder and various convolutional encoders. Our input vectors are 40 log mel frequency filterbank features (static) concatenated with their first-order derivatives (deltas).
2.2.1 LSTMs

Our recurrent encoder is a multi-layer bi-directional LSTM with a dropout layer between consecutive layers (dropout rate 0.1). We concatenate every two consecutive input vectors, which reduces the time resolution by a factor of two and speeds up both the forward and backward pass.
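The frame-concatenation step can be sketched as follows; the padding behavior for odd-length utterances (repeating the final frame) is our assumption, not stated in the text:

```python
def stack_pairs(frames):
    """Concatenate every two consecutive feature vectors, halving time resolution."""
    if len(frames) % 2:                    # pad odd-length utterances by repeating
        frames = frames + [frames[-1]]     # the final frame (an assumption here)
    return [frames[i] + frames[i + 1] for i in range(0, len(frames), 2)]
```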
2.2.2 1-D CNNs
For our all-CNN encoders, we consider 1-D CNN structures that convolve across time only; each input acoustic feature dimension is treated as a separate input channel. The first layer is a convolution followed by max-pooling across time (with stride 2), followed by several convolutional layers, and ending with two 512-unit fully connected layers and a final projection layer. Each convolution has 256 channels. We add batch normalization after every convolution, and include residual connections between every pair of convolutional layers after the max-pool [17, 18]. A ReLU non-linearity is used after every convolution, similar to the residual learning blocks in [17] (referred to as "ResBlocks (RBs)" in the rest of the paper). Fig. 1 portrays our architecture.
3 Experimental Setting
3.1 Data Setup
We use the Switchboard corpus (LDC97S62) [20], which contains roughly 300 hours of conversational telephone speech, as our training set. Following the Kaldi recipe [15], we reserve the first 4K utterances as a validation set. Since the training set contains many repetitions of short utterances (like "uh-huh"), we remove duplicates beyond a count threshold of 300. The final training set has about 192K utterances. For evaluation, we use the HUB5 Eval2000 data set (LDC2002S09), consisting of two subsets: Switchboard (SWB), which is similar in style to the training set, and CallHome (CH), which contains conversations between friends and family.²

²Our Eval2000 setup has 4447 utterances, 11 fewer than in some other papers. This discrepancy could result in an Eval2000 WER difference of 0.1-0.2%.
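The duplicate-removal step can be sketched as keeping at most the first 300 occurrences of each transcript; this reading of the thresholding rule is our assumption:

```python
from collections import Counter

def cap_duplicates(transcripts, threshold=300):
    """Keep at most `threshold` occurrences of each repeated transcript."""
    seen = Counter()
    kept = []
    for t in transcripts:
        seen[t] += 1
        if seen[t] <= threshold:
            kept.append(t)
    return kept
```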
Our input filterbank features along with their deltas are normalized with per-speaker mean and variance normalization.
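Per-speaker normalization can be sketched as below (pure Python for illustration; using the population standard deviation, and guarding constant dimensions, are our choices):

```python
from statistics import mean, pstdev

def speaker_cmvn(frames):
    """Zero-mean, unit-variance normalization of each feature dimension,
    computed over all frames belonging to one speaker."""
    dims = list(zip(*frames))
    mus = [mean(d) for d in dims]
    sds = [pstdev(d) or 1.0 for d in dims]  # guard constant dimensions
    return [[(v - m) / s for v, m, s in zip(f, mus, sds)] for f in frames]
```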
3.2 Training Setup
All models are trained on a single Titan X GPU with two supporting CPU threads, using TensorFlow r1.1 [21] and optimized using Adam [22] with a mini-batch size of 64 for LSTM (BasicLSTMCell) models and 32 for CNN models (unless otherwise mentioned). For the LSTM models, we use a learning rate of 0.001; for the CNN models, a smaller learning rate of 0.0002 was preferred. The learning rate is decayed by 5% whenever the validation loss does not decrease over two epochs. We report average training time per epoch for each model as both wall-clock hours and CPU-hours.
4 Results

4.1 LSTM Baseline
As a baseline, we train a 5-layer bi-directional LSTM with 320 hidden units per layer, similar to architectures described in prior work. With a batch size of 64, our LSTM takes 3.3 wall-clock hours and 5.8 CPU-hours per epoch (Table 1); we also train with a batch size of 32 for direct comparison with the CNNs.
| Model | # Weights | WER % | Batch | Hours / epoch (wall / CPU) |
| 5/320 LSTM | 11.1M | 28.54 | 64 | 3.3 / 5.8 |
| 10*1, 8 RBs | 11.1M | 36.71 | 32 | 0.9 / 2.2 |
| 10*1, 11 RBs | 15.1M | 32.67 | 32 | 1.0 / 2.5 |
| 10*1, 14 RBs | 19.0M | 30.92 | 32 | 1.1 / 2.8 |
| 10*1, 17 RBs | 22.9M | 29.82 | 32 | 1.5 / 3.5 |
4.2 1-D CNNs
We conduct experiments on 1-D CNNs, investigating how performance and time per epoch vary with network depth and filter size. Results are given in Tables 1 and 2. For the same number of trainable parameters, deeper networks with smaller filters perform best; we also observed that smaller-filter, deeper architectures over-fit less than larger-filter architectures with the same number of trainable parameters. For a fixed network depth, a mid-sized filter performed best. We present a graph of convergence versus wall-clock time in Fig. 2. As expected, the CNNs train faster than the LSTMs, and significantly faster at the same batch size. We also observe significant speed-ups during greedy decoding of the Eval2000 corpus, as shown in Table 3.
We show some of the learned filters in Fig. 3. These filters show that the network learns derivative-like filter patterns across different input channels. Our 1-D convolution structure with a filter of size $k \times 1$ can be viewed as similar to a 2-D convolution with a filter of size $k \times 80$, since the 1-D filters are learned jointly across all 80 input channels. We also note the strong relation between filter patterns learned in the static and delta regions.
| Model | # Weights | WER % | Hours / epoch (wall / CPU) |
| 5*1, 16 RBs | 11.1M | 33.26 | 1.0 / 2.3 |
| 10*1, 8 RBs | 11.1M | 36.71 | 0.9 / 2.2 |
| 15*1, 5 RBs | 10.5M | 43.18 | 0.8 / 2.1 |
| 15*1, 6 RBs | 12.4M | 39.83 | 0.9 / 2.4 |
| 5*1, 28 RBs | 19.0M | 29.65 | 1.4 / 3.5 |
| 10*1, 14 RBs | 19.0M | 30.92 | 1.1 / 2.8 |
| 15*1, 9 RBs | 18.3M | 35.45 | 1.1 / 3.1 |
| 15*1, 10 RBs | 20.3M | 33.94 | 1.1 / 3.0 |
| 5*1, 14 RBs | 9.8M | 35.34 | 1.0 / 2.2 |
| 10*1, 14 RBs | 19.0M | 30.92 | 1.1 / 2.8 |
| 15*1, 14 RBs | 28.1M | 31.36 | 1.6 / 3.8 |
| Model | # Weights | Batch | Decoding time (wall / CPU) |
| 5*1, 28 RBs, CNN | 19.0M | 1 | 115 / 135 |
| 5*1, 28 RBs, CNN | 19.0M | 32 | 17 / 18 |
| 5*1, 28 RBs, CNN | 19.0M | 64 | 15 / 16 |
4.3 Language Model Decoding
We evaluate our baseline LSTM and best-performing CNN (5*1 filters with 28 RBs) on the Eval2000 corpus. We train each model for up to 50 epochs with early stopping on validation data. We augment our models with 7-gram and 9-gram character-level language models (LMs). These $n$-gram models were trained only on the SWB training corpus transcripts using SRILM [23]. For all experiments, a beam size of 200 was used. We choose $\alpha$ and $\beta$ by tuning on validation data. Our results are presented in Table 4. Notice that in the no-LM results our CNN is only 0.2% behind the LSTM on the SWB portion of Eval2000, but a larger 1.1% behind on CH. After LM decoding, the differences are more pronounced. This suggests that the CNNs over-fit more to the training data (which is similar in style to the SWB portion of Eval2000) and benefit less from the LMs.
| Model | SWB | CH | Full |
| 5/320 LSTM + no LM | 27.7 | 47.5 | 37.6 |
| 5/320 LSTM + 7-g | 20.0 | 38.5 | 29.3 |
| 5/320 LSTM + 9-g | 19.7 | 38.2 | 29.0 |
| 5*1 28 RBs, CNN + no LM | 27.9 | 48.6 | 38.3 |
| 5*1 28 RBs, CNN + 7-g | 21.7 | 40.4 | 31.1 |
| 5*1 28 RBs, CNN + 9-g | 21.3 | 40.0 | 30.7 |
| Maas et al. [5] + no LM | 38.0 | 56.1 | 47.1 |
| Maas et al. [5] + 7-g | 27.8 | 43.8 | 35.9 |
| Maas et al. [5] + RNN | 21.4 | 40.2 | 30.8 |
| Zenkel et al. [16] + no LM | 30.4 | 44.0 | 37.2 |
| Zenkel et al. [16] + RNN | 18.6 | 31.6 | 25.1 |
| Zweig et al. [8] + no LM | 25.9 | 38.8 | - |
| Zweig et al. [8] + n-g | 19.8 | 32.1 | - |
5 Conclusions

We have taken a further step toward making all-convolutional CTC architectures practical for speech recognition. In particular, we have explored 1-D convolutions with CTC, which are particularly time-efficient. Our CNN-based CTC models are still slightly behind LSTMs in recognition performance, but train and decode significantly faster. Further work in this space could include additional model variants and regularizers, as well as studying the relative merits of all-convolutional models in larger systems operating at the word level, where the efficiency advantages are expected to be even more important. In addition, CNN-based speech recognition has been explored in the context of different training and decoding algorithms, such as the auto segmentation criterion [24]. It would be interesting to conduct a broader study considering the interaction of CNNs with different training and decoding approaches.
Acknowledgements

We are grateful to Shubham Toshniwal for help with the data and baselines, and to Florian Metze for useful comments.
References

[1] Ilya Sutskever, Oriol Vinyals, and Quoc V. Le, "Sequence to sequence learning with neural networks," in Advances in NIPS, 2014.
[2] William Chan, Navdeep Jaitly, Quoc Le, and Oriol Vinyals, "Listen, attend and spell: A neural network for large vocabulary conversational speech recognition," in ICASSP, 2016.
[3] Rohit Prabhavalkar, Kanishka Rao, Tara N. Sainath, Bo Li, Leif Johnson, and Navdeep Jaitly, "A comparison of sequence-to-sequence models for speech recognition," in Interspeech, 2017.
[4] Alex Graves, Santiago Fernández, Faustino Gomez, and Jürgen Schmidhuber, "Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks," in Proceedings of ICML, 2006.
[5] Andrew L. Maas, Ziang Xie, Dan Jurafsky, and Andrew Y. Ng, "Lexicon-free conversational speech recognition with neural networks," in HLT-NAACL, 2015.
[6] Dario Amodei, Sundaram Ananthanarayanan, Rishita Anubhai, Jingliang Bai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, Qiang Cheng, Guoliang Chen, et al., "Deep Speech 2: End-to-end speech recognition in English and Mandarin," in Proceedings of ICML, 2016.
[7] Yajie Miao, Mohammad Gowayyed, and Florian Metze, "EESEN: End-to-end speech recognition using deep RNN models and WFST-based decoding," in ASRU, 2015.
[8] Geoffrey Zweig, Chengzhu Yu, Jasha Droppo, and Andreas Stolcke, "Advances in all-neural speech recognition," in ICASSP, 2017.
[9] Ying Zhang, Mohammad Pezeshki, Philémon Brakel, Saizheng Zhang, César Laurent, Yoshua Bengio, and Aaron C. Courville, "Towards end-to-end speech recognition with deep convolutional neural networks," in Interspeech, 2016.
[10] Yisen Wang, Xuejiao Deng, Songbai Pu, and Zhiheng Huang, "Residual convolutional CTC networks for automatic speech recognition," CoRR, vol. abs/1702.07793, 2017.
[11] Alexander Waibel, Toshiyuki Hanazawa, Geoffrey Hinton, Kiyohiro Shikano, and Kevin J. Lang, "Phoneme recognition using time-delay neural networks," in Readings in Speech Recognition. Elsevier, 1990.
[12] Vijayaditya Peddinti, Daniel Povey, and Sanjeev Khudanpur, "A time delay neural network architecture for efficient modeling of long temporal contexts," in Interspeech, 2015.
[13] Kartik Audhkhasi, Bhuvana Ramabhadran, George Saon, Michael Picheny, and David Nahamoo, "Direct acoustics-to-word models for English conversational speech recognition," in Interspeech, 2017.
[14] Hagen Soltau, Hank Liao, and Hasim Sak, "Neural speech recognizer: Acoustic-to-word LSTM model for large vocabulary speech recognition," in Interspeech, 2017.
[15] Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Hannemann, Petr Motlicek, Yanmin Qian, Petr Schwarz, et al., "The Kaldi speech recognition toolkit," in ASRU, 2011.
[16] Thomas Zenkel, Ramon Sanabria, Florian Metze, Jan Niehues, Matthias Sperber, Sebastian Stüker, and Alex Waibel, "Comparison of decoding strategies for CTC acoustic models," in Interspeech, 2017.
[17] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, "Deep residual learning for image recognition," in CVPR, 2016.
[18] Sergey Ioffe and Christian Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," in Proceedings of ICML, 2015.
[19] Vinod Nair and Geoffrey E. Hinton, "Rectified linear units improve restricted Boltzmann machines," in Proceedings of ICML, 2010.
[20] John J. Godfrey, Edward C. Holliman, and Jane McDaniel, "SWITCHBOARD: Telephone speech corpus for research and development," in ICASSP, 1992.
[21] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg Corrado, Andy Davis, Jeffrey Dean, et al., "TensorFlow: Large-scale machine learning on heterogeneous distributed systems," 2015.
[22] Diederik Kingma and Jimmy Ba, "Adam: A method for stochastic optimization," in ICLR, 2015.
[23] Andreas Stolcke, "SRILM: An extensible language modeling toolkit," in Interspeech, 2002.
[24] Ronan Collobert, Christian Puhrsch, and Gabriel Synnaeve, "Wav2Letter: An end-to-end ConvNet-based speech recognition system," CoRR, vol. abs/1609.03193, 2016.