1 Introduction

Although researchers have been trying to improve the performance of automatic speech recognition systems under noisy conditions for decades, the problem is far from solved. Some solutions focus on removing the noise from the signal or on improving the feature representation of the audio channel. However, the amount of signal masked by additive noise presents a natural limitation to those solutions. For that reason, some researchers turn to an alternative modality unrelated to audio, for example the visual channel. Vision is usually not affected by the acoustic environment, and is thus largely immune to corruption by noise. It has been demonstrated by [2] that humans also tend to shift their attention to other information channels (e.g., vision) to ease the understanding of the speaker when the acoustic channel is corrupted. Adding a new modality to a temporal sequence classification problem, however, creates several challenges. In addition to potentially having to estimate stream weights, the temporal alignment of both information sources is usually not constant. This is due to the nature of the speech production process, as well as the complexity of the technical solutions for transmission, storage, and encoding of audio-visual data. In this paper, we present a novel approach to audio-visual speech recognition, which allows us to investigate these effects in new and interesting ways.
Specifically, linguistic studies such as [3] show that the movement of the mouth towards an articulatory target modifies the following phone. This effect is accentuated when the speaking rate of the speaker is high [4]. In those cases, the visual modality can provide enough information to the system to determine how the phoneme should sound. However, there is a synchronization problem between visemes (groups of similar movements of the visual articulators) and phonemes. Practical studies such as [5] state that coarticulation is speaker- and phoneme-dependent. This adds a certain level of difficulty to the task of synchronizing the audio and video modalities, and makes frame-level fusion difficult. Consequently, somewhat asynchronous approaches such as Hidden Markov Models (HMMs) coupled at the state level have been attempted, but have not met with dramatic success either.
We present an AVSR solution that does not require an HMM, but rather uses several layers of bi-directional long short-term memory (LSTM) units as building blocks, followed by a Connectionist Temporal Classification (CTC) loss function [1] for the output layer. The CTC loss is defined directly over the symbol sequence, and effectively marginalizes over all permitted alignments between frames and states, adding a “blank” state between label states. The resulting alignment typically contains mostly blanks, and is thus “sparse”, see Fig. 1. In a multi-stream setting, where independent models are trained and tested for the audio and video modalities, frame-synchronous approaches such as tightly coupled HMMs together with score fusion (late integration) are therefore unlikely to work, because the “peaks” for the same unit will appear at a different point in time in each stream. Early integration (feature fusion), however, should work just fine, and is thus investigated in this paper.
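To make the role of the blank state explicit, here is a short sketch of the many-to-one collapse mapping that turns a frame-level CTC path into a label sequence (the function name and the symbol indices are illustrative, not taken from our system):

```python
def ctc_collapse(path, blank=0):
    """Map a frame-level CTC path to a label sequence:
    merge repeated symbols, then drop blanks."""
    out = []
    prev = None
    for sym in path:
        if sym != prev and sym != blank:
            out.append(sym)
        prev = sym
    return out

# A typical "sparse" CTC path: mostly blanks, one peak per unit.
path = [0, 0, 3, 0, 0, 0, 5, 5, 0, 0, 3, 0]
print(ctc_collapse(path))  # [3, 5, 3]
```

Note that many distinct paths collapse to the same label sequence; the CTC loss sums over all of them.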
As an end-to-end approach, CTC directly optimizes the sequence of output labels without requiring any initial labeling (manual or ported from another system). However, it is not clear whether the temporal locations of the peaks have any meaningful interpretation. The audio-visual setting provides an interesting opportunity to investigate this issue: intuitively, the locations of the peaks should correspond to “discriminative” input features, and should thus mark “informative” time points. In an audio-visual setting, these may correspond to the different times at which phonemes and their corresponding visemes (we follow [7] for the mapping) are observed, without requiring any manual labeling or input.
This paper thus makes two main contributions: first, we demonstrate that CTC-based acoustic models can achieve state-of-the-art performance in audio-visual speech recognition tasks, as our system achieves results comparable to a traditional pre-Deep-Learning baseline ([8] report 11% Word Error Rate on the ViaVoice database) and outperforms a recent cross-entropy trained DNN baseline [9] in terms of phoneme error rate. We report results for clean and noisy training and testing conditions. Our second contribution lies in an analysis of the peak structure of the CTC output labeling for a multi-modal input, which we can compare to human intuition about the nature of, and the relationship between, the two feature generation processes.
2 Technical Background
2.1 Architecture of AVSR systems
Traditionally, HMMs and Gaussian Mixture Models (GMMs) have been applied as the main learning structures for AVSR systems: HMMs normalize the time axis of the input sequences, while GMMs model the emission probability of each state of the HMM. Two ways of fusing the modalities are used in traditional AVSR. First, early combination (feature fusion) of both feature vectors can be applied [10, 11]. In some cases, algorithms such as Principal Component Analysis (PCA) or Linear Discriminant Analysis (LDA) are used to reduce the dimensionality of those representations. This approach may lead to frame synchronization problems [10, 12, 11]. Second, score combination (late fusion) is performed in order to avoid such problems, and even allows for asynchrony between the state sequences of the two streams. In [13], for example, both modalities are analyzed separately and the results are later fused using a bias.
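Early integration amounts to concatenating the per-frame feature vectors of both streams before training a single model. A minimal sketch, assuming frame-aligned streams (the function name and dimensions are illustrative):

```python
import numpy as np

def early_fusion(audio, video):
    """Frame-synchronous feature fusion: concatenate the audio and
    video feature vector of each frame into one joint vector.
    audio: (T, Da), video: (T, Dv) -> fused: (T, Da + Dv)."""
    assert audio.shape[0] == video.shape[0], "streams must be frame-aligned"
    return np.concatenate([audio, video], axis=1)

T, Da, Dv = 100, 43, 294          # e.g. FBank+pitch and a visual descriptor
fused = early_fusion(np.random.randn(T, Da), np.random.randn(T, Dv))
print(fused.shape)                # (100, 337)
```

The assertion makes the frame-synchronization requirement of early fusion explicit: both streams must be resampled to a common frame rate beforehand.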
Different recent deep-learning approaches to the AVSR problem are presented in [14, 15, 9]. In [14] and [9], a joint (audio and video) representation is learned using Deep Neural Networks (DNNs) to perform word and phone recognition, respectively. However, temporal dependence, an inherent property of audio-visual speech recognition, is not considered. More recently, [15] present a Recurrent Temporal Multimodal Restricted Boltzmann Machine (RTMRBM), which takes long-term dependencies into consideration and outperforms other, non-temporal solutions. All these approaches explicitly align states with the data, and care must be taken in aligning data and setting up experiments.
2.2 Audio and Video Feature Representation
Several phonetic studies have tried to determine the most relevant features that can be extracted from the face in order to perform audio-visual speech recognition. According to [24], lip position is a considerable source of information when performing visual-only speech recognition. In addition to the position of the lips, teeth visibility has been found to ease the process of guessing the sound that was produced. Moreover, [16, 17, 18] conduct experiments showing that the entire face provides information about speech.
Traditionally, researchers have used different processing and feature extraction methods to represent the features described above. All of them are based on extracting Regions of Interest (ROIs) from each frame, containing the mouth and other parts of the face (e.g., the jaw). Different techniques are used to parametrize the ROIs, such as using the grey-scale value of each pixel, extracting the variation of pixel values between frames, or parametrizing each part of the face with a specific statistical model. As we will discuss in Section 4, those techniques are problematic, since they provide neither a rotation- nor a lighting-invariant feature representation of the described area.
In the field of deep learning, [19] proposed an MSHMM infrastructure, which uses features extracted with a Convolutional Neural Network (CNN). In addition, as explained in Section 2.1, [14, 9, 15] learn a joint feature representation using different DNN approaches. All those solutions, in turn, are trained on pairs of raw images and corresponding phoneme labels, which may be unreliable because of the inherent potential for asynchrony between audio and video (due both to speech production and to the technical processes involved in handling audio-visual content).
3 Architecture of our system
We use the Eesen framework [20]. The Acoustic Model (AM) is composed of multiple stacked LSTM networks and uses CTC as its loss function. This set-up allows our system to automatically align the sequence of vector representations with the phoneme sequence. It is important to note that the system will output the additional CTC “blank” symbol most of the time.
In place of the HMM, a sequence of three Weighted Finite State Transducers (WFSTs) is used during decoding: the first models the sequence of symbol and blank states that make up a token (a phoneme or a viseme), the second the words, and the third the Language Model (LM).
In our pipeline, four layers of RNNs are stacked to build our AM. To provide the ability to learn more complex time sequences, we use bidirectional LSTM units [6] in our RNN. The possible output symbols of the network are the $K$ possible labels (45 phonemes or 12 visemes in our case) plus a blank label, added at position 0, so that the output layer has $K+1$ units (original number of symbols plus blank). Let $\mathbf{x} = (x_1, \dots, x_T)$ and $\mathbf{z} = (z_1, \dots, z_U)$ with $U \leq T$ be the utterance (audio, video or audio-visual features) and its corresponding label sequence (phonemes), respectively. Thereby, each $x_t$ is a feature vector (audio, visual or audio-visual) and $z_u \in \{1, \dots, K\}$. The CTC loss function aims to maximize the likelihood $P(\mathbf{z} \mid \mathbf{x})$.
We assume that the output probabilities for each time frame are conditionally independent. Let $y^t$ be the output probability vector computed at time frame $t$, and let $\pi = (\pi_1, \dots, \pi_T)$ be a possible output path, where $\pi_t \in \{1, \dots, K+1\}$. Then the total probability of each possible output path can be computed as:

$$P(\pi \mid \mathbf{x}) = \prod_{t=1}^{T} y^t_{\pi_t}$$

We denote the set of all paths $\pi$ that can be mapped to a label sequence $\mathbf{z}$ as $\mathcal{B}^{-1}(\mathbf{z})$. Therefore, the likelihood of $\mathbf{z}$ given an input sequence $\mathbf{x}$ can be described as follows:

$$P(\mathbf{z} \mid \mathbf{x}) = \sum_{\pi \in \mathcal{B}^{-1}(\mathbf{z})} P(\pi \mid \mathbf{x})$$
This is the loss function that our RNN aims to maximize.
To allow blank symbols in $\pi$, we add them at the beginning, at the end, and between each symbol of $\mathbf{z}$. Consequently, a modified label sequence $\mathbf{z}'$ of length $2U+1$ is used to compute $P(\mathbf{z} \mid \mathbf{x})$. To do so, the well-known forward-backward algorithm is used. It computes the probability of every partial path that ends with label $z'_u$ at a given time $t$ as $\alpha_t(u)$, and the probability of all partial paths that start with label $z'_u$ at time $t$ and run to the end as $\beta_t(u)$. Then, the total likelihood of a sequence $\mathbf{z}$ given $\mathbf{x}$ is computed, for any $t$, as:

$$P(\mathbf{z} \mid \mathbf{x}) = \sum_{u=1}^{2U+1} \alpha_t(u)\, \beta_t(u)$$
This is differentiable and can thus be used as objective function.
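To make the computation concrete, here is a plain-NumPy sketch of the forward pass over the blank-augmented sequence. This is illustrative only, not the Eesen implementation, and it works in probability space rather than the log space a real system would use:

```python
import numpy as np

def ctc_forward(y, z, blank=0):
    """Total CTC likelihood P(z|x) via the forward recursion.

    y: (T, K+1) matrix of per-frame output probabilities.
    z: label sequence (values in 1..K)."""
    T = y.shape[0]
    zp = [blank]                          # blank-augmented sequence z'
    for s in z:
        zp += [s, blank]                  # length 2U + 1
    S = len(zp)
    alpha = np.zeros((T, S))
    alpha[0, 0] = y[0, zp[0]]             # start with the initial blank ...
    if S > 1:
        alpha[0, 1] = y[0, zp[1]]         # ... or with the first label
    for t in range(1, T):
        for u in range(S):
            a = alpha[t - 1, u]           # stay in the same state
            if u > 0:
                a += alpha[t - 1, u - 1]  # advance one state
            if u > 1 and zp[u] != blank and zp[u] != zp[u - 2]:
                a += alpha[t - 1, u - 2]  # skip the blank between labels
            alpha[t, u] = a * y[t, zp[u]]
    # valid paths end in the last label or in the final blank
    return alpha[T - 1, S - 1] + (alpha[T - 1, S - 2] if S > 1 else 0.0)

# Two frames, uniform probabilities over {blank, 'a'}: the paths
# mapping to "a" are (a,a), (blank,a), (a,blank), each with prob 0.25.
y = np.full((2, 2), 0.5)
print(ctc_forward(y, [1]))                # 0.75
```

The skip transition in the inner loop is what allows two identical consecutive labels only when separated by a blank, exactly as the collapse mapping requires.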
4 Data and Features
The IBM ViaVoice [11] data set is used to train and test the proposed pipelines. It consists of utterances by speakers looking directly at the camera, with an estimated Signal-to-Noise Ratio of 19.5 dB (office noise) in the audio channel. The data was originally split into 17111 utterances from 261 speakers for training (about 34.9 h) and 1893 utterances from 26 speakers for testing (4.6 h). However, we were only able to use 15963 utterances for training and 1840 utterances for testing (see Section 4). We perform data augmentation by adding white Gaussian noise at 10 different levels to the original audio signal, which itself has a 19.5 dB SNR, thus creating different SNR conditions (40 dB to 20 dB).
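The augmentation step can be sketched as follows. `add_noise_at_snr` is a hypothetical helper, and the SNR argument here is the level of the added noise relative to the (already slightly noisy) original signal:

```python
import numpy as np

def add_noise_at_snr(signal, snr_db, rng=None):
    """Add white Gaussian noise scaled so that signal power over
    added-noise power matches the requested SNR in dB."""
    rng = rng or np.random.default_rng(0)
    p_signal = np.mean(signal ** 2)
    p_noise = p_signal / (10.0 ** (snr_db / 10.0))
    noise = rng.normal(0.0, np.sqrt(p_noise), size=signal.shape)
    return signal + noise

clean = np.sin(np.linspace(0, 100, 16000))   # stand-in for an utterance
noisy_versions = [add_noise_at_snr(clean, snr)
                  for snr in np.linspace(40, 20, 10)]  # 10 noise levels
```

Because the original recordings already contain office noise, the effective SNR of each augmented copy is somewhat below the nominal added-noise level.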
FBank features (40 dimensions), FBank + pitch features (43 dimensions) and Mel-Frequency Cepstral Coefficients (MFCC, 12 dimensions) are used as audio features in our experiments. Cepstral mean and variance normalization is applied for robustness, and plus/minus one frame of context is stacked at the input of the neural network.
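A sketch of these two pre-processing steps, per-utterance mean/variance normalization and +/-1 frame context stacking; the function names are illustrative:

```python
import numpy as np

def cmvn(feats):
    """Per-utterance cepstral mean and variance normalization."""
    return (feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-8)

def stack_context(feats, context=1):
    """Stack +/- `context` neighbouring frames onto each frame;
    edges are padded by repeating the first/last frame."""
    T, D = feats.shape
    padded = np.concatenate([feats[:1].repeat(context, axis=0),
                             feats,
                             feats[-1:].repeat(context, axis=0)], axis=0)
    return np.concatenate([padded[i:i + T] for i in range(2 * context + 1)],
                          axis=1)

x = np.random.randn(50, 43)            # e.g. FBank + pitch features
net_input = stack_context(cmvn(x), context=1)
print(net_input.shape)                 # (50, 129): 3 stacked 43-dim frames
```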
18 coordinate points defining the inner and outer contour of the mouth are extracted using IntraFace [21]. Afterwards, position normalization is applied by means of an affine transformation to fixed points (e.g., the eye corners) of an average face. Then, the lip contours (inner and outer) are translated to the origin of coordinates. Finally, the speed and acceleration of each mouth point are computed. All described features are concatenated to form the visual representation (72 dimensions) of each frame.
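The dynamic features can be sketched with finite differences over time. The text does not spell out exactly how the 72 dimensions are composed, so the output dimensionality of this sketch (36 coordinates plus speed plus acceleration = 108) is illustrative rather than a reproduction of our feature:

```python
import numpy as np

def landmark_dynamics(points):
    """Append first- and second-order finite differences (speed and
    acceleration) to normalized landmark coordinates.

    points: (T, 36) flattened x/y coordinates of 18 mouth landmarks."""
    speed = np.gradient(points, axis=0)   # first derivative over time
    accel = np.gradient(speed, axis=0)    # second derivative over time
    return np.concatenate([points, speed, accel], axis=1)

coords = np.random.randn(20, 36)          # 20 frames of mouth landmarks
feats = landmark_dynamics(coords)
print(feats.shape)                        # (20, 108)
```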
A richer representation of the visual modality is achieved by describing the mouth landmark points with a scale-invariant local descriptor (SIFT) [22]. The original vector of the SIFT descriptors of all mouth landmark points has 2304 dimensions. Its dimensionality is reduced by applying PCA, decreasing the number of feature dimensions to 222 (98% of the variance). This information is appended to the previous vector to create a more complete representation (294 dimensions) of each frame. IntraFace is not able to process some utterances due to the quality of the data; therefore, as noted above, some utterances are removed.
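The variance-based dimensionality reduction can be sketched as an SVD-based PCA; `pca_reduce` is a hypothetical helper, not our actual tooling:

```python
import numpy as np

def pca_reduce(X, var_ratio=0.98):
    """Project X (N, D) onto the fewest principal components that
    retain `var_ratio` of the total variance; returns (X_reduced, k)."""
    Xc = X - X.mean(axis=0)
    _, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = np.cumsum(S ** 2) / np.sum(S ** 2)
    k = int(np.searchsorted(explained, var_ratio)) + 1
    return Xc @ Vt[:k].T, k

# Rank-3 toy data: almost all variance fits into 3 components.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3)) @ rng.normal(size=(3, 20))
X_red, k = pca_reduce(X, 0.98)
```

On the real SIFT vectors, the same criterion selects 222 of 2304 dimensions at the 98% variance threshold.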
5 Experiments

We performed a baseline audio-only recognition experiment with FBank + pitch coefficients and an in-domain language model as used in [8], and achieved a WER of 11.8% in clean conditions using the entire database. In the following experiments, a subset of the training and testing sets was used since, as explained in Section 4, some data had to be removed. Also, in order to reduce the language model bias of this setup towards the ViaVoice domain, we switched to a more general n-gram language model based on TED talks, reduced to the required vocabulary, which we use in the following experiments.
5.1 Audio Results
All features are tested using a 33 ms frame shift. This setup is chosen to be as close to the video frame rate as possible for the later fusion experiments. As can be seen in Table 1, FBank + pitch leads to the best results; this feature is used in the following multi-modal experiments. It is interesting to note that relatively small differences in phone accuracy translate into bigger differences in word error rate at test time.
Figure 2 shows the phone error rate, and Figure 3 the word error rate, under different noise conditions during testing, for systems trained with clean data only as well as with all noise conditions found in the test data (multi-condition training). The “clean” data is very uniform (the recording condition is identical for all speakers), so multi-condition training does not improve over the baseline in this case. These results are computed using the reduced data set explained in Section 4. However, experiments show that we achieve 11.7% WER using FBank + pitch features on the complete data set.
| Features | WER | Phone Accuracy |
|---|---|---|
| MFCC | 15.2 % | 81.5 % |
| FBank | 14.9 % | 82.7 % |
| FBank + Pitch | 14.4 % | 83.0 % |
5.2 Video Results
| Visual Features | Accuracy |
|---|---|
| SIFT + Landmarks | 63.1 % |
| SIFT + Landmarks + Speed + Acceleration | 65.7 % |
As can be seen in Table 2, different combinations of visual feature representations have been tested using visemes as target units. SIFT descriptors perform particularly well.
5.3 Audiovisual Results
Figures 2 and 3 show the benefits of training a model using data augmentation techniques, and the benefit of training a multi-modal system on audio and video features. (Still, because of the homogeneous nature of the data, a model trained specifically for a concrete noise condition will perform better than more general, multi-condition models.) “Full fusion” (audio + landmarks + SIFT) models do not perform better than audio + landmarks models at higher SNRs, because the high-dimensional SIFT features dominate the audio features, rendering them less useful. We are currently experimenting with further dimensionality reduction techniques to solve this problem.
Figure 4 shows the peaks with which CTC labels the units in each modality, after offsetting technical delays caused by the codec and other factors, for the audio-only, video-only, and audio-visual case. Several conclusions can be drawn from this figure. First, the video signal always precedes the audio signal. This finding supports the coarticulation [3] and anticipatory coarticulation [5] studies of natural speech production, which state that speakers change their mouth shape before pronouncing the following phone. Moreover, from an AVSR point of view, audio-visual models seem to generate better alignments than uni-modal inputs: in most cases, CTC places each phoneme between its audio-only and its video-only position, correcting mis-alignments. Finally, we performed audio-visual training with a +/-330 ms offset between feature types, without any significant change in WER or phone accuracy. This shows that the “peaky” structure of CTC is well suited to multi-modal fusion, and that more detailed studies should be performed in order to investigate the presumably speaker- and phoneme-dependent nature of coarticulation [3, 5].
6 Conclusions

In this paper, we demonstrated that end-to-end Deep Learning can successfully be applied to the problem of audio-visual (multi-modal) speech recognition. Using the CTC loss function and early integration (feature fusion), our system achieves the lowest published word error rate on the large-vocabulary IBM ViaVoice database. We show that multi-condition training can be used to improve results on noisy data, and that audio-visual fusion improves results in all conditions, as expected.
More interestingly, the multi-modal setting allows us to reason about the inherent meaning of the “peaky” output structure of CTC models, and investigate how their location corresponds to our intuition about the speech production process.
Reasonable care has been used to tune the models used in these experiments, but further improvements seem possible by exploring more data augmentation strategies, or by further optimizing the CTC training strategy in this relatively low data scenario. Furthermore, neural methods could also be investigated to achieve late fusion.
References

-  Alex Graves, Santiago Fernández, Faustino Gomez, and Jürgen Schmidhuber, “Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks,” in Proceedings of the 23rd international conference on Machine learning. ACM, 2006, pp. 369–376.
-  Harry McGurk and John MacDonald, “Hearing lips and seeing voices,” Nature, vol. 264, pp. 746–748, 1976.
-  Raymond D Kent and Fred D Minifie, “Coarticulation in recent speech production models,” Journal of Phonetics, vol. 5, no. 2, pp. 115–133, 1977.
-  Sarah Taylor, Barry-John Theobald, and Iain Matthews, “The effect of speaking rate on audio and visual speech,” in 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2014, pp. 3037–3041.
-  Fredericka Bell-Berti and Katherine S Harris, “Temporal patterns of coarticulation: Lip rounding,” The Journal of the Acoustical Society of America, vol. 71, no. 2, pp. 449–454, 1982.
-  Sepp Hochreiter and Jürgen Schmidhuber, “Long short-term memory,” Neural computation, vol. 9, no. 8, pp. 1735–1780, 1997.
-  Luca Cappelletta and Naomi Harte, “Phoneme-to-viseme mapping for visual speech recognition.,” in ICPRAM (2), 2012, pp. 322–329.
-  Gerasimos Potamianos, Chalapathy Neti, Giridharan Iyengar, and Eric Helmuth, “Large-vocabulary audio-visual speech recognition by machines and humans.,” in INTERSPEECH. Citeseer, 2001, pp. 1027–1030.
-  Youssef Mroueh, Etienne Marcheret, and Vaibhava Goel, “Deep multimodal learning for audio-visual speech recognition,” in 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2015, pp. 2130–2134.
-  Jan Kratt, Florian Metze, Rainer Stiefelhagen, and Alex Waibel, “Large vocabulary audio-visual speech recognition using the janus speech recognition toolkit,” in Joint Pattern Recognition Symposium. Springer, 2004, pp. 488–495.
-  Chalapathy Neti, Gerasimos Potamianos, Juergen Luettin, Iain Matthews, Herve Glotin, Dimitra Vergyri, June Sison, and Azad Mashari, “Audio visual speech recognition,” Tech. Rep., IDIAP, 2000.
-  Iain Matthews, Features for audio-visual speech recognition, Ph.D. thesis, Citeseer, 1998.
-  Juergen Luettin, Gerasimos Potamianos, and Chalapathy Neti, “Asynchronous stream modeling for large vocabulary audio-visual speech recognition,” in Acoustics, Speech, and Signal Processing, 2001. Proceedings.(ICASSP’01). 2001 IEEE International Conference on. IEEE, 2001, vol. 1, pp. 169–172.
-  Jiquan Ngiam, Aditya Khosla, Mingyu Kim, Juhan Nam, Honglak Lee, and Andrew Y Ng, “Multimodal deep learning,” in Proceedings of the 28th international conference on machine learning (ICML-11), 2011, pp. 689–696.
-  Di Hu, Xuelong Li, et al., “Temporal multimodal learning in audiovisual speech recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 3574–3582.
-  Eric Vatikiotis-Bateson, “The moving face during speech communication,” Hearing by eye II: Advances in the psychology of speechreading and auditory-visual speech, vol. 2, pp. 123, 1998.
-  Eric Vatikiotis-Bateson, Kevin G Munhall, Makoto Hirayama, Y Victor Lee, and Demetri Terzopoulos, “The dynamics of audiovisual behavior in speech,” in Speechreading by humans and machines, pp. 221–232. Springer, 1996.
-  Eric Vatikiotis-Bateson, Kevin G Munhall, Y Kasahara, Frederique Garcia, and Hani Yehia, “Characterizing audiovisual information during speech.,” in ICSLP, 1996.
-  Kuniaki Noda, Yuki Yamaguchi, Kazuhiro Nakadai, Hiroshi G Okuno, and Tetsuya Ogata, “Audio-visual speech recognition using deep learning,” Applied Intelligence, vol. 42, no. 4, pp. 722–737, 2015.
-  Yajie Miao, Mohammad Gowayyed, and Florian Metze, “Eesen: End-to-end speech recognition using deep rnn models and wfst-based decoding,” in 2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU). IEEE, 2015, pp. 167–174.
-  Fernando De la Torre, Wen-Sheng Chu, Xuehan Xiong, Francisco Vicente, Xiaoyu Ding, and Jeffrey Cohn, “Intraface,” in Automatic Face and Gesture Recognition (FG), 2015 11th IEEE International Conference and Workshops on. IEEE, 2015, vol. 1, pp. 1–8.
-  David G Lowe, “Distinctive image features from scale-invariant keypoints,” International journal of computer vision, vol. 60, no. 2, pp. 91–110, 2004.
-  Will Williams, Niranjani Prasad, David Mrva, Tom Ash, and Tony Robinson, “Scaling recurrent neural network language models,” CoRR, vol. abs/1502.00512, 2015.
-  J Jeffers and M Barley, “Lipreading (speechreading),” Charles C. Thomas, Springfield, IL, p. b10, 1971.