1. Introduction

Lipreading aims to recognize the sentences being spoken by a talking face. It has wide application in many scenarios, including dictating instructions or messages in a noisy environment, transcribing archival silent films, resolving multi-talker speech (Afouras et al., 2018b) and understanding dialogue from surveillance videos. However, it is widely considered a challenging task, and even experienced human lipreaders cannot master it perfectly (Assael et al., 2016; Shillingford et al., 2018). Thanks to the rapid development of deep learning in recent years, a line of work has studied lipreading and made salient achievements.
Existing methods mainly adopt autoregressive (AR) models, based either on RNNs (Stafylakis and Tzimiropoulos, 2017; Zhao et al., 2019) or on Transformer (Afouras et al., 2018a; Afouras et al., 2018b). These systems generate each target token conditioned on the sequence of previously generated tokens, which hinders parallelization. Thus, they all suffer from high inference latency, especially when dealing with massive video data spanning hundreds of hours (such as long films and surveillance videos) or with real-time applications such as dictating messages in a noisy environment.
To tackle the low parallelizability caused by AR generation, many non-autoregressive (NAR) models (Gu et al., 2017; Lee et al., 2018; Guo et al., 2019; Wang et al., 2019; Ma et al., 2019; Liu et al., 2020; Ren et al., 2020b) have been proposed in the machine translation field. The most typical one is NAT-FT (Gu et al., 2017), which modifies the Transformer (Vaswani et al., 2017) by adding a fertility module to predict the number of words in the target sequence aligned to each source word. Beyond NAR translation, many researchers have brought NAR generation into other sequence-to-sequence tasks, such as video captioning (Ren et al., 2019, 2020a), speech recognition (Chen et al., 2019) and speech synthesis (Oord et al., 2017; Ren et al., 2019). These works focus on generating the target sequence in parallel and mostly achieve more than an order of magnitude lower inference latency than their corresponding AR models.
However, generating the whole target sequence simultaneously is very challenging in the lipreading task, in the following aspects:
The considerable discrepancy in sequence length between the input video frames and the target text tokens makes it difficult to estimate the length of the output sequence, or to define a proper decoder input during the inference stage. This differs from machine translation models, which can simply adopt a uniform mapping of the source word embeddings as the decoder input (Wang et al., 2019), owing to the similar lengths of source and target text sequences.
The true target sequence distributions show a strong correlation across time, but a NAR model usually generates target tokens conditionally independently of each other. This is a poor approximation and may produce repeated words. Gu et al. (2017) term this the "multimodality problem".
The feature representation ability of the encoder can be weak when training a raw NAR model, due to the lack of an effective alignment mechanism.
The removal of the autoregressive decoder, which usually acts as a language model, makes it much more difficult for the model to tackle the inherent ambiguity problem in lipreading.
In our work, we propose FastLR, a non-autoregressive lipreading model based on Transformer. To handle the challenges mentioned above and reduce the gap between FastLR and AR models, we introduce three methods as follows:
To estimate the length of the output sequence and alleviate the problem of time correlation in the target sequence, we leverage the integrate-and-fire (I&F) module, inspired by Dong and Xu (2019), to encode the continuous video signal into discrete token embeddings by locating the acoustic boundaries. These discrete embeddings retain the timing information and correspond directly to the target tokens.
To enhance the feature representation ability of the encoder, we add a connectionist temporal classification (CTC) decoder on top of the encoder and optimize it with the CTC loss, which enforces monotonic alignments. Besides, we add an auxiliary AR decoder during training to facilitate the feature extraction ability of the encoder.
To tackle the inherent ambiguity problem and reduce spelling errors in NAR inference, we first propose a novel Noisy Parallel Decoding (NPD) for I&F method. The rescoring in NPD takes advantage of the language model in a well-trained AR lipreading teacher without harming parallelizability. We then bring Byte-Pair Encoding (BPE) into lipreading, which compresses the target sequence so that each token carries more language information, reducing the dependency among tokens compared with character-level encoding.
The core contribution of this work is that we are, to the best of our knowledge, the first to propose a non-autoregressive lipreading system, and we present the several elaborate methods mentioned above to bridge the gap between FastLR and state-of-the-art autoregressive lipreading models.
The experimental results show that FastLR achieves a speedup of up to 10.97× compared with the state-of-the-art lipreading model, with slight WER increases of 1.5% and 5.5% on the GRID and LRS2 lipreading datasets respectively, which demonstrates the effectiveness of our proposed method. We also conduct ablation experiments to verify the significance of all proposed methods in FastLR.
2. Related Works
2.1. Autoregressive Deep Lipreading
Prior works utilizing deep learning for lipreading mainly adopt the autoregressive model. The first typical approach is LipNet (Assael et al., 2016) based on CTC (Graves et al., 2006), which takes advantage of a spatio-temporal convolutional front-end feature generator and GRUs (Chung et al., 2014). Further, Stafylakis and Tzimiropoulos (2017) propose a network combining a modified 3D/2D-ResNet architecture with LSTMs. Afouras et al. (2018b) introduce the Transformer self-attention architecture into lipreading and build TM-seq2seq and TM-CTC; the former surpasses the performance of all previous work on the LRS2-BBC dataset by a large margin. To further boost lipreading performance, Petridis et al. (2018) present a hybrid CTC/attention architecture aiming at a better alignment than the attention-only mechanism, and Zhao et al. (2019) propose transferring knowledge from an audio speech recognition model to the lipreading model by distillation.
However, these methods, whether based on recurrent neural networks or Transformer, all adopt autoregressive decoding, which takes in the input video sequence and generates the tokens of the target sentence one by one during inference. As a result, they all suffer from high latency.
2.2. Non-Autoregressive Decoding
An autoregressive model takes in a source sequence and then generates the words of the target sentence one by one with a causal structure during inference (Sutskever et al., 2014; Vaswani et al., 2017). To reduce the inference latency, Gu et al. (2017) introduce a non-autoregressive model based on Transformer into the machine translation field, which generates all target words in parallel. The conditional probability can be defined as

$$p_{\mathcal{NA}}(Y \mid X; \theta) = p(T \mid X; \theta) \cdot \prod_{t=1}^{T} p(y_t \mid X; \theta),$$

where $T$ is the length of the target sequence, obtained from the fertility prediction function conditioned on the source sentence. Due to the multimodality problem (Gu et al., 2017), the performance of NAR models is usually inferior to that of AR models. Recently, a line of works aiming to bridge the performance gap between NAR and AR models for the translation task has been presented (Ghazvininejad et al., 2019; Guo et al., 2019).
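As a toy illustration of this conditional independence (the words and probabilities below are made up), consider two equally valid transcriptions, "thank you" and "many thanks", whose per-position marginal distributions get mixed by independent argmax decoding:

```python
# Two valid transcriptions: "thank you" and "many thanks". A NAR model
# marginalizes over them, so decoding each position independently by
# argmax can mix the modes -- the "multimodality" failure:
dists = [
    {"thank": 0.55, "many": 0.45},   # position 1 marginal
    {"thanks": 0.55, "you": 0.45},   # position 2 marginal
]
nar_output = [max(d, key=d.get) for d in dists]  # all positions in parallel
print(nar_output)  # ['thank', 'thanks'] -- fluent in neither mode
```

An AR decoder would instead condition the second word on the first and pick "you" after "thank", which is exactly the dependency NAR decoding gives up for speed.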
2.3. Spiking Neural Networks
The integrate-and-fire neuron model describes the membrane potential of a neuron according to the synaptic inputs and the injected current (Burkitt, 2006). It is biologically plausible and widely used in spiking neural networks. Concretely, the neuron integrates the input signal and increases its membrane potential. Once the membrane potential reaches a threshold, a spike signal is generated, which means an event takes place. The membrane potential is then reset and grows again in response to the subsequent input signal. This enables the encoding of continuous signal sequences into discrete signal sequences while retaining the timing information.
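A minimal simulation of these dynamics (the input signal is made up, and the simple reset here discards any overshoot, unlike the continuous variant described below):

```python
def integrate_and_fire(inputs, threshold=1.0):
    """Accumulate the input signal; emit a spike (1) whenever the
    membrane potential crosses the threshold, then reset it."""
    potential = 0.0
    spikes = []
    for x in inputs:
        potential += x
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0  # reset after firing
        else:
            spikes.append(0)
    return spikes

# A continuous input signal encoded into discrete spike events:
print(integrate_and_fire([0.4, 0.4, 0.4, 0.9, 0.2, 0.5, 0.6]))
# [0, 0, 1, 0, 1, 0, 1]
```

The spike train keeps the timing of when enough input mass has accumulated, which is the property the I&F alignment module exploits.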
Recently, Dong and Xu (2019) introduced the integrate-and-fire model into the speech recognition task. They use continuous functions that support back-propagation to simulate the integrate-and-fire process. In their work, a fired spike represents the event of locating an acoustic boundary.
3. FastLR

In this section, we introduce FastLR and describe our methods thoroughly. As shown in Figure 1, FastLR is composed of a spatio-temporal convolutional neural network for video feature extraction (visual front-end) and a Transformer-based sequence processing model (main model) with an enhanced encoder, a non-autoregressive decoder and an I&F module. To further tackle the challenges in non-autoregressive lipreading, we propose the NPD for I&F method and bring byte-pair encoding into our method. The details of our model and methods are described in the following subsections. (We introduce the visual front-end in Section 4.2, as it varies from one dataset to another.)
3.1. Enhanced Encoder
The encoder of FastLR is composed of stacked self-attention and feed-forward layers, the same as those in Transformer (Vaswani et al., 2017) and in the autoregressive lipreading model TM-seq2seq (Afouras et al., 2018b). We add an auxiliary autoregressive decoder, shown in the left panel of Figure 1, so that the AR lipreading task can be optimized together with FastLR through a shared encoder during training. This transfers knowledge from the AR model to FastLR and facilitates optimization. Besides, we add a connectionist temporal classification (CTC) decoder with the CTC loss on top of the encoder to force monotonic alignments, a widely used technique in the speech recognition field. Both adjustments improve the feature representation ability of our encoder.
3.2. Integrate-and-fire module
To estimate the length of the output sequence and alleviate the problem of time correlation in the target sequence, we adopt the continuous integrate-and-fire (I&F) module (Dong and Xu, 2019) for FastLR. This is a soft and monotonic alignment mechanism which can be employed in encoder-decoder sequence processing models. First, the encoder output hidden sequence $\mathbf{h} = (h_1, \ldots, h_T)$ is fed to a 1-dimensional convolutional layer followed by a fully connected layer with sigmoid activation. We then obtain the weight sequence $\boldsymbol{\alpha} = (\alpha_1, \ldots, \alpha_T)$, where $\alpha_t$ represents the weight of the information carried by $h_t$. Second, the I&F module scans and accumulates the weights from left to right until the sum reaches the threshold (we set it to 1.0), which means an acoustic boundary is detected. Third, I&F divides the weight $\alpha_t$ at this point into two parts, $\alpha_{t,1}$ and $\alpha_{t,2}$: $\alpha_{t,1}$ is used to complete the integration of the current embedding to be fired, while $\alpha_{t,2}$ is reserved for the integration of the next embedding. I&F then resets the accumulation and continues to scan the rest of $\boldsymbol{\alpha}$, beginning with $\alpha_{t,2}$, for the next integration. This procedure is denoted "accumulate and detect". Finally, I&F multiplies each $h_t$ by its corresponding weight ($\alpha_t$, or $\alpha_{t,1}$/$\alpha_{t,2}$ at boundaries) and integrates them according to the detected boundaries to produce the fired embeddings. An example is shown in Figure 2.
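The "accumulate and detect" procedure can be sketched as follows; the hidden states, weights and threshold are illustrative, and any trailing weight mass below the threshold is simply discarded:

```python
def cif_fire(h, alpha, threshold=1.0):
    """Continuous integrate-and-fire: scan the weights left to right and
    fire a weighted sum of hidden states each time the accumulated weight
    reaches the threshold; the overshoot is carried to the next embedding."""
    fired, acc_w = [], 0.0
    acc = [0.0] * len(h[0])              # running weighted sum of hidden states
    for h_t, a_t in zip(h, alpha):
        if acc_w + a_t < threshold:      # no boundary yet: keep accumulating
            acc_w += a_t
            acc = [c + a_t * x for c, x in zip(acc, h_t)]
        else:                            # boundary detected: split the weight
            a1 = threshold - acc_w       # part completing the current embedding
            a2 = a_t - a1                # remainder starting the next one
            fired.append([c + a1 * x for c, x in zip(acc, h_t)])
            acc_w = a2
            acc = [a2 * x for x in h_t]
    return fired

h = [[1.0] * 4 for _ in range(6)]                 # 6 frames, hidden dim 4
alpha = [0.3, 0.5, 0.4, 0.6, 0.2, 0.5]            # total weight 2.5
out = cif_fire(h, alpha)
print(len(out))  # 2 fired embeddings, each a weight-1.0 sum of frames
```

Since every fired embedding integrates exactly one unit of weight, the number of fired embeddings (and hence the predicted sequence length) is roughly the total weight, which is what the scaling strategy in Section 3.4 manipulates.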
3.3. Non-autoregressive Decoder
Different from the Transformer decoder, the self-attention in FastLR's decoder can attend to the entire sequence, owing to the conditionally independent property of the NAR model. We also remove the inter-attention mechanism, since FastLR already has an alignment mechanism (I&F) between source and target. The decoder takes in the fired embedding sequence from I&F and generates the text tokens in parallel during both training and inference.
3.4. Noisy parallel decoding (NPD) for I&F
The absence of an AR decoding procedure makes it much more difficult for the model to tackle the inherent ambiguity problem in lipreading. Therefore, we design a novel NPD for I&F method to leverage the language information in a well-trained AR lipreading model.
From Section 3.2, it is not hard to see that $\mathrm{round}(S)$ represents the length of the predicted sequence, where $S$ is the total sum of the weights $\boldsymbol{\alpha}$. Dong and Xu (2019) propose a scaling strategy which multiplies $\boldsymbol{\alpha}$ by the scalar $\tilde{\ell} / S$, where $\tilde{\ell}$ is the length of the target label sequence $y$. By doing so, the total sum of the scaled weights is equal to $\tilde{\ell}$, and this teacher-forces I&F to predict with the true length of $y$, which benefits the cross-entropy training.
However, we do not stop at this point. Besides training, we also scale $\boldsymbol{\alpha}$ during the inference stage to generate multiple candidate weight sequences with different length biases. Given a beam size $B$, we construct

$$\boldsymbol{\alpha}^{(b)} = \boldsymbol{\alpha} \cdot \frac{S + b}{S}, \quad b \in \left\{-\tfrac{B-1}{2}, \ldots, \tfrac{B-1}{2}\right\},$$

where $\boldsymbol{\alpha}$ is the output of the I&F module during inference and the length bias $b$ is provided by the "Length Controller" module in Figure 1. Then we utilize the re-scoring method used in Noisy Parallel Decoding (NPD), a common practice in non-autoregressive neural machine translation, to select the best sequence from these $B$ candidates via an AR lipreading teacher:

$$Y^{*} = \underset{b}{\arg\max}\; p_{AR}\big(\hat{Y}(\boldsymbol{\alpha}^{(b)}) \mid X; \theta_{AR}\big), \qquad \hat{Y}(\boldsymbol{\alpha}^{(b)}) = \underset{Y}{\arg\max}\; p_{\mathcal{NA}}\big(Y \mid X, \boldsymbol{\alpha}^{(b)}; \theta\big),$$

where $p_{AR}$ is the probability of the sequence under the autoregressive model, $\hat{Y}(\boldsymbol{\alpha}^{(b)})$ denotes the optimal generation of FastLR given the source video $X$ and the weight sequence $\boldsymbol{\alpha}^{(b)}$, and $\theta$ represents the model parameters.
The selection process leverages the information in the language model (decoder) of the well-trained autoregressive lipreading teacher, which alleviates the ambiguity problem and gives the model a chance to adjust the weight sequence generated by the I&F module so as to predict a better sequence length. Note that these candidates can be computed independently, so parallelizability is not hurt (the selection process only roughly doubles the latency). The experiments demonstrate that the re-scored sequence is more accurate.
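A sketch of NPD for I&F under these definitions; the decoder and teacher scorer are toy stand-ins (a real implementation would batch all candidates through the NAR decoder and score them with the AR teacher in a single pass):

```python
def npd_candidates(alpha, beam_size=9):
    """Scale the weight sequence so that each candidate's total weight,
    and hence its predicted length, differs by an integer bias."""
    total = sum(alpha)
    base_len = round(total)
    cands = []
    for bias in range(-(beam_size // 2), beam_size // 2 + 1):
        target = base_len + bias
        if target >= 1:                       # skip degenerate lengths
            cands.append([a * target / total for a in alpha])
    return cands

def npd_select(alpha, nar_decode, ar_score, beam_size=9):
    """Decode every candidate (independently parallelizable; a loop here
    for clarity) and keep the hypothesis the AR teacher scores highest."""
    hyps = [nar_decode(a) for a in npd_candidates(alpha, beam_size)]
    return max(hyps, key=ar_score)

# Toy stand-ins for the NAR decoder and the AR teacher's log-probability:
alpha = [0.4, 0.7, 0.5, 0.9, 0.6]             # total weight 3.1 -> base length 3
decode = lambda a: "x" * round(sum(a))        # hypothetical decoder: length only
score = lambda hyp: -abs(len(hyp) - 4)        # teacher prefers length 4
print(npd_select(alpha, decode, score))       # "xxxx": the bias +1 candidate wins
```

The key design point is that the teacher only *scores* finished hypotheses; it never decodes step by step, so the AR model's language knowledge is used without reintroducing sequential generation.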
3.5. Byte-Pair Encoding
Byte-Pair Encoding (Sennrich et al., 2015) is widely used in the NMT (Vaswani et al., 2017) and ASR (Dong and Xu, 2019) fields, but is rare in lipreading. BPE makes each token contain more language information and reduces the dependency among tokens compared with character-level encoding, which alleviates the problems of non-autoregressive generation discussed above. In this work, we tokenize the sentences with the Moses tokenizer (https://github.com/moses-smt/mosesdecoder/blob/master/scripts/tokenizer/tokenizer.perl) and then use the BPE algorithm to segment each target word into sub-words.
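For illustration, a minimal version of the BPE merge-learning step on a toy corpus (real systems use the subword-nmt toolkit with a tuned number of merges; the end-of-word marker `</w>` follows Sennrich et al.):

```python
from collections import Counter

def learn_bpe(words, num_merges):
    """Learn byte-pair merges: repeatedly merge the most frequent
    adjacent symbol pair across the corpus vocabulary."""
    vocab = Counter(tuple(w) + ("</w>",) for w in words)
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for word, freq in vocab.items():
            for a, b in zip(word, word[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)      # most frequent adjacent pair
        merges.append(best)
        new_vocab = Counter()
        for word, freq in vocab.items():      # apply the merge everywhere
            merged, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                    merged.append(word[i] + word[i + 1])
                    i += 2
                else:
                    merged.append(word[i])
                    i += 1
            new_vocab[tuple(merged)] += freq
        vocab = new_vocab
    return merges, vocab

words = ["low", "low", "lower", "lowest", "newer", "newest"]
merges, vocab = learn_bpe(words, 4)
print(merges[0])  # ('l', 'o'): the first learned merge
```

Each learned merge yields longer sub-word tokens, so the target sequence shortens and each token carries more of the word's identity, which is the property FastLR relies on.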
3.6. Training of FastLR
We optimize the CTC decoder with the CTC loss, which maps the frame-level video feature sequence to a target text sequence. CTC introduces a set of intermediate representations, termed CTC paths, for one target text sequence $y$. Each CTC path $\pi$ is composed of scattered target text tokens and blanks, and can be reduced to the target text sequence by removing the repeated tokens and blanks. The likelihood of $y$ can then be calculated as the sum of the probabilities of all CTC paths corresponding to it:

$$p(y \mid x) = \sum_{\pi \in \Phi(y)} p(\pi \mid x),$$

where $\Phi(y)$ denotes the set of CTC paths that reduce to $y$. Thus, the CTC loss can be formulated as:

$$\mathcal{L}_{CTC} = -\sum_{(x, y) \in \mathcal{D}} \log p(y \mid x),$$

where $\mathcal{D}$ denotes the set of source video and target text sequence pairs in one batch.
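The many-to-one path reduction and the summed likelihood can be illustrated on a tiny example (brute-force enumeration over all frame-level paths; practical CTC implementations use the forward-backward dynamic program instead):

```python
from itertools import product

BLANK = "-"

def collapse(path):
    """Reduce a CTC path: merge repeated tokens, then drop blanks."""
    out, prev = [], None
    for tok in path:
        if tok != prev and tok != BLANK:
            out.append(tok)
        prev = tok
    return "".join(out)

def ctc_likelihood(probs, target):
    """Sum the probabilities of all frame-level paths that collapse
    to `target` (exponential brute force, fine for tiny examples)."""
    symbols = list(probs[0].keys())
    total = 0.0
    for path in product(symbols, repeat=len(probs)):
        if collapse(path) == target:
            p = 1.0
            for t, tok in enumerate(path):
                p *= probs[t][tok]
            total += p
    return total

# 3 frames, identical per-frame distributions over {a, b, blank}:
probs = [{"a": 0.6, "b": 0.3, BLANK: 0.1}] * 3
print(collapse(["a", "a", BLANK, "a", "b"]))      # "aab"
print(round(ctc_likelihood(probs, "ab"), 4))      # 0.216
```

Here "ab" is reachable via five paths (aab, abb, ab-, a-b, -ab), and the CTC likelihood is their probability sum, exactly the $\sum_{\pi \in \Phi(y)}$ above.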
We optimize the auxiliary autoregressive task with a cross-entropy loss, which can be formulated as:

$$\mathcal{L}_{AR} = -\sum_{(x, y) \in \mathcal{D}} \sum_{t} \log p(y_t \mid y_{<t}, x),$$
And most importantly, we optimize the main FastLR task with a cross-entropy loss, which can be formulated as:

$$\mathcal{L}_{\mathcal{NA}} = -\sum_{(x, y) \in \mathcal{D}} \sum_{t} \log p(y_t \mid \boldsymbol{\alpha}, x),$$

where $\boldsymbol{\alpha}$ is the weight sequence produced by the I&F module.
4. Experiments and Results
4.1. Datasets

The GRID dataset (Cooke et al., 2006) consists of 34 subjects, each of whom utters 1,000 phrases. It is a clean dataset and easy to learn. We adopt the same split as Assael et al. (2016), where 255 random sentences from each speaker are selected for evaluation. In order to better recognize lip movements, we convert the images to gray scale and crop the video frames to a fixed-size region containing the mouth, using the Dlib face detector. Since the vocabulary of the GRID dataset is quite small and most words are simple, we do not apply Byte-Pair Encoding (Sennrich et al., 2015) on GRID, and simply encode the target sequence at the character level.
The LRS2 dataset contains sentences of up to 100 characters from BBC videos (Afouras et al., 2018a), with viewpoints ranging from frontal to profile. We adopt the original LRS2 split into train/dev/test sets, which contain 46k, 1,082 and 1,243 sentences respectively, and we use the pre-training set provided by LRS2, which contains 96k sentences, for pretraining. Following previous works (Afouras et al., 2018a; Afouras et al., 2018b; Zhao et al., 2019), the input video frames are converted to grey scale and centrally cropped. As for the text sentences, we split each word token into sub-words using BPE (Sennrich et al., 2015), and set the vocabulary size to 1k, considering the vocabulary size of LRS2.
The statistics of both datasets are listed in Table 1.
4.2. Visual feature extraction
For the GRID dataset, we use a spatio-temporal CNN to extract visual features following Torfi et al. (2017); the hidden sizes of the fully connected layer and the output dense layer are both 256. We train this visual front-end together with our main model end-to-end on GRID, based on the implementation by Torfi et al. (2017) (https://github.com/astorfi/lip-reading-deeplearning).
For the LRS2 dataset, we adopt the same structure as Afouras et al. (2018a), which applies a 3D convolution on the input frame sequence with a filter width of 5 frames, followed by a 2D ResNet that decreases the spatial dimensions progressively with depth. The network converts a frame sequence of shape $T \times H \times W$ into a feature sequence, where $T$, $H$ and $W$ are the frame number, frame height and frame width respectively. It is worth noting that training the visual front-end together with the main model yields poor results on LRS2, as observed in previous work (Afouras et al., 2018b). Thus, following Zhao et al. (2019), we utilize the frozen visual front-end provided by Afouras et al. (2018b), which is pre-trained on the non-public MV-LRS dataset (Chung and Zisserman, 2017), to extract the visual features, and then train FastLR on these features end-to-end. The pre-trained model can be found at http://www.robots.ox.ac.uk/~vgg/research/deep_lip_reading/models/lrs2_lip_model.zip.
4.3. Model Configuration
We adopt the Transformer (Vaswani et al., 2017) as the basic model structure for FastLR because it is parallelizable and achieves state-of-the-art accuracy in lipreading (Afouras et al., 2018b). The model hidden size, number of encoder layers, number of decoder layers and number of attention heads are configured separately for the LRS2 and GRID datasets. We replace the fully-connected feed-forward network in the original Transformer with a 2-layer 1D convolutional network with ReLU activation, which is commonly used in speech tasks and is the same as in TM-seq2seq (Afouras et al., 2018b). The kernel size of the 1D convolutions is set to 9. The CTC decoder consists of two fully-connected layers with ReLU activation and one fully-connected layer without activation; the hidden sizes of these layers are equal to the model hidden size. The auxiliary decoder is an ordinary Transformer decoder with the same configuration as FastLR, which takes in the target text sequence shifted right by one step for teacher forcing.
4.4. Training setup
As mentioned in Section 3.1, to boost the feature representation ability of the encoder, we add an auxiliary connectionist temporal classification (CTC) decoder and an autoregressive decoder to FastLR and optimize them together. We set the corresponding loss weights to 0.5 during the warm-up training stage and keep them fixed during the main training stage for simplicity. The training steps of each stage are detailed in Table 2. Note that the experiment on the GRID dataset needs more training steps, since its visual front-end is trained from scratch together with the main model, unlike the experiments on LRS2. Moreover, the first 45k steps of the warm-up stage for LRS2 are trained on the LRS2-pretrain sub-dataset, and all remaining steps on the LRS2-main sub-dataset (Afouras et al., 2018a; Afouras et al., 2018b; Zhao et al., 2019).
We train our model FastLR using Adam following the optimizer settings and learning rate schedule in Transformer (Vaswani et al., 2017). The training procedure runs on 2 NVIDIA 1080Ti GPUs. Our code is based on tensor2tensor (Vaswani et al., 2018).
4.5. Inference and Evaluation
During the inference stage, the auxiliary CTC decoder and the autoregressive decoder are discarded. Given a beam size $B$ (9 in our experiments), FastLR generates $B$ candidate weight sequences corresponding to $B$ text sequences, and these text sequences are sent to the decoder of a well-trained autoregressive lipreading model (TM-seq2seq) for selection, as described in Section 3.4. The result with the selected best text sequence is marked "NPD9". We conduct experiments both with "NPD9" and "without NPD". To be specific, "without NPD" means directly using the candidate with zero length bias, without a selection process, which has lower latency.
The recognition quality is evaluated by Word Error Rate (WER) and Character Error Rate (CER). Both error rates are defined as

$$\mathrm{ErrorRate} = \frac{S + D + I}{N},$$

where $S$, $D$, $I$ and $N$ are the numbers of substitutions, deletions, insertions and reference tokens (words or characters) respectively.
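This error rate is the token-level Levenshtein distance normalized by the reference length; a minimal sketch (the example sentence is illustrative, in the style of GRID commands):

```python
def error_rate(ref, hyp):
    """(S + D + I) / N via the Levenshtein distance between token lists."""
    n, m = len(ref), len(hyp)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i              # all deletions
    for j in range(m + 1):
        d[0][j] = j              # all insertions
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[n][m] / n

ref = "place blue at a two now".split()
hyp = "place blue by a two".split()
print(round(error_rate(ref, hyp), 3))  # 0.333: one substitution + one deletion over 6 words
```

Running the same function over character lists instead of word lists yields the CER.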
When evaluating the latency, we run FastLR on 1 NVIDIA 1080Ti GPU in inference.
Table 3: WER and CER of AR baselines on the GRID dataset.

| Method | WER | CER |
|---|---|---|
| LSTM (Wand et al., 2016) | 20.4% | / |
| LipNet (Assael et al., 2016) | 4.8% | 1.9% |
| WAS (Chung et al., 2017) | 3.0% | / |

Table 4: WER and CER of AR baselines on the LRS2 dataset.

| Method | WER | CER |
|---|---|---|
| WAS (Chung et al., 2017) | 70.4% | / |
| BLSTM+CTC (Afouras et al., 2018a) | 76.5% | 40.6% |
| FC-15 (Afouras et al., 2018a) | 64.8% | 33.9% |
| LIBS (Zhao et al., 2019) | 65.3% | 45.5% |
| TM-seq2seq (Afouras et al., 2018b) | 61.7% | 43.5% |
4.6. Main Results
We conduct experiments with FastLR and compare them against an autoregressive lipreading baseline and several mainstream state-of-the-art AR lipreading models on the GRID and LRS2 datasets respectively. TM-seq2seq (Afouras et al., 2018b) has the same Transformer settings as FastLR and serves as the AR teacher for NPD selection; we also apply the CTC loss and BPE technique to TM-seq2seq for a fair comparison. (Our reproduction performs weaker than the results reported in Afouras et al. (2018a; 2018b), because we do not have access to MV-LRS, a non-public dataset containing individual word excerpts of frequent words used in those works; consequently, we do not adopt the curriculum learning strategy of Afouras et al. (2018a).)
The results on the two datasets are listed in Tables 3 and 4. We can see that: 1) WAS (Chung et al., 2017) and TM-seq2seq (Afouras et al., 2018a; Afouras et al., 2018b) obtain the best autoregressive results on GRID and LRS2. Compared with them, FastLR only has slight absolute WER increases of 1.5% and 5.5% respectively. 2) Moreover, on the GRID dataset, FastLR outperforms LipNet (Assael et al., 2016) by 0.3% WER and exceeds LSTM (Wand et al., 2016) by a notable margin; on the LRS2 dataset, FastLR achieves better WER scores than WAS and BLSTM+CTC (Afouras et al., 2018a) and remains comparable with LIBS (Zhao et al., 2019) and FC-15 (Afouras et al., 2018a). In addition, unlike LIBS, we do not introduce any distillation method in the training stage, and unlike WAS and TM-seq2seq, we do not leverage information from datasets beyond GRID and LRS2.
We also propose a baseline non-autoregressive lipreading model without the Integrate-and-Fire module, termed NAR-LR (base), and conduct experiments for comparison. As the results show, FastLR distinctly outperforms this NAR baseline. An overview of the NAR-LR (base) design is shown in Figure 3.
4.7. Inference Latency

In this section, we compare the average inference latency of FastLR with that of the autoregressive Transformer lipreading model, and then analyze the relationship between the speedup and the length of the predicted sequence.
4.7.1. Average Latency Comparison
The average latency is measured as the average time in seconds required to decode one sentence on the LRS2 test set. We record the inference latency and the corresponding recognition accuracy of TM-seq2seq (Afouras et al., 2018a; Afouras et al., 2018b), FastLR without NPD and FastLR with NPD9, listed in Table 5.
The results show that FastLR speeds up inference by 11.94× without NPD, and by 5.81× with NPD9, on average, compared with TM-seq2seq, which has a similar number of model parameters. Note that the latency excludes the computation cost of data pre-processing and the visual front-end.
Table 5: WER, average inference latency and speedup on the LRS2 test set.

| Method | WER | Latency (s) | Speedup |
|---|---|---|---|
| TM-seq2seq (Afouras et al., 2018b) | 61.7% | 0.215 | 1.00× |
| FastLR (no NPD) | 73.2% | 0.018 | 11.94× |
| FastLR (NPD9) | 67.2% | 0.037 | 5.81× |
4.7.2. Relationship between Speedup and Length
During inference, the autoregressive model generates the target tokens one by one, while the non-autoregressive model speeds up inference by parallelizing the generation process. Thus, the longer the target sequence, the greater the speedup. We visualize the relationship between the length of the predicted sub-word sequence and the inference latency in Figure 4. It can be seen that the inference latency increases distinctly with the predicted text length for TM-seq2seq, while it remains nearly constant and small for FastLR.
Then, we bucket the test sequences by length and calculate the average inference latency of the longest bucket for TM-seq2seq and FastLR on the LRS2 test set, to obtain the maximum speedup. The results are 0.494s for TM-seq2seq and 0.045s for FastLR (NPD9), which shows that FastLR (NPD9) achieves a speedup of up to 10.97×, thanks to the parallel generation being insensitive to sequence length.
5. Ablation Studies and Analysis

In this section, we first conduct ablation experiments on LRS2 to verify the significance of all proposed methods in FastLR; the experiments are listed in Table 6. Then we visualize the encoder-decoder attention map of the well-trained AR model (TM-seq2seq) and the acoustic boundaries detected by the I&F module in FastLR, to check whether the I&F module works well.
Table 6 (excerpt):

| Model | WER | CER |
|---|---|---|
| Naive model with I&F | >100% | 75.2% |
5.1. The Effectiveness of Auxiliary AR Task
As shown in Table 6, the naive lipreading model with Integrate-and-Fire is unable to converge well, owing to the difficulty of learning the weight sequence in the I&F module from meaningless encoder hidden states. Thus, the autoregressive lipreading model works as an auxiliary model to enhance the feature representation ability of the encoder, and guides the non-autoregressive model with Integrate-and-Fire to learn the right alignments (weight sequence). With this, the model with I&F begins to generate meaningful target sequences (Row 3).
5.2. The Effectiveness of Byte-Pair Encoding
BPE makes each token contain more language information and reduces the dependency among tokens compared with character-level encoding. In addition, we observe that the speech in BBC videos is rather fast, so that one target token (a character, without BPE) corresponds to only a few video frames. BPE compresses the target sequence, which helps the Integrate-and-Fire module find the acoustic-level alignments more easily.
From Table 6 (Row 4), it can be seen that BPE reduces the word error rate and character error rate to 75.7% and 52.7% respectively, which means BPE helps the model gain the ability to generate understandable sentences.
5.3. The Effectiveness of CTC
The results show (Row 5) that adding the auxiliary connectionist temporal classification (CTC) decoder with the CTC loss further boosts the feature representation ability of the encoder, causing a 2.5% absolute decrease in WER. At this point, the model attains considerable recognition accuracy compared with the traditional autoregressive method.
5.4. The Effectiveness of NPD for I&F
Table 6 (Row 6) shows that using NPD for I&F boosts the performance effectively. We also study the effect of increasing the number of candidates for FastLR on the LRS2 dataset, as shown in Figure 5: the accuracy peaks when the number of candidates is set to 9. Finally, FastLR achieves considerable accuracy compared with the state-of-the-art autoregressive lipreading model.
5.5. The Visualization of Boundary Detection
We visualize the encoder-decoder attention map in Figure 6, which is obtained from the well-trained AR TM-seq2seq. The attention map illustrates the alignment between source video frames and the corresponding target sub-word sequence.
The figure shows that the video frames between two horizontal red lines are roughly just what the corresponding target token attends to. This means the "accumulate and detect" part of the I&F module detects the acoustic boundaries well and makes a correct prediction of the sequence length.
6. Conclusion

In this work, we developed FastLR, a non-autoregressive lipreading system with an Integrate-and-Fire module, which recognizes a source silent video and generates all target text tokens in parallel. FastLR consists of a visual front-end, a visual feature encoder and a text decoder for simultaneous generation. To bridge the accuracy gap between FastLR and state-of-the-art autoregressive lipreading models, we introduce the I&F module to encode the continuous visual features into discrete token embeddings by locating the acoustic boundaries. In addition, we propose several methods, including the auxiliary AR task and the CTC loss, to boost the feature representation ability of the encoder. Finally, we design NPD for I&F and bring Byte-Pair Encoding into lipreading; both methods alleviate the problems caused by the removal of the AR language model. Experiments on the GRID and LRS2 lipreading datasets show that FastLR outperforms the NAR-LR baseline and has only a slight WER increase compared with state-of-the-art AR models, which demonstrates the effectiveness of our method for NAR lipreading.
In the future, we will continue to work on better approximations to the true target distribution for the NAR lipreading task, and design more flexible policies to bridge the gap between AR and NAR models while keeping the fast speed of NAR generation.
Acknowledgments

This work was supported in part by the National Key R&D Program of China (Grant No. 2018AAA0100603), Zhejiang Natural Science Foundation (LR19F020006), National Natural Science Foundation of China (Grant No. 61836002, No. U1611461 and No. 61751209) and the Fundamental Research Funds for the Central Universities (2020QNA5024). This work was also partially supported by the Language and Speech Innovation Lab of HUAWEI Cloud.
References

- Afouras et al. (2018b) Triantafyllos Afouras, Joon Son Chung, Andrew Senior, Oriol Vinyals, and Andrew Zisserman. 2018b. Deep audio-visual speech recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence (2018).
- Afouras et al. (2018a) Triantafyllos Afouras, Joon Son Chung, and Andrew Zisserman. 2018a. Deep lip reading: a comparison of models and an online application. arXiv preprint arXiv:1806.06053 (2018).
- Assael et al. (2016) Yannis M Assael, Brendan Shillingford, Shimon Whiteson, and Nando De Freitas. 2016. Lipnet: End-to-end sentence-level lipreading. arXiv preprint arXiv:1611.01599 (2016).
- Burkitt (2006) Anthony N Burkitt. 2006. A review of the integrate-and-fire neuron model: I. Homogeneous synaptic input. Biological cybernetics 95, 1 (2006), 1–19.
- Chen et al. (2019) Nanxin Chen, Shinji Watanabe, Jesús Villalba, and Najim Dehak. 2019. Non-Autoregressive Transformer Automatic Speech Recognition. arXiv preprint arXiv:1911.04908 (2019).
- Chung et al. (2014) Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555 (2014).
- Chung et al. (2017) Joon Son Chung, Andrew Senior, Oriol Vinyals, and Andrew Zisserman. 2017. Lip reading sentences in the wild. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 3444–3453.
- Chung and Zisserman (2017) Joon Son Chung and Andrew Zisserman. 2017. Lip reading in profile. (2017).
- Cooke et al. (2006) Martin Cooke, Jon Barker, Stuart Cunningham, and Xu Shao. 2006. An audio-visual corpus for speech perception and automatic speech recognition. The Journal of the Acoustical Society of America 120, 5 (2006), 2421–2424.
- Dong and Xu (2019) Linhao Dong and Bo Xu. 2019. CIF: Continuous Integrate-and-Fire for End-to-End Speech Recognition. arXiv preprint arXiv:1905.11235 (2019).
- Ghazvininejad et al. (2019) Marjan Ghazvininejad, Omer Levy, Yinhan Liu, and Luke Zettlemoyer. 2019. Mask-predict: Parallel decoding of conditional masked language models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). 6114–6123.
- Graves et al. (2006) Alex Graves, Santiago Fernández, Faustino Gomez, and Jürgen Schmidhuber. 2006. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In Proceedings of the 23rd international conference on Machine learning. 369–376.
- Gu et al. (2017) Jiatao Gu, James Bradbury, Caiming Xiong, Victor OK Li, and Richard Socher. 2017. Non-autoregressive neural machine translation. arXiv preprint arXiv:1711.02281 (2017).
- Guo et al. (2019) Junliang Guo, Xu Tan, Di He, Tao Qin, Linli Xu, and Tie-Yan Liu. 2019. Non-autoregressive neural machine translation with enhanced decoder input. In AAAI, Vol. 33. 3723–3730.
- Lee et al. (2018) Jason Lee, Elman Mansimov, and Kyunghyun Cho. 2018. Deterministic Non-Autoregressive Neural Sequence Modeling by Iterative Refinement. In EMNLP. 1173–1182.
- Liu et al. (2020) Jinglin Liu, Yi Ren, Xu Tan, Chen Zhang, Tao Qin, Zhou Zhao, and Tie-Yan Liu. 2020. Task-Level Curriculum Learning for Non-Autoregressive Neural Machine Translation. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20. 3861–3867.
- Ma et al. (2019) Xuezhe Ma, Chunting Zhou, Xian Li, Graham Neubig, and Eduard Hovy. 2019. FlowSeq: Non-Autoregressive Conditional Sequence Generation with Generative Flow. In EMNLP-IJCNLP. 4273–4283.
- Oord et al. (2017) Aaron van den Oord, Yazhe Li, Igor Babuschkin, Karen Simonyan, Oriol Vinyals, Koray Kavukcuoglu, George van den Driessche, Edward Lockhart, Luis C Cobo, Florian Stimberg, et al. 2017. Parallel wavenet: Fast high-fidelity speech synthesis. arXiv preprint arXiv:1711.10433 (2017).
- Petridis et al. (2018) Stavros Petridis, Themos Stafylakis, Pingchuan Ma, Georgios Tzimiropoulos, and Maja Pantic. 2018. Audio-visual speech recognition with a hybrid ctc/attention architecture. In 2018 IEEE Spoken Language Technology Workshop (SLT). IEEE, 513–520.
- Ren et al. (2020a) Yi Ren, Chenxu Hu, Tao Qin, Sheng Zhao, Zhou Zhao, and Tie-Yan Liu. 2020a. FastSpeech 2: Fast and High-Quality End-to-End Text-to-Speech. arXiv preprint arXiv:2006.04558 (2020).
- Ren et al. (2020b) Yi Ren, Jinglin Liu, Xu Tan, Sheng Zhao, Zhou Zhao, and Tie-Yan Liu. 2020b. A Study of Non-autoregressive Model for Sequence Generation. arXiv preprint arXiv:2004.10454 (2020).
- Ren et al. (2019) Yi Ren, Yangjun Ruan, Xu Tan, Tao Qin, Sheng Zhao, Zhou Zhao, and Tie-Yan Liu. 2019. Fastspeech: Fast, robust and controllable text to speech. In Advances in Neural Information Processing Systems. 3165–3174.
- Sennrich et al. (2015) Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909 (2015).
- Shillingford et al. (2018) Brendan Shillingford, Yannis Assael, Matthew W Hoffman, Thomas Paine, Cían Hughes, Utsav Prabhu, Hank Liao, Hasim Sak, Kanishka Rao, Lorrayne Bennett, et al. 2018. Large-scale visual speech recognition. arXiv preprint arXiv:1807.05162 (2018).
- Stafylakis and Tzimiropoulos (2017) Themos Stafylakis and Georgios Tzimiropoulos. 2017. Combining residual networks with LSTMs for lipreading. arXiv preprint arXiv:1703.04105 (2017).
- Sutskever et al. (2014) Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems. 3104–3112.
- Torfi et al. (2017) Amirsina Torfi, Seyed Mehdi Iranmanesh, Nasser Nasrabadi, and Jeremy Dawson. 2017. 3d convolutional neural networks for cross audio-visual matching recognition. IEEE Access 5 (2017), 22081–22091.
- Vaswani et al. (2018) Ashish Vaswani, Samy Bengio, Eugene Brevdo, Francois Chollet, Aidan N. Gomez, Stephan Gouws, Llion Jones, Łukasz Kaiser, Nal Kalchbrenner, Niki Parmar, Ryan Sepassi, Noam Shazeer, and Jakob Uszkoreit. 2018. Tensor2Tensor for Neural Machine Translation. CoRR abs/1803.07416 (2018). http://arxiv.org/abs/1803.07416
- Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems. 5998–6008.
- Wand et al. (2016) Michael Wand, Jan Koutník, and Jürgen Schmidhuber. 2016. Lipreading with long short-term memory. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 6115–6119.
- Wang et al. (2019) Yiren Wang, Fei Tian, Di He, Tao Qin, ChengXiang Zhai, and Tie-Yan Liu. 2019. Non-Autoregressive Machine Translation with Auxiliary Regularization. In AAAI.
- Yang et al. (2019) Bang Yang, Fenglin Liu, and Yuexian Zou. 2019. Non-Autoregressive Video Captioning with Iterative Refinement. arXiv preprint arXiv:1911.12018 (2019).
- Zhao et al. (2019) Ya Zhao, Rui Xu, Xinchao Wang, Peng Hou, Haihong Tang, and Mingli Song. 2019. Hearing Lips: Improving Lip Reading by Distilling Speech Recognizers. arXiv preprint arXiv:1911.11502 (2019).