A Cascade Sequence-to-Sequence Model for Chinese Mandarin Lip Reading

08/14/2019 · by Ya Zhao, et al.

Lip reading aims at decoding text from the movement of a speaker's mouth. In recent years, lip reading methods have made great progress for English, at both the word and sentence level. Unlike English, however, Chinese Mandarin is a tonal language that relies on pitch to distinguish lexical or grammatical meaning, which significantly increases the ambiguity of the lip reading task. In this paper, we propose a Cascade Sequence-to-Sequence Model for Chinese Mandarin (CSSMCM) lip reading, which explicitly models tones when predicting sentences. Tones are modeled from visual information and syntactic structure, and are then used, together with visual information and syntactic structure, to predict the sentence. To evaluate CSSMCM, a dataset called CMLR (Chinese Mandarin Lip Reading) is collected and released, consisting of over 100,000 natural sentences from the China Network Television website. When trained on the CMLR dataset, the proposed CSSMCM surpasses state-of-the-art lip reading frameworks, which confirms the effectiveness of explicitly modeling tones for Chinese Mandarin lip reading.


1 Introduction

Lip reading, also known as visual speech recognition, aims to predict the sentence being spoken, given a silent video of a talking face. In noisy environments, where speech recognition is difficult, visual speech recognition offers an alternative way to understand speech. Lip reading also has practical potential for improved hearing aids, security, and silent dictation in public spaces. It is inherently a difficult problem, as most of the articulatory actions, apart from the lips and sometimes the tongue and teeth, are latent and ambiguous: several seemingly identical lip movements can produce different words.

Thanks to the recent development of deep learning, English lip reading methods have made great progress at both the word level [petridis2016deep, chung2016lip] and the sentence level [assael2016lipnet, chung2017lipWild]. However, although Chinese Mandarin has more speakers than any other language, there is little work on Chinese Mandarin lip reading in the multimedia community. Yang et al. [yang2018lrw] present LRW-1000, a naturally-distributed large-scale benchmark for Chinese Mandarin lip reading in the wild, which contains 1,000 classes with 718,018 samples from more than 2,000 individual speakers. Each class corresponds to the syllables of a Mandarin word composed of one or several Chinese characters. However, they perform only word classification, not recognition at the complete sentence level. LipCH-Net [zhang2019understanding] is the first work aiming at sentence-level Chinese Mandarin lip reading. It is a two-step, end-to-end architecture in which two deep neural network models perform Picture-to-Pinyin recognition (mouth motion pictures to pronunciations) and Pinyin-to-Hanzi recognition (pronunciations to texts) respectively, followed by a joint optimization to improve overall performance.

Belonging to two different language families, English and Chinese Mandarin differ in many ways. The most significant difference may be that Chinese Mandarin is a tonal language while English is not. Tone is the use of pitch in language to distinguish lexical or grammatical meaning, that is, to distinguish or to inflect words (https://en.wikipedia.org/wiki/Tone_(linguistics)). Even if two words look identical on the face when pronounced, they can carry different tones and thus have different meanings. For example, "练习" (which means practice) and "联系" (which means contact) have different meanings but the same mouth movement. This increases the ambiguity of lip reading, so tone is an important factor for Chinese Mandarin lip reading.

Based on the above considerations, in this paper we present CSSMCM, a sentence-level Chinese Mandarin lip reading network that contains three sub-networks. As in [zhang2019understanding], the first sub-network predicts the pinyin sequence from the video. Unlike [zhang2019understanding], which predicts individual pinyin characters from video, CSSMCM treats each pinyin as a whole, i.e., as a syllable. Mandarin Chinese is a syllable-based language, and syllables are its logical unit of pronunciation. Compared with pinyin characters, syllables are a longer linguistic unit, which reduces the difficulty of decoder choices in sequence-to-sequence attention-based models [zhou2018syllable]. Chen et al. [chen2008seeing] find that there might be a relationship between the production of lexical tones and the visible movements of the neck, head, and mouth. Motivated by this observation, the second sub-network uses both the video and the pinyin sequence as input to predict tones. The third sub-network then uses the video, pinyin, and tone sequences together to predict the Chinese character sequence. Finally, the three sub-networks are jointly fine-tuned to improve overall performance.

As there is no public sentence-level Chinese Mandarin lip reading dataset, we collect a new Chinese Mandarin Lip Reading dataset, called CMLR, from China Network Television broadcasts containing talking faces together with subtitles of what is said.

In summary, our major contributions are as follows.

  • We argue that tone is an important factor for Chinese Mandarin lip reading, as it increases ambiguity compared with English lip reading. Based on this, we propose a three-stage cascade network, CSSMCM. Tone is inferred from visual information and syntactic structure, and is then used, together with visual information and syntactic structure, to predict the sentence.

  • We collect a Chinese Mandarin Lip Reading (CMLR) dataset, consisting of over 100,000 natural sentences from the national news program "News Broadcast". The dataset will be released as a resource for training and evaluation.

  • Detailed experiments on the CMLR dataset show that explicitly modeling tone when predicting Chinese sentences yields a lower character error rate.

Symbol Definition
$\mathrm{GRU}^v$ GRU unit in video encoder
$\mathrm{GRU}^p$ GRU unit in pinyin encoder and pinyin decoder
$\mathrm{GRU}^t$ GRU unit in tone encoder and tone decoder
$\mathrm{GRU}^c$ GRU unit in character decoder
$att^v_p$ attention between pinyin decoder and video encoder. The superscript indicates the encoder and the subscript indicates the decoder.
$x$, $y$, $p$, $t$ video, character, pinyin, and tone sequence
$i$ timestep
$o^v$, $o^p$, $o^t$ video encoder output, pinyin encoder output, tone encoder output
$c^v$, $c^p$, $c^t$ video content, pinyin content, tone content
Table 1: Symbol Definition

2 The Proposed Method

In this section, we present CSSMCM, a lip reading model for Chinese Mandarin. As mentioned in Section 1, pinyin and tone are both important for Chinese Mandarin lip reading. Pinyin represents how a Chinese character is pronounced and is related to mouth movement. Tone can alleviate the ambiguity of visemes (several speech sounds that look the same) to some extent and can be inferred from visible movements. Based on this, the lip reading task is defined as follows:

$P(y \mid x) = P(p \mid x)\, P(t \mid x, p)\, P(y \mid x, p, t)$   (1)

The meaning of these symbols is given in Table 1.

As shown in Equation (1), the whole problem is divided into three parts, which correspond to pinyin prediction ($P(p \mid x)$), tone prediction ($P(t \mid x, p)$), and character prediction ($P(y \mid x, p, t)$) respectively. Each part is described in detail below.

2.1 Pinyin Prediction Sub-network

Figure 1: The tone prediction sub-network.

The pinyin prediction sub-network transforms the video sequence into a pinyin sequence, corresponding to $P(p \mid x)$ in Equation (1). This sub-network is based on the sequence-to-sequence architecture with attention mechanism [bahdanau2015neural]. We name the encoder the video encoder and the decoder the pinyin decoder, since the encoder processes the video sequence and the decoder predicts the pinyin sequence. The input video sequence is first fed into a VGG model [chatfield2014return] to extract visual features. The output of conv5 of VGG is passed through global average pooling [lin2014network] to obtain a fixed-length feature vector, which is fed into the video encoder. The video encoder can be denoted as:

$o^v_i, h^v_i = \mathrm{GRU}^v(x_i, h^v_{i-1})$   (2)

where $x_i$ is the pooled visual feature at timestep $i$.

When predicting the pinyin sequence, at each timestep $i$ the video encoder outputs are attended over to compute a video context vector $c^v_i$, which updates the pinyin decoder state $s^p_i$ and yields the distribution over pinyin:

$c^v_i = \sum_j att^v_p(s^p_{i-1}, o^v_j)\, o^v_j$   (3)
$s^p_i = \mathrm{GRU}^p([p_{i-1}; c^v_i], s^p_{i-1})$   (4)
$P(p_i \mid p_{<i}, x) = \mathrm{softmax}(W^p[s^p_i; c^v_i] + b^p)$   (5)
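To make this concrete, below is a minimal PyTorch sketch of the pinyin prediction sub-network, assuming pre-extracted, global-average-pooled VGG features. The class names, dimensions, and the additive attention form are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VideoEncoder(nn.Module):
    """Two-layer bidirectional GRU over pooled VGG features (cf. Eq. 2)."""
    def __init__(self, feat_dim=512, hidden=256):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, num_layers=2,
                          bidirectional=True, batch_first=True)

    def forward(self, feats):                    # feats: (B, T, feat_dim)
        out, _ = self.gru(feats)                 # out:   (B, T, 2*hidden)
        return out

class PinyinDecoder(nn.Module):
    """Single decoding step with additive attention (cf. Eqs. 3-5)."""
    def __init__(self, vocab=371, enc_dim=512, hidden=512):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.attn_w = nn.Linear(enc_dim + hidden, hidden)
        self.attn_v = nn.Linear(hidden, 1, bias=False)
        self.gru = nn.GRU(hidden + enc_dim, hidden,
                          num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden + enc_dim, vocab)

    def step(self, prev_tok, state, enc_out):
        # prev_tok: (B,), state: (2, B, hidden), enc_out: (B, T, enc_dim)
        B, T, _ = enc_out.shape
        query = state[-1].unsqueeze(1).expand(B, T, -1)
        energy = self.attn_v(torch.tanh(
            self.attn_w(torch.cat([enc_out, query], dim=-1))))
        alpha = F.softmax(energy, dim=1)         # attention weights (Eq. 3)
        ctx = (alpha * enc_out).sum(dim=1)       # video context c^v_i
        x = torch.cat([self.embed(prev_tok), ctx], dim=-1).unsqueeze(1)
        y, state = self.gru(x, state)            # decoder update (Eq. 4)
        logits = self.out(torch.cat([y.squeeze(1), ctx], dim=-1))  # Eq. 5
        return logits, state
```

At inference, the decoder would be unrolled from the [sos] token with `state` initialized to zeros, feeding back the argmax token at each step (greedy decoding, as used in Section 2.5).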

2.2 Tone Prediction Sub-network

As shown in Equation (1), the tone prediction sub-network ($P(t \mid x, p)$) takes the video and pinyin sequences as inputs and predicts the corresponding tone sequence. This problem is also modeled as a sequence-to-sequence learning problem. The corresponding model architecture is shown in Figure 1.

To take both video and pinyin information into consideration when producing tones, a dual attention mechanism [chung2017lipWild] is employed: two independent attention mechanisms are used for the video and pinyin sequences, and the video and pinyin context vectors are fused when predicting a tone at each decoder step.

The video encoder is the same as in Section 2.1 and the pinyin encoder is:

$o^p_j, h^p_j = \mathrm{GRU}^p(p_j, h^p_{j-1})$   (6)

The tone decoder attends to both the video encoder outputs and the pinyin encoder outputs to calculate context vectors, and then predicts tones:

$c^v_i = \sum_j att^v_t(s^t_{i-1}, o^v_j)\, o^v_j$   (7)
$c^p_i = \sum_j att^p_t(s^t_{i-1}, o^p_j)\, o^p_j$   (8)
$s^t_i = \mathrm{GRU}^t([t_{i-1}; c^v_i; c^p_i], s^t_{i-1})$   (9)
$P(t_i \mid t_{<i}, x, p) = \mathrm{softmax}(W^t[s^t_i; c^v_i; c^p_i] + b^t)$   (10)
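A hedged sketch of this dual attention fusion follows: one independent attention per source stream, with the resulting contexts concatenated before the tone decoder GRU (cf. Eqs. 7-10). The module name and dimensions are assumptions; the module is written for any number of streams so it can be reused for the triplet attention in Section 2.3.

```python
import torch
import torch.nn as nn

class MultiSourceAttention(nn.Module):
    """One additive attention per source; contexts are concatenated."""
    def __init__(self, src_dims, hidden=512):
        super().__init__()
        self.ws = nn.ModuleList(
            [nn.Linear(d + hidden, hidden) for d in src_dims])
        self.vs = nn.ModuleList(
            [nn.Linear(hidden, 1, bias=False) for _ in src_dims])

    def forward(self, query, sources):
        # query: (B, hidden); sources: list of (B, T_k, d_k) encoder outputs
        ctxs = []
        for src, w, v in zip(sources, self.ws, self.vs):
            q = query.unsqueeze(1).expand(-1, src.size(1), -1)
            e = v(torch.tanh(w(torch.cat([src, q], dim=-1))))  # (B, T_k, 1)
            a = torch.softmax(e, dim=1)
            ctxs.append((a * src).sum(dim=1))                  # (B, d_k)
        return torch.cat(ctxs, dim=-1)  # fused context for the decoder GRU

# Dual attention for the tone decoder: video and pinyin streams.
dual = MultiSourceAttention(src_dims=[512, 512])
```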

2.3 Character Prediction Sub-network

Figure 2: The character prediction sub-network.

The character prediction sub-network corresponds to $P(y \mid x, p, t)$ in Equation (1). It considers the pinyin, tone, and video sequences jointly when predicting Chinese characters. Similarly, we use an attention-based sequence-to-sequence architecture to model this distribution, with the attention mechanism extended to a triplet attention mechanism:

$c^v_i = \sum_j att^v_c(s^c_{i-1}, o^v_j)\, o^v_j$   (11)
$c^p_i = \sum_j att^p_c(s^c_{i-1}, o^p_j)\, o^p_j$   (12)
$c^t_i = \sum_j att^t_c(s^c_{i-1}, o^t_j)\, o^t_j$   (13)
$s^c_i = \mathrm{GRU}^c([y_{i-1}; c^v_i; c^p_i; c^t_i], s^c_{i-1})$   (14)
$P(y_i \mid y_{<i}, x, p, t) = \mathrm{softmax}(W^c[s^c_i; c^v_i; c^p_i; c^t_i] + b^c)$   (15)
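Under the same assumptions as the sketch in Section 2.2, the triplet attention of the character decoder is the same construction with a third (tone) stream:

```python
# Triplet attention (cf. Eqs. 11-15): video, pinyin, and tone streams.
# Encoder output dimensions are illustrative.
triplet = MultiSourceAttention(src_dims=[512, 512, 512])
# fused = triplet(decoder_state, [video_out, pinyin_out, tone_out])
# fused: (B, 1536), concatenated with the previous character embedding
# before the character decoder GRU step.
```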

For later use, the tone encoder is also given here:

$o^t_k, h^t_k = \mathrm{GRU}^t(t_k, h^t_{k-1})$   (16)

2.4 CSSMCM Architecture

The architecture of the proposed approach is shown in Figure 3. For better display, the three attention mechanisms are not drawn in the figure. During the training of CSSMCM, the outputs of the pinyin decoder are fed into the pinyin encoder, and the outputs of the tone decoder into the tone encoder:

$o^p_j, h^p_j = \mathrm{GRU}^p(\hat{p}_j, h^p_{j-1})$   (17)
$o^t_k, h^t_k = \mathrm{GRU}^t(\hat{t}_k, h^t_{k-1})$   (18)

where $\hat{p}_j$ and $\hat{t}_k$ denote the outputs of the pinyin and tone decoders.

We replace Equation (6) with Equation (17), and Equation (16) with Equation (18). Then the three sub-networks are jointly trained, and the overall loss function is defined as follows:

$\mathcal{L} = \mathcal{L}_p + \mathcal{L}_t + \mathcal{L}_c$   (19)

where $\mathcal{L}_p$, $\mathcal{L}_t$, and $\mathcal{L}_c$ stand for the losses of the pinyin prediction sub-network, tone prediction sub-network, and character prediction sub-network respectively, as defined below.

$\mathcal{L}_p = -\sum_i \log P(p_i \mid p_{<i}, x), \quad \mathcal{L}_t = -\sum_i \log P(t_i \mid t_{<i}, x, p), \quad \mathcal{L}_c = -\sum_i \log P(y_i \mid y_{<i}, x, p, t)$   (20)
Figure 3: The overall architecture of the CSSMCM network. The attention modules are omitted for the sake of simplicity.
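A minimal sketch of the joint objective in Equations (19)-(20), assuming each decoder returns per-step logits over its vocabulary and that targets are padded with a [pad] index that is masked out; the function name and padding convention are assumptions.

```python
import torch.nn.functional as F

def joint_loss(pinyin_logits, pinyin_tgt,
               tone_logits, tone_tgt,
               char_logits, char_tgt, pad_idx=0):
    # Each logits tensor: (B, L, V); each target tensor: (B, L).
    def ce(logits, tgt):
        return F.cross_entropy(logits.transpose(1, 2), tgt,
                               ignore_index=pad_idx)
    # L = L_p + L_t + L_c (Eq. 19), each term a sequence
    # cross-entropy (Eq. 20).
    return (ce(pinyin_logits, pinyin_tgt)
            + ce(tone_logits, tone_tgt)
            + ce(char_logits, char_tgt))
```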

2.5 Training Strategy

To accelerate training and reduce overfitting, curriculum learning [chung2017lipWild] is employed. The sentences are grouped into subsets by length: fewer than 11, 12-17, 18-23, and more than 24 Chinese characters. Scheduled sampling [bengio2015scheduled] is used to eliminate the discrepancy between training and inference; during training, the rate of sampling from the previous output increases from 0.7 to 1. A greedy decoder is used for fast decoding. A sketch of this strategy follows.
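The curriculum buckets and the scheduled sampling rate can be sketched as below. The handling of boundary lengths and the linear schedule shape are assumptions, since the text only gives the group ranges and the 0.7 to 1 sampling range.

```python
import random

def length_bucket(sentence):
    # Curriculum subsets by sentence length, following the groups in
    # the text (boundary lengths assigned to the nearest bucket here).
    n = len(sentence)
    if n < 11:
        return 0
    if n <= 17:
        return 1
    if n <= 23:
        return 2
    return 3

def feed_model_output(epoch, max_epoch):
    # Scheduled sampling: probability of feeding the decoder's own
    # previous output (rather than ground truth) grows from 0.7 to 1.0.
    p = min(1.0, 0.7 + 0.3 * epoch / max_epoch)
    return random.random() < p
```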

3 Dataset

In this section, a three-stage pipeline for generating the Chinese Mandarin Lip Reading (CMLR) dataset is described, comprising video pre-processing, text acquisition, and data generation. This pipeline is similar to the method of [chung2017lipWild], but several steps are optimized for the characteristics of Chinese Mandarin to generate a better-quality lip reading dataset. The three stages are detailed below.

Video Pre-processing. First, episodes of the national news program "News Broadcast" recorded between June 2009 and June 2018 are obtained from the China Network Television website. Then HOG-based face detection is performed [king2009dlib], followed by an open-source platform for face recognition and alignment. The video clips of eleven different hosts who broadcast the news are collected. During face detection, frame skipping is used to improve efficiency while maintaining quality.

Text Acquisition. Since the original "News Broadcast" program has no subtitles or text annotations, FFmpeg (https://ffmpeg.org/) is used to extract the corresponding audio track from each video clip. The corresponding text annotations are then obtained through the iFLYTEK ASR (https://www.xfyun.cn/). These annotations contain some noise: English letters, Arabic numerals, and rare punctuation are deleted to obtain a cleaner Chinese Mandarin lip reading dataset.

Data Generation. The text annotations acquired in the previous step also contain timestamp information, so the video clips are segmented accordingly to obtain the video segment corresponding to each word, phrase, or sentence. Since the timestamps may contain small errors, the start and end frames are adjusted when extracting each segment. It is worth noting that, in our experiments, OpenCV (http://docs.opencv.org/2.4.13/modules/refman.html) captured clearer video segments than the FFmpeg tools.
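A hedged sketch of this extraction step with OpenCV, cutting a sentence clip out of a video given an ASR timestamp; the file paths, codec, and the two-frame safety margin are illustrative assumptions.

```python
import cv2

def extract_segment(video_path, start_sec, end_sec, out_path, margin=2):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    # Widen the segment slightly to absorb small timestamp errors.
    start = max(0, int(start_sec * fps) - margin)
    end = int(end_sec * fps) + margin
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"),
                             fps, (w, h))
    cap.set(cv2.CAP_PROP_POS_FRAMES, start)
    for _ in range(end - start):
        ok, frame = cap.read()
        if not ok:
            break
        writer.write(frame)
    cap.release()
    writer.release()
```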

Through the three-stage pipeline described above, we obtain the Chinese Mandarin Lip Reading (CMLR) dataset, containing more than 100,000 sentences, 25,000 phrases, and 3,500 distinct characters. The dataset is randomly divided into training, validation, and test sets in a ratio of 7:1:2. Details are listed in Table 2.

Set # sentences # phrases # characters
Train 71,452 22,959 3,360
Validation 10,206 10,898 2,540
Test 20,418 14,478 2,834
All 102,076 25,633 3,517
Table 2: The CMLR dataset. Division of training, validation and test data; and the number of sentences, phrases and characters of each partition.

4 Experiments

4.1 Implementation Details

The input images are 64×128 pixels. Lip frames are converted to gray-scale, and the VGG network takes every 5 lip frames as one input, moving 2 frames at each timestep. For all sub-networks, a two-layer bidirectional GRU [cho2014learning] with a cell size of 256 is used for the encoder and a two-layer unidirectional GRU with a cell size of 512 for the decoder. For the character and pinyin vocabularies, we keep characters and pinyin that appear more than 20 times; [sos], [eos], and [pad] are also included in all three vocabularies. The final vocabulary size is 371 for the pinyin prediction sub-network, 8 for the tone prediction sub-network (four tones plus a neutral tone), and 1,779 for the character prediction sub-network.

The initial learning rate was 0.0001 and was decreased by 50% whenever the training error did not improve for 4 epochs. CSSMCM is implemented with the PyTorch library and trained on an NVIDIA Quadro P5000 with 16GB memory. The full end-to-end model was trained for around 12 days.

4.2 Compared Methods and Evaluation Protocol

Models sub-network CER PER TER
WAS - 38.93% - -
LipCH-Net-seq V2P - 27.96% -
P2C 9.88% - -
OVERALL 34.07% 39.52% -
CSSMCM-w/o video V2P - 27.96% -
P2T - - 6.99%
PT2C 4.70% - -
OVERALL 42.23% 46.67% 13.14%
CSSMCM V2P - 27.96% -
VP2T - - 6.14%
VPT2C 3.90% - -
OVERALL 32.48% 36.22% 10.95%
Table 3: The detailed comparison between CSSMCM and other methods on the CMLR dataset. V, P, T, C stand for video, pinyin, tone, and character. V2P stands for the transformation from video sequence to pinyin sequence; VP2T indicates that the inputs are the video and pinyin sequences and the output is the tone sequence. OVERALL means combining the sub-networks and performing joint optimization.
Method Chinese Character Sentence Pinyin Sequence Tone Sequence
GT 既让老百姓得实惠 ji rang lao bai xing de shi hui 4 4 3 3 4 2 2 4
WAS 介项老百姓姓事会 jie xiang lao bai xing xing shi hui 4 4 3 3 4 4 4 4
LipCH-Net-seq 既让老百姓的吃贵 ji rang lao bai xing de chi gui 4 4 3 3 4 0 1 4
CSSMCM 既让老百姓得实惠 ji rang lao bai xing de shi hui 4 4 3 3 4 2 2 4
GT 有效应对当前半岛局势 you xiao ying dui dang qian ban dao ju shi 3 4 4 4 1 2 4 3 2 4
WAS 有效应对当天半岛趋势 you xiao ying dui dang tian ban dao qu shi 3 4 4 4 1 1 4 3 1 4
LipCH-Net-seq 有效应对党年半岛局势 you xiao ying dui dang nian ban dao ju shi 3 4 4 4 3 2 4 3 2 4
CSSMCM 有效应对当前半岛局势 you xiao ying dui dang qian ban dao ju shi 3 4 4 4 1 2 4 3 2 4
Table 4: Examples of sentences that CSSMCM correctly predicts while other methods do not. The pinyin and tone sequence corresponding to the Chinese character sentence are also displayed together. GT stands for ground truth.

WAS: The architecture of [chung2017lipWild] without the audio input. The decoder outputs a Chinese character at each timestep. Other settings remain unchanged from the original implementation.

LipCH-Net-seq: For a fair comparison, we replace the Connectionist Temporal Classification (CTC) loss [graves2006connectionist] used in LipCH-Net [zhang2019understanding] with a sequence-to-sequence attention framework when converting pictures to pinyin.

CSSMCM-w/o video: To evaluate the necessity of video information when predicting tones, the video stream is removed when predicting tones and Chinese characters. In other words, video is used only when predicting the pinyin sequence; the tone is predicted from the pinyin sequence alone, and tone and pinyin information then work together to predict Chinese characters.

We also tried to implement the LipNet architecture [assael2016lipnet] to predict a Chinese character at each timestep, but the model did not converge. The likely reasons lie in how CTC loss works and in the difference between English and Chinese Mandarin. Compared to English, which contains only 26 letters, Chinese Mandarin contains thousands of characters. When CTC calculates the loss, it first inserts a blank between every pair of characters in a sentence, which makes the blank label far more frequent than any Chinese character. Thus, when LipNet starts training, it predicts only the blank label; after a certain number of epochs, the character "的" occasionally appears, until the learning rate decays to nearly zero.

GT 向全球价值链中高端迈进
xiang quan qiu jia zhi lian zhong gao duan mai jin
CSSMCM 向全球下试联中高端迈进
xiang quan qiu xia shi lian zhong gao duan mai jin
GT 随着我国医学科技的进步
sui zhe wo guo yi xue ke ji de jin bu
CSSMCM 随着我国一水科技的信步
sui zhe wo guo yi shui ke ji de jin bu
Table 5: Failure cases of CSSMCM.

For all experiments, Character Error Rate (CER) and Pinyin Error Rate (PER) are used as evaluation metrics. CER is defined as $\mathrm{CER} = (S + D + I)/N$, where $S$ is the number of substitutions, $D$ the number of deletions, and $I$ the number of insertions needed to get from the reference to the hypothesis, and $N$ is the number of characters in the reference. PER is calculated in the same way as CER. Tone Error Rate (TER) is also reported when analyzing CSSMCM and is calculated in the same way.
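For reference, a standard dynamic-programming edit distance implements the $(S + D + I)/N$ definition; the same function computes CER, PER, or TER depending on the unit sequence passed in.

```python
def error_rate(ref, hyp):
    # Levenshtein distance between reference and hypothesis sequences,
    # normalized by the reference length.
    m, n = len(ref), len(hyp)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[m][n] / max(m, 1)

# e.g. error_rate("既让老百姓得实惠", "介项老百姓姓事会")  # character level
```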

4.3 Results

Figure 4: Video-to-text alignment using CSSMCM (a) and WAS (b).
Figure 5: Alignment between output characters and predicted tone sequences using CSSMCM.

Table 3 shows a detailed comparison between the sub-networks of the different methods. Comparing P2T and VP2T, VP2T considers video information when predicting the tone sequence and achieves a lower error rate. This verifies the conjecture of [chen2008seeing] that the generation of tones is related to the motion of the head. In terms of overall performance, CSSMCM exceeds all other architectures on the CMLR dataset, achieving a 32.48% character error rate. It is worth noting that CSSMCM-w/o video achieves the worst result (42.23% CER) even though its sub-networks perform well when trained separately. This may be due to the lack of supporting visual information and the accumulation of errors across sub-networks. CSSMCM, which uses tone information, performs better than LipCH-Net-seq, which does not. These comparisons show that tone is important for lip reading, and that visual information should be considered when predicting tone.

Table 4 shows some sentences generated by the different methods; CSSMCM-w/o video is omitted due to its relatively low performance. These are sentences that the other methods fail to predict but CSSMCM predicts correctly. The phrase "实惠" (affordable) in the first example has tones 2, 4 and pinyin shi, hui. WAS predicts it as "事会" (opportunity): although the pinyin prediction is correct, the tone is wrong. LipCH-Net-seq predicts "实惠" as "吃贵" (not a word), which has the same final "ui" and hence the same mouth shape. The second example is similar: "前", "天", and "年" have the same finals and mouth shapes, but different tones.

These results show that when predicting characters with the same lip shape but different tones, other methods often fail, whereas CSSMCM can leverage tone information to predict correctly.

Apart from the above results, Table 5 also lists some failure cases of CSSMCM. The characters that CSSMCM predicts incorrectly are usually homophones of, or share a final with, the ground truth. In the first example, "价" and "下" have the same final, ia, while "一" and "医" in the second example are homophones. Unlike English, where one wrongly predicted character in a word has little effect on understanding the transcription, a wrongly predicted character in a Chinese word greatly affects the understandability of the transcription. In the second example, CSSMCM mispredicts "医学" (medical) as "一水" (all). Although their first characters are pronounced the same, the meaning of the sentence changes from "Now with the progress of medical science and technology in our country" to "It is now with the footsteps of China's Yishui Technology".

4.4 Attention Visualisation

Figure 4 (a) and Figure 4 (b) visualize the alignment of video frames and Chinese characters predicted by CSSMCM and WAS respectively. The ground-truth sequence is "同时他还向媒体表示". Comparing the two, the diagonal trend of the video attention map produced by CSSMCM is more obvious, and its video attention is more focused where WAS predicts incorrectly, i.e., the area corresponding to "还向". Although WAS mistakenly predicts "媒体" as "么体", the two have the same mouth shape, so the attention still concentrates on the correct frames.

It’s interesting to mention that in Figure 5, when predicting the -th character, attention is concentrated on the -th tone. This may be because attention is applied to the outputs of the encoder, which actually includes all the information from the previous timesteps. The attention to the tone of -th timestep serves as the language model, which reduces the options for generating the character at -th timestep, making prediction more accurate.

5 Summary and Extension

In this paper, we propose CSSMCM, a Cascade Sequence-to-Sequence Model for Chinese Mandarin lip reading. CSSMCM predicts the pinyin sequence, tone sequence, and Chinese character sequence one by one. When predicting the tone sequence, a dual attention mechanism considers the video and pinyin sequences at the same time. When predicting the Chinese character sequence, a triplet attention mechanism takes the video, pinyin, and tone sequences into consideration. CSSMCM consistently outperforms other lip reading architectures on the proposed CMLR dataset.

Lip reading and speech recognition are closely related. In Chinese Mandarin speech recognition, various acoustic representations have been used: syllable initial/final, syllable initial/final with tone, syllable, syllable with tone, preme/toneme [chen1997new], and Chinese character [zhou2018a]. In this paper, the Chinese character is chosen as the output unit. However, we find that wrongly predicted characters severely affect the understandability of transcriptions. Using larger output units, such as Chinese words, may alleviate this problem.

References