Mutual Information Maximization for Effective Lip Reading

03/13/2020
by Xing Zhao, et al.

Lip reading has received increasing research interest in recent years due to the rapid development of deep learning and its wide range of potential applications. A key factor in obtaining good performance on the lip reading task is how effectively the representation can capture lip movement information while resisting the noise caused by changes in pose, lighting conditions, the speaker's appearance, and so on. Towards this target, we propose to introduce mutual information constraints at both the local feature level and the global sequence level to strengthen the relation between the features and the speech content. On the one hand, we constrain the features generated at each time step to carry a strong relation with the speech content by imposing a local mutual information maximization constraint (LMIM), improving the model's ability to discover fine-grained lip movements and the fine-grained differences between words with similar pronunciation, such as “spend” and “spending”. On the other hand, we introduce a mutual information maximization constraint at the global sequence level (GMIM), enabling the model to pay more attention to key frames related to the speech content and less to the various types of noise that appear during speaking. By combining these two advantages, the proposed method is expected to be both discriminative and robust for effective lip reading. To verify this method, we evaluate it on two large-scale benchmarks. We perform a detailed analysis and comparison on several aspects, including comparing LMIM and GMIM with the baseline and visualizing the learned representations. The results not only prove the effectiveness of the proposed method but also establish new state-of-the-art performance on both benchmarks.
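The abstract does not spell out the exact form of the mutual information objective, but the LMIM/GMIM idea can be illustrated with a short sketch. The code below is a minimal, hypothetical PyTorch example assuming a Deep InfoMax-style Jensen-Shannon lower bound on mutual information, scored by a small discriminator between visual features and a word-class embedding; all module names, tensor shapes, and the choice of estimator are assumptions for illustration, not the paper's published implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MIDiscriminator(nn.Module):
    # Scores how related a visual feature is to a word-class embedding.
    # (Hypothetical module; not the paper's published code.)
    def __init__(self, feat_dim, embed_dim, hidden_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim + embed_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, feats, embeds):
        return self.net(torch.cat([feats, embeds], dim=-1))

def jsd_mi_lower_bound(disc, feats, pos_embed, neg_embed):
    # Jensen-Shannon-style MI lower bound (Deep InfoMax form):
    # raise scores for matched (feature, word) pairs, lower them for mismatched pairs.
    pos = -F.softplus(-disc(feats, pos_embed)).mean()
    neg = F.softplus(disc(feats, neg_embed)).mean()
    return pos - neg

def mi_losses(local_feats, global_feat, pos_embed, neg_embed, d_local, d_global):
    # local_feats: (B, T, D) per-time-step features; global_feat: (B, D) pooled sequence feature.
    # pos_embed: (B, E) embedding of the true word; neg_embed: (B, E) embedding of a mismatched word.
    T = local_feats.size(1)
    # LMIM: apply the bound at every time step so each local feature stays predictive of the word.
    lmim = jsd_mi_lower_bound(
        d_local,
        local_feats,
        pos_embed.unsqueeze(1).expand(-1, T, -1),
        neg_embed.unsqueeze(1).expand(-1, T, -1),
    )
    # GMIM: apply the bound once to the pooled sequence-level representation.
    gmim = jsd_mi_lower_bound(d_global, global_feat, pos_embed, neg_embed)
    return -(lmim + gmim)  # maximizing MI == minimizing the negative bound

In training, a loss of this kind would typically be added to the standard cross-entropy word-classification loss with some weighting; the actual weights and estimator used would need to follow the paper itself.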

