Lip-reading with Hierarchical Pyramidal Convolution and Self-Attention

12/28/2020
by Hang Chen, et al.

In this paper, we propose a novel deep learning architecture to improve word-level lip-reading. On the one hand, we introduce multi-scale processing into the spatial feature extraction for lip-reading. Specifically, we propose hierarchical pyramidal convolution (HPConv) to replace the standard convolution in the original module, improving the model's ability to discover fine-grained lip movements. On the other hand, we merge information across all time steps of the sequence using self-attention, so that the model pays more attention to the relevant frames. These two advantages are combined to further enhance the model's classification power. Experiments on the Lip Reading in the Wild (LRW) dataset show that our proposed model achieves 86.83% accuracy, surpassing the current state-of-the-art. We also conducted extensive experiments to better understand the behavior of the proposed model.
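The two ideas in the abstract can be illustrated with a minimal PyTorch sketch. The class names, kernel sizes, and layer shapes below are hypothetical choices for illustration, not the authors' exact HPConv design: the first module runs parallel convolution branches with different kernel sizes (the multi-scale, "pyramidal" idea) and concatenates them; the second scores each time step and forms a weighted sum, so relevant frames contribute more to the clip-level representation (the self-attention idea).

```python
import torch
import torch.nn as nn


class PyramidalConv2d(nn.Module):
    """Multi-scale spatial convolution (illustrative sketch).

    Parallel branches with different kernel sizes see the lip region at
    different scales; their outputs are concatenated along channels.
    """

    def __init__(self, in_ch, out_ch, kernel_sizes=(3, 5, 7)):
        super().__init__()
        branch_ch = out_ch // len(kernel_sizes)  # split channels across branches
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, branch_ch, k, padding=k // 2)  # "same" padding
            for k in kernel_sizes
        )

    def forward(self, x):  # x: (batch, in_ch, H, W)
        return torch.cat([b(x) for b in self.branches], dim=1)


class AttentivePooling(nn.Module):
    """Self-attention over time steps (illustrative sketch).

    Each frame gets a scalar score; softmax over time turns the scores
    into weights, and the weighted sum emphasizes the relevant frames.
    """

    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x):  # x: (batch, time, dim)
        w = torch.softmax(self.score(x), dim=1)  # (batch, time, 1)
        return (w * x).sum(dim=1)                # (batch, dim)
```

For example, a 29-frame LRW clip passed frame-by-frame through the pyramidal convolution and then pooled with attention yields one fixed-size vector per clip, which a linear classifier can map to the 500 word classes.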
