Learning Fine-Grained Multimodal Alignment for Speech Emotion Recognition
Speech emotion recognition is a challenging task because emotional expression is complex, multimodal, and fine-grained. In this paper, we propose a novel multimodal deep learning approach to perform fine-grained emotion recognition from real-life speech. We design a temporal alignment pooling mechanism to capture the subtle and fine-grained emotions implied in every utterance. In addition, we propose a cross-modality excitation module that conducts sample-specific activations on acoustic embedding dimensions and adaptively recalibrates the corresponding values using latent semantic features. The proposed model is evaluated on two well-known real-world speech emotion recognition datasets. The results demonstrate that our approach outperforms a wide range of baselines in prediction accuracy on multimodal speech utterances. To encourage reproducibility, we make the code publicly available at https://github.com/hzlihang99/icassp2021_CME.git.
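The cross-modality excitation described above resembles squeeze-and-excitation-style channel gating, where text-derived features produce per-dimension weights for the acoustic embedding. The following is a minimal PyTorch sketch of that idea, assuming frame-level acoustic embeddings and an utterance-level semantic feature; the class name, dimensions, and bottleneck design are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn


class CrossModalExcitationSketch(nn.Module):
    """Hypothetical gating module: a latent text feature produces one weight
    per acoustic embedding dimension, recalibrating the acoustic features."""

    def __init__(self, acoustic_dim: int, text_dim: int, hidden_dim: int = 128):
        super().__init__()
        # Bottleneck MLP maps the utterance-level text feature to a
        # sigmoid gate over the acoustic embedding dimensions.
        self.gate = nn.Sequential(
            nn.Linear(text_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, acoustic_dim),
            nn.Sigmoid(),
        )

    def forward(self, acoustic: torch.Tensor, text: torch.Tensor) -> torch.Tensor:
        # acoustic: (batch, time, acoustic_dim) frame-level acoustic embeddings
        # text:     (batch, text_dim) utterance-level semantic feature
        weights = self.gate(text).unsqueeze(1)  # (batch, 1, acoustic_dim)
        return acoustic * weights               # sample-specific recalibration


# Toy usage with random tensors standing in for real features.
if __name__ == "__main__":
    cme = CrossModalExcitationSketch(acoustic_dim=64, text_dim=300)
    acoustic = torch.randn(8, 50, 64)   # 8 utterances, 50 frames each
    text = torch.randn(8, 300)          # e.g. pooled word embeddings
    out = cme(acoustic, text)
    print(out.shape)                    # torch.Size([8, 50, 64])
```

For the authors' actual module and the temporal alignment pooling, see the released repository linked above.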