Multimodal Utterance-level Affect Analysis using Visual, Audio and Text Features

05/02/2018
by Didan Deng, et al.

Affective computing models are essential for human behavior analysis. A promising trend in affective systems is to enhance recognition performance by analyzing contextual information over time and across modalities. To overcome the limitations of instantaneous emotion recognition, the 2018 IJCNN challenge on One-Minute Gradual-Emotion Recognition (OMG-Emotion) encourages participants to address long-term emotion recognition using multimodal data such as facial expressions, audio and language context. Compared with the single-modality models given by the baseline method, a multi-modal inference network can leverage the information from each modality and their correlations to improve recognition performance. In this paper, we propose a multi-modal architecture that uses facial, audio and language context features to recognize human sentiment from utterances. Our model outperforms the provided unimodal baseline, achieving concordance correlation coefficients (CCC) of 0.400 on the arousal task and 0.353 on the valence task.
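
For reference, the concordance correlation coefficient used to evaluate both tasks measures agreement between predicted and ground-truth affect values. In its standard form (the symbols below follow the usual definition of CCC, not notation taken from the paper):

CCC(y, \hat{y}) = \frac{2 \rho \, \sigma_y \sigma_{\hat{y}}}{\sigma_y^2 + \sigma_{\hat{y}}^2 + (\mu_y - \mu_{\hat{y}})^2}

where \mu_y, \mu_{\hat{y}} and \sigma_y, \sigma_{\hat{y}} are the means and standard deviations of the labels y and predictions \hat{y}, and \rho is their Pearson correlation. A CCC of 1 indicates perfect agreement, while 0 indicates no agreement, so higher scores on the arousal and valence tasks are better.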
