Multimodal Local-Global Ranking Fusion for Emotion Recognition

08/12/2018
by Paul Pu Liang, et al.

Emotion recognition is a core research area at the intersection of artificial intelligence and human communication analysis. It is a significant technical challenge because humans display their emotions through complex, idiosyncratic combinations of the language, visual, and acoustic modalities. In contrast to traditional multimodal fusion techniques, we approach emotion recognition from both direct person-independent and relative person-dependent perspectives. The direct person-independent perspective follows the conventional emotion recognition approach, which infers absolute emotion labels directly from observed multimodal features. The relative person-dependent perspective approaches emotion recognition in a relative manner by comparing partial video segments to determine whether emotional intensity increased or decreased. Our proposed model integrates these direct and relative prediction perspectives by dividing the emotion recognition task into three easier subtasks. The first subtask performs a multimodal local ranking of relative emotion intensities between two short segments of a video. The second subtask uses these local rankings to infer global relative emotion ranks with a Bayesian ranking algorithm. The third subtask combines the direct predictions from observed multimodal behaviors with the relative emotion ranks from the local-global rankings to produce the final emotion prediction. Our approach displays excellent performance on an audio-visual emotion recognition benchmark and improves over other algorithms for multimodal fusion.
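The three-subtask pipeline described in the abstract can be sketched in miniature: local pairwise intensity comparisons between segments are aggregated into global scores, which are then blended with direct per-segment predictions. This is a minimal sketch, not the paper's method; the Elo-style logistic update below stands in for the paper's Bayesian ranking algorithm, and all function names, parameters, and the fusion weight `alpha` are illustrative assumptions.

```python
# Sketch of the local-global ranking fusion idea (illustrative only).
# Local subtask output is assumed to be pairwise outcomes (i, j),
# meaning segment i showed higher emotional intensity than segment j.
import math

def global_ranks(n_segments, comparisons, k=0.1, epochs=20):
    """Infer global intensity scores from local pairwise outcomes.

    Elo-style stand-in for the paper's Bayesian ranking step:
    each comparison nudges the winner's score up and the loser's
    down, proportionally to how surprising the outcome was.
    """
    scores = [0.0] * n_segments
    for _ in range(epochs):
        for i, j in comparisons:
            # logistic model: expected probability that i outranks j
            p = 1.0 / (1.0 + math.exp(scores[j] - scores[i]))
            scores[i] += k * (1.0 - p)
            scores[j] -= k * (1.0 - p)
    return scores

def fuse(direct_preds, rank_scores, alpha=0.5):
    """Blend direct per-segment predictions with rank-derived scores."""
    # normalise rank scores to [0, 1] before blending
    lo, hi = min(rank_scores), max(rank_scores)
    span = (hi - lo) or 1.0
    norm = [(s - lo) / span for s in rank_scores]
    return [alpha * d + (1 - alpha) * r for d, r in zip(direct_preds, norm)]

# Toy example: 4 segments whose comparisons imply intensity 3 > 2 > 1 > 0.
pairs = [(1, 0), (2, 1), (3, 2), (3, 0), (2, 0), (3, 1)]
scores = global_ranks(4, pairs)
fused = fuse([0.2, 0.4, 0.5, 0.9], scores)
```

The consistent pairwise outcomes in the toy example recover the intended global ordering, and the fused output preserves it while still reflecting the direct predictions.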

