MIMAMO Net: Integrating Micro- and Macro-motion for Video Emotion Recognition

11/21/2019
by   Didan Deng, et al.

Spatio-temporal feature learning is of vital importance for video emotion recognition. Previous deep network architectures often focused on macro-motion, which extends over long time scales, e.g., on the order of seconds. We believe that integrating structures capturing information about both micro- and macro-motion will benefit emotion prediction, because humans perceive both micro- and macro-expressions. In this paper, we propose to combine micro- and macro-motion features to improve video emotion recognition with a two-stream recurrent network, named MIMAMO (Micro-Macro-Motion) Net. Specifically, smaller and shorter micro-motions are analyzed by a two-stream network, while larger and more sustained macro-motions are captured by a subsequent recurrent network. Assigning specific interpretations to the roles of different parts of the network enables us to choose parameters based on prior knowledge, and these choices turn out to be optimal. An important innovation in our model is the use of interframe phase differences rather than optical flow as input to the temporal stream. Compared with optical flow, phase differences require less computation and are more robust to illumination changes. Our proposed network achieves state-of-the-art performance on two video emotion datasets: the OMG emotion dataset and the Aff-Wild dataset. The most significant gains are for arousal prediction, for which motion information is intuitively more informative. Source code is available at https://github.com/wtomin/MIMAMO-Net.
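To make the temporal-stream input concrete: the abstract describes feeding interframe phase differences (rather than optical flow) to the temporal stream. The exact spatial decomposition the paper uses is not specified here, so the sketch below is a simplified stand-in that takes the per-coefficient phase of a plain 2D FFT of each grayscale frame and wraps the difference to [-π, π); the function name and demo data are illustrative, not from the paper.

```python
import numpy as np

def phase_difference(frame_a, frame_b):
    """Wrapped per-coefficient phase difference between the 2D FFTs
    of two consecutive grayscale frames. For small motions, these
    differences encode displacement without an explicit flow solve."""
    phase_a = np.angle(np.fft.fft2(frame_a))
    phase_b = np.angle(np.fft.fft2(frame_b))
    diff = phase_b - phase_a
    # Wrap the difference into [-pi, pi) so large jumps stay comparable.
    return (diff + np.pi) % (2 * np.pi) - np.pi

# Tiny demo: a bright square shifted by one pixel between frames.
frame_a = np.zeros((8, 8))
frame_a[2:5, 2:5] = 1.0
frame_b = np.roll(frame_a, shift=1, axis=1)

d = phase_difference(frame_a, frame_b)
print(d.shape)
```

Note that a global brightness rescaling of a frame leaves its FFT phase unchanged, which is one way to see the claimed robustness to illumination changes; it also costs only two FFTs and a subtraction per frame pair, versus an iterative optical-flow solve.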
