Multimodal Learning with Channel-Mixing and Masked Autoencoder on Facial Action Unit Detection

09/25/2022
by Xiang Zhang, et al.

Recent studies utilizing multi-modal data have aimed at building a robust model for facial Action Unit (AU) detection. However, due to the heterogeneity of multi-modal data, multi-modal representation learning remains one of the main challenges. On one hand, it is difficult to extract relevant features from multiple modalities with a single feature extractor; on the other hand, previous studies have not fully explored the potential of multi-modal fusion strategies. For example, early fusion usually requires all modalities to be present during inference, while late fusion and middle fusion increase the network size needed for feature learning. In contrast to the large body of work on late fusion, few works have explored channel information through early fusion. This paper presents a novel multi-modal network called Multi-modal Channel-Mixing (MCM), a pre-trained model that learns a robust representation to facilitate multi-modal fusion. We evaluate the learned representation on the downstream task of automatic facial AU detection. Specifically, MCM is a single-stream encoder network that applies a channel-mixing module for early fusion, so that only one modality is required in the downstream detection task. A masked ViT encoder learns features from the fused image, and two ViT decoders reconstruct the two original modalities. We conduct extensive experiments on two public datasets, BP4D and DISFA, to evaluate the effectiveness and robustness of the proposed multi-modal framework. The results show that our approach is comparable or superior to state-of-the-art baseline methods.
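The abstract describes the architecture only at a high level, so the PyTorch sketch below is an illustration rather than the authors' implementation: the class names (ChannelMixingFusion, MaskedMultiModalAE), the 1x1-convolution mixing rule, the layer sizes, and the lightweight linear reconstruction heads (standing in for the paper's two ViT decoders) are all assumptions. It shows the data flow the abstract outlines: two modalities are mixed at the channel level into a single fused image, a masked single-stream ViT-style encoder processes the visible patches, and two heads reconstruct the two original modalities.

```python
import torch
import torch.nn as nn


class ChannelMixingFusion(nn.Module):
    """Early fusion by channel mixing: stack the channels of the two
    modalities and project them back to a single 3-channel fused image.
    A 1x1 convolution is one plausible mixing rule; the paper's exact
    module may differ."""

    def __init__(self, ch_a=3, ch_b=3, ch_out=3):
        super().__init__()
        self.mix = nn.Conv2d(ch_a + ch_b, ch_out, kernel_size=1)

    def forward(self, x_a, x_b):
        return self.mix(torch.cat([x_a, x_b], dim=1))  # (B, ch_out, H, W)


class MaskedMultiModalAE(nn.Module):
    """Single-stream masked autoencoder over the fused image, with two
    reconstruction heads (one per modality). Linear heads stand in for
    the paper's two ViT decoders to keep the sketch short."""

    def __init__(self, img_size=224, patch=16, dim=256, depth=4, heads=8):
        super().__init__()
        self.fusion = ChannelMixingFusion()
        self.patchify = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        n_patches = (img_size // patch) ** 2
        self.pos = nn.Parameter(torch.zeros(1, n_patches, dim))
        self.head_a = nn.Linear(dim, patch * patch * 3)  # reconstruct modality A
        self.head_b = nn.Linear(dim, patch * patch * 3)  # reconstruct modality B

    def forward(self, x_a, x_b, mask_ratio=0.75):
        fused = self.fusion(x_a, x_b)                             # early fusion
        tokens = self.patchify(fused).flatten(2).transpose(1, 2)  # (B, N, D)
        tokens = tokens + self.pos
        B, N, D = tokens.shape
        keep = int(N * (1 - mask_ratio))
        # Random masking: keep a random subset of patch tokens per sample.
        idx = torch.rand(B, N, device=tokens.device).argsort(dim=1)[:, :keep]
        visible = torch.gather(tokens, 1, idx.unsqueeze(-1).expand(-1, -1, D))
        latent = self.encoder(visible)          # single-stream ViT-style encoder
        # Each head predicts the pixels of the kept patches for one modality.
        return self.head_a(latent), self.head_b(latent), idx


# Usage sketch with two dummy image modalities (names are placeholders).
model = MaskedMultiModalAE()
mod_a, mod_b = torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224)
rec_a, rec_b, kept = model(mod_a, mod_b)  # each head: (2, 49, 16*16*3)
```

Because fusion happens before the encoder, the network stays single-stream. The abstract states that only one modality is required for the downstream AU detection task, but does not specify how the missing modality is handled at inference; zero-filling or duplicating the available modality through the fusion layer would be simple options.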



Related Research

03/22/2022 · Multi-Modal Learning for AU Detection Based on Multi-Head Fused Transformers
Multi-modal learning has been intensified in recent years, especially fo...

10/23/2022 · Anticipative Feature Fusion Transformer for Multi-Modal Action Anticipation
Although human action anticipation is a task which is inherently multi-m...

08/02/2021 · Multimodal Feature Fusion for Video Advertisements Tagging Via Stacking Ensemble
Automated tagging of video advertisements has been a critical yet challe...

07/17/2018 · Robust Deep Multi-modal Learning Based on Gated Information Fusion Network
The goal of multi-modal learning is to use complementary information on ...

06/10/2019 · Commuting Conditional GANs for Robust Multi-Modal Fusion
This paper presents a data driven approach to multi-modal fusion, where ...

04/10/2022 · Multimodal Machine Learning in Precision Health
As machine learning and artificial intelligence are more frequently bein...

06/07/2019 · Multi-modal Active Learning From Human Data: A Deep Reinforcement Learning Approach
Human behavior expression and experience are inherently multi-modal, and...
