Anticipative Feature Fusion Transformer for Multi-Modal Action Anticipation

10/23/2022
by   Zeyun Zhong, et al.

Although human action anticipation is an inherently multi-modal task, state-of-the-art methods on well-known action anticipation datasets exploit this data only through ensembling, averaging the scores of unimodal anticipation networks. In this work we introduce transformer-based modality fusion techniques that unify multi-modal data at an early stage. Our Anticipative Feature Fusion Transformer (AFFT) proves superior to popular score-fusion approaches and sets a new state of the art on EpicKitchens-100 and EGTEA Gaze+. The model is easily extensible and allows new modalities to be added without architectural changes. Consequently, we extract audio features on EpicKitchens-100 and add them to the set of features commonly used in the community.
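To illustrate the two fusion strategies the abstract contrasts, here is a minimal numpy sketch. It is not the AFFT architecture; the feature dimensions, classifier, and single self-attention layer are illustrative assumptions. Late (score) fusion averages per-modality class scores, while early (feature) fusion lets modality tokens attend to one another before classification:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality features for one video clip (RGB, optical
# flow, audio), each already projected to a shared dimension d.
d = 8
feats = {
    "rgb": rng.standard_normal(d),
    "flow": rng.standard_normal(d),
    "audio": rng.standard_normal(d),
}

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# --- Late (score) fusion: average per-modality class scores ---
W_cls = rng.standard_normal((d, 5))          # toy linear classifier, 5 classes
scores = [softmax(f @ W_cls) for f in feats.values()]
late_fused = np.mean(scores, axis=0)         # ensemble average of scores

# --- Early (feature) fusion: self-attention over modality tokens ---
X = np.stack(list(feats.values()))           # (num_modalities, d) token matrix
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
A = softmax((X @ Wq) @ (X @ Wk).T / np.sqrt(d))  # modality-to-modality attention
fused_tokens = A @ (X @ Wv)                  # each token now mixes all modalities
early_fused = softmax(fused_tokens.mean(axis=0) @ W_cls)

print(late_fused.shape, early_fused.shape)   # both (5,)
```

The key difference: in score fusion the modalities never interact before the final average, whereas in feature fusion cross-modal interactions happen inside the network, which is what the paper's early-fusion transformer exploits.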


