Memory Based Attentive Fusion

07/16/2020
by Darshana Priyasad et al.

The use of multi-modal data for deep machine learning has shown promise when compared to uni-modal approaches, with fusion of multi-modal features resulting in improved performance. However, most state-of-the-art methods use naive fusion, which processes feature streams from a given time-step and ignores long-term dependencies within the data during fusion. In this paper, we present a novel Memory Based Attentive Fusion (MBAF) layer, which fuses modes by incorporating both the current features and long-term dependencies in the data, thus allowing the model to understand the relative importance of modes over time. We define an explicit memory block within the fusion layer, which stores features containing long-term dependencies of the fused data. The inputs to our layer are fused through attentive composition and transformation, and the transformed features are combined with the input to generate the fused layer output. Following existing state-of-the-art methods, we have evaluated the performance and the generalisability of the proposed approach on the IEMOCAP and PhysioNet-CMEBS datasets with different modalities. In our experiments, we replace the naive fusion layer in benchmark networks with our proposed layer to enable a fair comparison. Experimental results indicate that the MBAF layer can generalise across different modalities and networks to enhance fusion and improve performance.
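The abstract suggests a concrete wiring for such a layer: per-modality features are fused attentively against an explicit memory of past fused features, transformed, and combined with the input. Below is a minimal PyTorch sketch of that idea, assuming dot-product attention over a FIFO memory buffer, a tanh transformation, and a residual combination; the class name, dimensions, and memory write rule are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryAttentiveFusion(nn.Module):
    """Fuses two modality feature vectors with attention over an explicit
    memory of past fused features, so the fusion can weigh the modes
    against long-term context rather than the current time-step alone.
    (Hypothetical sketch; names and the write rule are assumptions.)"""

    def __init__(self, feat_dim: int, num_slots: int = 32):
        super().__init__()
        # Explicit memory block: a FIFO buffer of recent fused features.
        self.register_buffer("memory", torch.zeros(num_slots, feat_dim))
        self.proj = nn.Linear(2 * feat_dim, feat_dim)       # joint projection of both modes
        self.transform = nn.Linear(2 * feat_dim, feat_dim)  # compose features with memory context

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        # Project the concatenated per-modality features into a joint space.
        x = self.proj(torch.cat([feat_a, feat_b], dim=-1))   # (batch, feat_dim)
        # Attentive composition: dot-product attention over the memory slots.
        attn = F.softmax(x @ self.memory.t(), dim=-1)        # (batch, num_slots)
        context = attn @ self.memory                         # (batch, feat_dim)
        # Transformation: fuse the current features with long-term context,
        # then combine with the input residually to form the layer output.
        fused = torch.tanh(self.transform(torch.cat([x, context], dim=-1)))
        out = fused + x
        # Write rule (an assumption): push the batch-mean fused feature into
        # the FIFO memory so later steps can attend to it.
        with torch.no_grad():
            self.memory = torch.roll(self.memory, shifts=1, dims=0)
            self.memory[0] = out.detach().mean(dim=0)
        return out

# Example usage with random stand-ins for two modality feature batches:
layer = MemoryAttentiveFusion(feat_dim=128)
audio_feat = torch.randn(4, 128)
text_feat = torch.randn(4, 128)
fused = layer(audio_feat, text_feat)   # shape: (4, 128)

In use, such a layer takes one feature vector per modality from upstream encoders and can drop in wherever a naive concatenation-based fusion layer would sit, which mirrors how the paper substitutes its layer into benchmark networks for comparison.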

Related Research

09/04/2017 · Multi-modal Conditional Attention Fusion for Dimensional Emotion Prediction
Continuous dimensional emotion prediction is a challenging task where th...

03/03/2020 · Deep Multi-Modal Sets
Many vision-related tasks benefit from reasoning over multiple modalitie...

05/06/2020 · Hybrid and hierarchical fusion networks: a deep cross-modal learning architecture for action recognition
Two-stream networks have provided an alternate way of exploiting the spa...

04/20/2017 · Knowledge Fusion via Embeddings from Text, Knowledge Graphs, and Images
We present a baseline approach for cross-modal knowledge fusion. Differe...

08/24/2023 · SkipcrossNets: Adaptive Skip-cross Fusion for Road Detection
Multi-modal fusion is increasingly being used for autonomous driving tas...

11/27/2019 · Perceive, Transform, and Act: Multi-Modal Attention Networks for Vision-and-Language Navigation
Vision-and-Language Navigation (VLN) is a challenging task in which an a...

02/02/2018 · Dual Memory Neural Computer for Asynchronous Two-view Sequential Learning
One of the core tasks in multi-view learning is to capture all relations ...
