Interpretable Multimodal Routing for Human Multimodal Language

04/29/2020
by   Yao-Hung Hubert Tsai, et al.

Human language draws on heterogeneous sources of information, including tone of voice, facial gestures, and spoken words. Recent advances have introduced computational models that combine these multimodal sources and achieve strong performance on human-centric tasks. Nevertheless, most of these models are black boxes, which comes at the price of interpretability. In this paper, we propose Multimodal Routing to separate the contributions to a prediction from each individual modality and from the interactions between modalities. At the heart of our method is a routing mechanism that represents each prediction as a concept, i.e., a vector in a Euclidean space. The concept is assumed to be a linear aggregation of the contributions of multimodal features. The routing procedure then iteratively 1) associates a feature with a concept by checking how well the concept agrees with the feature and 2) updates the concept based on those associations. In our experiments, we provide both global and local interpretation using Multimodal Routing on sentiment analysis and emotion prediction, with no loss of performance compared to state-of-the-art methods. For example, we observe that our model relies mostly on the text modality for neutral sentiment predictions, on the acoustic modality for extremely negative predictions, and on the text-acoustic bimodal interaction for extremely positive predictions.
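The iterative associate-and-update procedure described above can be sketched in the style of routing-by-agreement. The following is a minimal illustration, not the paper's implementation: it assumes dot-product agreement and softmax association weights, and all names (`multimodal_routing`, `n_iters`) are hypothetical.

```python
import numpy as np

def multimodal_routing(features, n_iters=3):
    """Toy routing sketch (assumption, not the paper's exact method).

    features: (F, d) array; one row per unimodal or bimodal feature vector.
    Returns the aggregated concept vector and the per-feature association
    weights, which serve as the interpretable modality contributions.
    """
    F, d = features.shape
    logits = np.zeros(F)  # routing logits start uniform
    for _ in range(n_iters):
        # 2) association weights over features (softmax of agreement logits)
        weights = np.exp(logits - logits.max())
        weights /= weights.sum()
        # concept = linear aggregation of feature contributions
        concept = weights @ features
        # 1) agreement between each feature and the current concept
        logits = features @ concept
    return concept, weights
```

The returned `weights` are what makes the model interpretable: inspecting them per example (local) or averaged over a dataset (global) shows which modality or bimodal interaction the prediction leaned on.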

