Variational Fusion for Multimodal Sentiment Analysis

08/13/2019 ∙ by Navonil Majumder, et al. ∙ University of Michigan, Adobe, Agency for Science, Technology and Research

Multimodal fusion is considered a key step in multimodal tasks such as sentiment analysis, emotion detection, question answering, and others. Most of the recent work on multimodal fusion does not guarantee the fidelity of the multimodal representation with respect to the unimodal representations. In this paper, we propose a variational autoencoder-based approach for modality fusion that minimizes information loss between unimodal and multimodal representations. We empirically show that this method outperforms the state-of-the-art methods by a significant margin on several popular datasets.


1 Introduction

Multimodal sentiment analysis has gained significant traction in recent years, owing to its ability to understand the opinions expressed in the growing number of videos available on open platforms such as YouTube, Facebook, Vimeo, and others. This is important, as more and more enterprises make business decisions based on the user sentiment toward their products expressed in these videos.

Multimodal fusion is considered a key step in multimodal sentiment analysis. Most recent work on multimodal fusion poria-EtAl:2017:Long; AAAI1817390 has focused on strategies for obtaining a multimodal representation from the independent unimodal representations. Our approach takes this strategy one step further by also requiring that the original unimodal representations be reconstructable from the unified multimodal representation. The motivation is the intuition that the different modalities are expressions of the same underlying state of mind. Hence, if we regard the fused representation as capturing that mind-state/sentiment/emotion, then ensuring that the fused representation can be mapped back to the unimodal representations should improve its quality. In this paper, we empirically argue that this is the case by showing that our approach outperforms the state of the art in multimodal fusion.

We employ a variational autoencoder (VAE) DBLP:journals/corr/KingmaW13, where the encoder network generates a latent representation from the unimodal representations and the decoder network reconstructs the original unimodal representations from that latent representation. This latent representation is treated as the multimodal representation for the final classification.

2 Related Work

rozgic2012ensemble and wollmer2013youtube were the first to fuse acoustic, visual, and textual modalities for sentiment and emotion detection. Later, poria2015deep employed CNNs and multiple-kernel learning for multimodal sentiment analysis. Further, poria-EtAl:2017:Long used long short-term memory (LSTM) networks to enable context-dependent multimodal fusion, where the surrounding utterances are taken into account as context.

Recently, for the context-free setting, where the surrounding utterances are not used as context, zadeh-EtAl:2017:EMNLP2017 used tensor outer products to model intra- and inter-modal interactions. Also, AAAI1817341 used multi-view learning for utterance-level multimodal fusion. Further, AAAI1817390 employed hybrid LSTM memory components to model intra-modal and cross-modal interactions.

3 Method

Humans usually express their thoughts through three perceivable modalities: textual (speech), acoustic (pitch and other properties of the voice), and visual (facial expressions). Whereas most recent work on multimodal fusion treats these unimodal representations independently and employs an encoder network to obtain the fused representation vector, we go one step further by decoding the fused multimodal representation back into the original unimodal representations.

First, the utterance-level unimodal features are extracted independently. Then, the modality features are fed to an encoder network to sample the fused representation. Finally, the fused representation is decoded back into the unimodal representations to ensure its fidelity. This setup is essentially an autoencoder. Specifically, we employ a variational autoencoder (VAE) DBLP:journals/corr/KingmaW13, as depicted in Fig. 1, where the latent representation serves as the fused representation.

Figure 1: Graphical model of our multimodal fusion scheme.

3.1 Unimodal Feature Extraction

Textual ($x_t$), visual ($x_v$), and acoustic ($x_a$) features are extracted using a CNN, a 3D-CNN tran2015learning, and openSMILE eyben2015opensmile, respectively, following the methodology described by poria-EtAl:2017:Long.

3.2 Encoder

The encoder takes the concatenation $x$ of the unimodal features of an utterance as input, where $x_t$ is the textual feature of size $D_t$, $x_a$ is the acoustic feature of size $D_a$, and $x_v$ is the visual feature of size $D_v$, and infers the latent multimodal representation $z$ of size $D_z$ from the posterior distribution $P(z \mid x)$, such that

$x = x_t \oplus x_a \oplus x_v$, (1)
$z \sim P(z \mid x)$, (2)
$P(z \mid x) = P(x \mid z) P(z) / P(x)$. (3)

Since the true posterior distribution $P(z \mid x)$ is intractable, $x$ is fed through two fully-connected layers to generate the mean ($\mu$) and standard deviation ($\sigma$) of the approximate posterior normal distribution $Q(z \mid x)$, which infers the latent representation $z$:

$\mu = W_\mu x + b_\mu$, (4)
$\sigma = W_\sigma x + b_\sigma$, (5)
$Q(z \mid x) = \mathcal{N}(\mu, \sigma^2 I)$, (6)

where $W_\mu, W_\sigma \in \mathbb{R}^{D_z \times D}$, $b_\mu, b_\sigma, \mu, \sigma \in \mathbb{R}^{D_z}$, $x \in \mathbb{R}^{D}$, and $D = D_t + D_a + D_v$.
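For concreteness, the following is a minimal PyTorch sketch of such an encoder (illustrative only, not the authors' implementation); the class and argument names are hypothetical, and predicting the log-variance instead of $\sigma$ directly is a common implementation choice assumed here.

```python
import torch
import torch.nn as nn

class FusionEncoder(nn.Module):
    """Maps the concatenated unimodal features x = [x_t; x_a; x_v] to the
    parameters of the approximate posterior Q(z|x), cf. Eqs. (1), (4)-(6)."""

    def __init__(self, d_text, d_audio, d_visual, d_latent):
        super().__init__()
        d_in = d_text + d_audio + d_visual
        self.to_mu = nn.Linear(d_in, d_latent)       # fully-connected layer for the mean
        self.to_log_var = nn.Linear(d_in, d_latent)  # fully-connected layer for log-variance

    def forward(self, x_t, x_a, x_v):
        x = torch.cat([x_t, x_a, x_v], dim=-1)  # Eq. (1): concatenate unimodal features
        mu = self.to_mu(x)                      # Eq. (4)
        log_var = self.to_log_var(x)            # Eq. (5); sigma = exp(0.5 * log_var)
        return x, mu, log_var
```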

Sampling Latent (Multimodal) Representation

The latent representation $z$ is sampled using the reparameterization trick DBLP:journals/corr/KingmaW13 to facilitate backpropagation:

$\epsilon \sim \mathcal{N}(0, I)$, (7)
$z = \mu + \sigma \odot \epsilon$, (8)

where $\epsilon \in \mathbb{R}^{D_z}$, $z \in \mathbb{R}^{D_z}$, and $\odot$ represents the Hadamard product. This $z$ is considered the fused multimodal representation.
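A sketch of the reparameterized sampling step (assuming the encoder outputs the log-variance, as in the encoder sketch above):

```python
import torch

def reparameterize(mu, log_var):
    """Draw z = mu + sigma * eps with eps ~ N(0, I), cf. Eqs. (7)-(8)."""
    sigma = torch.exp(0.5 * log_var)  # recover the standard deviation
    eps = torch.randn_like(sigma)     # eps ~ N(0, I)
    return mu + sigma * eps           # element-wise (Hadamard) product
```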

3.3 Decoder

The decoder reconstructs the input $x$ as $\hat{x}$ from the latent representation $z$ with two fully-connected layers as follows:

$h = W_1 z + b_1$, (9)
$\hat{x} = W_2 h + b_2$, (10)

where $W_1 \in \mathbb{R}^{D_h \times D_z}$, $b_1 \in \mathbb{R}^{D_h}$, $W_2 \in \mathbb{R}^{D \times D_h}$, $b_2 \in \mathbb{R}^{D}$, $h \in \mathbb{R}^{D_h}$, and $\hat{x} \in \mathbb{R}^{D}$.
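A corresponding decoder sketch (again illustrative; the paper only states that two fully-connected layers are used, so no particular intermediate activation is assumed here and the hidden size d_hidden is a hypothetical hyperparameter):

```python
import torch.nn as nn

class FusionDecoder(nn.Module):
    """Reconstructs x_hat from the fused representation z with two
    fully-connected layers, cf. Eqs. (9)-(10)."""

    def __init__(self, d_latent, d_hidden, d_out):
        super().__init__()
        self.fc1 = nn.Linear(d_latent, d_hidden)  # Eq. (9): h = W1 z + b1
        self.fc2 = nn.Linear(d_hidden, d_out)     # Eq. (10): x_hat = W2 h + b2

    def forward(self, z):
        h = self.fc1(z)
        return self.fc2(h)
```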

3.4 Classification

We tried two different classification networks:

Logistic Regression (LR)

The fused representation $z$ is fed to a fully-connected layer with softmax activation:

$\mathcal{P} = \mathrm{softmax}(W_s z + b_s)$, (11)
$\hat{y} = \operatorname{argmax}_j \mathcal{P}[j]$, (12)

where $W_s \in \mathbb{R}^{C \times D_z}$, $b_s \in \mathbb{R}^{C}$, $\mathcal{P}$ is the vector of class probabilities, $\hat{y}$ is the predicted class, and $C$ is the number of classes ($C = 2$ in our case).
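A minimal sketch of this classification head (names are illustrative):

```python
import torch.nn as nn

class LogisticRegressionHead(nn.Module):
    """Fully-connected layer with softmax over the fused representation z,
    cf. Eqs. (11)-(12)."""

    def __init__(self, d_latent, n_classes=2):
        super().__init__()
        self.fc = nn.Linear(d_latent, n_classes)

    def forward(self, z):
        probs = self.fc(z).softmax(dim=-1)  # Eq. (11): class probabilities P
        y_hat = probs.argmax(dim=-1)        # Eq. (12): predicted class
        return probs, y_hat
```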

Context-Dependent Classifier (bc-LSTM poria-EtAl:2017:Long)

The sequence of fused utterance representations $Z = (z_1, z_2, \dots, z_N)$ in a video is fed to a bidirectional LSTM hochreiter1997long of size $D_h$, following poria-EtAl:2017:Long, for context propagation; the output of the LSTM is then fed to a fully-connected layer with softmax activation for classification:

$Z = (z_1, z_2, \dots, z_N)$, (13)
$H = \mathrm{biLSTM}(Z)$, (14)
$H = (h_1, h_2, \dots, h_N)$, (15)
$\mathcal{P}_i = \mathrm{softmax}(W_s h_i + b_s)$, (16)
$\hat{y}_i = \operatorname{argmax}_j \mathcal{P}_i[j]$, (17)

where $Z$ is the sequence of fused utterance representations in a video with $N$ utterances, $H$ is the sequence of context-dependent fused representations of the utterances ($h_i$), $W_s \in \mathbb{R}^{C \times 2 D_h}$, $b_s \in \mathbb{R}^{C}$, $\mathcal{P}_i$ is the vector of class probabilities for utterance $i$, $\hat{y}_i$ is the predicted class for utterance $i$, and $C$ is the number of classes (e.g., $C = 2$ for the MOSI dataset (Section 4.1)).
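A sketch of this context-dependent classifier, assuming one video (a sequence of utterances) per batch element; this is not the authors' bc-LSTM code, only an illustration of the structure described above:

```python
import torch.nn as nn

class ContextClassifier(nn.Module):
    """Bidirectional LSTM over the sequence of fused utterance representations,
    followed by a per-utterance softmax layer, cf. Eqs. (13)-(17)."""

    def __init__(self, d_latent, d_hidden, n_classes):
        super().__init__()
        self.bilstm = nn.LSTM(d_latent, d_hidden,
                              batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * d_hidden, n_classes)

    def forward(self, z_seq):
        # z_seq: (batch, n_utterances, d_latent)
        h_seq, _ = self.bilstm(z_seq)           # context-dependent representations H
        probs = self.fc(h_seq).softmax(dim=-1)  # per-utterance class probabilities
        return probs, probs.argmax(dim=-1)      # predictions per utterance
```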

3.5 Training

Latent Representation Inference

Following DBLP:journals/corr/KingmaW13, the approximate posterior distribution $Q(z \mid x)$ is tuned close to the true posterior $P(z \mid x)$ by maximizing the evidence lower bound (ELBO), where

$\log P(x) \ge \mathrm{ELBO}$, (18)
$\mathrm{ELBO} = \mathbb{E}_{Q(z \mid x)}[\log P(x \mid z)] - D_{\mathrm{KL}}(Q(z \mid x) \,\|\, P(z))$. (19)

The first term of the ELBO, $\mathbb{E}_{Q(z \mid x)}[\log P(x \mid z)]$, corresponds to the reconstruction loss of input $x$. The second term, $D_{\mathrm{KL}}(Q(z \mid x) \,\|\, P(z))$, pushes the approximate posterior close to the prior $P(z)$ by minimizing the KL-divergence between them.
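A sketch of the corresponding training objective (the negative ELBO, to be minimized); mean-squared error is assumed here as the reconstruction term for the real-valued utterance features, which corresponds to a Gaussian likelihood $P(x \mid z)$:

```python
import torch
import torch.nn.functional as F

def negative_elbo(x, x_hat, mu, log_var):
    """Reconstruction loss plus KL(Q(z|x) || N(0, I)), cf. Eqs. (18)-(19)."""
    recon = F.mse_loss(x_hat, x, reduction="sum")                     # reconstruction term
    kl = -0.5 * torch.sum(1.0 + log_var - mu.pow(2) - log_var.exp())  # KL divergence, closed form
    return recon + kl
```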

Classification

To train the sentiment classifier (Section 3.4), we minimize the categorical cross-entropy ($\mathcal{L}$), defined as

$\mathcal{L} = -\frac{1}{N} \sum_{i=1}^{N} \log \mathcal{P}_i[y_i]$, (20)

where $N$ is the number of samples, $\mathcal{P}_i$ is the probability distribution for sample $i$ over the different classes (for our experiments, we use two classes: positive and negative), and $y_i$ is the target class for sample $i$.
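A direct transcription of Eq. (20), assuming `probs` holds the per-sample class probabilities and `targets` the integer class labels:

```python
import torch

def classification_loss(probs, targets):
    """Categorical cross-entropy: -(1/N) * sum_i log P_i[y_i], cf. Eq. (20)."""
    picked = probs[torch.arange(probs.size(0)), targets]  # P_i[y_i] for each sample i
    return -torch.log(picked).mean()
```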

The networks are optimized using the stochastic-gradient-descent-based Adam DBLP:journals/corr/KingmaB14 algorithm. Further, the hyperparameters and learning rate are optimized with grid-search (the optimal hyperparameters are listed in the supplementary material). The latent representation size $D_z$ is set to a fixed value.
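To tie the pieces together, here is a hedged sketch of the optimization setup, reusing the illustrative modules defined in the previous subsections; the feature and latent sizes are hypothetical placeholders, and jointly minimizing the negative ELBO and the classification loss with a single Adam optimizer is an assumption of this sketch rather than a detail stated in the paper:

```python
import itertools
import torch

D_T, D_A, D_V, D_Z, N_CLASSES = 100, 73, 512, 128, 2   # hypothetical sizes

encoder = FusionEncoder(D_T, D_A, D_V, D_Z)
decoder = FusionDecoder(D_Z, D_Z, D_T + D_A + D_V)
classifier = LogisticRegressionHead(D_Z, N_CLASSES)

params = itertools.chain(encoder.parameters(), decoder.parameters(),
                         classifier.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)  # learning rate chosen by grid-search

def training_step(x_t, x_a, x_v, y):
    """One gradient step on a mini-batch of utterance features and labels."""
    optimizer.zero_grad()
    x, mu, log_var = encoder(x_t, x_a, x_v)
    z = reparameterize(mu, log_var)
    loss = negative_elbo(x, decoder(z), mu, log_var) \
           + classification_loss(classifier(z)[0], y)
    loss.backward()
    optimizer.step()
    return loss.item()
```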

4 Experimental Settings

We evaluate the quality of the multimodal features extracted by the VAE (implementation available at https://github.com/xxxx/xxxx/, to be revealed upon acceptance) using two classifiers (Section 3.4). Hence, the two variants are named VAE+LR and VAE+bc-LSTM in Table 2.

4.1 Datasets

We evaluate our approach on three different datasets.

MOSI zadeh2016multimodal

This dataset contains videos of 89 people reviewing various topics in English. The videos are segmented into utterances, and each utterance is annotated with a sentiment tag (positive/negative). Our train/test splits of the dataset are completely disjoint with respect to speakers. In particular, 1447 and 752 utterances are used for training and test, respectively.

MOSEI bagherzadeh-EtAl:2018:Long

The MOSEI dataset contains 22676 utterances from 3229 videos crawled from YouTube, spanning 1000 unique speakers. Videos in MOSEI mostly comprise product and movie reviews. We used 16188, 1874, and 4614 utterances as the training, validation, and test folds, respectively. Each utterance is labeled with one of three sentiment categories: positive, negative, or neutral.

IEMOCAP iemocap

IEMOCAP contains two-way conversations among ten speakers, segmented into utterances. Each utterance is tagged with one of six emotion labels: anger, happy, sad, neutral, excited, or frustrated. The first eight speakers (sessions one to four) belong to the training set and the rest to the test set.

Dataset Train Test
MOSEI 16188 4614
IEMOCAP 5810 1623
MOSI 1447 752
Table 1: Utterance count in the train and test sets.

4.2 Baseline Methods

Logistic Regression (LR)

The concatenation of the utterance-level unimodal representations is classified using logistic regression, as described in Section 3.4. This does not consider the neighbouring utterances as context.

bc-LSTM poria-EtAl:2017:Long

The concatenation of the utterance-level unimodal representations is sequentially fed to the bc-LSTM classifier described in Section 3.4. This is the state-of-the-art method.

TFN zadeh-EtAl:2017:EMNLP2017

This network models both intra-modal and inter-modal interactions through outer product. It does not use the neighbouring utterances as context.

MFN AAAI1817341

This network exploits multi-view learning to fuse modalities. It also does not use neighbouring utterances as context.

MARN AAAI1817390

In this model, the intra-modal and cross-modal interactions are modeled with hybrid LSTM memory components.

5 Results and Discussion

Method MOSI MOSEI IEMOCAP
TFN 74.8 53.7 56.8
MARN 74.5 53.2 54.2
MFN 74.2 54.1 53.5
LR 74.6 56.6 53.9
VAE+LR 77.8 57.4 54.4
bc-LSTM 75.1 56.8 57.7
VAE+bc-LSTM 80.4 58.8 59.6
Table 2: Trimodal (acoustic, visual, and textual) F1-scores of our method against the baselines (results on MOSI and IEMOCAP are based on the dataset split from poria-EtAl:2017:Long); * signifies a statistically significant improvement (paired t-test) over bc-LSTM.

Table 2 shows that our VAE-based methods, namely VAE+LR and VAE+bc-LSTM, consistently outperform their concatenation-fusion counterparts, LR and bc-LSTM, on all three datasets. Specifically, our context-dependent model, VAE+bc-LSTM, outperforms the context-dependent state-of-the-art method bc-LSTM on all the datasets, by 3.1% on average. Moreover, our context-free model, VAE+LR, outperforms the other context-free models, namely MFN, MARN, TFN, and LR, on all datasets, by 1.5% on average. Also, owing to the contextual information, VAE+bc-LSTM outperforms VAE+LR by 3.1% on average.

This is due to the superior multimodal representation produced by the VAE, which retains enough information from the unimodal representations to allow their reconstruction, yielding a more informative representation for classification (the supplementary material compares visualizations of the fused representations).

5.1 Case Study

Comparing the predictions of our model with those of the baselines reveals that our model is better equipped to catch instances where non-verbal cues are essential for classification. For instance, the IEMOCAP utterance “I still can’t live on in six seven and five. It’s not possible in Los Angeles. Housing is too expensive.” is misclassified as excited by bc-LSTM, whereas VAE+bc-LSTM correctly classifies it as angry. We posit that bc-LSTM is confused by the emotionally ambiguous textual modality, whereas VAE+bc-LSTM taps into the visual modality, picking up the frown on the speaker's face, to make the correct classification. Beyond this, we observed several similar cases where VAE+bc-LSTM or VAE+LR classifies correctly based on non-verbal cues while their non-VAE counterparts do not.

Error Analysis

The utterance “No. I am just making myself fascinating for you.” is a response to the question “you going out somewhere, dear?” and is sarcastic. VAE+bc-LSTM incorrectly predicted the emotion as excited, whereas the ground truth is angry. We suspect that our model's failure to identify the sarcasm, even with the aid of multiple modalities, led to this misclassification.

6 Conclusion

In this paper, we presented a comprehensive fusion strategy based on a VAE that outperforms previous methods by a significant margin. The encoder and decoder networks in our VAE are simple fully-connected layers. We plan to improve the performance of our method by employing more sophisticated networks, such as the fusion networks MFN and TFN, as encoders.
