Information Fusion in Attention Networks Using Adaptive and Multi-level Factorized Bilinear Pooling for Audio-visual Emotion Recognition

Multimodal emotion recognition is a challenging task in affective computing, as it is quite difficult to extract discriminative features that capture the subtle differences of human emotions with abstract concepts and multiple expressions. Moreover, how to fully utilize both audio and visual information is still an open problem. In this paper, we propose a novel multimodal fusion attention network for audio-visual emotion recognition based on adaptive and multi-level factorized bilinear pooling (FBP). First, for the audio stream, a fully convolutional network (FCN) equipped with a 1-D attention mechanism and local response normalization is designed for speech emotion recognition. Next, a global FBP (G-FBP) approach is presented to perform audio-visual information fusion by integrating a self-attention based video stream with the proposed audio stream. To improve G-FBP, an adaptive strategy (AG-FBP) to dynamically calculate the fusion weight of the two modalities is devised based on the emotion-related representation vectors from the attention mechanism of the respective modalities. Finally, to fully utilize the local emotion information, adaptive and multi-level FBP (AM-FBP) is introduced by combining both global-trunk and intra-trunk data in one recording on top of AG-FBP. Tested on the IEMOCAP corpus for speech emotion recognition with only the audio stream, the new FCN method outperforms the state-of-the-art results with an accuracy of 71.40%. Validated on the AFEW database of the EmotiW2019 sub-challenge and the IEMOCAP corpus for audio-visual emotion recognition, the proposed AM-FBP approach achieves the best accuracies of 63.09% and 75.49% on the respective test sets.


I Introduction

Due to the rapid development of intelligent technology and its wide application in human-computer interaction, it is of great significance to realize scientific emotion recognition [1]. For example, doctors utilize emotion recognition technology to do research on Parkinson's disease [2, 3]. In the field of service robots, effective emotion recognition can bring a more comfortable interaction experience to users. As for the areas of computational advertising and entertainment, detecting consumers' emotions helps enterprises provide better services [4, 5]. Accordingly, automatic multimodal emotion recognition has attracted more and more attention in real-life applications [6, 7], which is also the focus of this study.

Some progress in multimodal emotion recognition has been made by combining different modalities such as speech, face, body gesture, and brain signals, etc. [8, 3, 9]. Audio and video, more specifically the speech and facial expressions, are two of the most powerful, natural and universal signals for human beings to convey their emotional states and intentions [10]. For speech emotion recognition (SER) using only the audio stream, distinguishing acoustic features [11, 12] are often extracted from the original raw speech signals, followed by different classifiers [13, 14]. For video-based emotion recognition, video frame or image preprocessing [15] is necessary correspondingly, referring to face detection, alignment, face key point detection and so on. Then image feature vectors are also fed into a classifier for prediction [16]. As for integrating audio and video modalities, fusion usually operates at the feature, model and decision levels. In feature-level fusion, features extracted from each of the two modalities are concatenated as one vector for emotion classification [17], which does not take into account the differences in modal-specific emotional characteristics. Moreover, this strategy makes it difficult to model the time synchronization between the audio and visual modalities. For decision-level fusion, the posterior probabilities of the two individual classifiers are combined, e.g., using linear weighted combination, support vector machines (SVMs), etc. [18], to obtain the final recognition results. This technique fully considers the differences of audio and visual features, but it is weak in modeling the interactions between the two modalities. As a compromise between feature-level and decision-level fusion, model-level fusion has also been used for audio-visual emotion recognition (AVER). A tripled hidden Markov model (THMM) was introduced to perform the recognition, which allowed state asynchrony of the audio and visual observation sequences while preserving their natural correlation over time [19]. In [20], a multi-stream fused hidden Markov model (MFHMM) was proposed which adopted a variety of learning methods to achieve a robust multi-stream fusion result according to the maximum entropy principle.

Fig. 1: An overall architecture of the proposed multimodal attention and fusion network based on adaptive and multi-level factorized bilinear pooling for audio-visual emotion recognition. The detected face and the spectrogram are initially encoded by the video encoder and the audio encoder respectively. The outputs are subsequently weighted by 1-D attention. Then AG-FBP and M-FBP are employed to fuse audio-visual information based on the whole sentence and its segmentation respectively. The FBP module is shown in Fig. 2. The outputs are finally concatenated and fed into the classifier to determine the emotion class.

With recent developments of deep learning in the multimodal field, recurrent neural networks (RNNs) and 3D convolutional networks (C3D) were used to solve the problem of video classification in [21], which encodes appearance and motion information in different ways and combines them in a late-fusion manner. Researchers have also investigated how to extract more representative features [22, 23], as the expression forms of audio and video are often quite different. Deep learning shows better performance than traditional machine learning algorithms in this kind of fusion. [24] proposed to bridge the emotional gap by using a hybrid deep model, which first produces audio-visual segment features with convolutional neural networks (CNNs) and a 3D-CNN, and then fuses them in deep belief networks (DBNs). In [25], a concatenation of different modalities was performed after an encoder, which yielded significant improvements. In our recent work [26], we introduced global-trunk based factorized bilinear pooling (G-FBP) to integrate the audio and visual features, achieving a state-of-the-art performance.

Audio-visual emotion recognition has been investigated for quite a few years and is considered a comparatively hot topic in the field of affective computing. Nonetheless, it remains a challenging problem in which there are still many uncontrolled factors in data acquisition. The varying conditions for audio-visual emotion data include indoor and outdoor scenarios, environmental noises, lighting situations, motion blurs, occlusions and pose changes, etc. [27]. To address these issues, the EmotiW [7] challenges have been held successfully since 2013. The winning teams [28, 29, 30, 23] have proposed several advanced techniques for AVER and achieved better results every year, further investigating how to effectively model different modalities for information fusion. In this study, we comprehensively extend our previous G-FBP approach [26] and propose a multimodal fusion attention network for AVER based on adaptive and multi-level FBP, as shown in Fig. 1. The new contributions can be summarized below:

  • A fully convolutional network (FCN) based 1-D attention network is designed for speech emotion recognition by utilizing local response normalization (LRN).

  • An adaptive G-FBP (AG-FBP) approach is presented to automatically calculate the importance weights of audio and video modalities when using G-FBP fusion.

  • On top of AG-FBP, adaptive and multi-level FBP (AM-FBP) is introduced to fully utilize the local emotion information by additionally using intra-trunk data.

  • We achieve the best accuracy of 63.09% on the test set of EmotiW2019 sub-challenge [7] and 75.49% on the test set of IEMOCAP corpus, and demonstrate the effectiveness of the proposed approach by visualizing the changes of attention weights and network embedding.

The rest of this paper is organized as follows. In Section II, we introduce the related work. In Section III, we elaborate on the proposed fusion strategy. In Section IV, we present our experimental results and analyses. Finally, we draw our conclusions in Section V.

II Related Work

II-A Audio-based emotion recognition

In the process of human interaction, speech is the most direct communication channel. People can often clearly feel the changes in emotion through speech, such as human voice quality and rhythm, as well as prosodic expressions in pitch and energy contours. In order to recognize speakers' emotional states, distinguishing paralinguistic features, which do not depend on the lexical content, need to be extracted from speech [31]. Many types of acoustic features have been used for speech emotion recognition, including continuous, qualitative, and spectral features [32, 33].

Fig. 2: Factorized bilinear pooling (FBP) module.

In general, the original raw speech signals are first segmented into overlapped frames. Then various statistical functions (e.g., mean, max, linear regression coefficients, etc.) are utilized to obtain frame-level features. The outputs are then concatenated as a feature vector to represent the whole audio recording, followed by a classifier [12]. As deep learning came to prominence, deep neural networks (DNNs) were utilized on top of the traditional utterance-level features and achieved a significant accuracy improvement compared with conventional classifiers. [34] investigated a number of DNN architectures and interleaved a time-delay neural network and long short-term memory (TDNN-LSTM) with time-restricted self-attention, achieving a good performance gain. [5] presented a new implementation of emotion recognition using spectrogram features and classifiers based on CNNs and RNNs. In [11], a 3-D attention-based convolutional RNN (ACRNN) was proposed. An attention-pooling based representation learning method was introduced in [14]. These methods all split the whole speech utterance into small segments and used attention mechanisms for speech emotion recognition. In [35], a triplet loss was used to reinforce emotional clustering based on LSTMs, and three different strategies were explored to handle variable-length inputs for SER. Recently, more research efforts have focused on auxiliary information and innovative ways to assist emotion recognition. For example, transcripts, language cues and cross-culture information were adopted in emotion recognition [25, 36, 37]. In [38], conditioned data augmentation using generative adversarial networks (GANs) was explored to address the problem of data imbalance in SER tasks. Furthermore, [39] used multi-task learning with an attention mechanism to share useful information in SER scenarios. There is also no consensus on appropriate features for SER: in [40], representations were learned from raw speech, and in [41], phone posteriors in the raw speech waveform were employed to improve emotion identification.

II-B Video-based emotion recognition

In interpersonal interactions, people can enhance communication effectiveness by controlling their facial expressions, an important channel for conveying human emotional information. It refers to all kinds of emotions expressed through changes in the muscle movements of the face, eyes and mouth. Among them, the muscle groups near the eyes and mouth are shown to be the most prominent [42]. In recent years, research on visual recognition has paid more attention to feature learning via neural networks. [43] utilized CNNs for feature extraction in facial expression recognition (FER). The winners in the AVER task of the EmotiW challenge used facial features extracted from deep CNNs trained on large face datasets [43, 29]. In [15], spatial-temporal techniques aimed to model the temporal or motion information in videos. Deep C3D was a widely-used spatial-temporal approach to video-based FER [44]. In [45], geometry-based FER was proposed to boost the accuracy by using a multi-kernel framework to combine features. Effective emotion recognition was implemented in [46] by learning the proposed 2D landmark information with a CNN and an LSTM-based network. In [47] and [48], emotion expressions were recognized by using the differences (or relations) between neutral and expressive faces. Finally, continuous emotion recognition in videos was implemented in [49] by fusing facial expression, head pose and eye gaze.

II-C Audio-visual based emotion recognition

Audio-visual emotion recognition integrates audio and visual modalities with different statistical properties by using fusion strategies at the feature, decision and model levels. Feature-level fusion is also called early fusion. A substantial number of previous works [50, 51] have demonstrated the performance of feature-level fusion on AVER tasks. However, because it merges audio and visual features in a straightforward way, feature-level fusion cannot model the complicated relationships, e.g., the differences in time scales and metric levels, between the two modalities [24]. Decision-level fusion has also been adopted by almost all the winning systems of the EmotiW challenges [29, 22]. Note that it is usually implemented by combining the individual classification scores and is therefore not able to well capture the mutual correlation among different modalities, as these modalities are assumed to be independent. In [52], model-level fusion was performed by fusing audio and visual streams of hidden Markov models (HMMs), which facilitated the building of an optimal connection among multiple streams according to the maximum entropy principle and the maximum mutual information criterion. To improve emotion recognition performance, the mouth area was further divided into several subregions, as elaborated in [53], to extract LBP-TOP features from each subregion and concatenate the respective features. In [30], a multiple attention fusion network (MAFN) was proposed by modeling human emotion recognition mechanisms.

III Proposed Attention and Fusion Strategy

The proposed multimodal attention and fusion network based on adaptive and multi-level FBP for audio-visual emotion recognition is shown in Fig. 1, in which all symbols and numbers will be introduced in the following subsections. Most audio-visual emotion datasets are annotated at the sentence level [54, 55, 56]. The proposed framework maps a temporal feature sequence to a single label, and mainly consists of three important parts: the audio and video encoders, attention, and fusion and classification.

Audio and video encoder: in our study, for the audio encoder, an FCN with local response normalization (LRN) [57] is adopted to encode the speech spectrogram into a high-level representation. For the video encoder used on the AFEW database, each detected face is encoded into a vector through a pre-trained model [43], which has been proved to be effective, while on the IEMOCAP database we use the same network as the audio stream by utilizing the facial marker information [58].

Attention: a 1-D attention-based decoder is employed after the audio encoder and the video encoder respectively, to extract the information most related to emotion.

Fusion and classifier: as for audio-visual fusion, an adaptive and multi-level FBP approach is presented. Based on the fusion vector, the output posterior probabilities of the emotion classes can be generated by using a fully-connected (FC) layer followed by a softmax layer. We will elaborate on each module in the following subsections.

III-A Audio stream

The audio stream directly handles the speech spectrogram by using stacked convolutional layers followed by an attention block. Without handcrafted feature extraction, CNN-based learning has been widely used for SER [59, 60]. Inspired by AlexNet [57], we use an FCN based audio encoder as illustrated in Fig. 3. All the convolutional layers are followed by a ReLU activation function and LRN. The dimensions of the frequency domain and the time domain are f and t respectively.

Fig. 3: FCN based audio encoder using LRN.

The main principle of LRN is to suppress neighboring neurons by imitating biologically active neurons, aiming at improving the accuracy of deep network training. In [57], the LRN layer was proposed to create a competition mechanism for the activity of local neurons, which enhances the larger responses and suppresses other neurons with smaller feedback, thus improving the generalization ability of the model. Suppose the activity of a neuron computed by applying kernel i at position (x, y) is denoted by a^i_{x,y}. Then, after applying the ReLU [61] nonlinearity, the response-normalized activity b^i_{x,y} is expressed as:

b^i_{x,y} = a^i_{x,y} / ( k + α ∑_{j=max(0, i−n/2)}^{min(N−1, i+n/2)} (a^j_{x,y})^2 )^β    (1)

where the sum runs over n "adjacent" kernel maps at the same spatial position, and N is the total number of kernels in the layer [57]. The constants k, n, α and β are hyperparameters. We apply this normalization after the ReLU nonlinearity in certain layers.
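As a concrete illustration, this cross-channel normalization can be sketched in NumPy. The hyperparameter values below (k = 2, n = 5, α = 1e-4, β = 0.75) are the ones reported for AlexNet [57] and are used here only as defaults; the actual values in our networks follow [57] as well.

```python
import numpy as np

def local_response_norm(a, k=2.0, n=5, alpha=1e-4, beta=0.75):
    """AlexNet-style LRN across channels.

    a: post-ReLU activations of shape (channels, height, width).
    Each channel is divided by a term accumulated over up to n
    adjacent channels at the same spatial position.
    """
    N = a.shape[0]
    b = np.empty_like(a)
    for i in range(N):
        lo, hi = max(0, i - n // 2), min(N - 1, i + n // 2)
        denom = (k + alpha * np.sum(a[lo:hi + 1] ** 2, axis=0)) ** beta
        b[i] = a[i] / denom
    return b
```

Since k > 1, the normalized responses are always smaller than the raw activations, and channels surrounded by strong neighbors are suppressed more, which is the competition mechanism described above.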

To align audio with video in time, here we pool the size of the frequency domain to 1. Accordingly, the output of the audio encoder is reshaped into a 2-D array of size T × C, where T and C represent the number of time frames and channels, respectively. We consider the output as a variable-length grid of T elements. Each element is a C-dim (C = 256) vector corresponding to a region of the speech spectrogram, represented as a_t. Therefore, the whole audio utterance is now denoted as:

A = {a_1, a_2, ..., a_T}, a_t ∈ R^C    (2)

Intuitively, not all time-frequency units in the set A contribute equally to the emotion state of the whole utterance. We introduce self-attention to extract the elements that are important to the emotion of the utterance, as shown in Fig. 4. By calculating attention weights in the time dimension, the elements in the set are weighted and summed. We use the following formulae to realize this idea:

h_t = tanh(W a_t + b)    (3)
e_t = u^T h_t    (4)
α_t = exp(λ e_t) / ∑_{k=1}^{T} exp(λ e_k)    (5)
a = ∑_{t=1}^{T} α_t a_t    (6)

First, a_t is fed to a fully connected layer with a parameter set {W, b} followed by a tanh function to obtain a new representation h_t. Then we measure the importance weight e_t by the inner product between the new representation h_t and a learnable vector u. After that, the normalized importance weight α_t is calculated using softmax with the temperature parameter λ to control the uniformity of the importance weights [62]. If λ = 0, the weights obtained by attention are all equal, which means all the time-frequency units have the same importance to the utterance audio vector a. Finally, a is computed as the sum over the set A weighted by the importance weights, and represents the audio-based global feature vector for emotion.
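A minimal NumPy sketch of this temperature-controlled self-attention pooling follows; the names W, b, u and lam are illustrative stand-ins for the learnable fully connected parameters, the learnable scoring vector and the temperature, which in the real system are trained jointly with the encoder.

```python
import numpy as np

def attention_pool(A, W, b, u, lam=1.0):
    """Self-attention pooling over a set of frame vectors.

    A: (T, d) frame features; W: (d, d_h) and b: (d_h,) form the
    fully connected layer; u: (d_h,) is the learnable scoring vector;
    lam is the softmax temperature (lam = 0 yields uniform weights).
    Returns the pooled (d,) global vector and the (T,) weights.
    """
    e = np.tanh(A @ W + b) @ u           # importance score per frame
    w = np.exp(lam * e)
    alpha = w / w.sum()                  # normalized attention weights
    return alpha @ A, alpha              # weighted sum over the set
```

With lam = 0 every frame receives the same weight 1/T, so the pooled vector degenerates to the plain mean of the frame features, matching the uniformity behavior described above.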

Fig. 4: Structure of audio/video self-attention.

III-B Video stream

As shown in Fig. 1, for the video stream, the face frames are first detected and aligned using a face detector [63]. Then, due to the limited amount of training data in AVER tasks such as the EmotiW challenge data set, a video encoder network pre-trained on FER2013 [64] is adopted. The detected faces are resized to a fixed height and width. If a face is not found in a frame, the entire frame is resampled and passed to the network, because the network can still capture some contextual cues and this also ensures the synchronization of audio and video in time. In [43], four kinds of deep convolutional neural networks, e.g., VGG-Face, are adopted to extract emotion-related features from a face image with a 3-channel input (R, G, B). Moreover, three proprietary state-of-the-art face recognition networks, denoted as FR-Net-A, FR-Net-B and FR-Net-C, have also been investigated. In this study, the FR-Net-B network is chosen as our video encoder as it achieved a high accuracy, as demonstrated in [26, 43].

For each face frame, a d-dim feature vector of the video encoder output is generated from the last fully connected layer. Therefore, the visual feature sequence of an N-frame video can be represented as:

V = {v_1, v_2, ..., v_N}    (7)

where v_n denotes the facial feature vector of the n-th frame. Similar to the audio stream, we adopt the self-attention mechanism to calculate the weight for each frame, as shown in Fig. 4. Before entering the attention block, a dimension reduction is conducted to decrease the computational complexity and to relieve over-fitting. Here we use a convolution layer as it has fewer parameters than a fully connected layer. The formulae are listed below:

c_n = Conv(v_n)    (8)
e_n = u_v^T tanh(W_v c_n + b_v)    (9)
α_n = exp(λ e_n) / ∑_{k=1}^{N} exp(λ e_k)    (10)
v = ∑_{n=1}^{N} α_n c_n    (11)

where Conv represents the convolution operation yielding a new low-dimension representation c_n of the n-th frame. Specifically, we first extend the input vector to a matrix by reshaping; the convolution is then applied with a stride of 4, reducing the dimension of the Conv output accordingly. Finally, v is computed as the sum over the reduced representations weighted by the importance weights, and represents the video-based global feature vector for emotion.

III-C Global factorized bilinear pooling (G-FBP)

Bilinear pooling was introduced in [65] and initially used for feature fusion, with the fused vectors then used for classification. Although it improves system performance, it also brings a huge amount of computation. Some research efforts focusing on reducing the computational cost have achieved considerable results [66, 67]. According to [68], for the audio feature vector x_a ∈ R^m and the video feature vector x_v ∈ R^n, the bilinear pooling for the output z_i is defined as follows:

z_i = x_a^T W_i x_v    (12)

where W_i ∈ R^{m×n} is the i-th projection matrix. By learning a set of projection matrices W = [W_1, ..., W_o], we can obtain an o-dimensional audio-visual fusion vector z = [z_1, ..., z_o].

According to [69, 68], the projection matrix W_i in Eq. (12) can be factorized into two low-rank matrices:

z_i = x_a^T U_i V_i^T x_v = 1^T (U_i^T x_a ∘ V_i^T x_v)    (13)

where k is the latent dimension of the factorized matrices U_i ∈ R^{m×k} and V_i ∈ R^{n×k}, ∘ represents the element-wise multiplication of two vectors, and 1 ∈ R^k is an all-1 vector. The advantage is that the low-rank matrices U_i and V_i are used to approximate W_i, so the operation is simplified and the parameter quantity is reduced. When we want the output to be a vector, we just need to expand the matrices U_i and V_i. Specifically, to obtain the output feature vector z by Eq. (14) below, two 3-D tensors U = [U_1, ..., U_o] and V = [V_1, ..., V_o] need to be learned. Note that U and V can be reformulated as 2-D matrices Ũ ∈ R^{m×ko} and Ṽ ∈ R^{n×ko}, respectively, by using a reshape operation. Accordingly, we have:

z = SumPooling(Ũ^T x_a ∘ Ṽ^T x_v, k)    (14)

where Ũ^T x_a and Ṽ^T x_v are implemented by feeding x_a and x_v to fully connected layers respectively, and the function SumPooling(x, k) applies sum pooling within a series of non-overlapped windows of size k to x. We refer to z in Eq. (14) as global factorized bilinear pooling (G-FBP). The G-FBP module is shown in Fig. 2.

After that, since the magnitude of the output varies dramatically due to the introduced element-wise multiplication, L2-normalization (z ← z / ‖z‖) is used after G-FBP to normalize the energy of the fused vector z.
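The G-FBP computation described above can be sketched in NumPy as follows. The shapes (an m-dim audio vector, an n-dim video vector, an o-dim fused output, sum-pool window k) are generic stand-ins; in the real network the projections Ũ and Ṽ are learned fully connected layers rather than fixed matrices.

```python
import numpy as np

def g_fbp(xa, xv, U, V, k):
    """Global factorized bilinear pooling (a sketch).

    xa: (m,) audio vector; xv: (n,) video vector;
    U: (m, o*k), V: (n, o*k) low-rank projection matrices;
    k: width of the non-overlapped sum-pooling windows.
    Returns an o-dim fused vector, L2-normalized.
    """
    h = (U.T @ xa) * (V.T @ xv)           # element-wise product, shape (o*k,)
    z = h.reshape(-1, k).sum(axis=1)      # sum pooling over windows of size k
    return z / (np.linalg.norm(z) + 1e-12)
```

The reshape-and-sum step is exactly the windowed sum pooling: each output coordinate aggregates its own window of k latent products, and the final division realizes the L2-normalization applied after G-FBP.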

III-D Adaptive global factorized bilinear pooling (AG-FBP)

Through our analysis of the experimental results on the EmotiW database (see Section IV for details), we find that the influence of the audio and video streams on the emotional state of each specific recording is different. For example, audio streams tend to be classified into the "Angry" and "Fear" emotions, while video streams are often classified as the "Disgust", "Happy", "Sad" or "Surprise" emotions. We therefore propose an adaptive strategy for G-FBP (denoted as AG-FBP) audio-visual information fusion. In this study, we adopt the encoder vectors before audio and video fusion to dynamically calculate the two coefficients:

ω_a = ‖x_a‖ / (‖x_a‖ + ‖x_v‖)    (15)
ω_v = ‖x_v‖ / (‖x_a‖ + ‖x_v‖)    (16)

where ω_a and ω_v are the adaptive factor coefficients of the audio and video streams respectively, computed based on the current sample, and ‖·‖ represents the L2-norm operation.

Compared with Eq. (12), the new formulation is shown below:

z_i = (ω_a x_a)^T W_i (ω_v x_v)    (17)

Correspondingly, the formulation of G-FBP in Eq. (14) is modified as:

z = SumPooling(ω_a Ũ^T x_a ∘ ω_v Ṽ^T x_v, k)    (18)

In comparison to G-FBP, no additional learning parameters are required in AG-FBP. ω_a and ω_v are adaptively determined by the audio-based and video-based global feature vectors for emotion, which are learned in the attention module and can well represent the contribution and correlation of each modality to the current emotion state.
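Under one plausible reading of the adaptive coefficients (each modality's L2 norm divided by the sum of both norms, consistent with the description above), they can be computed per sample with no learned parameters:

```python
import numpy as np

def adaptive_weights(xa, xv):
    """Per-sample adaptive modality coefficients from L2 norms (a sketch).

    xa, xv: the audio-based and video-based global feature vectors
    produced by the attention modules. Returns (w_a, w_v), which
    always sum to 1, so a stronger modality gets a larger share.
    """
    na, nv = np.linalg.norm(xa), np.linalg.norm(xv)
    wa = na / (na + nv)
    return wa, 1.0 - wa
```

Because the weights are recomputed from the current sample's feature norms, a recording with a weak (low-energy) video representation automatically shifts fusion weight toward the audio stream, and vice versa.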

III-E Multi-level factorized bilinear pooling (M-FBP)

Since the change of emotional state is usually continuous, there is no good characterization of emotional state durations. Some previous studies showed that 250 ms is the suggested minimum segment length required for identifying emotion [70, 71]. Speech segments have also been investigated for speech emotion recognition [24, 5, 11, 72]. Emotions change over time, so audio-visual fusion at the segment level may be more effective. Motivated by this, we propose a multi-level FBP (M-FBP) approach for audio-visual emotion recognition by using intra-trunk data of one recording. As shown in Section III-C, G-FBP only extracts a global audio/video vector for FBP fusion. In addition to this, M-FBP performs a high-resolution fusion at the segment level, which can more fully integrate the audio and visual information.

To implement M-FBP, on the one hand, the stride of the pooling layer of the audio stream can be modified to adjust the length T_s of the intra-trunk audio data. The number of intra-trunks S is determined by the time lengths of the sample (T) and of one intra-trunk (T_s), with S = ⌈T / T_s⌉. On the other hand, the frame rate of the video stream is 40 ms while the frame shift of the audio stream is 10 ms. To synchronize the audio and video streams, we change the time length of the video stream to be the same as that of the audio stream through reshape and sum operations. As the length of each video recording is different, we adopt zero-padding and use masking at the end as in [12]. Finally, we formulate the intra-trunk based FBP as follows:

z_s = SumPooling(Ũ_s^T x_a^s ∘ Ṽ_s^T x_v^s, k), s = 1, ..., S    (19)

where Ũ_s and Ṽ_s are reshaped from the two 3-D tensors for the s-th intra-trunk data. Different from G-FBP, L2-normalization is used after M-FBP to normalize the energy of each intra-trunk fusion vector z_s.

Note that we can also combine AG-FBP and M-FBP to perform adaptive and multi-level FBP (AM-FBP). Accordingly, the fusion vector of the global-trunk data and those of the intra-trunk data are concatenated as the final fusion vector of the AM-FBP system. For all these audio-visual systems, we update the network parameters using the cross-entropy criterion.
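The intra-trunk segmentation with zero-padding at the end can be sketched as follows; trunk_len is an illustrative stand-in for the intra-trunk length in frames, and masking of the padded tail is omitted for brevity.

```python
import numpy as np

def split_trunks(X, trunk_len):
    """Split a (T, d) feature sequence into equal-length intra-trunks.

    The number of trunks is ceil(T / trunk_len); the tail is
    zero-padded so every trunk has exactly trunk_len frames,
    mirroring the zero-padding step described above.
    Returns an array of shape (S, trunk_len, d).
    """
    T, d = X.shape
    S = int(np.ceil(T / trunk_len))           # number of intra-trunks
    pad = S * trunk_len - T
    Xp = np.vstack([X, np.zeros((pad, d))])   # zero-pad the tail
    return Xp.reshape(S, trunk_len, d)
```

Each returned trunk would then be pooled and fused on its own (one FBP per trunk), and the per-trunk fusion vectors concatenated with the global-trunk vector to form the AM-FBP representation.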

IV Experiments and Result Analyses

To verify the effectiveness of our proposed approach, we validate the audio-visual emotion recognition network on the IEMOCAP database [55] and the AFEW8.0 database [56], which was used in the audio-visual sub-challenge of EmotiW2019.

The IEMOCAP corpus comprises five sessions, each of which includes labeled emotional speech utterances from recordings of dialogs between two actors. There is no actor overlap between these sessions. We utilize the database in the same way as [12]. Four emotional categories are adopted, namely happy, sad, angry and neutral. Using only the improvised data instead of the acted data, we implement a 5-fold cross-validation. The spectrogram extraction process is consistent with [12]. First, a sequence of overlapping Hamming windows is applied to the speech waveform, with a 10 ms window shift and a 40 ms window size. Then, we calculate a discrete Fourier transform (DFT) of length 800 for each frame. Finally, the 200-dimensional low-frequency part of the spectrogram is used as the input. In addition, the IEMOCAP database contains detailed facial marker information from the speakers. The Mocap (facial expression) data contain a column of tuples, and the sample rate of the marker capture system is 120 frames per second. For details, please refer to [55]. The marker point coordinates are used as features for training the video-based network. Since we use the facial marker information as the input, a 1D-ABFCN consistent with the audio encoder is also employed as the video encoder; the input channel is 3 and the pool sizes of the two encoders are different.
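The spectrogram front-end described above can be sketched as follows, assuming a 16 kHz sampling rate (not stated in this excerpt); the window size, shift, DFT length and the 200 retained low-frequency bins follow the description.

```python
import numpy as np

def spectrogram(wave, sr=16000, win_ms=40, shift_ms=10, n_dft=800, n_keep=200):
    """Hamming-window magnitude spectrogram as described above.

    wave: 1-D waveform; returns an array of shape (T, n_keep),
    one 200-dim low-frequency slice per 10 ms frame.
    """
    win = int(sr * win_ms / 1000)        # samples per 40 ms window
    hop = int(sr * shift_ms / 1000)      # samples per 10 ms shift
    frames = []
    for start in range(0, len(wave) - win + 1, hop):
        seg = wave[start:start + win] * np.hamming(win)
        mag = np.abs(np.fft.rfft(seg, n=n_dft))   # length-800 DFT magnitudes
        frames.append(mag[:n_keep])               # keep the low-frequency part
    return np.array(frames)
```

For a one-second 16 kHz signal this yields 97 frames of 200 bins, which would then be stacked into the 2-D input of the FCN audio encoder.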

The AFEW database is collected from films and TV series to simulate the real world, including seven emotional categories: angry, disgust, fear, happy, neutral, sad and surprise. There are 773 videos and corresponding audio clips in the training set, 383 in the validation set, and 653 in the test set. We carried out experiments using the FR-Net-B feature, which has been proved effective in [26]. For the audio feature, we also used the 200-dimensional spectrogram, in the same way as for the IEMOCAP database.

Fig. 5: 2D-ABFCN and 1D-ABFCN learning curves, where 1D-ABFCN (No LRN) means that only the LRN layer is removed compared with 1D-ABFCN.

For the audio system and the video system, the proposed architectures are all implemented using TensorFlow 1.2.1 and are trained on two GeForce GTX 1080 GPUs for 100 epochs. The batch size is 32. We use the Adam optimizer, with the temperature parameters of the audio and video attention set to 0.3 and 0.5 for the IEMOCAP database, and to 0.3 and 1 for the AFEW database. For the audio system, the learning rate is 0.0001. For the video system, the learning rate is 0.0002 for the IEMOCAP database and 0.0001 for the AFEW database.

For the audio-visual fusion system, the proposed methods are implemented using TensorFlow 1.2.1 and are trained on two GeForce GTX 1080 GPUs for 200 epochs. The batch size is 64. We use the Adam optimizer with a learning rate of 0.0001, and the temperature parameters are 0.3 and 1. We train the whole network with two further hyperparameters set to 4 and 2. The dimension of the fusion vector is 128 in the G-FBP system and 192 in the M-FBP system, with different settings for the AFEW and IEMOCAP databases. The dropout parameter is set to 0.3 to alleviate the over-fitting problem. For the LRN parameters, we follow [57].

For the selection of the two temperature parameters, we carry out experiments with values from 0 to 1 in steps of 0.1, and choose the values corresponding to the optimal result on the validation set. For the selection of the other two hyperparameters, we take 2 as the basic step to train the proposed networks, and likewise choose the values corresponding to the best result on the validation set.

IV-A Audio and video based emotion recognition

Systems Initialization Accuracy
Att.+BLSTM+FCN [73] Random 68.10%
CNN+LSTM [5] Random 68.80%
Fusion_TACN [74] Random 69.75%
2D-ABFCN [12] Pre-trained 70.40%
1D-ABFCN (No LRN) Random 70.79%
1D-ABFCN Random 71.40%
TABLE I: Classification accuracy comparison of different audio network architectures and parameter initializations on IEMOCAP test set.

In order to verify the effectiveness of the proposed audio stream in emotion recognition, we designed our experiments using 5-fold cross-validation on the IEMOCAP corpus. 1D-ABFCN (No LRN) denotes 1D-ABFCN with only the LRN layer removed. The results are listed in Table I. Using random initialization, our approach yields an absolute accuracy gain of 2.60% over the CNN+LSTM based approach. Even in an unfair comparison with 2D-ABFCN, which uses a network pre-trained on the ImageNet dataset, the proposed 1D-ABFCN achieves a 1% accuracy improvement on the test set. Note that 1D-ABFCN uses the same configuration and hyperparameter settings as 2D-ABFCN. The effectiveness of ImageNet-based initialization for speech emotion recognition was investigated in [12], where it indeed yields higher accuracy than random initialization. Compared with Fusion_TACN [74], our proposed method improves performance by 1.65%. Adding the LRN layer increases system accuracy by 0.61%. Furthermore, Fig. 5 compares the learning curves of 2D-ABFCN, 1D-ABFCN (No LRN) and 1D-ABFCN on the validation set; after 50 epochs, the learning rate was halved. Using LRN in 1D-ABFCN enhances the generalization ability of the model and accelerates convergence. Accordingly, our approach achieves a smaller loss, which leads to higher accuracy.
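For reference, the cross-channel LRN of [57] divides each activation by a power of a local sum of squared activations over n adjacent channels. A minimal NumPy sketch follows; the default constants are those of [57], not necessarily the ones used in this work:

```python
import numpy as np

def local_response_norm(x, n=5, k=2.0, alpha=1e-4, beta=0.75):
    """Cross-channel LRN as in Krizhevsky et al. [57].

    x: array of shape (..., C); normalization runs over the last axis.
    """
    x = np.asarray(x, dtype=float)
    C = x.shape[-1]
    out = np.empty_like(x)
    half = n // 2
    for c in range(C):
        lo, hi = max(0, c - half), min(C, c + half + 1)
        denom = (k + alpha * np.sum(x[..., lo:hi] ** 2, axis=-1)) ** beta
        out[..., c] = x[..., c] / denom
    return out
```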

Fig. 6: Attention analysis of two randomly selected examples. For each, the curves show the video attention weights along frames in the video and the audio-visual G-FBP systems, with corresponding audio spectrograms and partial video frames.
Systems Accuracy
EmotiW2019 baseline [7] 38.81%
Audio system 34.99%
Video system 52.07%
G-FBP 61.10%
TABLE II: Classification accuracy comparison of different systems on AFEW validation set.

IV-B Audio-visual emotion recognition with G-FBP

On top of the 1D-ABFCN based audio stream, we next evaluate audio-visual fusion using G-FBP. Table II shows the accuracies obtained with different systems on the AFEW validation set. Compared with the EmotiW2019 baseline audio-visual system [7], our proposed G-FBP approach significantly improves the accuracy from 38.81% to 61.10%. As an ablation study of the individual modalities, the audio and video systems described in Section III yield emotion recognition accuracies of 34.99% and 52.07%, respectively. Clearly, G-FBP fusion is quite effective, with an accuracy gain of about 9% over the single video modality.
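The core G-FBP fusion step follows the factorized bilinear pooling family of methods [68]: project each modality, multiply elementwise, sum-pool over factor groups, then normalize. The sketch below is illustrative; the dimensions and random projections are assumptions, not the paper's trained parameters:

```python
import numpy as np

def fbp_fuse(x_audio, x_video, U, V, k):
    """Factorized bilinear pooling of two modality vectors.

    Project each modality, take the elementwise product, then sum-pool
    every k factors; finish with power and l2 normalization.
    """
    joint = (U.T @ x_audio) * (V.T @ x_video)      # shape (d * k,)
    z = joint.reshape(-1, k).sum(axis=1)           # sum-pool -> shape (d,)
    z = np.sign(z) * np.sqrt(np.abs(z))            # power normalization
    return z / (np.linalg.norm(z) + 1e-12)         # l2 normalization

# Illustrative dimensions: 128-d audio, 256-d video, fused dim d=64, k=4.
rng = np.random.default_rng(0)
xa, xv = rng.standard_normal(128), rng.standard_normal(256)
U = rng.standard_normal((128, 64 * 4))
V = rng.standard_normal((256, 64 * 4))
z = fbp_fuse(xa, xv, U, V, k=4)
```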

To better demonstrate the effect of G-FBP fusion via the attention mechanism, we plot the attention weights of each frame for two randomly picked examples from the validation set in Fig. 6. For each curve, the values represent the video attention importance of each frame, the higher the better, in the video-only and audio-visual G-FBP systems.

For the example in Fig. 6(a), we show that the video attention aligns well with the audio and video frames. After frame 14, no face was detected in the video, so the video frames could be regarded as pure noise for emotion recognition; accordingly, the corresponding attention weights in the video-only system were all zeros. In the audio-visual fusion system, however, since the first few audio frames contained a woman’s fearful voice and the last few contained machine noise, the weight of the video stream changed accordingly. Specifically, the weight of the first seven frames increased due to the emotion reinforced by the audio modality, while the weights after frame 14 fluctuated only slightly because of the machine noise, demonstrating good coupling between the audio and video modalities. The video system classified this example as “Surprise” from the facial features alone, and it was correctly classified as “Fear” after audio-visual fusion.

For the example in Fig. 6(b), we illustrate the complementarity between the audio and video modalities using the video attention weights. In the first few frames, the woman was facing sideways and gradually turned to the front; the corresponding attention weights in the video system increased accordingly. After about frame 56, the woman smiled bitterly, leading to much larger weights. The video system classified this example as “Disgust” while the ground truth is “Sad”; this error was likely caused by the wry smile. Moreover, the audio system misclassified this example as “Neutral”, which the spectrogram can explain: only the starting and ending periods contained weak emotion information, a slight sob. However, influenced by the audio stream, the attention of the video stream in the G-FBP fusion system gradually shifted to frames 14-62, which show a slightly sad face, while the weights during the wry smile were reduced. Although both the audio and video systems produced wrong emotion classes from partial or weak emotion information, correct classification was still achieved by the G-FBP system through deep fusion of the audio and video information.

Modality   Angry    Disgust   Fear     Happy    Sad      Surprise   Neutral
Audio      81.25%   0.00%     34.78%   14.29%   21.31%   0.00%      66.67%
Video      59.38%   25.00%    10.87%   80.95%   55.74%   39.13%     60.32%
TABLE III: The accuracy on the AFEW validation set for single-modality emotion classification.
Fig. 7: Adaption factors of the audio modality in AG-FBP.

IV-C Audio-visual emotion recognition with AG-FBP

As listed in Table III, we analyze the single-modality classification results of the audio and video systems on the AFEW validation set; the numbers in the table represent the accuracy for each emotion. The impact of audio and video differs considerably across emotional categories. Although the overall accuracy of the audio-only system is much lower than that of the video-only system (see Table II), its accuracy on “Angry” and “Fear” is significantly higher than that of the video system, showing that for these two emotions the audio modality usually plays the more important role. In contrast, for “Happy”, “Sad”, “Disgust” and “Surprise”, the video modality appears more discriminative, especially for the “Happy” category. This is the main motivation for AG-FBP, which adaptively determines the importance of each modality for a specific sample or recording.

From the results on the AFEW validation set in Table IV, we achieve an absolute accuracy increase of 1.30% from G-FBP to AG-FBP. It is worth noting that AG-FBP incurs no additional storage or computation overhead over G-FBP, as only an adaption factor in Eq. (15) needs to be calculated for a specific recording. By visualizing the mean adaption factors for each emotion category in Fig. 7, we observe that the adaptive weighting factors are well consistent with the relative strength of each modality per category in Table III. For example, the adaption factors of “Fear” and “Angry” in the audio modality are higher than those of the other categories, while that of “Happy” is the lowest. This demonstrates that, for recordings of different emotions, the representation abilities of the audio and video modalities can be quite different.
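The per-recording adaption factors can be sketched as follows. The paper's Eq. (15) is not reproduced here; as a hypothetical stand-in, the weights below are derived from the l2 norms of the two attention-based emotion representation vectors and normalized to sum to one:

```python
import numpy as np

def adaption_factors(e_audio, e_video):
    """Toy per-recording modality weights from the two emotion vectors.

    Stand-in for Eq. (15): the weights come from the l2 norms of the
    attention-based representation vectors, softmax-normalized so the
    stronger modality for this recording receives the larger weight.
    """
    s = np.array([np.linalg.norm(e_audio), np.linalg.norm(e_video)])
    w = np.exp(s - s.max())    # subtract max for numerical stability
    w = w / w.sum()
    return w[0], w[1]
```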

Systems Accuracy p-value
G-FBP 61.10% -
AG-FBP 62.40% 0.004
M-FBP 63.18% 0.002
AM-FBP 64.17% 0.001
TABLE IV: Classification accuracy and p-value of our improved FBP approaches on the AFEW validation set.
Fig. 8: Learning curves of different FBP systems on the validation set.
Fig. 9: Embedding visualization of different network architectures for emotion recognition.

IV-D Audio-visual emotion recognition with M-FBP/AM-FBP

Next, we discuss the effects of multi-level FBP and its combination with AG-FBP. As shown in Table IV, the M-FBP approach achieves an accuracy of 63.18%, a gain of 0.78% over the AG-FBP system obtained by using the multi-level information of both global-trunk and intra-trunk data. By fully utilizing the complementarity between AG-FBP and M-FBP, the proposed AM-FBP system attains the best accuracy of 64.17% among all the improved FBP approaches. We also perform a significance test, a one-tailed test with the null hypothesis that there is no performance difference between the two systems. We follow the “Matched Pair Test” method of [75] to calculate the p-values, which reflect the degree of support for the null hypothesis: the smaller the p-value, the more significant the difference between the two systems. The p-values between G-FBP and our improved FBP systems on the AFEW validation set are listed in Table IV; they imply a high probability that the improved FBP approaches outperform the G-FBP approach.
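For intuition, the flavor of such a matched-pairs comparison can be illustrated with an exact one-tailed sign test over per-utterance wins. This is a simplification; [75] defines the actual Matched Pair Test over speech segments:

```python
from math import comb

def one_tailed_sign_test(wins_a, wins_b):
    """Exact one-tailed sign test: p = P(X >= wins_a), X ~ Binomial(n, 0.5).

    wins_a / wins_b count the items on which system A / system B alone is
    correct (ties are dropped). A small p rejects "no difference" in favor
    of system A being better.
    """
    n = wins_a + wins_b
    return sum(comb(n, k) for k in range(wins_a, n + 1)) / 2 ** n
```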

As illustrated in Fig. 8, we compare the learning curves of the four audio-visual fusion strategies using cross entropy on the validation set. First, the proposed AG-FBP framework shows a lower learning loss and attains better accuracy than the G-FBP system. As expected, the AM-FBP framework also learns well and achieves more stable and better convergence than the M-FBP system, as seen by comparing the bottom curve with the M-FBP curve above it in Fig. 8, which is consistent with the recognition results in Table IV.

In Fig. 9, we visualize the embeddings from the fully-connected layer of different network architectures on the validation set using t-SNE [76]. Clearly, for the single-modality audio or video systems, the embeddings are generally scattered across emotion categories, although “Angry” is more clustered in the audio system while “Happy” is easier to distinguish in the video system. In the multi-modal G-FBP and AM-FBP systems, however, the embeddings are much more distinctive than in the single-modality systems; for example, G-FBP can well distinguish both the “Angry” and “Happy” categories. Compared with G-FBP, AM-FBP yields the best embeddings, with clearer boundaries among different colors (categories), demonstrating the effectiveness of its deeper interactions between the audio and video modalities. All these results align well with the classification accuracies.

IV-E Overall comparison

Systems Single model Accuracy p-value
EmotiW2019 baseline [7] 41.07% -
MAFN [30] 58.65% -
4CNNs+LMED+DL-A+LSTM [29] 61.87% -
4CNNs+BLSTM+Audio [22] 62.78% -
G-FBP 60.64% -
4G-FBP 62.48% -
AG-FBP 61.26% 0.008
M-FBP 61.87% 0.001
AM-FBP 62.17% <0.001
2AM-FBP 62.79% <0.001
2AM-FBP+4G-FBP 63.09% <0.001
TABLE V: The overall performance comparison and p-value of different systems on the AFEW test set.
Fig. 10: Confusion matrix on AFEW test set.

To perform an overall comparison among our proposed techniques on the EmotiW2019 challenge data, we add the validation data to the training set, similar to other participating teams in EmotiW [28, 21]. The upper block of Table V shows the performance of different systems on the AFEW test set. MAFN [30] is a multi-modal adaption method with intra/inter-modality attention mechanisms. “4CNNs+LMED+DL-A+LSTM” [29] combined five visual and two audio models and obtained the best accuracy in EmotiW2018, while “4CNNs+BLSTM+Audio” [22] fused four visual and three audio models to rank as the champion system of the EmotiW2019 challenge, with the best accuracy of 62.78% as shown in Table V. In the bottom block, evaluated on the same test set, our proposed AM-FBP system with a single-model setting yields a competitive accuracy of 62.17%, indicating the effectiveness of AM-FBP fusion. Moreover, the results of G-FBP and M-FBP on the test set follow a similar accuracy trend to those on the validation set. To assess the generalization ability of the whole network, we randomly selected at least two groups of models trained with different random seeds for the proposed frameworks, e.g., “4G-FBP” and “2AM-FBP” in Table V, and the fused results are further improved. Finally, we integrated two AM-FBP and four G-FBP models to achieve the best result of 63.09% among all systems, as shown in the bottom row of Table V. We further perform the significance analysis: the p-values between G-FBP and our improved FBP systems on the AFEW test set are listed in Table V, and imply a high probability that the improved FBP approaches outperform the G-FBP approach.

For further analysis, Fig. 10 shows the confusion matrix of our best AM-FBP system on the test set. We observe that “Angry”, “Happy” and “Neutral” were more easily classified. The worse performance on “Surprise” and “Disgust” might be due to a mixing of different emotions, which makes these categories difficult to classify correctly. We also note that the proportions of these two emotions are the lowest in the training set, and similar results are reported in [22, 29, 30, 77].
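A confusion matrix such as the one in Fig. 10 is computed by counting (true, predicted) pairs on the test set; a minimal sketch with hypothetical labels over three classes:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows index the true class, columns the predicted class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Hypothetical predictions over 3 classes.
cm = confusion_matrix([0, 0, 1, 2], [0, 1, 1, 2], n_classes=3)
per_class_acc = cm.diagonal() / cm.sum(axis=1)   # recall per true class
```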

Systems Accuracy p-value
Audio system [78] 63.00% -
Decision fusion [78] 65.40% -
Audio system [58] 50.97% -
Video system [58] 49.39% -
Encoder concat [58] 67.58% -
Audio system 71.40% -
Video system 53.42% -
Decision fusion 72.54% -
Encoder concat 73.11% -
G-FBP 73.98% 0.001
AM-FBP 75.49% <0.001
TABLE VI: Classification accuracy comparison and p-value of different systems on IEMOCAP test set.

Finally, the proposed methods are further evaluated on the IEMOCAP database. We implemented video-based emotion recognition using the 1D-ABFCN network, and the results are shown in Table VI. The upper block of Table VI shows the performance of different systems on the IEMOCAP test set. In [58], various machine learning and deep learning models are designed to investigate multimodal emotion recognition, while [78] investigates a feature extraction scheme and a matching model structure for multimodal emotion recognition. Although [78] does not report the performance of a video-based system, its audio-visual system improves over its audio-based system by 2.40% through decision fusion. In the bottom block, inspired by [58] and [78], two common fusion methods, “Encoder concat” and “Decision fusion”, are compared with the proposed AM-FBP approach. According to Table VI, the accuracy of encoder concat is 0.57% higher than that of decision fusion, so we take encoder concat as the audio-visual fusion baseline. The proposed AM-FBP achieves the highest accuracy of 75.49%, an absolute gain of 2.38% over encoder concat.
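The two baselines can be sketched as follows: decision fusion averages the class posteriors of the two single-modality systems, while encoder concat joins the encoder outputs before a shared classifier. The equal weighting below is an illustrative choice, not the setting used in the paper:

```python
import numpy as np

def decision_fusion(p_audio, p_video, w=0.5):
    """Late fusion: weighted average of the per-class posterior vectors."""
    return w * p_audio + (1.0 - w) * p_video

def encoder_concat(h_audio, h_video):
    """Early fusion: concatenate encoder outputs for a joint classifier."""
    return np.concatenate([h_audio, h_video])
```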

It is worth noting that, for the single-modality systems, the video-based system is more accurate than the audio-based system on the AFEW database, while the opposite holds on the IEMOCAP database. Judging from the audio-visual results, the two modalities are more complementary on the AFEW database. These observations may originate from differences between the two databases. First, they were collected in different ways: the AFEW database was collected from films and TV series with very complex scenarios and environments, which poses a great challenge to the audio-based system, whereas the IEMOCAP database was recorded from only ten actors in a lab with defined scenes, which we believe is easier for the audio-based system. Second, the facial features in the two databases differ in level of detail. For the AFEW database, the detected faces are fed to a trained neural network to extract high-level embedded features, while for the IEMOCAP database, the publisher provides detailed information about the actor’s facial expression through markers placed on the actor’s face, head, and hands, a representation also widely used, e.g., in [58] and [78]. Because the markers are sparse, the emotional cues in the video are limited. From these two perspectives, it is more difficult to correctly recognize emotions in the AFEW database than in the IEMOCAP database; one obvious observation is that the audio-based system achieves much better performance on IEMOCAP (71.40%) than on AFEW (34.99%). Despite these differences, audio-visual fusion with the proposed AM-FBP improves performance on both databases, demonstrating the good generalization ability of the proposed method. The p-values between our FBP systems and encoder concat on the test set are also shown in Table VI, and imply that the superiority of the proposed FBP is statistically significant.

V Conclusion

In this paper, we introduce a novel audio-visual emotion recognition attention network using adaptive and multi-level FBP fusion. Specifically, the deep features from the audio and video encoders are first selected through the embedding attention mechanism to obtain the emotion-related regions for FBP fusion. Then, adaptive adjustment of the audio and video weights is presented for deeper fusion. Furthermore, multi-level information from global-trunk and intra-trunk data is adopted to design a new network architecture. The proposed approach is verified on the test set of the EmotiW2019 challenge and on the IEMOCAP database, outperforming other state-of-the-art approaches in the literature.

Acknowledgment

The authors would like to thank the organizers of EmotiW for evaluating the accuracies of the proposed systems on the test set of the AFEW database. This work was supported by the Strategic Priority Research Program of Chinese Academy of Sciences under Grant No. XDC08050200.

References

  • [1] M. Ren, W. Nie, A. Liu, and Y. Su, “Multi-modal correlated network for emotion recognition in speech,” Visual Informatics, vol. 3, no. 3, pp. 150–155, 2019.
  • [2] D. Bowers, K. Miller, W. Bosch, D. Gokcay, O. Pedraza, U. Springer, and M. S. Okun, “Faces of emotion in parkinsons disease: Micro-expressivity and bradykinesia during voluntary facial expressions,” Journal of The International Neuropsychological Society, vol. 12, no. 6, pp. 765–773, 2006.
  • [3] R. Yuvaraj, M. Murugappan, N. M. Ibrahim, K. Sundaraj, M. I. Omar, K. Mohamad, and R. Palaniappan, “Detection of emotions in parkinson’s disease using higher order spectral features from brain’s electrical activity,” Biomedical Signal Processing and Control, vol. 14, pp. 108–116, 2014.
  • [4] C. Vinola and K. Vimaladevi, “A survey on human emotion recognition approaches, databases and applications,” Electronic Letters on Computer Vision and Image Analysis, vol. 14, no. 2, pp. 24–44, 2015.
  • [5] A. Satt, S. Rozenberg, and R. Hoory, “Efficient emotion recognition from speech using deep learning on spectrograms.” in 18th Annual Conference of the International Speech Communication Association, INTERSPEECH, 2017, pp. 1089–1093.
  • [6] A. Dhall, A. Kaur, R. Goecke, and T. Gedeon, “Emotiw 2018: Audio-video, student engagement and group-level affect prediction,” in 2018 International Conference on Multimodal Interaction (ICMI), 2018, pp. 653–656.
  • [7] A. Dhall, “Emotiw 2019: Automatic emotion, engagement and cohesion prediction tasks,” in 2019 International Conference on Multimodal Interaction (ICMI), 2019, pp. 546–550.
  • [8] H. Gunes and M. Piccardi, “Bi-modal emotion recognition from expressive face and body gestures,” Journal of Network and Computer Applications, vol. 30, no. 4, pp. 1334–1345, 2007.
  • [9] S. Z. Bong, K. Wan, M. Murugappan, N. M. Ibrahim, Y. Rajamanickam, and K. Mohamad, “Implementation of wavelet packet transform and non linear analysis for emotion classification in stroke patient using brain signals,” Biomedical signal processing and control, vol. 36, pp. 102–112, 2017.
  • [10] Y.-I. Tian, T. Kanade, and J. F. Cohn, “Recognizing action units for facial expression analysis,” IEEE Transactions on pattern analysis and machine intelligence, vol. 23, no. 2, pp. 97–115, 2001.
  • [11] M. Chen, X. He, J. Yang, and H. Zhang, “3-d convolutional recurrent neural networks with attention model for speech emotion recognition,” IEEE Signal Processing Letters, vol. 25, no. 10, pp. 1440–1444, 2018.
  • [12] Y. Zhang, J. Du, Z. Wang, J. Zhang, and Y. Tu, “Attention based fully convolutional network for speech emotion recognition,” in 2018 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC).   IEEE, 2018, pp. 1771–1775.
  • [13] Y. Li, J. Tao, B. Schuller, S. Shan, D. Jiang, and J. Jia, “Mec 2017: Multimodal emotion recognition challenge,” in 2018 First Asian Conference on Affective Computing and Intelligent Interaction (ACII Asia).   IEEE, 2018, pp. 1–5.
  • [14] P. Li, Y. Song, I. Mcloughlin, W. Guo, and L. Dai, “An attention pooling based representation learning method for speech emotion recognition.” in 19th Annual Conference of the International Speech Communication Association, INTERSPEECH, 2018, pp. 3087–3091.
  • [15] D. Meng, X. Peng, K. Wang, and Y. Qiao, “Frame attention networks for facial expression recognition in videos,” in 2019 IEEE International Conference on Image Processing (ICIP).   IEEE, 2019, pp. 3866–3870.
  • [16] Y. Fan, J. C. Lam, and V. O. Li, “Video-based emotion recognition using deeply-supervised neural networks,” in Proceedings of the 20th ACM International Conference on Multimodal Interaction, 2018, pp. 584–588.
  • [17] S. Poria, E. Cambria, A. Hussain, and G.-B. Huang, “Towards an intelligent framework for multimodal affective data analysis,” Neural Networks, vol. 63, pp. 104–116, 2015.
  • [18] S. Dobrišek, R. Gajšek, F. Mihelič, N. Pavešić, and V. Štruc, “Towards efficient multi-modal emotion recognition,” International Journal of Advanced Robotic Systems, pp. 1–10, 2013.
  • [19] M. Song, J. Bu, C. Chen, and N. Li, “Audio-visual based emotion recognition - A new approach,” in 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2004), 2004, pp. 1020–1025.
  • [20] Z. Zeng, Y. Hu, M. Liu, Y. Fu, and T. S. Huang, “Training combination strategy of multi-stream fused hidden markov model for audio-visual affect recognition,” in Proceedings of the 14th ACM international conference on Multimedia, 2006, pp. 65–68.
  • [21] Y. Fan, X. Lu, D. Li, and Y. Liu, “Video-based emotion recognition using cnn-rnn and c3d hybrid networks,” in Proceedings of the 18th ACM International Conference on Multimodal Interaction, 2016, pp. 445–450.
  • [22] S. Li, W. Zheng, Y. Zong, C. Lu, C. Tang, X. Jiang, J. Liu, and W. Xia, “Bi-modality fusion for emotion recognition in the wild,” in 2019 International Conference on Multimodal Interaction, 2019, pp. 589–594.
  • [23] H. Zhou, D. Meng, Y. Zhang, X. Peng, J. Du, K. Wang, and Y. Qiao, “Exploring emotion features and fusion strategies for audio-video emotion recognition,” in 2019 International Conference on Multimodal Interaction, 2019, pp. 562–566.
  • [24] S. Zhang, S. Zhang, T. Huang, W. Gao, and Q. Tian, “Learning affective features with a hybrid deep model for audio–visual emotion recognition,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 28, no. 10, pp. 3030–3043, 2017.
  • [25] J.-L. Li and C.-C. Lee, “Attentive to individual: A multimodal emotion recognition network with personalized attention profile.” in 20th Annual Conference of the International Speech Communication Association, INTERSPEECH, 2019, pp. 211–215.
  • [26] Y. Zhang, Z.-R. Wang, and J. Du, “Deep fusion: An attention guided factorized bilinear pooling for audio-video emotion recognition,” in 2019 International Joint Conference on Neural Networks (IJCNN).   IEEE, 2019, pp. 1–8.
  • [27] E. Avots, T. Sapiński, M. Bachmann, and D. Kamińska, “Audiovisual emotion recognition in wild,” Machine Vision and Applications, vol. 30, no. 5, pp. 975–985, 2019.
  • [28] P. Hu, D. Cai, S. Wang, A. Yao, and Y. Chen, “Learning supervised scoring ensemble for emotion recognition in the wild,” in Proceedings of the 19th ACM international conference on multimodal interaction, 2017, pp. 553–560.
  • [29] C. Liu, T. Tang, K. Lv, and M. Wang, “Multi-feature based emotion recognition for video clips,” in Proceedings of the 20th ACM International Conference on Multimodal Interaction, 2018, pp. 630–634.
  • [30] Y. Wang, J. Wu, and K. Hoashi, “Multi-attention fusion network for video-based emotion recognition,” in 2019 International Conference on Multimodal Interaction, 2019, pp. 595–601.
  • [31] A. Guidi, N. Vanello, G. Bertschy, C. Gentili, L. Landini, and E. P. Scilingo, “Automatic analysis of speech f0 contour for the characterization of mood changes in bipolar patients,” Biomedical Signal Processing and Control, vol. 17, pp. 29–37, 2015.
  • [32] Y. Huang, A. Wu, G. Zhang, and Y. Li, “Extraction of adaptive wavelet packet filter-bank-based acoustic feature for speech emotion recognition,” IET Signal Processing, vol. 9, no. 4, pp. 341–348, 2015.
  • [33] J. Zhao, X. Mao, and L. Chen, “Speech emotion recognition using deep 1d & 2d cnn lstm networks,” Biomedical Signal Processing and Control, vol. 47, pp. 312–323, 2019.
  • [34] M. Sarma, P. Ghahremani, D. Povey, N. K. Goel, K. K. Sarma, and N. Dehak, “Emotion identification from raw speech signals using dnns.” in Interspeech, 2018, pp. 3097–3101.
  • [35] J. Huang, Y. Li, J. Tao, Z. Lian et al., “Speech emotion recognition from variable-length inputs with triplet loss function,” in 19th Annual Conference of the International Speech Communication Association, INTERSPEECH, 2018, pp. 3673–3677.
  • [36] J. Liang, S. Chen, J. Zhao, Q. Jin, H. Liu, and L. Lu, “Cross-culture multimodal emotion recognition with adversarial learning,” in ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).   IEEE, 2019, pp. 4000–4004.
  • [37] H. Xu, H. Zhang, K. Han, Y. Wang, Y. Peng, and X. Li, “Learning alignment for multimodal emotion recognition from speech,” in Interspeech, 2019, pp. 3569–3573.
  • [38] A. Chatziagapi, G. Paraskevopoulos, D. Sgouropoulos, G. Pantazopoulos, M. Nikandrou, T. Giannakopoulos, A. Katsamanis, A. Potamianos, and S. Narayanan, “Data augmentation using gans for speech emotion recognition.” in 20th Annual Conference of the International Speech Communication Association, INTERSPEECH, 2019, pp. 171–175.
  • [39] Z. Zhang, B. Wu, and B. Schuller, “Attention-augmented end-to-end multi-task learning for emotion prediction from speech,” in ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).   IEEE, 2019, pp. 6705–6709.
  • [40] S. Latif, R. Rana, S. Khalifa, R. Jurdak, and J. Epps, “Direct modelling of speech emotion from raw speech,” in Interspeech, 2019, pp. 3920–3924.
  • [41] M. Sarma, P. Ghahremani, D. Povey, N. K. Goel, K. K. Sarma, and N. Dehak, “Improving emotion identification using phone posteriors in raw speech waveform based dnn,” in 20th Annual Conference of the International Speech Communication Association, INTERSPEECH, 2019, pp. 3925–3929.
  • [42] P. Ekman and W. V. Friesen, Facial action coding systems.   Consulting Psychologists Press, 1978.
  • [43] B. Knyazev, R. Shvetsov, N. Efremova, and A. Kuharenko, “Leveraging large face recognition data for emotion classification,” in 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018).   IEEE, 2018, pp. 692–696.
  • [44] D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri, “Learning spatiotemporal features with 3d convolutional networks,” in Proceedings of the IEEE international conference on computer vision, 2015, pp. 4489–4497.
  • [45] P. Yang, H. Yang, Y. Wei, and X. Tang, “Geometry-based facial expression recognition via large deformation diffeomorphic metric curve mapping,” in 2018 25th IEEE International Conference on Image Processing (ICIP).   IEEE, 2018, pp. 1937–1941.
  • [46] D. Y. Choi, D. H. Kim, and B. C. Song, “Recognizing fine facial micro-expressions using two-dimensional landmark feature,” in 2018 25th IEEE International Conference on Image Processing (ICIP).   IEEE, 2018, pp. 1962–1966.
  • [47] B. C. Song, M. K. Lee, and D. Y. Choi, “Facial expression recognition via relation-based conditional generative adversarial network,” in 2019 International Conference on Multimodal Interaction, 2019, pp. 35–39.
  • [48] M. Bai, W. Xie, and L. Shen, “Disentangled feature based adversarial learning for facial expression recognition,” in 2019 IEEE International Conference on Image Processing (ICIP).   IEEE, 2019, pp. 31–35.
  • [49] S. Wu, Z. Du, W. Li, D. Huang, and Y. Wang, “Continuous emotion recognition in videos by fusing facial expression, head pose and eye gaze,” in 2019 International Conference on Multimodal Interaction, 2019, pp. 40–48.
  • [50] M. Mansoorizadeh and N. M. Charkari, “Multimodal information fusion application to human emotion recognition from face and speech,” Multimedia Tools and Applications, vol. 49, no. 2, pp. 277–297, 2010.
  • [51] Y. Wang, L. Guan, and A. N. Venetsanopoulos, “Kernel cross-modal factor analysis for information fusion with application to bimodal emotion recognition,” IEEE Transactions on Multimedia, vol. 14, no. 3, pp. 597–607, 2012.
  • [52] Z. Zeng, J. Tu, B. M. Pianfetti, and T. S. Huang, “Audio–visual affective expression recognition through multistream fused hmm,” IEEE Transactions on multimedia, vol. 10, no. 4, pp. 570–577, 2008.
  • [53] G. Zhao, M. Barnard, and M. Pietikainen, “Lipreading with local spatiotemporal descriptors,” IEEE Transactions on Multimedia, vol. 11, no. 7, pp. 1254–1265, 2009.
  • [54] O. Martin, I. Kotsia, B. Macq, and I. Pitas, “The enterface’05 audio-visual emotion database,” in 22nd International Conference on Data Engineering Workshops (ICDEW’06).   IEEE, 2006, pp. 8–8.
  • [55] C. Busso, M. Bulut, C.-C. Lee, A. Kazemzadeh, E. Mower, S. Kim, J. N. Chang, S. Lee, and S. S. Narayanan, “Iemocap: Interactive emotional dyadic motion capture database,” Language resources and evaluation, vol. 42, no. 4, pp. 335–359, 2008.
  • [56] A. Dhall, R. Goecke, S. Lucey, and T. Gedeon, “Collecting large, richly annotated facial-expression databases from movies,” IEEE Annals of the History of Computing, vol. 19, no. 03, pp. 34–41, 2012.
  • [57] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” Advances in neural information processing systems, vol. 25, pp. 1097–1105, 2012.
  • [58] S. S. S. Mahesh G. Huddar and V. S. Rajpurohit, “Multimodal emotion recognition using facial expressions, body gestures, speech, and text modalities,” International Journal of Engineering and Advanced Technology, vol. 8, 2019.
  • [59] S. Mirsamadi, E. Barsoum, and C. Zhang, “Automatic speech emotion recognition using recurrent neural networks with local attention,” in 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).   IEEE, 2017, pp. 2227–2231.
  • [60] Z. Aldeneh and E. M. Provost, “Using regional saliency for speech emotion recognition,” in 2017 IEEE international conference on acoustics, speech and signal processing (ICASSP).   IEEE, 2017, pp. 2741–2745.
  • [61] X. Glorot, A. Bordes, and Y. Bengio, “Deep sparse rectifier neural networks,” in Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics. JMLR Workshop and Conference Proceedings, 2011, pp. 315–323.
  • [62] G. Hinton, O. Vinyals, and J. Dean, “Distilling the knowledge in a neural network,” in NIPS Deep Learning and Representation Learning Workshop, 2015.
  • [63] D. E. King, “Dlib-ml: A machine learning toolkit,” The Journal of Machine Learning Research, vol. 10, pp. 1755–1758, 2009.
  • [64] P.-L. Carrier, A. Courville, I. J. Goodfellow, M. Mirza, and Y. Bengio, “Fer-2013 face database,” Universit de Montral, 2013.
  • [65] T.-Y. Lin, A. RoyChowdhury, and S. Maji, “Bilinear cnn models for fine-grained visual recognition,” in Proceedings of the IEEE international conference on computer vision, 2015, pp. 1449–1457.
  • [66] Y. Gao, O. Beijbom, N. Zhang, and T. Darrell, “Compact bilinear pooling,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 317–326.
  • [67] Y. Li, N. Wang, J. Liu, and X. Hou, “Factorized bilinear models for image recognition,” in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 2079–2087.
  • [68] Z. Yu, J. Yu, J. Fan, and D. Tao, “Multi-modal factorized bilinear pooling with co-attention learning for visual question answering,” in Proceedings of the IEEE international conference on computer vision, 2017, pp. 1821–1830.
  • [69] J. Kim, K. W. On, W. Lim, J. Kim, J. Ha, and B. Zhang, “Hadamard product for low-rank bilinear pooling,” in ICLR, 2017.
  • [70] E. M. Provost, “Identifying salient sub-utterance emotion dynamics using flexible units and estimates of affective flow,” in 2013 IEEE International Conference on Acoustics, Speech and Signal Processing.   IEEE, 2013, pp. 3682–3686.
  • [71] M. Wöllmer, M. Kaiser, F. Eyben, B. Schuller, and G. Rigoll, “LSTM-modeling of continuous emotions in an audiovisual affect recognition framework,” Image and Vision Computing, vol. 31, no. 2, pp. 153–163, 2013.
  • [72] Y. Chiba, T. Nose, and A. Ito, “Multi-stream attention-based BLSTM with feature segmentation for speech emotion recognition,” in 21st Annual Conference of the International Speech Communication Association, INTERSPEECH, 2020, pp. 3301–3305.
  • [73] Z. Zhao, Z. Bao, Y. Zhao, Z. Zhang, N. Cummins, Z. Ren, and B. Schuller, “Exploring deep spectrum representations via attention-based recurrent and convolutional neural networks for speech emotion recognition,” IEEE Access, vol. 7, pp. 97 515–97 525, 2019.
  • [74] J. Liu, Z. Liu, L. Wang, Y. Gao, L. Guo, and J. Dang, “Temporal attention convolutional network for speech emotion recognition with latent representation,” in 21st Annual Conference of the International Speech Communication Association, INTERSPEECH, 2020, pp. 2337–2341.
  • [75] D. S. Pallet, W. M. Fisher, and J. G. Fiscus, “Tools for the analysis of benchmark speech recognition tests,” in International Conference on Acoustics, Speech, and Signal Processing (ICASSP), vol. 1, 1990, pp. 97–100.
  • [76] L. Van der Maaten and G. Hinton, “Visualizing data using t-SNE,” Journal of Machine Learning Research, vol. 9, no. 11, pp. 2579–2605, 2008.
  • [77] C. Lu, W. Zheng, C. Li, C. Tang, S. Liu, S. Yan, and Y. Zong, “Multiple spatio-temporal feature learning for video-based emotion recognition in the wild,” in Proceedings of the 20th ACM International Conference on Multimodal Interaction, 2018, pp. 646–652.
  • [78] C. Zheng, C. Wang, and N. Jia, “Emotion recognition model based on multimodal decision fusion,” in Journal of Physics: Conference Series, vol. 1873, no. 1, 2021, p. 012092.