Semi-supervised Bayesian Deep Multi-modal Emotion Recognition

04/25/2017
by   Changde Du, et al.

In emotion recognition, it is difficult to recognize a person's emotional states using just a single modality. Moreover, the annotation of physiological emotional data is particularly expensive. These two aspects make building an effective emotion recognition model challenging. In this paper, we first build a multi-view deep generative model to simulate the generative process of multi-modality emotional data. By imposing a mixture of Gaussians assumption on the posterior approximation of the latent variables, our model can learn the shared deep representation from multiple modalities. To solve the labeled-data-scarcity problem, we further extend our multi-view model to the semi-supervised learning scenario by casting the semi-supervised classification problem as a specialized missing data imputation task. Our semi-supervised multi-view deep generative framework can leverage both labeled and unlabeled data from multiple modalities, where the weight factor for each modality can be learned automatically. Compared with previous emotion recognition methods, our method is more robust and flexible. Experiments on two real multi-modal emotion datasets demonstrate the superiority of our framework over a number of competitors.


Code repository: semiMVAE (Semi-supervised Multi-view Variational Autoencoder)



1 Introduction

With the development of human-computer interaction, emotion recognition has become increasingly important. Since human emotion is conveyed through many nonverbal cues, various modalities ranging from facial expressions, voice, Electroencephalogram (EEG) and eye movements to other physiological signals can serve as indicators of emotional states [Calvo and D’Mello2010]. In real-world applications, it is difficult to recognize emotional states using just a single modality, because signals from different modalities represent different aspects of emotion and provide complementary information. Recent studies show that integrating multiple modalities can significantly boost emotion recognition accuracy [Verma and Tiwary2014, Pang et al.2015, Lu et al.2015, Liu et al.2016, Soleymani et al.2016, Zhang et al.2016].

The most successful approaches to fusing information from multiple modalities are based on deep multi-view representation learning [Ngiam et al.2011, Andrew et al.2013, Srivastava and Salakhutdinov2014, Wang et al.2015, Chandar et al.2016]. For example, [Pang et al.2015] proposed to learn a joint density model for emotion analysis with a multi-modal Deep Boltzmann Machine (DBM) [Srivastava and Salakhutdinov2014]; this multi-modal DBM models the joint distribution over visual, auditory and textual features. [Liu et al.2016] proposed a multi-modal emotion recognition method based on multi-modal Deep Autoencoders (DAE) [Ngiam et al.2011], in which joint representations of EEG and eye movement signals are extracted. Nevertheless, these deep multi-modal emotion recognition methods still have limitations, e.g., their performance depends heavily on the amount of labeled data.

With modern sensor equipment, we can easily collect massive physiological signals that are closely related to people's emotional states. Despite the convenience of data acquisition, the data labeling procedure requires substantial manual effort. Therefore, in most cases only a small set of labeled samples is available, while the majority of the dataset is left unlabeled. Traditional emotion recognition approaches utilize only the limited labeled data, which may result in severe overfitting. The most attractive way to deal with this issue is Semi-supervised Learning (SSL), which builds a more robust model by exploiting both labeled and unlabeled data [Schels et al.2014, Jia et al.2014, Zhang et al.2016]. Though multi-modal approaches have been widely implemented for emotion recognition, very few of them have explored SSL at the same time. To the best of our knowledge, only [Zhang et al.2016] proposed an enhanced multi-modal co-training algorithm for semi-supervised emotion recognition, but its shallow structure can hardly capture the high-level correlation between different modalities.

Amongst existing SSL approaches, the most competitive ones are based on deep generative models, which employ Deep Neural Networks (DNNs) to learn discriminative features and cast the semi-supervised classification problem as a specialized missing data imputation task. [Kingma et al.2014] and [Maaløe et al.2016] have shown that deep generative models and approximate Bayesian inference, exploiting recent advances in scalable variational methods [Kingma and Welling2014, Rezende et al.2014], can provide state-of-the-art performance for semi-supervised classification. Though the Variational Autoencoder (VAE) framework [Kingma and Welling2014] has shown great advantages in SSL, its potential merits remain under-explored. For example, until recently there was no successful multi-view extension of it. The main difficulty lies in its inherent assumption that the posterior approximation should be conditioned on the data point, which is natural for single-view data but becomes problematic in the multi-view case.

In this paper, we propose a novel semi-supervised multi-view deep generative framework for multi-modal emotion recognition. Our framework combines the advantages of deep multi-view representation learning and Bayesian modeling, and thus has sufficient flexibility and robustness in learning the joint features and the classifier. Our main contributions can be summarized as follows.

  • We propose a multi-view extension for VAE by imposing a mixture of Gaussians assumption on the posterior approximation of the latent variables. For multi-view learning, this is critical for fully exploiting the information from multiple views.

  • We introduce a semi-supervised multi-modal emotion recognition framework based on multi-view VAE. Our framework can leverage both labeled and unlabeled samples from multiple modalities and the weight factor for each modality can be learned automatically, which is critical for building a robust emotion recognition system.

  • We demonstrate the superiority of our framework and provide insightful observations on two real multi-modal emotion datasets.

2 Multi-view Variational Autoencoder for Semi-supervised Emotion Recognition

The VAE framework has recently been introduced as a robust model for latent feature learning [Kingma and Welling2014, Rezende et al.2014]. However, the single-view architecture of VAE cannot effectively deal with multi-view data. In this section, we first build a multi-view VAE, which can learn the shared deep representation from multi-view data, and then extend it to the semi-supervised scenario. Assume we are faced with multi-view data that appears as pairs $(\mathbf{x}, y) = (\{x_v\}_{v=1}^{V}, y)$, with observation $x_v$ from the $v$-th view and the corresponding class label $y$.

2.1 Multi-view Variational Autoencoder

2.1.1 DNN-parameterized Likelihoods

We assume a shared latent variable $z$ can generate the multi-view features $x_1, \ldots, x_V$. Specifically, we assume $z$ generates $x_v$ for any $v \in \{1, \ldots, V\}$, with the following generative model (cf. Fig. 1a):

p_\theta(x_1, \ldots, x_V, z) = p(z) \prod_{v=1}^{V} p_{\theta_v}(x_v \mid z), \qquad (1)

where $p_{\theta_v}(x_v \mid z)$ is a suitable likelihood function (e.g., a Gaussian for continuous observations or a Bernoulli for binary observations), formed by a non-linear transformation of the latent variable $z$. This non-linear transformation is essential to allow higher moments of the data to be captured by the density model, and we choose these non-linear functions to be DNNs, referred to as the generative networks, with parameters $\theta = \{\theta_v\}_{v=1}^{V}$. Note that the likelihoods for different data views are assumed to be independent of each other, with different non-linear transformations.
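As a concrete illustration, the following is a minimal sketch of one per-view generative network, assuming PyTorch, diagonal-covariance Gaussian likelihoods, and layer sizes loosely matching the '30-50-100' generative structure used later in the experiments; the class and variable names are ours, not the paper's.

```python
import torch.nn as nn

class ViewDecoder(nn.Module):
    """Generative network p_theta_v(x_v | z): maps a latent code z to the mean
    and log-variance of a diagonal Gaussian over the v-th view's features."""
    def __init__(self, latent_dim, view_dim, hidden=(50, 100)):
        super().__init__()
        layers, d = [], latent_dim
        for h in hidden:
            layers += [nn.Linear(d, h), nn.ReLU()]
            d = h
        self.body = nn.Sequential(*layers)
        self.mu = nn.Linear(d, view_dim)
        self.logvar = nn.Linear(d, view_dim)

    def forward(self, z):
        h = self.body(z)
        return self.mu(h), self.logvar(h)

# One independent decoder per view, e.g. 310-dim EEG and 33-dim eye features (SEED).
decoders = nn.ModuleList([ViewDecoder(30, 310), ViewDecoder(30, 33)])
```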

The Bayesian Canonical Correlation Analysis (CCA) model [Klami et al.2013] can be seen as a special case of our model, in which linear shallow transformations are used to generate each data view and only two views are considered. [Wang et al.2016] used a deep nonlinear generative process similar to ours to construct a deep Bayesian CCA model, but during inference they construct the variational posterior approximation from just one view and ignore the other. Such a choice is convenient for inference and computation, but it only finds suboptimal solutions because it does not fully exploit the data. As shown in the following, we instead assume the variational approximation to the posterior of the latent variables to be a mixture of Gaussians, utilizing information from multiple views.

2.1.2 Gaussian Prior and Mixture of Gaussians Posterior

Typically, both the prior $p(z)$ and the approximate posterior $q_\phi(z \mid \mathbf{x})$ are assumed to be Gaussian distributions [Kingma and Welling2014, Rezende et al.2014] in order to maintain mathematical and computational tractability. Although this assumption has led to favorable results on several tasks, it is clearly restrictive and often unrealistic. Specifically, the choice of a Gaussian distribution for $p(z)$ and $q_\phi(z \mid \mathbf{x})$ imposes a strong uni-modal structure assumption on the latent space. However, for data distributions that are strongly multi-modal, the uni-modal Gaussian assumption inhibits the model's ability to extract and represent important structure in the data. One way to improve the flexibility of the model is to impose a mixture of Gaussians assumption on the prior $p(z)$. However, this risks creating separate “islands” of discontinuous manifolds that may break the meaningfulness of the representation in the latent space.

To learn more powerful and expressive models, in particular models with multi-modal latent variable structures for multi-modal emotion recognition applications, we instead seek a mixture of Gaussians for the posterior approximation $q_\phi(z \mid \mathbf{x})$, while preserving the prior $p(z)$ as a standard Gaussian. Thus (cf. Fig. 1b),

q_\phi(z \mid \mathbf{x}) = \sum_{v=1}^{V} \lambda_v \, \mathcal{N}\big(z;\, \mu_{\phi_v}(x_v),\, \Sigma_{\phi_v}(x_v)\big), \qquad p(z) = \mathcal{N}(z;\, \mathbf{0}, \mathbf{I}), \qquad (2)

where the mean $\mu_{\phi_v}(x_v)$ and the covariance $\Sigma_{\phi_v}(x_v)$ are nonlinear functions of the observation $x_v$, with variational parameters $\phi = \{\phi_v\}_{v=1}^{V}$. As in our generative model, we choose these nonlinear functions to be DNNs, referred to as the inference networks. $\lambda_v$ is the non-negative normalized weight factor for the $v$-th view, i.e., $\lambda_v \geq 0$ and $\sum_{v=1}^{V} \lambda_v = 1$. By conditioning the posterior approximation on the data point, we avoid having variational parameters per data point and instead only need to fit global variational parameters. Note that our mixed Gaussian assumption on the variational approximation distinguishes our method from all existing ones using the auto-encoding variational framework [Kingma and Welling2014, Wang et al.2016, Burda et al.2016, Kingma et al.2016, Serban et al.2016, Maaløe et al.2016]. For multi-view learning, this is critical for fully exploiting the information from multiple views.
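A minimal sketch of the corresponding inference side, again assuming PyTorch and diagonal covariances: one encoder network per view produces $(\mu_{\phi_v}, \log\sigma^2_{\phi_v})$, and a softmax over a learnable logit vector yields the normalized weights $\lambda_v$ (the exact parameterization of $\lambda_v$ is an assumption here); layer sizes loosely follow the '100-50-30' inference structure used in the experiments.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ViewEncoder(nn.Module):
    """Inference network for one view: x_v -> (mu_v, log sigma^2_v)."""
    def __init__(self, view_dim, latent_dim, hidden=(100, 50)):
        super().__init__()
        layers, d = [], view_dim
        for h in hidden:
            layers += [nn.Linear(d, h), nn.ReLU()]
            d = h
        self.body = nn.Sequential(*layers)
        self.mu = nn.Linear(d, latent_dim)
        self.logvar = nn.Linear(d, latent_dim)

    def forward(self, x_v):
        h = self.body(x_v)
        return self.mu(h), self.logvar(h)

# One encoder per view plus a learnable logit vector for the mixture weights.
encoders = nn.ModuleList([ViewEncoder(310, 30), ViewEncoder(33, 30)])
lambda_logits = nn.Parameter(torch.zeros(len(encoders)))  # softmax gives lambda_v >= 0, summing to 1

def posterior_params(views):
    """Per-view Gaussian components (mu_v, logvar_v) and weights lambda_v of q(z | x_1..x_V)."""
    stats = [enc(x_v) for enc, x_v in zip(encoders, views)]
    lambdas = F.softmax(lambda_logits, dim=0)
    return stats, lambdas
```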

Figure 1: Graphical model of the multi-view VAE, where $\mathbf{x} = \{x_v\}_{v=1}^{V}$. (a) Generative model; (b) inference model.

2.2 Semi-supervised Emotion Recognition

In semi-supervised classification, only a subset of the samples have corresponding class labels, and we focus on using the multi-view VAE to build a model (semiMVAE) that learns a classifier from both labeled and unlabeled multi-view data. Since the emotional data are continuous, we choose Gaussian likelihoods. The generative model is then defined as (cf. Fig. 2a):

p(y) = \mathrm{Cat}(y;\, \boldsymbol{\pi}), \qquad p(z) = \mathcal{N}(z;\, \mathbf{0}, \mathbf{I}), \qquad p_{\theta_v}(x_v \mid y, z) = \mathcal{N}\big(x_v;\, \mu_{\theta_v}(y, z),\, \mathrm{diag}(\sigma^2_{\theta_v}(y, z))\big), \qquad (3)

where $\mathrm{Cat}(\cdot)$ denotes the categorical distribution, $y$ is treated as a latent variable for the unlabeled data points, and the mean $\mu_{\theta_v}(y, z)$ and variance $\sigma^2_{\theta_v}(y, z)$ are nonlinear functions of $y$ and $z$, with parameters $\theta_v$. The inference model is defined as (cf. Fig. 2b):

q_\phi(z, y \mid \mathbf{x}) = q_\phi(z \mid \mathbf{x}, y)\, q_\phi(y \mid \mathbf{x}), \qquad q_\phi(z \mid \mathbf{x}, y) = \sum_{v=1}^{V} \lambda_v \, \mathcal{N}\big(z;\, \mu_{\phi_v}(x_v, y),\, \Sigma_{\phi_v}(x_v, y)\big), \qquad q_\phi(y \mid \mathbf{x}) = \mathrm{Cat}\big(y;\, \pi_\phi(\mathbf{x})\big), \qquad (4)

where $q_\phi(z \mid \mathbf{x}, y)$ is assumed to be a mixture of Gaussians to combine the information from multiple data views. Intuitively, $q_\phi(z \mid \mathbf{x}, y)$, $p_\theta(\mathbf{x} \mid y, z)$ and $q_\phi(y \mid \mathbf{x})$ correspond to the encoder, the decoder and the classifier, respectively.

For brevity, we hereafter omit the explicit dependencies on $\mathbf{x}$, $y$ and $z$ for the moment variables introduced above. In principle, the encoder, decoder and classifier networks can be implemented by various DNN models, e.g., Multi-Layer Perceptrons (MLPs) and Convolutional Neural Networks (CNNs).
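For the classifier $q_\phi(y \mid \mathbf{x})$, one simple parameterization is sketched below, assuming the views are concatenated before classification; the fusion strategy and layer sizes are our assumptions, with dimensions matching the SEED features.

```python
import torch
import torch.nn as nn

# Classifier over the concatenated views; layer sizes and fusion are illustrative.
classifier = nn.Sequential(
    nn.Linear(310 + 33, 100), nn.ReLU(),   # EEG (310) + eye movement (33) features
    nn.Linear(100, 50), nn.ReLU(),
    nn.Linear(50, 3),                      # logits over the 3 SEED emotion classes
)

def classifier_probs(views):
    """Return q_phi(y | x_1..x_V) as class probabilities."""
    return torch.softmax(classifier(torch.cat(views, dim=-1)), dim=-1)
```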

Figure 2: Graphical model of semiMVAE for semi-supervised multi-view learning, where $\mathbf{x} = \{x_v\}_{v=1}^{V}$. (a) Generative model; (b) inference model.

2.3 Variational Lower Bound

The variational lower bound on the marginal likelihood for a single labeled data point is

\log p_\theta(\mathbf{x}, y) \geq \mathbb{E}_{q_\phi(z \mid \mathbf{x}, y)}\Big[\sum_{v=1}^{V} \log p_{\theta_v}(x_v \mid y, z) + \log p(y) + \log p(z)\Big] + \mathcal{H}\big[q_\phi(z \mid \mathbf{x}, y)\big] = \mathcal{L}(\mathbf{x}, y), \qquad (5)

where $\mathcal{H}[q_\phi(z \mid \mathbf{x}, y)] = -\mathbb{E}_{q_\phi(z \mid \mathbf{x}, y)}[\log q_\phi(z \mid \mathbf{x}, y)]$ denotes the Shannon entropy of the mixture. Note that this entropy is hard to compute analytically, and we have used Jensen's inequality to derive a lower bound of it:

\mathcal{H}\big[q_\phi(z \mid \mathbf{x}, y)\big] \geq -\sum_{v=1}^{V} \lambda_v \log \sum_{v'=1}^{V} \lambda_{v'}\, \mathcal{N}\big(\mu_{\phi_v};\, \mu_{\phi_{v'}},\, \Sigma_{\phi_v} + \Sigma_{\phi_{v'}}\big).

For unlabeled data, we further introduce the variational distribution $q_\phi(y \mid \mathbf{x})$ for the label $y$:

\log p_\theta(\mathbf{x}) \geq \sum_{y} q_\phi(y \mid \mathbf{x})\, \mathcal{L}(\mathbf{x}, y) + \mathcal{H}\big[q_\phi(y \mid \mathbf{x})\big] = \mathcal{U}(\mathbf{x}), \qquad (6)

with $\mathcal{H}[q_\phi(y \mid \mathbf{x})] = -\sum_{y} q_\phi(y \mid \mathbf{x}) \log q_\phi(y \mid \mathbf{x})$. The objective function for the entire dataset is now:

\mathcal{J} = \sum_{(\mathbf{x}, y) \in \mathcal{D}_l} \mathcal{L}(\mathbf{x}, y) + \sum_{\mathbf{x} \in \mathcal{D}_u} \mathcal{U}(\mathbf{x}), \qquad (7)

where $\mathcal{D}_l$ and $\mathcal{D}_u$ denote the labeled and unlabeled datasets, respectively. The classification accuracy can be improved by introducing an explicit classification loss for the labeled data. The extended objective function is:

\mathcal{J}^{\alpha} = \mathcal{J} + \alpha \sum_{(\mathbf{x}, y) \in \mathcal{D}_l} \log q_\phi(y \mid \mathbf{x}), \qquad (8)

where the hyper-parameter $\alpha$ balances generative and discriminative learning. We set $\alpha = \gamma \cdot (N_l + N_u)$, where $\gamma$ is a scaling constant, and $N_l$ and $N_u$ are the numbers of labeled and unlabeled data points in one minibatch, respectively. Note that the classifier $q_\phi(y \mid \mathbf{x})$ is also used at test time for the prediction of unseen data.
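Putting the pieces together, a schematic sketch of Eq. (8) written as a loss to minimize is given below. `labeled_bound` and `unlabeled_bound` are hypothetical helpers (passed in as callables) that return the per-sample bounds $\mathcal{L}(\mathbf{x}, y)$ and $\mathcal{U}(\mathbf{x})$, and the rule $\alpha = \gamma (N_l + N_u)$ follows the scaling described above; none of this is the reference implementation.

```python
import torch

def semi_supervised_loss(labeled_views, y_l, unlabeled_views,
                         labeled_bound, unlabeled_bound, gamma=0.5):
    """Negative of the extended objective J^alpha in Eq. (8), sketched schematically.
    labeled_bound / unlabeled_bound are hypothetical callables returning L(x, y) and U(x)."""
    n_l, n_u = y_l.shape[0], unlabeled_views[0].shape[0]
    alpha = gamma * (n_l + n_u)                        # discriminative weight (one plausible choice)
    generative = -(labeled_bound(labeled_views, y_l).sum()
                   + unlabeled_bound(unlabeled_views).sum())
    log_q_y = torch.log(classifier_probs(labeled_views) + 1e-8)
    discriminative = -log_q_y.gather(1, y_l.unsqueeze(1)).sum()   # -sum_l log q(y_l | x_l)
    return generative + alpha * discriminative
```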

2.4 Optimization

Eq. (8) provides a unified objective function for optimizing the parameters of the encoder, decoder and classifier networks. This optimization can be done jointly, without resorting to the variational EM algorithm, using the stochastic backpropagation technique [Kingma and Welling2014, Rezende et al.2014].

2.4.1 Reparameterization Trick

The reparameterization trick is a vital component of the algorithm, because it allows us to easily take derivatives of the objective $\mathcal{J}^{\alpha}$ with respect to the variational parameters $\phi$. However, the use of a mixture of Gaussians for the variational distribution $q_\phi(z \mid \mathbf{x}, y)$ makes the application of the reparameterization trick challenging. It can be shown that, for any function $f(z)$, the expectation $\mathbb{E}_{q_\phi(z \mid \mathbf{x}, y)}[f(z)]$ can be rewritten, using the location-scale transformation for the Gaussian distribution, as:

\mathbb{E}_{q_\phi(z \mid \mathbf{x}, y)}\big[f(z)\big] = \sum_{v=1}^{V} \lambda_v \, \mathbb{E}_{\mathcal{N}(\epsilon;\, \mathbf{0}, \mathbf{I})}\big[f(\mu_{\phi_v} + \mathbf{R}_{\phi_v} \epsilon)\big], \qquad (9)

where $\Sigma_{\phi_v} = \mathbf{R}_{\phi_v} \mathbf{R}_{\phi_v}^{\top}$ and $\epsilon \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$.
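In code, the reparameterized expectation of Eq. (9) can be sketched as follows for the diagonal-covariance case, where `stats` and `lambdas` are per-view Gaussian parameters and mixture weights (as produced, for example, by inference networks like those sketched in Section 2.1.2); this is a sketch under those assumptions, not the reference implementation.

```python
import torch

def expected_f(stats, lambdas, f, n_samples=1):
    """Monte-Carlo estimate of E_{q(z|x,y)}[f(z)] via Eq. (9): a lambda_v-weighted
    sum over per-component reparameterized samples (diagonal-covariance case)."""
    total = 0.0
    for lam, (mu_v, logvar_v) in zip(lambdas, stats):
        for _ in range(n_samples):
            eps = torch.randn_like(mu_v)                   # eps ~ N(0, I)
            z = mu_v + torch.exp(0.5 * logvar_v) * eps     # location-scale transform
            total = total + lam * f(z) / n_samples
    return total
```

Because the expectation is written as a weighted sum over components rather than sampling a component index, gradients flow to the means, scales and mixture weights alike.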

2.4.2 Gradients of the Objective

While the expectations on the right-hand side of Eq. (9) still cannot be solved analytically, their gradients w.r.t. $\mu_{\phi_v}$, $\mathbf{R}_{\phi_v}$ and $\lambda_v$ can be efficiently estimated using the following Monte-Carlo estimators:

\nabla_{\mu_{\phi_v}} \mathbb{E}_{q_\phi(z \mid \mathbf{x}, y)}\big[f(z)\big] \simeq \frac{\lambda_v}{S} \sum_{s=1}^{S} \nabla_{\mu_{\phi_v}} f\big(\mu_{\phi_v} + \mathbf{R}_{\phi_v} \epsilon^{(s)}\big), \qquad (10)
\nabla_{\mathbf{R}_{\phi_v}} \mathbb{E}_{q_\phi(z \mid \mathbf{x}, y)}\big[f(z)\big] \simeq \frac{\lambda_v}{S} \sum_{s=1}^{S} \nabla_{\mathbf{R}_{\phi_v}} f\big(\mu_{\phi_v} + \mathbf{R}_{\phi_v} \epsilon^{(s)}\big), \qquad (11)
\nabla_{\lambda_v} \mathbb{E}_{q_\phi(z \mid \mathbf{x}, y)}\big[f(z)\big] \simeq \frac{1}{S} \sum_{s=1}^{S} f\big(\mu_{\phi_v} + \mathbf{R}_{\phi_v} \epsilon^{(s)}\big), \qquad (12)

where $f(z)$ is evaluated at $z = \mu_{\phi_v} + \mathbf{R}_{\phi_v} \epsilon^{(s)}$ with $\epsilon^{(s)} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$. In practice, it suffices to use a small number of samples $S$ (e.g., $S = 1$) and then estimate the gradient using minibatches of data points. We use the same random numbers $\epsilon^{(s)}$ for all estimators to obtain lower variance. The gradient w.r.t. $\theta$ is omitted here, since it can be derived straightforwardly by using the traditional reparameterization trick [Kingma et al.2014].

The gradients of the objective for semiMVAE (Eq. (8)) can then be computed by a direct application of the chain rule and the estimators presented above. During optimization we can use the estimated gradients in conjunction with standard stochastic gradient-based optimization methods such as SGD, RMSprop or Adam [Kingma and Ba2014]. Overall, the model can be trained with the reparameterization trick for backpropagation through the mixed Gaussian latent variables.
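A sketch of one joint training step under the assumptions above, using Adam; the learning rate and parameter grouping are illustrative, and `labeled_bound` / `unlabeled_bound` remain hypothetical helpers from the earlier sketch.

```python
import itertools
import torch

# Gather all parameters (encoders, decoders, classifier, mixture-weight logits) for joint optimization.
params = itertools.chain(encoders.parameters(), decoders.parameters(),
                         classifier.parameters(), [lambda_logits])
optimizer = torch.optim.Adam(params, lr=1e-3)   # learning rate is illustrative

def train_step(labeled_views, y_l, unlabeled_views):
    optimizer.zero_grad()
    loss = semi_supervised_loss(labeled_views, y_l, unlabeled_views,
                                labeled_bound, unlabeled_bound)   # Eq. (8), sketched earlier
    loss.backward()          # stochastic backpropagation through the mixed Gaussian latents
    optimizer.step()
    return loss.item()
```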

3 Experiments

In this section, we present extensive experimental results to demonstrate the effectiveness of the proposed semi-supervised multi-view framework for emotion recognition.

3.1 Experimental Testbed and Setup

Data description  Two multi-modal emotion datasets, the SEED dataset [Lu et al.2015] (http://bcmi.sjtu.edu.cn/%7Eseed/index.html) and the DEAP dataset [Koelstra et al.2012] (http://www.eecs.qmul.ac.uk/mmv/datasets/deap/download.html), were used in our experiments.

The SEED dataset contains EEG and eye movement signals from 15 subjects recorded while they watched 15 movie clips, each lasting about 4 minutes. The EEG signals were recorded from 62 channels, and the eye movement signals contain information about blinks, saccades, fixations and so on. In our experiment, we used the data from 9 subjects across 3 sessions, giving 27 data files in total. For each data file, data from movie clips 1-9 were used as the training set, data from clips 10-12 as the validation set, and the rest (clips 13-15) as the testing set.

The DEAP dataset contains EEG and peripheral physiological signals of 32 participants, recorded while they watched 40 one-minute music videos. The EEG signals were recorded from 32 channels, whereas the peripheral physiological signals were recorded from 8 channels. The participants rated each music video on scales from 1 to 9 for valence, arousal and other dimensions. In our experiment, the valence-arousal space was divided into four quadrants according to these ratings, using a threshold of 5 and leading to four classes of data. Considering the fuzzy boundary of emotions and the variation of participants' ratings due to individual differences in rating scale, we discarded the samples whose arousal and valence ratings were between 3 and 6. The dataset was randomly divided into 10 folds, with 8 folds for training, one fold for validation and the last fold for testing. The testing set is relatively small because some graph-based semi-supervised baselines cannot handle large datasets.

Feature selection  For the SEED dataset, [Lu et al.2015] extracted Differential Entropy (DE) features from the EEG signals and 33 eye movement features from the eye movement signals; we used these features in our experiments. For the DEAP dataset, we extracted DE features from the EEG and peripheral physiological signals. The DE features were calculated in four frequency bands: theta (4-8 Hz), alpha (8-14 Hz), beta (14-31 Hz) and gamma (31-45 Hz), and we used the features from all bands. The details of the data used in our experiments are summarized in Table 1.
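For reference, DE features are commonly computed under a Gaussian assumption as $0.5 \log(2\pi e \sigma^2)$ of the band-pass-filtered signal; the sketch below illustrates this per band. The filter order and design are our assumptions, not the exact preprocessing used for SEED or DEAP.

```python
import numpy as np
from scipy.signal import butter, filtfilt

BANDS = {"theta": (4, 8), "alpha": (8, 14), "beta": (14, 31), "gamma": (31, 45)}

def de_features(signal, fs):
    """signal: (channels, samples) array; returns channels * bands DE features."""
    feats = []
    for low, high in BANDS.values():
        b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, signal, axis=-1)
        var = filtered.var(axis=-1)
        feats.append(0.5 * np.log(2 * np.pi * np.e * var))   # DE of a Gaussian with variance var
    return np.concatenate(feats)
```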

Datasets #Instances #Features #Training #Validation #Testing #Classes
SEED 22734 310(EEG), 33(Eye) 13473 4725 4536 3
DEAP 21042 128(EEG), 32(Phy.) 16834 2104 2104 4
Table 1: The details of the datasets used in our experiments.

Compared methods  We compared our semiMVAE with a broad range of solutions, including supervised learning, transductive and inductive semi-supervised learning. We briefly summarize the various baselines in the following.

  • MAE: the multi-view extension of deep autoencoders, which can be used to extract joint representations from multiple modalities [Ngiam et al.2011].

  • DCCA: the deep neural network extension of Canonical Correlation Analysis (CCA). DCCA can learn deep nonlinear mappings of two views that are maximally correlated [Andrew et al.2013].

  • DCCAE: a deep multi-view representation learning model which combines the advantages of DCCA and deep autoencoders. In particular, DCCAE consists of two autoencoders and optimizes the combination of the canonical correlation between the learned bottleneck representations and the reconstruction errors of the autoencoders [Wang et al.2015].

  • AMMSS: a graph-based multi-view semi-supervised classification algorithm, which can integrate heterogeneous features from both labeled and unlabeled data [Cai et al.2013].

  • AMGL: a recent auto-weighted multiple graph learning framework, which can be applied to the multi-view semi-supervised classification task [Nie et al.2016].

  • semiVAE: a single-view semi-supervised deep generative model proposed in [Kingma et al.2014]. We evaluate semiVAE's performance on each modality and on the concatenation of all modalities.

For MAE, DCCA and DCCAE, we used Support Vector Machines (SVM, http://www.csie.ntu.edu.tw/%7Ecjlin/liblinear/) and transductive SVM (TSVM, http://svmlight.joachims.org/) for supervised learning and transductive semi-supervised learning, respectively.

Parameter setting  For semiMVAE, we used multi-layer perceptrons as the inference and generative networks. On both datasets, we set the structures of the inference and generative networks for each view to '100-50-30' and '30-50-100', respectively. We used the Adam optimizer [Kingma and Ba2014] for training. The scaling constant $\gamma$ was selected from {0.1, 0.5, 1} throughout the experiments. The weight factor for each view was initialized to $1/V$, where $V$ is the number of views. For MAE, DCCA and DCCAE, we used the same setups (network structure, learning rate, etc.) as for semiMVAE. For AMMSS, we tuned the parameters as suggested in [Cai et al.2013]. For AMGL and semiVAE, we used their default settings.

3.2 Performance Evaluation

To simulate the semi-supervised learning scenario, on both datasets we randomly labeled different proportions of samples in the training set and kept the remaining training samples unlabeled (this split can be simulated as in the sketch below). For transductive semi-supervised learning, we trained models on the dataset consisting of the testing data and the labeled data from the training set. For inductive semi-supervised learning, we trained models on the entire training set consisting of both labeled and unlabeled data. For supervised learning, we trained models on the labeled data from the training set and tested their performance on the testing set. Table 2 presents the classification accuracies of all methods on the SEED and DEAP datasets; the proportions of labeled samples in the training set vary from 1% to 3%. Several observations can be drawn as follows.
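A minimal sketch of this labeled/unlabeled split, assuming NumPy; the fraction and seed are illustrative.

```python
import numpy as np

def split_labeled(n_train, labeled_fraction, seed=0):
    """Randomly pick a fraction of training indices to serve as the labeled subset."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_train)
    n_labeled = int(round(labeled_fraction * n_train))
    return idx[:n_labeled], idx[n_labeled:]      # labeled indices, unlabeled indices

labeled_idx, unlabeled_idx = split_labeled(13473, 0.01)   # e.g. 1% labeled on SEED
```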

SEED data Algorithms 1% labeled 2% labeled 3% labeled
Supervised learning MAE+SVM .814±.031 .896±.024 .925±.024
DCCA+SVM .809±.035 .891±.035 .923±.028
DCCAE+SVM .819±.036 .893±.034 .923±.027
Transductive semi-supervised learning AMMSS .731±.055 .839±.036 .912±.018
AMGL .711±.047 .817±.023 .886±.028
MAE+TSVM .818±.035 .910±.025 .931±.026
DCCA+TSVM .811±.031 .903±.024 .928±.021
DCCAE+TSVM .823±.040 .907±.027 .929±.023
semiMVAE .861±.037 .931±.020 .960±.021
Inductive semi-supervised learning semiVAE (Eye) .753±.024 .849±.055 .899±.049
semiVAE (EEG) .768±.041 .861±.040 .919±.026
semiVAE (Concat.) .803±.035 .876±.043 .926±.044
semiMVAE .880±.033 .955±.020 .968±.015
DEAP data Algorithms 1% labeled 2% labeled 3% labeled
Supervised learning MAE+SVM .353±.027 .387±.014 .411±.016
DCCA+SVM .359±.016 .400±.014 .416±.018
DCCAE+SVM .361±.023 .403±.017 .419±.013
Transductive semi-supervised learning AMMSS .303±.029 .353±.024 .386±.014
AMGL .291±.027 .341±.021 .367±.019
MAE+TSVM .376±.025 .403±.031 .417±.026
DCCA+TSVM .379±.021 .408±.024 .421±.017
DCCAE+TSVM .384±.022 .412±.027 .425±.021
semiMVAE .424±.020 .441±.013 .456±.013
Inductive semi-supervised learning semiVAE (Phy.) .366±.024 .389±.048 .402±.034
semiVAE (EEG) .374±.019 .397±.013 .407±.016
semiVAE (Concat.) .383±.019 .404±.016 .416±.012
semiMVAE .421±.019 .439±.025 .451±.022
Table 2: Comparison with several supervised and semi-supervised methods on SEED and DEAP with few labels. Results (mean±std) were averaged over 20 independent runs.

First, the average accuracy of semiMVAE significantly surpasses the baselines in all cases. Second, examining semiMVAE against the supervised learning approaches trained on very limited labeled data, we find that semiMVAE always outperforms them. This encouraging result shows that semiMVAE can effectively leverage useful information from unlabeled data. Third, the multi-view semi-supervised algorithms AMMSS and AMGL perform worst in all cases. We attribute this to the fact that the graph-based shallow models AMMSS and AMGL cannot extract deep features from the original data. Fourth, the performance of the three TSVM-based semi-supervised methods is moderate. Although MAE+TSVM, DCCA+TSVM and DCCAE+TSVM can also integrate multi-modality information from unlabeled samples, their two-stage learning cannot obtain globally optimal model parameters. Finally, compared with the single-view semi-supervised method semiVAE, our multi-view method is more effective in integrating multiple modalities.

Figure 3: semiMVAE's performance with different proportions of labeled samples in the training set. (a) SEED dataset; (b) DEAP dataset.
Figure 4: Inductive semiMVAE's performance with different proportions of unlabeled samples in the training set. (a) SEED dataset; (b) DEAP dataset.

The proportions of labeled and unlabeled samples in the training set affect the performance of semi-supervised models. Figs. 3 and 4 show how semiMVAE's average accuracy on both datasets changes with different proportions of labeled and unlabeled samples in the training set. We can observe that both labeled and unlabeled samples effectively boost the classification accuracy of semiMVAE.

Instead of treating each modality equally, our semiMVAE can weight each modality and perform classification simultaneously. Fig. 5a shows the weight factors learned by inductive semiMVAE on the SEED and DEAP datasets (with a fixed proportion of labeled samples). We can observe that the EEG modality has the highest weight on both datasets, which is consistent with the single-modality performance of semiVAE shown in Table 2 and with the results of previous work [Lu et al.2015].

Figure 5: (a) Weight factors learned by inductive semiMVAE. (b) The impact of the scaling constant $\gamma$.

The scaling constant $\gamma$ controls the weight of discriminative learning in semiMVAE. Fig. 5b shows the performance of inductive semiMVAE with different values of $\gamma$ (with a fixed proportion of labeled samples). We find that choosing $\gamma$ from {0.1, 0.5, 1} gives good results for semiMVAE.

4 Conclusion

This paper proposes a semi-supervised multi-view deep generative framework for emotion recognition, which can leverage both labeled and unlabeled data from multiple modalities. The framework rests on two key components: 1) the multi-view VAE, which fully integrates the information from multiple modalities, and 2) semi-supervised learning, which overcomes the labeled-data-scarcity problem. Experimental results on two real multi-modal emotion datasets demonstrate the effectiveness of our approach.


References

  • [Andrew et al.2013] Galen Andrew, Raman Arora, Jeff A Bilmes, and Karen Livescu. Deep canonical correlation analysis. In ICML, pages 1247–1255, 2013.
  • [Burda et al.2016] Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. In ICLR, 2016.
  • [Cai et al.2013] Xiao Cai, Feiping Nie, Weidong Cai, and Heng Huang. Heterogeneous image features integration via multi-modal semi-supervised learning model. In ICCV, pages 1737–1744, 2013.
  • [Calvo and D’Mello2010] Rafael A Calvo and Sidney D’Mello. Affect detection: An interdisciplinary review of models, methods, and their applications. IEEE Transactions on Affective Computing, 1(1):18–37, 2010.
  • [Chandar et al.2016] Sarath Chandar, Mitesh M Khapra, Hugo Larochelle, and Balaraman Ravindran. Correlational neural networks. Neural computation, 28(2):257–285, 2016.
  • [Jia et al.2014] Xiaowei Jia, Kang Li, Xiaoyi Li, and Aidong Zhang. A novel semi-supervised deep learning framework for affective state recognition on EEG signals. In International Conference on Bioinformatics and Bioengineering (BIBE), pages 30–37. IEEE, 2014.
  • [Kingma and Ba2014] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
  • [Kingma and Welling2014] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. In ICLR, 2014.
  • [Kingma et al.2014] Diederik P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In NIPS, pages 3581–3589, 2014.
  • [Kingma et al.2016] Diederik P Kingma, Tim Salimans, and Max Welling. Improving variational inference with inverse autoregressive flow. In NIPS, 2016.
  • [Klami et al.2013] Arto Klami, Seppo Virtanen, and Samuel Kaski. Bayesian canonical correlation analysis. Journal of Machine Learning Research, 14(1):965–1003, 2013.
  • [Koelstra et al.2012] Sander Koelstra, Christian Muhl, Mohammad Soleymani, Jong-Seok Lee, Ashkan Yazdani, Touradj Ebrahimi, Thierry Pun, Anton Nijholt, and Ioannis Patras. Deap: A database for emotion analysis; using physiological signals. IEEE Transactions on Affective Computing, 3(1):18–31, 2012.
  • [Liu et al.2016] Wei Liu, Wei-Long Zheng, and Bao-Liang Lu. Multimodal emotion recognition using multimodal deep learning. arXiv preprint arXiv:1602.08225, 2016.
  • [Lu et al.2015] Yifei Lu, Wei-Long Zheng, Binbin Li, and Bao-Liang Lu. Combining eye movements and EEG to enhance emotion recognition. In IJCAI, pages 1170–1176, 2015.
  • [Maaløe et al.2016] Lars Maaløe, Casper Kaae Sønderby, Søren Kaae Sønderby, and Ole Winther. Auxiliary deep generative models. In ICML, pages 1445–1453, 2016.
  • [Ngiam et al.2011] Jiquan Ngiam, Aditya Khosla, Mingyu Kim, Juhan Nam, Honglak Lee, and Andrew Y Ng. Multimodal deep learning. In ICML, pages 689–696, 2011.
  • [Nie et al.2016] Feiping Nie, Jing Li, Xuelong Li, et al. Parameter-free auto-weighted multiple graph learning: A framework for multiview clustering and semi-supervised classification. In IJCAI, pages 1881–1887, 2016.
  • [Pang et al.2015] Lei Pang, Shiai Zhu, and Chong-Wah Ngo. Deep multimodal learning for affective analysis and retrieval. IEEE Transactions on Multimedia, 17(11):2008–2020, 2015.
  • [Rezende et al.2014] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In NIPS, pages 1278–1286, 2014.
  • [Schels et al.2014] Martin Schels, Markus Kächele, Michael Glodek, David Hrabal, Steffen Walter, and Friedhelm Schwenker. Using unlabeled data to improve classification of emotional states in human computer interaction. Journal on Multimodal User Interfaces, 8(1):5–16, 2014.
  • [Serban et al.2016] Iulian V Serban, Alexander G Ororbia II, Joelle Pineau, and Aaron Courville. Multi-modal variational encoder-decoders. arXiv preprint arXiv:1612.00377, 2016.
  • [Soleymani et al.2016] Mohammad Soleymani, Sadjad Asghari-Esfeden, Yun Fu, and Maja Pantic. Analysis of EEG signals and facial expressions for continuous emotion detection. IEEE Transactions on Affective Computing, 7(1):17–28, 2016.
  • [Srivastava and Salakhutdinov2014] Nitish Srivastava and Ruslan Salakhutdinov. Multimodal learning with deep boltzmann machines. Journal of Machine Learning Research, 15:2949–2980, 2014.
  • [Verma and Tiwary2014] Gyanendra K Verma and Uma Shanker Tiwary. Multimodal fusion framework: A multiresolution approach for emotion classification and recognition from physiological signals. NeuroImage, 102:162–172, 2014.
  • [Wang et al.2015] Weiran Wang, Raman Arora, Karen Livescu, and Jeff A Bilmes. On deep multi-view representation learning. In ICML, pages 1083–1092, 2015.
  • [Wang et al.2016] Weiran Wang, Xinchen Yan, Honglak Lee, and Karen Livescu. Deep variational canonical correlation analysis. arXiv preprint arXiv:1610.03454, 2016.
  • [Zhang et al.2016] Zixing Zhang, Fabien Ringeval, Bin Dong, Eduardo Coutinho, Erik Marchi, and Björn Schüller. Enhanced semi-supervised learning for multimodal emotion recognition. In ICASSP, pages 5185–5189. IEEE, 2016.