# Discovering Influential Factors in Variational Autoencoder

Identifying and supervising learned representations without manual intervention or intuitive assistance remains a critical issue in machine learning, both for extracting useful knowledge and for serving downstream tasks. In this work, we focus on supervising the influential factors extracted by the variational autoencoder (VAE). The VAE is designed to learn independent low-dimensional representations, but in practice some pre-set factors are ignored. We argue that the mutual information between the input and each learned factor of the representation is a necessary indicator. We find that the VAE objective inclines to induce mutual-information sparsity across factor dimensions once their number exceeds the data's intrinsic dimension, resulting in non-influential factors whose effect on data reconstruction can be ignored. We show that mutual information also enters the lower bound of the VAE's reconstruction error and of the subsequent classification error. To make this indicator applicable, we design an algorithm for calculating the mutual information in a VAE and prove its consistency. Experimental results on the MNIST, CelebA and DEAP datasets show that mutual information can help determine the influential factors, some of which are interpretable and can be used for further generation and classification tasks, and can help discover the variant connected with emotion in the DEAP dataset.


## 1 Introduction

Learning efficient low-dimensional representations of data is important in machine learning and related applications. An efficient and intrinsic low-dimensional representation helps exploit the underlying knowledge of the data and serves downstream tasks including generation, classification and association. Early linear dimension-reduction methods such as Principal Component Analysis have been widely used in primary data analysis, and their variants have been applied to face identification (Yang et al. (2004)); classical linear independent representations such as Independent Component Analysis (Hyvärinen, Karhunen, and Oja (2004)) have been used in blind source separation (Jung et al. (2000)) and EEG signal processing (Makeig et al. (1996)). Nonlinear dimension reduction (e.g. the autoencoder (Goodfellow, Bengio, and Courville (2016))) learns more abstract representations and has been used in semantic hashing (Salakhutdinov and Hinton (2009)) and many other tasks. Recently, the variational autoencoder (Kingma and Welling (2013); Rezende, Mohamed, and Wierstra (2014)) has attracted much attention for its capability of extracting nonlinear independent representations. The method can further model causal relationships, represent disentangled visual variants (Mathieu et al. (2016)) and interpretable time-series variants (Hsu, Zhang, and Glass (2017)), and it can generate signals with abundant diversity in a "factor-controllable" way (Suzuki, Nakayama, and Matsuo (2016); Higgins et al. (2017b)). The related techniques enable knowledge transfer through shared factors among different tasks (Higgins et al. (2017a)).

However, how VAEs extract factors remains unclear, and we lack efficient methodologies to quantify the influence of each learned factor on the data representation. In applications, some pre-set factors sometimes remain unused (montage (D) in Fig. (1) is a typical traversal of an unused factor; Goodfellow, Bengio, and Courville (2016)), and the relation between the learned factors and the original data has to be discovered by manual intervention (visual or aural observation). This wastes the extra factors and hinders factor selection for subsequent tasks such as generating meaningful images or audio. Besides, classical influence-determination methods, such as estimating the variance of each factor, lose their utility on the VAE. Therefore, identifying and monitoring the influential factors of a VAE becomes a critical issue along this line of research.

In order to efficiently determine and supervise the learned factors, this paper has made the following efforts.

• We first adopt mutual information as the quantitative indicator for assessing the influence of each factor on data representation in the VAE model. To analyze the rationality of this indicator, we theoretically prove how mutual information influences the lower bound of the VAE's reconstruction error and of the subsequent classification task.

• We propose an estimation algorithm to calculate the mutual information for all the factors of a VAE, and we prove its consistency.

• We substantiate the effectiveness of the proposed indicator by experiments on MNIST (Lécun et al. (1998)), CelebA (Liu et al. (2015)) and DEAP (Koelstra et al. (2012)). In particular, some factors discovered by the proposed indicator are found to be meaningful and interpretable for data representation, while the remaining ones are generally ignorable for the task. The capabilities of the selected factors in generation and classification tasks are also verified.

This paper is organized as follows. We introduce the VAE model for generation and classification in Section 2. We argue the necessity of mutual information as an indicator in Section 3: we introduce the mutual information between the input data and the factors, analyze the cause of ignored factors from the perspective of mutual information and the data's intrinsic dimension, discuss the relationship of mutual information with reconstruction as well as with classification, and propose the estimator and prove its consistency. We review the related work on supervising the factors of a VAE in Section 4. The experiments are in Section 5.

## 2 VAE model

VAE (Kingma and Welling (2013), Rezende, Mohamed, and Wierstra (2014)) is a scalable unsupervised representation-learning model (Higgins et al. (2016)): the VAE assumes that the input x is generated by several independent Gaussian random variables z = (z1, ⋯, zH), that is, pdec(z) = N(0, I_H). Since the Gaussian distribution can be continuously and reversibly mapped to many other distributions, the theoretical analysis on it may also be instructive for other continuous-latent VAEs. The generating/decoding process is modeled as pdec(x|z), and the inference/encoding process is treated as the approximate posterior distribution qenc(z|x), which ideally matches the true posterior pdec(z|x). We assume both of them are parameterized by neural networks with parameters θ and φ.

Factor: Let z = (z1, ⋯, zH) denote the random variables determined by qenc(z|x); a "factor" in the later text refers to a single dimension zh of z.

### 2.1 Generation

In the VAE setting, approximate inference is applied to maximize the variational lower bound of log pdec(x),

 Lrec = E_{z∼qenc(z|x)} log pdec(x|z) − DKL(qenc(z|x) ∥ pdec(z)) ≤ log pdec(x),   (1)

where the equality holds iff

 DKL(qenc(z|x) ∥ pdec(z|x)) = 0.   (2)

In order to limit the information channel capacity (Higgins et al. (2016)), β-VAE introduces a coefficient β > 1 on the second term of the objective,

 Lrec−β = E_{z∼qenc(z|x)} log pdec(x|z) − β DKL(qenc(z|x) ∥ pdec(z)) ≤ log pdec(x).   (3)

After training with this objective, by sampling z from the prior pdec(z) or setting z purposefully, the learned decoder pdec(x|z) can generate new samples.
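For a diagonal Gaussian posterior qenc(z|x) = N(μ, diag(σ²)) and a unit Gaussian prior, the KL term in Objective (3) has a well-known closed form per dimension. A minimal numpy sketch (the function names are ours, not from the paper):

```python
import numpy as np

def gaussian_kl_to_standard_normal(mu, logvar):
    """Per-dimension KL( N(mu, exp(logvar)) || N(0, 1) ) -- the closed
    form of the regularizer for a diagonal Gaussian posterior."""
    return 0.5 * (mu ** 2 + np.exp(logvar) - logvar - 1.0)

def beta_vae_regularizer(mu, logvar, beta=4.0):
    """beta-weighted KL term of the beta-VAE objective (Eq. 3),
    summed over the factor dimensions."""
    return beta * gaussian_kl_to_standard_normal(mu, logvar).sum()

# A posterior that exactly matches the prior incurs zero penalty.
print(beta_vae_regularizer(np.zeros(4), np.zeros(4)))  # 0.0
```

Because the per-dimension KL is non-negative, a factor whose posterior collapses onto the prior contributes nothing to this term, which is the mechanism behind the factor-ignoring phenomenon discussed in Section 3.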

### 2.2 Classification

The learned code z can further support downstream tasks such as classification. Let ppre(y|z) denote the predicting process; the classification objective is

 Lpre = E_{z∼qenc(z|x)} log ppre(y|z).   (4)

In real implementations, the above objectives take a further expectation over the data distribution. However, sometimes only part of the factors are manually found useful for generation (Goodfellow, Bengio, and Courville (2016)), and a factor that is irrelevant to x cannot support classification either. Therefore, approaches that automatically find the influential factors beneficial to downstream tasks are in demand.

## 3 Mutual Information as A Necessary Indicator

By exploring why factors are ignored, we argue that mutual information is a necessary indicator to find the influential factor.

### 3.1 Ignored Factor Analysis

#### 3.1.1 Low Intrinsic Dimension of Data

One aim of the VAE is to learn the data's intrinsic factors, but the intrinsic dimension stays the same under continuous reversible mappings, as suggested by Theorem 1.

###### Theorem 1 (Information Conservation).

Suppose that z and y are sets of H and H′ (H ≠ H′) independent unit Gaussian random variables, respectively. Then these two sets of random variables cannot be the generating factors of each other; that is, there are no continuous functions g and f such that

 z = g(y) and y = f(z).

Proof is listed in Appendix A. Suppose the oracle data, denoted by the random variable x, is generated by y (H′ independent unit Gaussian random variables) through a homeomorphism x = f₀(y), while the factors z (H independent unit Gaussian random variables) generate x through a homeomorphism x = g₀(z). It yields z = g₀⁻¹(f₀(y)) and y = f₀⁻¹(g₀(z)). Then, according to the information conservation theorem, it must hold that H = H′.

For example, 10 Gaussian factors and 128 Gaussian factors cannot generate each other. Analogously, if the data are generated by 10 intrinsic Gaussian factors, a VAE cannot infer 128 non-degenerate Gaussian factors from them; some factors will be independent of the data even though we pre-set the dimension in this way.

#### 3.1.2 Mutual Information Reflects the Absolute Statistical Dependence

In order to quantify the dependence and estimate which factors influence the generating process and which have no effect at all, the mutual information between x and zh, I(x; zh^enc), can be taken as a rational indicator (Peng, Long, and Ding (2005)). That is,

 I(x; zh^enc) = E_{x∼pdata(x)} DKL(qenc(zh|x) ∥ qenc(zh)).   (5)

Mutual information reflects the absolute statistical dependence: I(x; zh^enc) = 0 if and only if x and zh are independent. The larger I(x; zh^enc) is, the more information zh conveys about x, and the more influential the factor is for representing the data.

#### 3.1.3 Sparsity in Mutual Information

Actually, mutual information is implicitly involved in the VAE objective. The following theorem further suggests the VAE objective induces the sparsity in mutual information. It then explains why factors are ignored from the perspective of mutual information.

###### Theorem 2 (Objective Decomposition).

If the support of qenc(zh|x) is contained in the support of pdec(zh) for any h ∈ {1, ⋯, H}, then the following decomposition holds:

• ℓ1-norm expression of the KL-divergence term in the VAE:

 E_{x∼pdata(x)} DKL(qenc(z|x) ∥ pdec(z)) = ∑_{h=1}^{H} E_{x∼pdata(x)} DKL(qenc(zh|x) ∥ pdec(zh)) = ∥(E_{x∼pdata(x)} DKL(qenc(z1|x) ∥ pdec(z1)), E_{x∼pdata(x)} DKL(qenc(z2|x) ∥ pdec(z2)), ⋯, E_{x∼pdata(x)} DKL(qenc(zH|x) ∥ pdec(zH)))∥₁.   (6)

• Further decomposition of an entry of the ℓ1 expression:

 E_{x∼pdata(x)} DKL(qenc(zh|x) ∥ pdec(zh)) = I(x; zh^enc) + DKL(qenc(zh) ∥ pdec(zh)).   (7)

Proof is given in Appendix A. The theorem shows that the expectation of the second term of the variational lower bound in Eq. (1) can be written as an ℓ1 norm, which inclines to induce sparsity of I(x; zh^enc) and DKL(qenc(zh) ∥ pdec(zh)) jointly over h, clipping down the non-intrinsic factor dimensions to some extent. Since both summands in Eq. (7) are non-negative, any zero entry of the ℓ1 expression forces both of its summands to zero. Thus this regularization term intrinsically induces sparsity of the mutual information I(x; zh^enc), which is comprehensively substantiated by all our experiments, as can easily be seen in Fig. (1) and Fig. (2).

Therefore, the VAE objective inclines to induce mutual-information sparsity across factor dimensions beyond the data's intrinsic dimension, and the factor-ignoring phenomenon occurs. On the one hand, with this KL-divergence regularization, even when the number of latent factors is set large, the over-fitting issue still tends not to occur, unlike in the plain autoencoder. On the other hand, this helps us obtain influential factors to represent the variants of the data, and facilitates efficient generation by varying these useful factors while neglecting the others.
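The ℓ1 decomposition of Eq. (6) is easy to check numerically for factorized Gaussian encoders: the total expected KL equals the ℓ1 norm of the non-negative per-factor terms. A small sketch with synthetic encoder outputs (variable names are ours):

```python
import numpy as np

def kl_to_std_normal(mu, logvar):
    # per-dimension KL( N(mu_h, sigma_h^2) || N(0, 1) )
    return 0.5 * (mu ** 2 + np.exp(logvar) - logvar - 1.0)

rng = np.random.default_rng(0)
mu = rng.normal(size=(100, 8))      # encoder means: 100 samples, 8 factors
logvar = rng.normal(size=(100, 8))  # encoder log-variances

# E_x KL(q(z_h|x) || p(z_h)) for each factor h
per_factor = kl_to_std_normal(mu, logvar).mean(axis=0)
# E_x KL(q(z|x) || p(z)) for the whole code
total = kl_to_std_normal(mu, logvar).sum(axis=1).mean()

# Eq. (6): the total KL equals the l1 norm of the per-factor terms,
# since every per-factor KL is non-negative.
assert np.isclose(total, np.linalg.norm(per_factor, ord=1))
```

The identity holds because expectation and the sum over factors commute; the ℓ1 view matters only because ℓ1 penalties are the classical route to sparsity.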

Furthermore, the following theorem gives the condition under which we can sum the per-factor mutual information to estimate the whole mutual information.

###### Theorem 3 (Mutual Information Separation).

Let z1, ⋯, zH be independent unit Gaussian random variables that are conditionally independent given x. Then

 I(x; z1, ⋯, zH) = ∑_{h=1}^{H} I(x; zh) = ∥(I(x; z1), I(x; z2), ⋯, I(x; zH))∥₁.   (8)

Proof is presented in Appendix A. This theorem suggests that if the learned qenc(z) factorizes and qenc(z|x) factorizes, then we can use the sum of the per-factor terms I(x; zh) to estimate the whole mutual information directly.

### 3.2 Reconstruction and Classification Theoretical Supports

According to Cover and Thomas (2012), the mutual information also provides a lower bound on the best mean reconstruction error.

###### Theorem 4.

Suppose x has differential entropy H(x). Let x̂(z^enc) be an estimator of x given the side information z^enc. Then

 E(x − x̂(z^enc))² ≥ (1/(2πe)) e^{2(H(x) − I(x; z^enc))}.   (9)

Therefore, if we set z^enc to the whole code z, the reconstruction error has (1/(2πe)) e^{2(H(x) − I(x; z^enc))} as its lower bound. Suppose we use only a major set of factors z^major and discard the rest, z^minor. With the assumption that qenc(z|x) factorizes, the mutual information separates, I(x; z^enc) = I(x; z^major) + I(x; z^minor), which yields the following bound:

 E(x − x^rec)² ≥ (1/(2πe)) e^{2(H(x) − I(x; z^major))} ≥ (1/(2πe)) e^{2(H(x) − I(x; z^major))} e^{−2 I(x; z^minor)}.   (10)

The theorem implies that the mutual information carried by the selected factors directly influences the lower bound of the best reconstruction, so we may select the top influential factors, which carry the most information, to represent and generate the data with less reconstruction distortion.
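As a worked example of the bound in Eqs. (9)-(10), the following sketch uses illustrative (made-up) entropy and information values, not measurements from the paper:

```python
import numpy as np

def recon_error_lower_bound(H_x, I_xz):
    """Theorem 4: E(x - x_hat)^2 >= exp(2*(H(x) - I(x;z))) / (2*pi*e).
    Quantities are in nats."""
    return np.exp(2.0 * (H_x - I_xz)) / (2.0 * np.pi * np.e)

# Discarding minor factors that carry 0.1 nats of information
# multiplies the lower bound by exp(2 * 0.1).
full_bound = recon_error_lower_bound(H_x=3.0, I_xz=2.5)
major_bound = recon_error_lower_bound(H_x=3.0, I_xz=2.4)
print(major_bound / full_bound)  # exp(0.2) ~ 1.22
```

The exponential dependence means that losing even a small amount of information measurably raises the floor on the achievable reconstruction error.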

We further provide some theoretical supports for the proposed mutual information as the factor indicator in classification.

Suppose the Markov chain condition y → x → z^enc holds. Then, according to Fano's inequality (Cover and Thomas (2012)) and the data-processing inequality, the mutual information also correlates with the classification error.

###### Theorem 5 (Fano’s inequality).

For any estimator ŷ(z^enc) of y, with error probability Pe = P(ŷ(z^enc) ≠ y), we have

 H(Pe) + Pe log|Y| ≥ H(y) − I(y; z^enc) ≥ H(y) − I(x; z^enc).   (11)

This inequality can be weakened to

 1 + Pe log|Y| ≥ H(y) − I(y; z^enc) ≥ H(y) − I(x; z^enc),   (12)

or

 Pe ≥ (H(y) − I(y; z^enc) − 1)/log|Y| ≥ (H(y) − I(x; z^enc) − 1)/log|Y|.   (13)

Note that by the data-processing inequality, I(y; z^enc) ≤ I(x; z^enc). If I(x; zh^enc) = 0, the factor zh will not influence the prediction. With the assumption that qenc(z|x) factorizes, the theorem suggests that the mutual information carried by the selected factors directly influences the lower bound of the classification error; therefore we can remove minor factors according to their mutual information without significantly raising this lower bound on the prediction error.
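The weakened bound of Eq. (13) can be evaluated directly. For example, for a 4-class problem with uniform labels (illustrative values only):

```python
import numpy as np

def fano_error_lower_bound(H_y, I_xz, num_classes):
    """Weakened Fano bound, Eq. (13): Pe >= (H(y) - I(x;z) - 1) / log|Y|.
    Entropies and mutual information are in nats."""
    return (H_y - I_xz - 1.0) / np.log(num_classes)

# 4 uniform classes: H(y) = log 4 nats. With no information in the code,
# the error probability is at least (log 4 - 1) / log 4 ~ 0.28.
print(fano_error_lower_bound(H_y=np.log(4), I_xz=0.0, num_classes=4))
```

The bound becomes vacuous (non-positive) once I(x; z^enc) exceeds H(y) − 1 nats, so it is informative mainly when the retained factors carry little label-relevant information.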

### 3.3 Algorithms to Quantitatively Calculate the Proposed indicators

In order to calculate the mutual information in practice, we assume a factorized zero-mean Gaussian estimate q*(z) of the aggregate posterior qenc(z).

We can then list the indicators to be estimated as:

###### Definition 1 (Estimation of I(x; z^enc): the information conveyed by the whole set of factors).
 Iest(x; z^enc)_M = (1/M) ∑_{m=1}^{M} DKL(qenc(z|x_m) ∥ q*(z)).   (14)

This estimator uses samples according to the empirical form of Corollary 1.

###### Definition 2 (Estimation of I(x; zh^enc): the information conveyed by a single factor).
 Iest(x; zh^enc)_M = (1/M) ∑_{m=1}^{M} DKL(qenc(zh|x_m) ∥ q*(zh)).   (15)

This indicator quantifies the mutual information between a specific factor and the input data.

Note that the above indicators require the value of q*(z), so we need an algorithm to compute it. Based on Theorem 2, through the minimization equivalence, we know that

 min_q E_{x∼pdata(x)} DKL(qenc(z|x) ∥ q(z)) ⇔ min_q DKL(qenc(z) ∥ q(z)),   (16)

and then we can prove the following result:

###### Corollary 1.

If the support of qenc(z|x) is contained in the support of q*(z), then

 E_{x∼pdata(x)} DKL(qenc(z|x) ∥ q*(z)) = I(x; z^enc) + DKL(qenc(z) ∥ q*(z)).   (17)

The proof of Corollary 1 is the same as that of Theorem 2. This corollary suggests that the estimator in Definition 1 provides another upper bound on the capacity of the encoder network. Empirically, it is a much tighter estimate than the second term of Objective (1).

q*(z) can then be obtained by solving the following optimization problem:

 q*(z) = argmin_q (1/M) ∑_{m=1}^{M} DKL(qenc(z|x_m) ∥ q(z)).   (18)

The above procedure for calculating the proposed indicators is summarized in Algorithm 1.
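A sketch of the estimation procedure for a Gaussian encoder with diagonal covariance: within the factorized-Gaussian family, the minimizer of Eq. (18) is the moment match of the aggregate posterior, after which Definition 2 is a simple average of closed-form KLs. Function names and the synthetic data are ours (and we also fit the mean of q*, a slight relaxation of the zero-mean assumption above):

```python
import numpy as np

def kl_gauss(mu1, var1, mu2, var2):
    """Elementwise KL( N(mu1, var1) || N(mu2, var2) )."""
    return 0.5 * (np.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

def estimate_factor_mi(mu, var):
    """Per-factor mutual-information estimate (Definition 2).
    mu, var: (M, H) encoder means and variances for M samples.
    q*(z) is fit by moment-matching the aggregate posterior, which
    minimizes the objective of Eq. (18) over factorized Gaussians."""
    mu_star = mu.mean(axis=0)
    var_star = var.mean(axis=0) + mu.var(axis=0)  # law of total variance
    return kl_gauss(mu, var, mu_star, var_star).mean(axis=0)

# Synthetic encoder outputs: only factor 0 responds to the input.
rng = np.random.default_rng(0)
M, H = 2000, 5
mu = np.zeros((M, H))
mu[:, 0] = rng.normal(size=M)   # factor 0 varies with the data...
var = np.ones((M, H))
var[:, 0] = 0.1                 # ...with a confident posterior

mi = estimate_factor_mi(mu, var)
print(int(np.argmax(mi)))  # 0 -- the influential factor stands out
```

Factors whose posterior collapses onto the aggregate (here, factors 1-4) score exactly zero, matching the ignored-factor behavior described in Section 3.1.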

The following definition and theorem clarify the consistency of the estimation on mutual information.

###### Definition 3 (Consistency).

The estimator Iest(x; z^enc)_M is consistent with I(x; z^enc) if and only if: for all ε > 0 and δ > 0, there exists M₀ such that for all M > M₀, with probability greater than 1 − δ, we have

 |Iest(x; z^enc)_M − I(x; z^enc)| < ε.   (19)
###### Theorem 6.

The estimator Iest(x; z^enc)_M is consistent with I(x; z^enc). That is, if the choice of q* satisfies DKL(qenc(z) ∥ q*(z)) < ε/2, then for all ε > 0 and δ > 0 there exists M₀ such that for all M > M₀, with probability greater than 1 − δ, we have

 |Iest(x; z^enc)_M − I(x; z^enc)| < ε.   (20)
###### Proof.

Let Ĩ[q*] = E_{x∼pdata(x)} DKL(qenc(z|x) ∥ q*(z)). According to the law of large numbers, for all ε > 0 and δ > 0 there exists M₀ such that for all M > M₀, with probability greater than 1 − δ, we have

 |Iest(x; z^enc)_M − Ĩ[q*]| < ε/2,   (21)

and hence, using Corollary 1,

 |Iest(x; z^enc)_M − I(x; z^enc)| ≤ |Iest(x; z^enc)_M − Ĩ[q*]| + |Ĩ[q*] − I(x; z^enc)| < ε/2 + DKL(qenc(z) ∥ q*(z)) < ε.   (22)

∎

This theorem states that, with high probability, the estimate can be arbitrarily close to the true mutual information, provided that the estimate q*(z) is sufficiently close to the learned qenc(z) and the number of samples is large enough. Besides, the minimization of DKL(qenc(z) ∥ q*(z)) in Theorem 6 inspires the derivation of q*(z) in Eq. (18).

## 4 Related Work

There are few works on designing such indicators to discover influential factors in a VAE. A general and easy approach for determining a VAE factor's influence is intuitive visual (Goodfellow, Bengio, and Courville (2016); Higgins et al. (2016)) or aural (Hsu, Zhang, and Glass (2017)) observation. However, it is labor-intensive to select factors for downstream tasks this way.

In Alemi et al. (2016), the encoder posteriors qenc(z|x) are visualized by plotting their 95% confidence intervals as ellipses to supervise the behavior of the network, which reflects factor influence directly. However, it still requires a human to interpret the plot.

In classical PCA, it is common to select factors with high variance, and Higgins et al. (2017b) suggest that the variance of a factor may indicate its usage. However, variance cannot always represent the absolute statistical relationship between factors and data, as can easily be observed in Fig. (1) and Fig. (2).

Our work emphasizes mutual information, which conveys the absolute statistical relationship between factors and data, and uses it as an indicator to find the influential factors, substantiated by the relationship between the total information of the selected factors and the reconstruction error, and by the relationship between mutual information and classification. All our experiments substantiate that the designed indicator can discover the influential factors significantly relevant for data representation.

## 5 Experimental Results

### 5.1 Datasets

MNIST is a database of handwritten digits (Lécun et al. (1998)). We estimate the mutual information of all factors learned from it and then use different ratios of the top influential factors for the subsequent generation task.

CelebA (Liu et al. (2015)) is a large-scale celebrity-face attributes dataset; we use only its images, to support influential-factor discovery.

DEAP is a well-known public multi-modality emotion-recognition dataset (Koelstra et al. (2012)). We transform its signals into video sequences for 4-class emotion prediction using different ratios of the top influential factors, and for extracting emotion-relevant influential factors.

More details are presented in Appendix B.

### 5.2 Influential Factor Discovery Tests

According to Fig. (1), the proposed mutual-information estimator effectively identifies both the influential and the non-influential factors. Factors with small estimated mutual information are found to have little effect on generation, while factors with large mutual information have clearly influential generation effects. Comparatively, the variance used in classical methods cannot significantly indicate the usage of factors.

To further substantiate the validity of our mutual-information estimator, we use it to automatically select influential factors of CelebA, with the estimated values shown in Fig. (B.5); many of the selected factors possess interpretable variants such as background color, smile and face angle. This verifies that mutual information is an effective indicator for automatically determining the influential factors in the VAE setting.

### 5.3 Generation Capability Test for Discovered Factors

Estimated mutual information can direct the subsequent generation task with few but influential factors. We select different ratios of the top influential factors, ranked by the magnitude of their mutual information, to generate images. The factors are sorted by the values of their mutual-information indicator, and the remaining factors, estimated to be non-influential, are constantly set to zero in the generating process.
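The factor-selection rule just described can be sketched as follows (`keep_top_factors` is a hypothetical helper mirroring the procedure, not code from the paper):

```python
import numpy as np

def keep_top_factors(z, mi, ratio=0.25):
    """Zero out all but the top-`ratio` fraction of factors, ranked by
    their estimated mutual information, before decoding."""
    h = z.shape[-1]
    k = max(1, int(np.ceil(ratio * h)))
    keep = np.argsort(mi)[::-1][:k]  # indices of the k most informative factors
    mask = np.zeros(h)
    mask[keep] = 1.0
    return z * mask

mi = np.array([0.01, 2.3, 0.0, 1.1])   # estimated per-factor information
print(keep_top_factors(np.ones(4), mi, ratio=0.5))  # [0. 1. 0. 1.]
```

Setting the discarded factors to zero (the prior mean) rather than sampling them keeps the generated image governed only by the retained, informative dimensions.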

According to Fig. (4), using only the top influential factors discovered by the proposed algorithm, the VAE model can still generate images almost identical to those reconstructed using all factors.

Table 1 shows the total information and the reconstruction error for different ratios of factors. The top factors contain almost all of the information, and therefore their reconstructions have almost the same reconstruction error as using all the factors. As suggested by the information-reconstruction relationship, the less information the used factors contain, the higher the minimum reconstruction-loss bound is raised.

### 5.4 Classification Capability Test by Discovered Factors

Estimated mutual information can also direct the subsequent classification task with few but influential factors. We select different ratios of the top influential factors, ranked by the magnitude of their mutual information, to predict emotions. The factors are sorted by mutual information, and the estimated non-influential factors are constantly set to zero in the prediction procedure.

According to Table 2, using only half of the factors, the model still achieves similar prediction accuracy. Besides, the estimated mutual information also helps determine several variants that are relevant to emotion classification, as shown in Fig. (5).

## 6 Conclusion

This paper explains the necessity of using the mutual information between the input data and each factor as an indicator of the factor's intrinsic influence on representing data in the VAE model. Mutual information reflects the absolute statistical dependence. The second term of the VAE objective, together with excess pre-set factors, inclines to induce mutual-information sparsity and thereby yields both influential and ignored factors in the VAE. We have also proved that mutual information enters the lower bounds of the mean squared reconstruction error and of the classification prediction error. We design a feasible algorithm to estimate the mutual information for all factors in a VAE and prove its consistency. The experiments show that both influential and non-influential factors can be found automatically and effectively. The interpretability of the discovered factors is substantiated intuitively, and their generation and classification capabilities are verified; in particular, some variants relevant to classification are found. The experiments also suggest that we can use a small number of top influential factors for downstream tasks including generation and classification while retaining the performance of all factors, similar to the dimensionality-reduction capability of classical PCA, ICA and so on.

The VAE combined with the mutual-information indicator helps extract the knowledge underlying the data and may benefit many downstream applications, including blind source separation, interpretable feature learning, the information bottleneck and data-bias elimination. We will investigate these issues in our future research.

## 7 Acknowledgments

We would like to thank Zilu Ma and Tao Yu for discussing the information conservation theorems. We would like to thank Lingjiang Xie and Rui Qin for EEG data processing.

## Appendix A Appendix A

Theorem 1 (Information Conservation) Suppose that z and y are sets of H and H′ (H ≠ H′) independent unit Gaussian random variables, respectively. Then these two sets of random variables cannot be the generating factors of each other; that is, there are no continuous functions g and f such that

 z = g(y) and y = f(z).
###### Proof.

Proof of Theorem 1, by contradiction. Suppose the two functions exist. Since both f and g are continuous, they would be inverse mappings of each other and would constitute a homeomorphism between ℝ^H and ℝ^{H′}. But ℝ^H and ℝ^{H′} have different topological structures when H ≠ H′, so no such homeomorphism exists, which is the contradiction. ∎

Theorem 2 (Objective Decomposition) If the support of qenc(zh|x) is contained in the support of pdec(zh) for any h, then the following decomposition holds:

• ℓ1-norm expression of the KL-divergence term in the VAE:

 E_{x∼pdata(x)} DKL(qenc(z|x) ∥ pdec(z)) = ∑_{h=1}^{H} E_{x∼pdata(x)} DKL(qenc(zh|x) ∥ pdec(zh)) = ∥(E_{x∼pdata(x)} DKL(qenc(z1|x) ∥ pdec(z1)), E_{x∼pdata(x)} DKL(qenc(z2|x) ∥ pdec(z2)), ⋯, E_{x∼pdata(x)} DKL(qenc(zH|x) ∥ pdec(zH)))∥₁.   (23)

• Further decomposition of an entry of the ℓ1 expression:

 E_{x∼pdata(x)} DKL(qenc(zh|x) ∥ pdec(zh)) = I(x; zh^enc) + DKL(qenc(zh) ∥ pdec(zh)).   (24)
###### Proof.

The ℓ1-norm expression is obvious. We prove the further decomposition of an entry of the ℓ1 expression:

 E_{x∼pdata(x)} DKL(qenc(zh|x) ∥ pdec(zh)) = ∫ qenc(zh|x) pdata(x) log [qenc(zh|x)/pdec(zh)] dzh dx = ∫ qenc(zh|x) pdata(x) log [qenc(zh|x)/qenc(zh)] dzh dx + ∫ qenc(zh) log [qenc(zh)/pdec(zh)] dzh = I(x; zh^enc) + DKL(qenc(zh) ∥ pdec(zh)).   (25)

∎

Theorem 3 (Mutual Information Separation) Let z1, ⋯, zH be independent unit Gaussian random variables that are conditionally independent given x. Then

 I(x; z1, ⋯, zH) = ∑_{h=1}^{H} I(x; zh) = ∥(I(x; z1), I(x; z2), ⋯, I(x; zH))∥₁.   (26)

###### Proof.
 I(x; z1, ⋯, zH) = ∫ p(z1, ⋯, zH, x) log [p(z1, ⋯, zH | x)/p(z1, ⋯, zH)] dz1 ⋯ dzH dx = ∑_{h=1}^{H} ∫ p(zh, x) log [p(zh|x)/p(zh)] dzh dx = ∑_{h=1}^{H} I(x; zh),

where the second equality uses the conditional independence of the zh given x and their marginal independence. ∎

## Appendix B Appendix B

### b.1 Mnist

We split 7000 data points by ratio into training, validation, testing set. The estimated mutual information and are calculated on 10000 data points in the testing set. Seed images from the testing set are used to infer factor value and draw the traversal.

In traversal figures, each block corresponds to the traversal of a single factor over the range while keeping others fixed to their inferred (by -VAE, VAE). Each row is generated with a different seed image.

The setting for -VAE is enumerated from .

### b.2 CelebA

We randomly split roughly 200000 data points by ratio into training, validation (unused) and testing sets.

The estimated mutual information and variance are calculated on 10000 data points in the testing set. Seed images from the testing set are used to infer factor values and draw the traversals.

In the traversal figures, each block corresponds to the traversal of a single factor over a range while keeping the others fixed to their values inferred by the β-VAE or VAE. Each row is generated with a different seed image.

The settings of β for the β-VAE are enumerated.

### b.3 Deap

DEAP is a well-known public multi-modality (e.g. EEG, video) dataset proposed by Koelstra et al. (2012). The EEG signals were recorded from 32 channels by 32 participants, each watching 40 videos for 63 seconds. The EEG data were preprocessed by down-sampling to 128 Hz and band-pass filtering to 4-45 Hz. Following the transformation idea of Bashivan et al. (2015), we applied a fast Fourier transform (FFT) to each 1-second EEG segment and converted it into an image. In this experiment, the alpha (8-13 Hz), beta (13-30 Hz) and gamma (30-45 Hz) frequency bands are extracted, which represent the activities related to the emergence of brain emotion. The next step follows Bashivan et al. (2015): Azimuthal Equidistant Projection (AEP) and the Clough-Tocher interpolation scheme produce three 32x32 topographical activity maps, one per frequency band, shown as an RGB plot. The transformation yields a total of 1280 EEG videos of 63 frames each. The two emotional dimensions are arousal and valence, labeled on a scale of 1-9. For each of them, we apply 5 as the boundary separating high from low levels to generate 4 classes (high-arousal (HA), high-valence (HV), low-arousal (LA) and low-valence (LV)). In this paper we perform the same 4-class classification task as the baseline paper.

We randomly split the 1280 samples by a ratio of [0.8 : 0.1 : 0.1] into training, validation and testing sets. A β(= 6)-VAE is trained on each frame, and an LSTM is used to combine all the frames of each video.

The estimated mutual information is calculated image-wise on 100x63 data points (100 videos) in the testing set. Seed images from the testing set are used to infer factor values and draw the traversals.

In the traversal figures, each block corresponds to the traversal of a single factor over a range while keeping the others fixed to their values inferred by the β-VAE. Each row is generated with a different seed image.

### b.4 Network Structure

| Dataset | Optimiser | Component | Details |
| --- | --- | --- | --- |
| MNIST | Adam, Epoch 200 | Input | 28x28x1 |
| | | Encoder | Conv 32x4x4, 32x4x4 (stride 2). FC 256. ReLU activation. |
| | | Latents | 128 |
| | | Decoder | FC 256. Linear. Deconv reverse of encoder. ReLU activation. Gaussian. |
| CelebA | Adam, Epoch 20 | Input | 64x64x3 |
| | | Encoder | Conv 32x4x4, 32x4x4, 64x4x4, 64x4x4 (stride 2). FC 256. ReLU activation. |
| | | Latents | 128/32 |
| | | Decoder | FC 256. Linear. Deconv reverse of encoder. ReLU activation. Mixture of 2 Gaussians. |
| DEAP | Adam, Epoch 300 | Input | 32x32x3 |
| | | Encoder | Conv 32x4x4, 32x4x4, 64x4x4, 64x4x4 (stride 2). FC 256. ReLU activation. |
| | | Latents | 128/32 |
| | | Decoder | FC 256. Linear. Deconv reverse of encoder. ReLU activation. Gaussian. |
| | | Input (classifier) | 63x128 |
| | | Recurrent | LSTM dim 128. Time-step 63. |
| | | Predictor | FC 4. ReLU activation. |

### b.5 Experiment Plot

In the following subsection, we present the influential-factor traversals and the mutual-information and variance plots for the different datasets.