1 Introduction
Learning efficient low-dimensional representations of data is important in machine learning and its applications. An efficient, intrinsic low-dimensional representation helps exploit the underlying structure of data and serves subsequent tasks including generation, classification and association. Early linear dimension reduction (Principal Component Analysis) has been widely used in primary data analysis, and its variants have been applied to face identification (Yang et al. (2004)); classical linear independent representation (Independent Component Analysis (Hyvärinen, Karhunen, and Oja (2004))) has been used in blind source separation (Jung et al. (2000)) and EEG signal processing (Makeig et al. (1996)). Nonlinear dimension reduction (e.g. the autoencoder (Goodfellow, Bengio, and Courville (2016))) further learns abstract representations and has been used in semantic hashing (Salakhutdinov and Hinton (2009)) and many other tasks. Recently, the variational autoencoder (VAE) (Kingma and Welling (2013); Rezende, Mohamed, and Wierstra (2014)) has attracted much attention from researchers due to its capability of extracting nonlinear independent representations. The method can further model causal relationships, represent disentangled visual variants (Mathieu et al. (2016)) and interpretable time-series variants (Hsu, Zhang, and Glass (2017)), and it can generate signals with abundant diversity in a "factor-controllable" way (Suzuki, Nakayama, and Matsuo (2016); Higgins et al. (2017b)). The related techniques enable knowledge transfer through factors shared among different tasks (Higgins et al. (2017a)). However, how the VAE extracts factors remains unclear, and we lack efficient methodologies to quantify the influence of each learned factor on data representation. In applications, some preset factors sometimes remain unused¹ (Goodfellow, Bengio, and Courville (2016)), and the relation between the learned factors and the original data has to be discovered by manual intervention (visual or aural observation). This wastes the extra factors and hinders factor selection for subsequent tasks such as generating meaningful images/audio. Besides, classical influence-determination methods, such as estimating the variance of each factor, lose their utility on the VAE. Therefore, identifying and monitoring the influential factors of a VAE becomes a critical issue along this line of research.

¹ Montage (D) in Fig.(1) is a typical traversal of an unused factor.
In order to efficiently determine and supervise the learned factors, this paper makes the following contributions.

We first adopt mutual information as the quantitative indicator for assessing the influence of each factor on data representation in the VAE model. To justify this indicator, we theoretically prove how mutual information bounds the VAE's reconstruction error and the error of subsequent classification tasks.

We propose an estimation algorithm to calculate the mutual information for all the factors of a VAE, and we prove its consistency.

We substantiate the effectiveness of the proposed indicator by experiments on MNIST (Lécun et al. (1998)), CelebA (Liu et al. (2015)) and DEAP (Koelstra et al. (2012)). In particular, some factors discovered by the proposed indicator are found to be meaningful and interpretable for data representation, while the remaining ones are generally ignorable for the task. The capability of the selected factors in generation and classification tasks is also verified.
This paper is organized as follows. We introduce the VAE model for generation and classification in Section 2. We argue the necessity of mutual information as an indicator in Section 3: we introduce the mutual information between the input data and the factors, analyze the cause of ignored factors from the perspective of mutual information and the intrinsic dimension of data, discuss how the mutual information relates to reconstruction and classification, and propose the estimator together with a proof of its consistency. We review related work on supervising the factors of VAE in Section 4. The experiments are in Section 5.
2 VAE model
VAE (Kingma and Welling (2013); Rezende, Mohamed, and Wierstra (2014)) is a scalable unsupervised representation learning model (Higgins et al. (2016)): VAE assumes that the input $x$ is generated by several independent Gaussian random variables $z$, that is, $p(z) = \mathcal{N}(0, I)$. Since a Gaussian distribution can be continuously and reversibly mapped to many other distributions, the theoretical analysis on it might also be instructive for other continuous-latent VAEs. The generating/decoding process is modeled as $p_{\theta}(x|z)$ and the inference/encoding process is treated as the approximate posterior distribution $q_{\phi}(z|x)$. Note that it yields $p_{\theta}(x) = \int p_{\theta}(x|z)\, p(z)\, dz$. We assume both of them are parameterized by neural networks with parameters $\theta$ and $\phi$.

Factor: let $z_j$ denote the random variable corresponding to the $j$-th dimension of $z$; a factor in the later text refers to a dimension of $z$.
2.1 Generation
In the VAE setting, approximate inference is applied to maximizing the variational lower bound of $\log p_{\theta}(x)$,

(1) $\log p_{\theta}(x) \geq \mathbb{E}_{q_{\phi}(z|x)}\left[\log p_{\theta}(x|z)\right] - D_{\mathrm{KL}}\left(q_{\phi}(z|x) \,\|\, p(z)\right),$

with the equality holding iff

(2) $q_{\phi}(z|x) = p_{\theta}(z|x).$

In order to limit the information channel capacity (Higgins et al. (2016)), β-VAE introduces a coefficient $\beta$ to the second term of the objective,

(3) $\mathcal{L}(\theta, \phi) = \mathbb{E}_{q_{\phi}(z|x)}\left[\log p_{\theta}(x|z)\right] - \beta\, D_{\mathrm{KL}}\left(q_{\phi}(z|x) \,\|\, p(z)\right).$

After training on this objective, by sampling $z$ from the prior $p(z)$ or setting $z$ purposefully, the learned $p_{\theta}(x|z)$ can generate new samples.
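As a concrete illustration, for the diagonal-Gaussian encoder the KL term of the objective has a closed form. The sketch below (plain NumPy; the helper name `kl_to_unit_gaussian` is our own, not from the paper) computes it per sample and shows the "ignored factor" case, where a factor's posterior matches the prior and contributes exactly zero KL.

```python
import numpy as np

def kl_to_unit_gaussian(mu, sigma):
    """Closed-form KL( N(mu, sigma^2) || N(0, 1) ) summed over factor
    dimensions; mu, sigma are (batch, n_factors) encoder outputs."""
    return 0.5 * np.sum(mu**2 + sigma**2 - 2.0 * np.log(sigma) - 1.0, axis=-1)

# A factor whose posterior collapses to the prior (mu = 0, sigma = 1)
# contributes exactly zero KL -- the "ignored factor" case.
mu = np.array([[0.0, 1.5], [0.0, -0.5]])
sigma = np.array([[1.0, 0.3], [1.0, 0.8]])
print(kl_to_unit_gaussian(mu, sigma))
```

The first factor of both samples is "ignored" and adds nothing to the KL penalty; only the second factor is charged.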
2.2 Classification
The learned factors $z$ can further support subsequent tasks such as classification. Let $q_{\omega}(y|z)$ denote the predicting process; the classification objective is

(4) $\max_{\omega} \ \mathbb{E}_{q_{\phi}(z|x)}\left[\log q_{\omega}(y|z)\right].$

In a real implementation the above objectives should further take an expectation over the data distribution. However, sometimes only part of the factors are manually found useful for generation (Goodfellow, Bengio, and Courville (2016)), and a factor that is irrelevant to the data cannot support classification either. Therefore, approaches that automatically find the influential factors beneficial to subsequent tasks are in demand.
3 Mutual Information as a Necessary Indicator
By exploring why some factors are ignored, we argue that mutual information is a necessary indicator for finding the influential factors.
3.1 Ignored Factor Analysis
3.1.1 Low Intrinsic Dimension of Data
One aim of the VAE is to learn the intrinsic factors of the data, but the intrinsic dimension remains the same under a continuous reversible mapping, as suggested by Theorem 1.
Theorem 1 (Information Conservation).
Suppose that $Z_m$ and $Z_n$ are sets of $m$ and $n$ ($m \neq n$) independent unit Gaussian random variables, respectively. Then these two sets of random variables cannot be the generating factors of each other; that is, there are no continuous functions $f$ and $g$ such that $Z_m = f(Z_n)$ and $Z_n = g(Z_m)$.
The proof is listed in Appendix A. Suppose the oracle data, denoted by the random variable $X$, is generated from $Z_m$ ($m$ independent unit Gaussian random variables) through a homeomorphism $g_1$, while the learned factors $Z_n$ ($n$ independent unit Gaussian random variables) generate $X$ through a homeomorphism $g_2$. It yields $X = g_1(Z_m)$ and $X = g_2(Z_n)$. Then, according to the information conservation theorem, it must hold that $m = n$.

For example, 10 Gaussian factors and 128 Gaussian factors cannot generate each other. Analogously, if the data are generated by 10 intrinsic Gaussian factors, a VAE intuitively cannot infer 128 informative Gaussian factors from them: some factors will be independent of the data even though we preset the latent dimension this way.
3.1.2 Mutual Information Reflects the Absolute Statistical Dependence
In order to quantify the dependence and estimate which factor influences the generating process (or has no effect at all), the mutual information between $x$ and $z_j$ can be taken as a rational indicator (Peng, Long, and Ding (2005)). That is,

(5) $I(x; z_j) = \int q(x, z_j) \log \frac{q(x, z_j)}{p(x)\, q(z_j)} \, dx \, dz_j.$

The mutual information reflects the absolute statistical dependence: $I(x; z_j) = 0$ if and only if $x$ and $z_j$ are independent. The larger $I(x; z_j)$ is, the more information $z_j$ conveys about $x$, and the more influential the factor should be for representing the data.
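For a jointly Gaussian pair, the mutual information in Eq. (5) has a closed form, which makes the "absolute dependence" property easy to check numerically. This is a toy illustration, not the paper's estimator:

```python
import numpy as np

def gaussian_mi(rho):
    """I(X; Z_j) in nats for a jointly Gaussian pair with correlation rho:
    I = -0.5 * log(1 - rho^2); zero if and only if the pair is independent."""
    return -0.5 * np.log(1.0 - rho**2)

print(gaussian_mi(0.0))  # independent pair: 0 nats
print(gaussian_mi(0.9))  # strongly coupled factor: large MI
```

Unlike variance, this quantity is zero exactly when the factor is statistically independent of the data, which is the property the indicator relies on.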
3.1.3 Sparsity in Mutual Information
Actually, mutual information is implicitly involved in the VAE objective. The following theorem further suggests that the VAE objective induces sparsity in mutual information, which explains why some factors are ignored from the perspective of mutual information.
Theorem 2 (Objective Decomposition).
If $q_{\phi}(z|x) \ll p(z)$ for any $x$ (that is, the support of $q_{\phi}(z|x)$ is contained in the support of $p(z)$), then the following decomposition holds:

Norm expression of the KL-divergence term in VAE:

(6) $\mathbb{E}_{p(x)} D_{\mathrm{KL}}\left(q_{\phi}(z|x) \,\|\, p(z)\right) = \left\| \Big( \mathbb{E}_{p(x)} D_{\mathrm{KL}}\left(q_{\phi}(z_j|x) \,\|\, p(z_j)\right) \Big)_{j=1}^{d} \right\|_{1}.$

Further decomposition of an entity in the norm expression:

(7) $\mathbb{E}_{p(x)} D_{\mathrm{KL}}\left(q_{\phi}(z_j|x) \,\|\, p(z_j)\right) = I(x; z_j) + D_{\mathrm{KL}}\left(q(z_j) \,\|\, p(z_j)\right),$

where $q(z_j) = \mathbb{E}_{p(x)}\, q_{\phi}(z_j|x)$ denotes the aggregate posterior of the $j$-th factor.
The proof is given in Appendix A. The theorem demonstrates that the expectation of the second term of the variational lower bound in Eq. (1) can be written as an $\ell_1$ norm, which inclines to induce sparsity of $I(x; z_j)$ and $D_{\mathrm{KL}}(q(z_j) \,\|\, p(z_j))$ together over $j$, clipping the non-intrinsic factor dimensions to some extent. Sparsity of the expectation leads to sparsity of both of its summands, since both are non-negative: for any zero-valued sum, both of its elements must also be zero. Thus this regularization term intrinsically induces sparsity of the mutual information $I(x; z_j)$, which has been comprehensively substantiated by all our experiments, as can be seen in Fig.(1) and Fig.(2).

Therefore, the VAE objective inclines to induce mutual-information sparsity when the factor dimension exceeds the intrinsic dimension of the data, and the factor-ignoring phenomenon occurs. On the one hand, with this KL-divergence regularization, overfitting tends not to occur even when the number of latent factors is set large, unlike in an autoencoder. On the other hand, this helps us obtain influential factors that represent the variants of the data, and facilitates efficient generation by varying these useful factors while neglecting the others.
In addition, the following theorem gives a condition under which we can use $\sum_j I(x; z_j)$ to estimate the whole mutual information $I(x; z)$.
Theorem 3 (Mutual Information Separation).
Let the factors $z_j$ be independent unit Gaussian random variables, and let them be conditionally independent given $x$. Then

(8) $I(x; z) = \sum_{j} I(x; z_j).$

The proof is presented in Appendix A. This theorem suggests that if the learned $q_{\phi}(z|x)$ factorizes and the aggregate posterior $q(z)$ factorizes, then we can use the sum of the per-factor terms $I(x; z_j)$ to directly estimate the whole mutual information.
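The separation property can be checked numerically in a fully Gaussian toy model, where both the total and the per-factor mutual information have closed forms via covariance determinants. The construction below (our own illustration; the noise variances are arbitrary) sets $x_j = z_j + \epsilon_j$ componentwise, so the two computations must agree:

```python
import numpy as np

def gaussian_total_mi(cov_joint, d):
    """I(x; z) in nats for a jointly Gaussian (x, z), each d-dimensional,
    from the joint covariance: 0.5 * log( det(S_x) det(S_z) / det(S) )."""
    sx = cov_joint[:d, :d]
    sz = cov_joint[d:, d:]
    logdet = np.linalg.slogdet(cov_joint)[1]
    return 0.5 * (np.linalg.slogdet(sx)[1] + np.linalg.slogdet(sz)[1] - logdet)

# x_j = z_j + eps_j componentwise: the joint covariance decouples per factor,
# so the total MI equals the sum of per-factor terms as in Eq. (8).
d = 3
noise_var = np.array([0.1, 1.0, 1e6])     # the third factor is effectively ignored
cov = np.zeros((2 * d, 2 * d))
for j in range(d):
    cov[j, j] = 1.0 + noise_var[j]        # Var(x_j) = Var(z_j) + Var(eps_j)
    cov[d + j, d + j] = 1.0               # Var(z_j) = 1
    cov[j, d + j] = cov[d + j, j] = 1.0   # Cov(x_j, z_j) = 1
total = gaussian_total_mi(cov, d)
per_factor = 0.5 * np.log((1.0 + noise_var) / noise_var)
print(total, per_factor.sum())            # the two values agree
```

Breaking either independence assumption (e.g. correlating the noise terms across factors) breaks the equality, which is exactly what the theorem's conditions guard against.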
3.2 Theoretical Support from Reconstruction and Classification
According to Cover and Thomas (2012), the mutual information also provides a lower bound for the best mean recovery error.

Theorem 4.

Suppose $X$ has differential entropy $h(X)$, and let $\hat{X}$ be an estimator of $X$ given side information $Y$. Then it holds that

(9) $\mathbb{E}\big(X - \hat{X}(Y)\big)^2 \geq \frac{1}{2\pi e}\, e^{2\left(h(X) - I(X; Y)\right)}.$

Therefore, if we set $Y = z$, the reconstruction has $\frac{1}{2\pi e} e^{2(h(x) - I(x; z))}$ as a lower bound. Suppose we use only a major subset of factors $S$ and feed $z_S$ to the decoder. With the assumption that the mutual information separates (Theorem 3), it yields $I(x; z_S) = \sum_{j \in S} I(x; z_j)$ and the following bound:

(10) $\mathbb{E}\big(x - \hat{x}(z_S)\big)^2 \geq \frac{1}{2\pi e}\, e^{2\left(h(x) - \sum_{j \in S} I(x; z_j)\right)}.$

The theorem implies that the mutual information carried by the selected factors directly influences the lower bound of the best recovery, and that we may select the top influential factors carrying the most information to represent and generate the data with less reconstruction distortion.
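A quick numeric sketch of the bound, with all quantities in nats and the entropy value chosen purely for illustration:

```python
import numpy as np

def recon_mse_lower_bound(h_x, mi):
    """Lower bound on the best mean-square recovery error:
    E(x - x_hat)^2 >= exp(2 * (h_x - mi)) / (2 * pi * e),
    where h_x is the differential entropy of x and mi is the total
    information carried by the selected factor subset (both in nats)."""
    return np.exp(2.0 * (h_x - mi)) / (2.0 * np.pi * np.e)

h_x = 2.0  # assumed entropy, for illustration only
for mi in [0.0, 1.0, 2.0]:
    print(mi, recon_mse_lower_bound(h_x, mi))  # the bound shrinks as MI grows
```

Dropping low-MI factors barely moves the bound, while dropping high-MI factors raises the unavoidable reconstruction error exponentially.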
We further provide theoretical support for the proposed mutual information as the factor indicator in classification.
Suppose the Markov chain condition $y \to x \to z \to \hat{y}$ holds. According to Fano's inequality (Cover and Thomas (2012)) and the data-processing inequality, the mutual information also correlates with the classification error.

Theorem 5 (Fano's inequality).

For any estimator $\hat{Y}$ such that $Y \to Z \to \hat{Y}$, with $P_e = \Pr(\hat{Y} \neq Y)$, we have

(11) $H(P_e) + P_e \log\left(|\mathcal{Y}| - 1\right) \geq H(Y \mid \hat{Y}) \geq H(Y \mid Z).$

This inequality can be weakened to

(12) $1 + P_e \log |\mathcal{Y}| \geq H(Y \mid Z)$

or

(13) $P_e \geq \frac{H(Y \mid Z) - 1}{\log |\mathcal{Y}|}.$
Note that, according to the data-processing inequality, $I(y; \hat{y}) \leq I(y; z)$. If $I(y; z_j) = 0$, the factor $z_j$ will not influence the prediction. With the assumption that the mutual information separates, the theorem suggests that the information carried by the selected factors directly controls the lower bound of the classification error; we can therefore remove minor factors according to their mutual information without significantly lifting the lower bound of the prediction error.
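The weakened Fano bound is easy to evaluate. The sketch below works in bits (so the "1" in Eq. (13) corresponds to base-2 logarithms); the entropy and MI values are illustrative assumptions, not measurements:

```python
import numpy as np

def fano_error_lower_bound(h_y_bits, mi_bits, n_classes):
    """Weakened Fano bound with entropies in bits:
    P_e >= (H(Y|Z) - 1) / log2(|Y|), where H(Y|Z) = H(Y) - I(Y; Z)
    and, by the data-processing inequality, I(Y; Z) upper-bounds what any
    predictor built on the kept factors can know about Y."""
    return (h_y_bits - mi_bits - 1.0) / np.log2(n_classes)

# Uniform 4-class labels (H(Y) = 2 bits): with uninformative factors the
# error is at least 0.5, and the bound drops as the retained MI grows.
for mi in [0.0, 0.5, 1.0]:
    print(mi, fano_error_lower_bound(2.0, mi, 4))
```

This is the sense in which dropping only low-MI factors is safe: it leaves the error lower bound essentially unchanged.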
3.3 Algorithms to Quantitatively Calculate the Proposed Indicators
In order to calculate $I(x; z_j)$ in practice, we assume that $\hat{q}(z)$ is a factorized zero-mean Gaussian estimate of the aggregate posterior $q(z)$. We can then list the indicators to be estimated:
Definition 1 (Estimation for $I(x; z)$: the information conveyed by the whole set of factors).

(14) $\hat{I}(x; z) = \frac{1}{N} \sum_{i=1}^{N} D_{\mathrm{KL}}\big(q_{\phi}(z|x_i) \,\|\, \hat{q}(z)\big).$

This estimation uses samples $x_1, \dots, x_N$ according to the empirical form of Corollary 1.
Definition 2 (Estimation for $I(x; z_j)$: the information conveyed by a single factor).

(15) $\hat{I}(x; z_j) = \frac{1}{N} \sum_{i=1}^{N} D_{\mathrm{KL}}\big(q_{\phi}(z_j|x_i) \,\|\, \hat{q}(z_j)\big).$

This indicator quantifies the mutual information between a specific factor and the input data.
Note that the above indicators need the value of $\hat{q}(z)$, and thus we need an algorithm to calculate this term. Based on Theorem 2, through the minimization equivalence, we know that

(16) $q(z_j) = \arg\min_{r} \ \mathbb{E}_{p(x)} D_{\mathrm{KL}}\big(q_{\phi}(z_j|x) \,\|\, r(z_j)\big),$

and then we can prove the following result:

Corollary 1.

If $\hat{q}(z) = \prod_j \hat{q}(z_j)$, then

(17) $\mathbb{E}_{p(x)} D_{\mathrm{KL}}\big(q_{\phi}(z|x) \,\|\, \hat{q}(z)\big) = \sum_{j} \Big[ I(x; z_j) + D_{\mathrm{KL}}\big(q(z_j) \,\|\, \hat{q}(z_j)\big) \Big].$

The proof of Corollary 1 is the same as that of Theorem 2. This corollary suggests that the estimation defined in Definition 1 provides another upper bound for the capacity of the encoder network. Empirically, this estimation is much tighter than the second term of Objective (1).
Each $\hat{q}(z_j)$ can then be obtained by solving the following optimization problem:

(18) $\hat{q}(z_j) = \arg\min_{\hat{q}(z_j) \in \mathcal{G}_0} \ \frac{1}{N} \sum_{i=1}^{N} D_{\mathrm{KL}}\big(q_{\phi}(z_j|x_i) \,\|\, \hat{q}(z_j)\big),$

where $\mathcal{G}_0$ denotes the family of zero-mean Gaussian distributions.
The above procedure is summarized in Algorithm 1 to calculate the proposed indicators.
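Since Algorithm 1 is not reproduced here, the following NumPy sketch shows one plausible instantiation of the per-factor estimator in Definition 2 under the zero-mean Gaussian assumption for $\hat{q}(z_j)$: fit the variance $s_j^2$ to the aggregate posterior's second moment (the closed-form minimizer of the KL objective over zero-mean Gaussians), then average the per-sample KL terms. All names and the synthetic encoder outputs are our own assumptions.

```python
import numpy as np

def estimate_factor_mi(mu, sigma):
    """Per-factor MI estimate, in nats, from encoder outputs mu, sigma
    of shape (N, d): average KL( q(z_j|x_i) || N(0, s_j^2) ), where
    s_j^2 = mean_i( mu_ij^2 + sigma_ij^2 ) minimizes that average."""
    s2 = np.mean(mu**2 + sigma**2, axis=0)                       # shape (d,)
    kl = 0.5 * (np.log(s2) - 2.0 * np.log(sigma)
                + (sigma**2 + mu**2) / s2 - 1.0)                 # KL of two 1-D Gaussians
    return kl.mean(axis=0)                                       # average over data points

# Synthetic encoder outputs: factor 0 is strongly used, factor 3 is ignored
# (its posterior always equals the prior N(0, 1)).
rng = np.random.default_rng(0)
N, d = 5000, 4
mu = rng.normal(size=(N, d)) * np.array([1.0, 0.5, 0.1, 0.0])
sigma = np.array([0.3, 0.8, 0.99, 1.0]) * np.ones((N, d))
print(estimate_factor_mi(mu, sigma))  # large for factor 0, exactly 0 for factor 3
```

The ignored factor scores exactly zero because its posterior never deviates from the fitted marginal, which is the signature the indicator looks for.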
The following definition and theorem clarify the consistency of the estimation of the mutual information.

Definition 3 (Consistency).

The estimator $\hat{I}(x; z)$ is consistent with $I(x; z)$ if and only if: for all $\epsilon > 0$ and $\delta \in (0, 1)$, there exists $N_0$ such that for all $N > N_0$, with probability greater than $1 - \delta$, we have

(19) $\big|\hat{I}(x; z) - I(x; z)\big| < \epsilon.$

Theorem 6.

The estimator $\hat{I}(x; z)$ is consistent with $I(x; z)$. That is, if the choice of $\hat{q}(z)$ satisfies $D_{\mathrm{KL}}(q(z) \,\|\, \hat{q}(z)) < \epsilon/2$, then for all $\epsilon > 0$ and $\delta \in (0, 1)$ there exists $N_0$ such that for all $N > N_0$, with probability greater than $1 - \delta$, we have

(20) $\big|\hat{I}(x; z) - I(x; z)\big| < \epsilon.$
Proof.

Let $A_i = D_{\mathrm{KL}}(q_{\phi}(z|x_i) \,\|\, \hat{q}(z))$, so that $\hat{I}(x; z) = \frac{1}{N}\sum_{i=1}^{N} A_i$ and, by the decomposition of Theorem 2, $\mathbb{E} A_i = I(x; z) + D_{\mathrm{KL}}(q(z) \,\|\, \hat{q}(z))$. According to the law of large numbers, for all $\epsilon > 0$ and $\delta \in (0, 1)$ there exists $N_0$ such that for all $N > N_0$, with probability greater than $1 - \delta$, we have

(21) $\Big| \frac{1}{N}\sum_{i=1}^{N} A_i - \mathbb{E} A_i \Big| < \frac{\epsilon}{2},$

and hence

(22) $\big|\hat{I}(x; z) - I(x; z)\big| \leq \Big| \frac{1}{N}\sum_{i=1}^{N} A_i - \mathbb{E} A_i \Big| + D_{\mathrm{KL}}\big(q(z) \,\|\, \hat{q}(z)\big) < \epsilon.$

∎
This theorem suggests that, with high probability, the estimate can be arbitrarily close to the real mutual information, provided that $\hat{q}(z)$ is sufficiently close to the learned aggregate posterior and the number of samples is large enough. Besides, the minimization in Theorem 6 inspires the derivation of $\hat{q}(z)$.
4 Related Work
There are few works on designing such indicators to discover influential factors in VAE. A general and easy approach for determining a VAE factor's influence is through intuitive visual (Goodfellow, Bengio, and Courville (2016); Higgins et al. (2016)) or aural (Hsu, Zhang, and Glass (2017)) observation. However, it is labor-intensive to select factors for subsequent tasks this way.
In Alemi et al. (2016), the posteriors $q_{\phi}(z_j|x)$ are visualized by plotting the 95% confidence interval as an ellipse to supervise the behavior of the network, which reflects the factor influence directly. However, it still needs a human to interpret the plot.
In classical PCA, it is common to select factors with high variance, and Higgins et al. (2017b) suggest that the variance of a factor may indicate its usage. However, the variance cannot always represent the absolute statistical relationship between the factors and the data, as can be easily observed in Fig.(1) and Fig.(2).
Our work emphasizes mutual information, which conveys the absolute statistical relationship between the factors and the data, and uses it as an indicator to find the influential factors. This is substantiated by the relationship between the total information of the selected factors and the reconstruction error, and between the mutual information and the classification error. All our experiments substantiate that the designed indicator can discover the factors significantly relevant to data representation.
5 Experimental Results
5.1 Datasets
MNIST is a database of handwritten digits (Lécun et al. (1998)). We estimate the mutual information of all factors learned from it and then use different ratios of the top influential factors for the subsequent generation task.

CelebA (Liu et al. (2015)) is a large-scale celebrity-face attributes dataset; we only use its images for influential factor discovery.

DEAP is a well-known public multimodal emotion recognition dataset proposed by Koelstra et al. (2012). We use the transformed signal-to-video sequences for 4-class emotion prediction with different ratios of the top influential factors, and for extracting emotion-relevant influential factors.
More details are presented in Appendix B.
5.2 Influential Factor Discovery Tests
According to Fig.(1), the proposed mutual information estimator effectively determines the influential as well as the non-influential factors. Factors with small estimated mutual information are found to have little generation effect, while factors with large mutual information have influential generation effects. Comparatively, it can be observed that the variance used in classical methods does not significantly indicate the usage of factors.

In order to further substantiate the validity of our mutual information estimator, we use it to automatically select influential factors on CelebA, with the estimated $\hat{I}(x; z_j)$ shown in Fig.(B.5); many of them possess interpretable variants such as background color, smile and face angle. This verifies that mutual information is an effective indicator to automatically determine the influential factors in the VAE setting.
5.3 Generation Capability Test for Discovered Factors
The estimated mutual information can instruct the subsequent generation task with few but influential factors. We select different ratios of the top influential factors according to the quantity of their mutual information and generate images from them. The factors are sorted according to the values of the mutual information indicator, and the remaining non-influential factors are constantly set to zero in the generating process.
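The selection step can be sketched as follows (a hypothetical helper of our own; the mutual information per factor is assumed to be already estimated, e.g. by Algorithm 1, and the decoder call is omitted):

```python
import numpy as np

def keep_top_factors(z, mi, ratio):
    """Zero out all but the top-`ratio` fraction of factors ranked by
    estimated mutual information, before passing z to the decoder."""
    d = z.shape[-1]
    k = max(1, int(np.ceil(ratio * d)))
    keep = np.argsort(mi)[::-1][:k]   # indices of the top-k factors
    mask = np.zeros(d)
    mask[keep] = 1.0
    return z * mask

mi = np.array([0.1, 2.3, 0.0, 1.1])   # illustrative per-factor MI estimates
z = np.array([0.5, -1.2, 0.7, 0.9])   # one latent code
print(keep_top_factors(z, mi, 0.5))   # keeps factors 1 and 3, zeros the rest
```

Setting non-influential factors to zero (their prior mean) rather than dropping dimensions keeps the decoder's input shape unchanged.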
According to Fig.(4), by using only a small fraction of the top influential factors discovered by the proposed algorithm, the VAE model can still generate images almost identical to those reconstructed using all the factors.

Table 1 shows the total information and the reconstruction error corresponding to different ratios of kept factors. The top factors contain almost the whole information, and therefore their reconstructions have almost the same reconstruction error as using all the factors. As suggested by the information-reconstruction relationship, the less information is contained in the used factors, the higher the minimum reconstruction loss bound becomes.
Top(%) factors used:  100  | 20   | 10   | 7    | 5    | 4    | 3    | 2    | 1    | 0
Total information:    24.3 | 24.3 | 24.3 | 19.6 | 16.5 | 14.7 | 10.6 | 8.4  | 5.8  | 0
Mean square error:    5.6  | 5.6  | 5.6  | 13.4 | 15.0 | 18.9 | 27.6 | 31.3 | 44.4 | 71.7
5.4 Classification Capability Test by Discovered Factors
The estimated mutual information can instruct the subsequent classification task with few but influential factors. We select different ratios of the top influential factors according to the quantity of their mutual information to predict emotions. The factors are sorted according to their mutual information, and the estimated non-influential factors are constantly set to zero in the prediction procedure.
Top(%) factors used:  100  | 50   | 10   | 7    | 5    | 4    | 3    | 2    | 1    | 0
Total information:    53.8 | 53.5 | 38.3 | 28.0 | 22.5 | 19.6 | 13.5 | 10.2 | 7.0  | 0
Mean test accuracy:   0.53 | 0.52 | 0.46 | 0.32 | 0.34 | 0.36 | 0.29 | 0.29 | 0.3  | 0.23
According to Table 2, by using only half of the factors, the model still achieves similar prediction accuracy. Besides, the estimated mutual information also helps us determine several variants relevant to the emotion classification, as shown in Fig.(5).
6 Conclusion
This paper explains the necessity of using the mutual information between the input data and each factor as the indicator for estimating the intrinsic influence of a factor on data representation in the VAE model. The mutual information reflects the absolute statistical dependence. The second term in the VAE objective, together with excess preset factors, inclines to induce mutual-information sparsity, yielding both influential and ignored factors in VAE. We have also proved that the mutual information appears in the lower bounds of the mean square reconstruction error and of the classification error. We design a feasible algorithm to estimate the mutual information for all factors in a VAE and prove its consistency. The experiments show that both the influential and non-influential factors can be found automatically and effectively. The interpretability of the discovered factors is substantiated intuitively, and the generation and classification capability of these factors is also verified; in particular, some variants relevant to classification are found. The experiments also inspire the idea that we can use a small number of top influential factors for subsequent data processing tasks, including generation and classification, while keeping the performance of all factors, similar to the dimensionality reduction capability of classical PCA, ICA and so on.

The VAE combined with the mutual information indicator helps extract knowledge from the data and may benefit extensive applications including blind source separation, interpretable feature learning, information bottleneck and data bias elimination. We will investigate these issues in future research.
7 Acknowledgments
We would like to thank Zilu Ma and Tao Yu for discussing the information conservation theorems. We would like to thank Lingjiang Xie and Rui Qin for EEG data processing.
References
 Alemi et al. (2016) Alemi, A. A.; Fischer, I.; Dillon, J. V.; and Murphy, K. 2016. Deep variational information bottleneck. arXiv preprint arXiv:1612.00410.
 Bashivan et al. (2015) Bashivan, P.; Rish, I.; Yeasin, M.; and Codella, N. 2015. Learning representations from EEG with deep recurrent-convolutional neural networks. arXiv preprint arXiv:1511.06448.
 Cover and Thomas (2012) Cover, T. M., and Thomas, J. A. 2012. Elements of information theory. John Wiley & Sons.
 Goodfellow, Bengio, and Courville (2016) Goodfellow, I.; Bengio, Y.; and Courville, A. 2016. Deep learning. MIT press.
 Higgins et al. (2016) Higgins, I.; Matthey, L.; Pal, A.; Burgess, C.; Glorot, X.; Botvinick, M.; Mohamed, S.; and Lerchner, A. 2016. beta-VAE: Learning basic visual concepts with a constrained variational framework.
 Higgins et al. (2017a) Higgins, I.; Pal, A.; Rusu, A. A.; Matthey, L.; Burgess, C. P.; Pritzel, A.; Botvinick, M.; Blundell, C.; and Lerchner, A. 2017a. DARLA: Improving zero-shot transfer in reinforcement learning. arXiv preprint arXiv:1707.08475.
 Higgins et al. (2017b) Higgins, I.; Sonnerat, N.; Matthey, L.; Pal, A.; Burgess, C. P.; Botvinick, M.; Hassabis, D.; and Lerchner, A. 2017b. SCAN: Learning abstract hierarchical compositional visual concepts. arXiv preprint arXiv:1707.03389.
 Hsu, Zhang, and Glass (2017) Hsu, W.N.; Zhang, Y.; and Glass, J. 2017. Unsupervised learning of disentangled and interpretable representations from sequential data. In Advances in neural information processing systems, 1876–1887.
 Hyvärinen, Karhunen, and Oja (2004) Hyvärinen, A.; Karhunen, J.; and Oja, E. 2004. Independent component analysis, volume 46. John Wiley & Sons.
 Jung et al. (2000) Jung, T.P.; Makeig, S.; Humphries, C.; Lee, T.W.; Mckeown, M. J.; Iragui, V.; and Sejnowski, T. J. 2000. Removing electroencephalographic artifacts by blind source separation. Psychophysiology 37(2):163–178.
 Kingma and Welling (2013) Kingma, D. P., and Welling, M. 2013. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114.
 Koelstra et al. (2012) Koelstra, S.; Muhl, C.; Soleymani, M.; Lee, J.-S.; Yazdani, A.; Ebrahimi, T.; Pun, T.; Nijholt, A.; and Patras, I. 2012. DEAP: A database for emotion analysis using physiological signals. IEEE Transactions on Affective Computing 3(1):18–31.
 Lécun et al. (1998) Lécun, Y.; Bottou, L.; Bengio, Y.; and Haffner, P. 1998. Gradient-based learning applied to document recognition. Proceedings of the IEEE 86(11):2278–2324.

 Liu et al. (2015) Liu, Z.; Luo, P.; Wang, X.; and Tang, X. 2015. Deep learning face attributes in the wild. In Proceedings of the IEEE International Conference on Computer Vision, 3730–3738.
 Makeig et al. (1996) Makeig, S.; Bell, A. J.; Jung, T.-P.; and Sejnowski, T. J. 1996. Independent component analysis of electroencephalographic data. In Advances in neural information processing systems, 145–151.
 Mathieu et al. (2016) Mathieu, M. F.; Zhao, J. J.; Zhao, J.; Ramesh, A.; Sprechmann, P.; and LeCun, Y. 2016. Disentangling factors of variation in deep representation using adversarial training. In Advances in Neural Information Processing Systems, 5040–5048.
 Peng, Long, and Ding (2005) Peng, H.; Long, F.; and Ding, C. 2005. Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy. IEEE Transactions on Pattern Analysis and Machine Intelligence 27(8):1226–1238.
 Rezende, Mohamed, and Wierstra (2014) Rezende, D. J.; Mohamed, S.; and Wierstra, D. 2014. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082.
 Salakhutdinov and Hinton (2009) Salakhutdinov, R., and Hinton, G. 2009. Semantic hashing. International Journal of Approximate Reasoning 50(7):969–978.
 Suzuki, Nakayama, and Matsuo (2016) Suzuki, M.; Nakayama, K.; and Matsuo, Y. 2016. Joint multimodal learning with deep generative models. arXiv preprint arXiv:1611.01891.
 Yang et al. (2004) Yang, J.; Zhang, D.; Frangi, A. F.; and Yang, J.-y. 2004. Two-dimensional PCA: a new approach to appearance-based face representation and recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 26(1):131–137.
Appendix A Appendix A
Theorem 1 (Information Conservation). Suppose that $Z_m$ and $Z_n$ are sets of $m$ and $n$ ($m \neq n$) independent unit Gaussian random variables, respectively. Then these two sets of random variables cannot be the generating factors of each other; that is, there are no continuous functions $f$ and $g$ such that $Z_m = f(Z_n)$ and $Z_n = g(Z_m)$.
Proof.
We prove Theorem 1 by contradiction. Suppose such functions $f$ and $g$ exist; we show that they must be inverse mappings of each other. Since both $f$ and $g$ are continuous, this yields a homeomorphism between $\mathbb{R}^m$ and $\mathbb{R}^n$; but these spaces have different topological structures for $m \neq n$, so no such homeomorphism exists, which gives the contradiction. ∎
Theorem 2 (Objective Decomposition). If $q_{\phi}(z|x) \ll p(z)$, i.e., the support of $q_{\phi}(z|x)$ is contained in the support of $p(z)$, for any $x$, then the following decomposition holds:

Norm expression of the KL-divergence term in VAE:

(23) $\mathbb{E}_{p(x)} D_{\mathrm{KL}}\left(q_{\phi}(z|x) \,\|\, p(z)\right) = \left\| \Big( \mathbb{E}_{p(x)} D_{\mathrm{KL}}\left(q_{\phi}(z_j|x) \,\|\, p(z_j)\right) \Big)_{j=1}^{d} \right\|_{1}.$

Further decomposition of an entity in the norm expression:

(24) $\mathbb{E}_{p(x)} D_{\mathrm{KL}}\left(q_{\phi}(z_j|x) \,\|\, p(z_j)\right) = I(x; z_j) + D_{\mathrm{KL}}\left(q(z_j) \,\|\, p(z_j)\right).$
Proof.

The norm expression is obvious, since each entry is non-negative. We prove the further decomposition of an entity in the norm expression:

(25) $\mathbb{E}_{p(x)} D_{\mathrm{KL}}\left(q_{\phi}(z_j|x) \,\|\, p(z_j)\right) = \mathbb{E}_{p(x)} \mathbb{E}_{q_{\phi}(z_j|x)} \log \frac{q_{\phi}(z_j|x)}{q(z_j)} + \mathbb{E}_{p(x)} \mathbb{E}_{q_{\phi}(z_j|x)} \log \frac{q(z_j)}{p(z_j)} = I(x; z_j) + D_{\mathrm{KL}}\left(q(z_j) \,\|\, p(z_j)\right).$

∎
Theorem 3 (Mutual Information Separation). Let the factors $z_j$ be independent unit Gaussian random variables, and let them be conditionally independent given $x$. Then

(26) $I(x; z) = \sum_{j} I(x; z_j).$

Proof.

$I(x; z) = H(z) - H(z \mid x) = \sum_j H(z_j) - \sum_j H(z_j \mid x) = \sum_j I(x; z_j)$, where the first sum uses the independence of the factors and the second uses their conditional independence given $x$. ∎
Appendix B Appendix B
B.1 MNIST

We split 7000 data points by a preset ratio into training, validation, and testing sets. The estimated mutual information $\hat{I}(x; z)$ and $\hat{I}(x; z_j)$ are calculated on 10000 data points in the testing set. Seed images from the testing set are used to infer factor values and draw the traversals.

In the traversal figures, each block corresponds to the traversal of a single factor over a fixed range while keeping the others fixed to their inferred values (by VAE and β-VAE). Each row is generated with a different seed image.

The β setting for β-VAE is enumerated from a preset list of values.
B.2 CelebA

We split roughly 200000 data points at random by a preset ratio into training, validation (unused), and testing sets.

The estimated mutual information $\hat{I}(x; z)$ and $\hat{I}(x; z_j)$ are calculated on 10000 data points in the testing set. Seed images from the testing set are used to infer factor values and draw the traversals.

In the traversal figures, each block corresponds to the traversal of a single factor over a fixed range while keeping the others fixed to their inferred values (by VAE and β-VAE). Each row is generated with a different seed image.

The β setting for β-VAE is enumerated from a preset list of values.
B.3 DEAP

DEAP is a well-known public multimodal (e.g. EEG, video) dataset proposed by Koelstra et al. (2012). The EEG signals are recorded from 32 channels of 32 participants, each watching 40 videos for 63 seconds. The EEG data were preprocessed by downsampling to 128 Hz and band-pass filtering to 4–45 Hz. Following the transformation idea of Bashivan et al. (2015), we applied the fast Fourier transform (FFT) to each 1-second EEG segment and converted it to an image. In this experiment, the alpha (8–13 Hz), beta (13–30 Hz) and gamma (30–45 Hz) frequency bands are extracted, as they represent activities related to the emergence of brain emotion. The next step follows Bashivan et al. (2015): using the Azimuthal Equidistant Projection (AEP) and the Clough–Tocher interpolation scheme, we obtain three 32x32 topographical activity maps corresponding to the frequency bands, shown as an RGB plot. This transformation yields a total of 1280 EEG videos, each with 63 frames. The two emotional dimensions are arousal and valence, labeled on a scale of 1–9. For each of them, we apply 5 as the boundary separating high and low levels, generating 4 classes: high-arousal (HA), high-valence (HV), low-arousal (LA) and low-valence (LV); we perform this 4-class classification task. We split the roughly 1280 samples at random by the ratio [0.8 : 0.1 : 0.1] into training, validation, and testing sets. A β-VAE (β = 6) is trained on each frame, and an LSTM is used to combine all the frames of each video.

The estimated mutual information is calculated image-wise on 100x63 data points (100 videos) in the testing set. Seed images from the testing set are used to infer factor values and draw the traversals.

In the traversal figures, each block corresponds to the traversal of a single factor over a fixed range while keeping the others fixed to their inferred values (by β-VAE). Each row is generated with a different seed image.
B.4 Network Structure

Dataset: MNIST. Optimiser: Adam, 200 epochs.
  Input: 28x28x1.
  Encoder: Conv 32x4x4, 32x4x4 (stride 2). FC 256. ReLU activation.
  Latents: 128.
  Decoder: FC 256. Linear. Deconv reverse of encoder. ReLU activation. Gaussian.

Dataset: CelebA. Optimiser: Adam, 20 epochs.
  Input: 64x64x3.
  Encoder: Conv 32x4x4, 32x4x4, 64x4x4, 64x4x4 (stride 2). FC 256. ReLU activation.
  Latents: 128/32.
  Decoder: FC 256. Linear. Deconv reverse of encoder. ReLU activation. Mixture of 2 Gaussians.

Dataset: DEAP. Optimiser: Adam, 300 epochs.
  Input: 32x32x3.
  Encoder: Conv 32x4x4, 32x4x4, 64x4x4, 64x4x4 (stride 2). FC 256. ReLU activation.
  Latents: 128/32.
  Decoder: FC 256. Linear. Deconv reverse of encoder. ReLU activation. Gaussian.
  Recurrent classifier: Input 63x128. LSTM dim 128, TimeStep 63. Predictor: FC 4. ReLU activation.
B.5 Experiment Plots

In this subsection, we present the influential-factor traversals, the mutual information, and the variance plots for the different datasets.