Discovering Influential Factors in Variational Autoencoder

09/06/2018 ∙ by Shiqi Liu, et al.

In the field of machine learning, it remains a critical issue to identify and supervise the learned representation without manual intervention or the assistance of intuition, so as to extract useful knowledge or serve later machine learning tasks. In this work, we focus on supervising the influential factors extracted by the variational autoencoder (VAE). The VAE is proposed to learn independent low-dimensional representations, yet it faces the problem that pre-set factors are sometimes ignored. We argue that the mutual information between the input and each learned factor of the representation serves as a necessary indicator. We find that the VAE objective inclines to induce mutual information sparsity in the factor dimensions beyond the data intrinsic dimension, resulting in some non-influential factors whose effect on data reconstruction can be ignored. We show that mutual information also influences the lower bound of the VAE's reconstruction error and of the error in the subsequent classification task. To make this indicator applicable, we design an algorithm for calculating the mutual information for the VAE and prove its consistency. Experimental results on the Mnist, CelebA and Deap datasets show that mutual information can help determine the influential factors, of which some are interpretable and can be used for further generation and classification tasks, and can help discover the variant connected with emotion on the Deap dataset.


1 Introduction

Learning efficient low-dimensional representations of data is important in machine learning and related applications. An efficient and intrinsic low-dimensional representation helps exploit the underlying knowledge in the data and serves later tasks including generation, classification and association. Early linear dimension reduction (Principal Component Analysis) has been widely used in primary data analysis, and its variants have been applied to face identification (Yang et al. (2004)); classical linear independent representation (Independent Component Analysis, Hyvärinen, Karhunen, and Oja (2004)) has been used in blind source separation (Jung et al. (2000)) and EEG signal processing (Makeig et al. (1996)). Nonlinear dimension reduction (e.g. the autoencoder, Goodfellow, Bengio, and Courville (2016)) further learns abstract representations and has been used in semantic hashing (Salakhutdinov and Hinton (2009)) and many other tasks. Recently, a technique called the variational autoencoder (Kingma and Welling (2013); Rezende, Mohamed, and Wierstra (2014)) has attracted much attention from researchers due to its capability of extracting nonlinear independent representations. The method can further model causal relationships, represent disentangled visual variants (Mathieu et al. (2016)) and interpretable time-series variants (Hsu, Zhang, and Glass (2017)), and can serve to generate signals with abundant diversity in a "factor-controllable" way (Suzuki, Nakayama, and Matsuo (2016); Higgins et al. (2017b)). The related techniques enable knowledge transfer through shared factors among different tasks (Higgins et al. (2017a)).

However, the usage of the VAE for extracting factors is unclear, and we lack efficient methodologies to quantify the influence of each learned factor on the data representation. In applications, some pre-set factors sometimes remain unused (Goodfellow, Bengio, and Courville (2016)); montage (D) in Fig. (1) is a typical traversal of such an unused factor. Moreover, the relation between the learned factors and the original data has to be discovered by manual intervention (visual or aural observation). This leads to waste on extra factors and hinders factor selection for subsequent tasks such as generating meaningful images/audio. Besides, some classical influence determination methods, such as estimating the variance of each factor, lose their utility on the VAE. Therefore, identifying and monitoring the influential factors of the VAE becomes a critical issue along this line of research.

Figure 1: Estimated mutual information determines the influential factors; A, B and C are qualitatively influential factor traversals of β-VAE on Mnist. The top pulse subgraph: estimated mutual information of each factor. The bottom reverse pulse subgraph: the estimated variance of each factor. The A, B, C montages: influential factor traversals corresponding to factors A, B, C marked in the pulse graph; the whole set of influential factor traversals is listed in Fig. (B.5) in Appendix B. Montage D is the traversal of an ignored factor with little estimated mutual information. According to the four montages, the variance cannot determine the influential factors as the mutual information indicator does.

In order to efficiently determine and supervise the learned factors, this paper has made the following efforts.

  • We first adopt mutual information as the quantitative indicator for assessing the influence of each factor on the data representation in the VAE model. Besides, in order to analyze the rationality of this indicator, we theoretically prove how mutual information influences the lower bound of the VAE's reconstruction error and of the subsequent classification task.

  • We propose an estimation algorithm to calculate the mutual information for all the factors of the VAE, and we prove its consistency.

  • We substantiate the effectiveness of the proposed indicator by experiments on Mnist (Lécun et al. (1998)), CelebA (Liu et al. (2015)) and Deap (Koelstra et al. (2012)). In particular, some factors discovered by the proposed indicator are found meaningful and interpretable for data representation, while the remaining ones are generally ignorable for the task. The capability of the selected factors on generation and classification tasks is also verified.

This paper is organized as follows. We introduce the VAE model for generation and classification in Section 2. We argue the necessity of mutual information as an indicator in Section 3: we introduce the mutual information between the input data and the factors, analyze the cause of ignored factors from the perspective of mutual information and data intrinsic dimension, discuss the relationship of mutual information with reconstruction as well as classification, and propose the estimator and prove its consistency. We review the related work on supervising the factors of the VAE in Section 4. The experiments are presented in Section 5.

2 VAE model

VAE (Kingma and Welling (2013); Rezende, Mohamed, and Wierstra (2014)) is a scalable unsupervised representation learning model (Higgins et al. (2016)): VAE assumes that the input $x$ is generated by several independent Gaussian random variables $z = (z_1, \dots, z_n)$, that is, $p(x) = \int p_\theta(x|z)\,p(z)\,dz$ with $p(z) = \mathcal{N}(0, I)$. Since the Gaussian distribution can be continuously and reversibly mapped to many other distributions, the theoretical analysis on it might also be instructive for other continuous-latent VAEs. The generating/decoding process is modeled as $p_\theta(x|z)$ and the inference/encoding process is treated as the approximate posterior distribution $q_\phi(z|x) \approx p_\theta(z|x)$. We assume both of them are parameterized by neural networks with parameters $\theta$ and $\phi$.

Factor: Let $z$ denote the random variables determined by $q_\phi(z|x)$; a factor in the later text refers to a single dimension $z_j$ of $z$.

2.1 Generation

In the VAE setting, approximate inference is applied to maximize the variational lower bound of $\log p_\theta(x)$,

$\log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(z|x)}\big[\log p_\theta(x|z)\big] - D_{KL}\big(q_\phi(z|x)\,\|\,p(z)\big),$ (1)

with the equality holding iff

$q_\phi(z|x) = p_\theta(z|x).$ (2)

In order to limit the information channel capacity (Higgins et al. (2016)), β-VAE introduces a coefficient $\beta$ on the second term of the objective,

$\mathbb{E}_{q_\phi(z|x)}\big[\log p_\theta(x|z)\big] - \beta\, D_{KL}\big(q_\phi(z|x)\,\|\,p(z)\big).$ (3)

After training with this objective, by sampling $z$ from the prior $p(z)$ or setting $z$ on purpose, the learned $p_\theta(x|z)$ can generate new samples.
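The following is a minimal sketch of the β-VAE objective in Eq. (3), assuming a diagonal-Gaussian encoder and a unit-variance Gaussian decoder; the function and tensor names are illustrative, not from the paper.

```python
# Minimal beta-VAE loss sketch (Eq. 3): reconstruction term plus beta-weighted
# KL(q(z|x) || N(0, I)); assumes the encoder outputs (mu, logvar) per factor.
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    # Negative log-likelihood of a unit-variance Gaussian decoder (up to a constant).
    recon = 0.5 * F.mse_loss(x_recon, x, reduction="sum")
    # Closed-form KL divergence between the diagonal Gaussian q(z|x) and the prior N(0, I).
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    # Minimizing this quantity maximizes the beta-weighted lower bound.
    return recon + beta * kl
```

Setting beta = 1 recovers the plain VAE objective in Eq. (1).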

2.2 Classification

The learned factors $z$ can further support later tasks such as classification. Let $q_\psi(y|z)$ denote the predicting process; the classification objective is

$\max_\psi\; \mathbb{E}_{q_\phi(z|x)}\big[\log q_\psi(y|z)\big].$ (4)

In real implementations the above objectives should further take an expectation over the data distribution. However, sometimes only some of the factors are manually found useful for generation (Goodfellow, Bengio, and Courville (2016)), and a factor that is irrelevant to $x$ cannot support classification either. Therefore, approaches that automatically find the influential factors beneficial to the later tasks are demanded.
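A minimal sketch of this downstream step, assuming the trained encoder is frozen and a small softmax head plays the role of $q_\psi(y|z)$; `encoder` and `head` are illustrative placeholders, not names from the paper.

```python
# Train a classifier q(y|z) on top of the frozen VAE encoder (Eq. 4).
import torch
import torch.nn.functional as F

def classification_loss(encoder, head, x, y):
    with torch.no_grad():
        mu, logvar = encoder(x)   # use the posterior mean as the factor values
    logits = head(mu)             # q(y|z) parameterized by a small network head
    # Cross-entropy is the negative expected log-likelihood of the labels.
    return F.cross_entropy(logits, y)
```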

3 Mutual Information as A Necessary Indicator

By exploring why factors are ignored, we argue that mutual information is a necessary indicator for finding the influential factors.

3.1 Ignored Factor Analysis

3.1.1 Low Intrinsic Dimension of Data

One aim of the VAE is to learn the data's intrinsic factors, but the intrinsic dimension stays the same under any continuous reversible mapping, as suggested by Theorem 1.

Theorem 1 (Information Conservation).

Suppose that $Z = (z_1, \dots, z_n)$ and $W = (w_1, \dots, w_m)$ are sets of $n$ and $m$ ($n \neq m$) independent unit Gaussian random variables, respectively. Then these two sets of random variables cannot be the generating factors of each other; that is, there are no continuous functions $f$ and $g$ such that $W = f(Z)$ and $Z = g(W)$.

The proof is listed in Appendix A. Suppose the oracle data, denoted by the random variable $X$, is generated by $Z_0$ ($n_0$ independent unit Gaussian random variables) through a homeomorphism $g_0$, while the factors $Z$ ($n$ independent unit Gaussian random variables) generate $X$ through a homeomorphism $g$. It yields $X = g_0(Z_0)$ and $X = g(Z)$, hence $Z = g^{-1}(g_0(Z_0))$ and $Z_0 = g_0^{-1}(g(Z))$. Then, according to the information conservation theorem, it must hold that $n = n_0$.

For example, 10 Gaussian factors and 128 Gaussian factors cannot generate each other. Analogously, if the data are generated by 10 intrinsic Gaussian factors, the VAE intuitively cannot infer 128 meaningful Gaussian factors from them, and some factors will be independent of the data even though we pre-set the latent dimension in this way.

3.1.2 Mutual Information Reflects the Absolute Statistical Dependence

In order to quantify the dependence and estimate which factor influences the generating process or has no effect at all, the mutual information between $x$ and $z_j$ can be taken as a rational indicator (Peng, Long, and Ding (2005)). That is,

$I(x; z_j) = \int q(x, z_j)\,\log\frac{q(x, z_j)}{p(x)\,q(z_j)}\,dx\,dz_j.$ (5)

The mutual information reflects the absolute statistical dependence: $I(x; z_j) = 0$ if and only if $x$ and $z_j$ are independent. The larger $I(x; z_j)$ is, the more information $z_j$ conveys about $x$, and the more influential the factor is for representing the data.
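As a concrete illustration (our addition, not from the paper): for two jointly Gaussian scalar variables with correlation coefficient $\rho$, the mutual information has the closed form

$I = -\tfrac{1}{2}\,\ln\big(1 - \rho^2\big),$

which is zero exactly when $\rho = 0$, i.e. when the variables are independent, and grows without bound as the dependence becomes deterministic.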

3.1.3 Sparsity in Mutual Information

Actually, mutual information is implicitly involved in the VAE objective. The following theorem further suggests the VAE objective induces the sparsity in mutual information. It then explains why factors are ignored from the perspective of mutual information.

Theorem 2 (Objective Decomposition).

If $q_\phi(z|x) \ll p(z)$ (that is, the support of $q_\phi(z|x)$ is contained in the support of $p(z)$) for any $x$, and $q_\phi(z|x)$ factorizes over the dimensions of $z$, then the following decomposition holds:

  • $\ell_1$-norm expression of the KL-divergence term in VAE:

    $\mathbb{E}_{p(x)}\big[D_{KL}(q_\phi(z|x)\,\|\,p(z))\big] = \Big\|\Big(\mathbb{E}_{p(x)}\big[D_{KL}(q_\phi(z_j|x)\,\|\,p(z_j))\big]\Big)_{j=1}^{n}\Big\|_1$ (6)
  • Further decomposition of an entry in the norm expression:

    $\mathbb{E}_{p(x)}\big[D_{KL}(q_\phi(z_j|x)\,\|\,p(z_j))\big] = I(x; z_j) + D_{KL}\big(q(z_j)\,\|\,p(z_j)\big)$ (7)

The proof is given in Appendix A. The theorem demonstrates that the expectation of the second term of the variational lower bound in Eq. (1) can be written as an $\ell_1$ norm, which inclines to induce sparsity of $I(x; z_j)$ and $D_{KL}(q(z_j)\|p(z_j))$ jointly over $j$, clipping down the non-intrinsic factor dimensions to some extent. Sparsity of the expectation actually leads to sparsity of both of its summands $I(x; z_j)$ and $D_{KL}(q(z_j)\|p(z_j))$ over $j$, since both are non-negative: for any entry with value zero, both of its summands must also be zero. Thus this regularization term inclines to intrinsically induce sparsity of the mutual information $I(x; z_j)$, which has been comprehensively substantiated by all our experiments, as can easily be seen in Fig. (1) and Fig. (2).
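For readability, here is a one-line derivation of the per-factor decomposition in Eq. (7) (our addition, writing $q(z_j) = \mathbb{E}_{p(x)}[q_\phi(z_j|x)]$ for the aggregate posterior):

$\mathbb{E}_{p(x)}\big[D_{KL}(q_\phi(z_j|x)\,\|\,p(z_j))\big] = \mathbb{E}_{p(x)}\Big[\int q_\phi(z_j|x)\,\log\tfrac{q_\phi(z_j|x)}{q(z_j)}\,dz_j\Big] + \int q(z_j)\,\log\tfrac{q(z_j)}{p(z_j)}\,dz_j = I(x; z_j) + D_{KL}\big(q(z_j)\,\|\,p(z_j)\big).$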

Figure 2: Mutual information sparsity occurs on CelebA and Deap.

Therefore the VAE objective inclines to induce mutual information sparsity in the factor dimensions beyond the data intrinsic dimension, and the factor-ignoring phenomenon occurs. On the one hand, with this KL-divergence regularization, even when the number of latent factors is set large, the over-fitting issue still tends not to occur, unlike in a plain autoencoder. On the other hand, this helps us obtain influential factors to represent the variants of the data and facilitates efficient generation of data by varying these useful factors while neglecting the others.

In addition, the following theorem gives the condition under which we can use the per-factor mutual information to estimate the whole mutual information.

Theorem 3 (Mutual Information Separation).

Let $z_1, \dots, z_n$ be independent unit Gaussian random variables that are also conditionally independent given $x$. Then

$I(x; z) = \sum_{j=1}^{n} I(x; z_j).$ (8)

The proof is presented in Appendix A. This theorem suggests that if the learnt aggregate posterior $q(z)$ factorizes and $q_\phi(z|x)$ factorizes, then we can use the sum of the per-factor terms $I(x; z_j)$ to directly estimate the whole mutual information $I(x; z)$.
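A short derivation of Eq. (8) (our addition), using $H(z) = \sum_j H(z_j)$ from independence and $H(z|x) = \sum_j H(z_j|x)$ from conditional independence given $x$:

$I(x; z) = H(z) - H(z|x) = \sum_j H(z_j) - \sum_j H(z_j|x) = \sum_j I(x; z_j).$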

3.2 Reconstruction and Classification Theoretical Supports

According to Cover and Thomas (2012), mutual information also provides a lower bound on the best mean reconstruction error.

Theorem 4.

Suppose $x$ is a random variable with differential entropy $h(x)$, and let $\hat{x}(z)$ be an estimate of $x$ given the side information $z$. Then it holds that

$\mathbb{E}\big[(x - \hat{x}(z))^2\big] \;\ge\; \frac{1}{2\pi e}\, e^{2 h(x|z)}.$ (9)

Therefore, since $h(x|z) = h(x) - I(x; z)$, the best reconstruction has $\frac{1}{2\pi e} e^{2(h(x) - I(x;z))}$ as its lower bound. Suppose we use only a major subset of factors $S \subseteq \{1, \dots, n\}$. With the assumption that the posterior factorizes, the separation of mutual information $I(x; z_S) = \sum_{j \in S} I(x; z_j)$ yields the following bound,

$\mathbb{E}\big[(x - \hat{x}(z_S))^2\big] \;\ge\; \frac{1}{2\pi e}\, e^{2\big(h(x) - \sum_{j \in S} I(x; z_j)\big)}.$ (10)

The theorem implies that the mutual information carried by the selected factors directly influences the lower bound of the best reconstruction; in particular, every nat of mutual information dropped from the selected factors multiplies this lower bound by $e^{2}$. We may therefore select the top influential factors carrying the most information to represent and generate the data with less reconstruction distortion.

We further provide some theoretical supports for the proposed mutual information as the factor indicator in classification.

Suppose that the Markov chain condition $y \to x \to z \to \hat{y}$ holds. Then, according to Fano's inequality (Cover and Thomas (2012)) and the data processing inequality, the mutual information also correlates with the classification error.

Theorem 5 (Fano's inequality).

For any estimate $\hat{y}$ such that $y \to x \to z \to \hat{y}$, with $P_e = \Pr(\hat{y} \neq y)$, we have

$H(P_e) + P_e \log|\mathcal{Y}| \;\ge\; H(y|z).$ (11)

This inequality can be weakened to

$1 + P_e \log|\mathcal{Y}| \;\ge\; H(y|z),$ (12)

or

$P_e \;\ge\; \frac{H(y|z) - 1}{\log|\mathcal{Y}|} = \frac{H(y) - I(y; z) - 1}{\log|\mathcal{Y}|}.$ (13)

Note that, according to the data processing inequality, $I(y; z) \le I(x; z)$; if $I(x; z_j) = 0$, the factor $z_j$ will not influence the prediction. With the assumption that the posterior factorizes, the theorem suggests that the mutual information carried by the selected factors directly controls the lower bound of the classification error, and therefore we can remove minor factors according to their mutual information without significantly lifting the lower bound of the prediction error.

3.3 Algorithms to Quantitatively Calculate the Proposed Indicators

In order to calculate $I(x; z_j)$ in practice, we assume that $\hat{r}(z_j)$ is a factorized zero-mean Gaussian estimate of the aggregate posterior $q(z_j)$.

We can then list the indicators to be estimated as:

Definition 1 (Estimation of $I(x; z)$: the information conveyed by all factors).

$\hat{I}(x; z) = \frac{1}{N}\sum_{i=1}^{N} D_{KL}\big(q_\phi(z|x^{(i)})\,\|\,\hat{r}(z)\big)$ (14)

This estimation uses samples $\{x^{(i)}\}_{i=1}^{N}$ according to the empirical form of Corollary 2.

Definition 2 (Estimation of $I(x; z_j)$: the information conveyed by a single factor).

$\hat{I}(x; z_j) = \frac{1}{N}\sum_{i=1}^{N} D_{KL}\big(q_\phi(z_j|x^{(i)})\,\|\,\hat{r}(z_j)\big)$ (15)

This indicator quantifies the mutual information between a specific factor and the input data.

Note that the above indicators need the value of $\hat{r}(z_j)$, and thus we need to design an algorithm to calculate this term. Based on Theorem 2, through the minimization equivalence, we know that

$I(x; z_j) = \min_{r(z_j)}\; \mathbb{E}_{p(x)}\big[D_{KL}(q_\phi(z_j|x)\,\|\,r(z_j))\big],$ (16)

where the minimum is attained at $r(z_j) = q(z_j)$,

and then we can prove the following result:

Corollary 1.

If $q_\phi(z|x) \ll r(z)$ for any $x$, then

$I(x; z) \;\le\; \mathbb{E}_{p(x)}\big[D_{KL}(q_\phi(z|x)\,\|\,r(z))\big].$ (17)

The proof of Corollary 1 is the same as that of Theorem 2. This corollary suggests that the estimate defined in Definition 1 provides another upper bound on the capacity of the encoder network. Empirically, this estimate is much tighter than the second term of Objective (1).

The variance $\hat{\sigma}_j^2$ of $\hat{r}(z_j)$ can then be obtained by solving the following optimization problem:

$\hat{\sigma}_j^2 = \arg\min_{\sigma^2}\; \frac{1}{N}\sum_{i=1}^{N} D_{KL}\big(q_\phi(z_j|x^{(i)})\,\|\,\mathcal{N}(0, \sigma^2)\big).$ (18)
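Assuming a diagonal-Gaussian encoder $q_\phi(z_j|x^{(i)}) = \mathcal{N}(\mu_j(x^{(i)}), \sigma_j^2(x^{(i)}))$ (our notation), the minimization in Eq. (18) has a closed-form solution, which matches the calculation in step 3 of Algorithm 1 as given below:

$\hat{\sigma}_j^2 = \arg\min_{\sigma^2} \frac{1}{N}\sum_{i=1}^{N}\Big[\log\tfrac{\sigma}{\sigma_j(x^{(i)})} + \tfrac{\sigma_j^2(x^{(i)}) + \mu_j^2(x^{(i)})}{2\sigma^2} - \tfrac{1}{2}\Big] = \frac{1}{N}\sum_{i=1}^{N}\big(\mu_j^2(x^{(i)}) + \sigma_j^2(x^{(i)})\big).$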

The above procedure for calculating the proposed indicators is summarized in Algorithm 1.

1:  Input: sampled data $\{x^{(i)}\}_{i=1}^{N}$, encoder network $q_\phi(z|x)$
2:  Obtain: $\mu_j(x^{(i)})$, $\sigma_j^2(x^{(i)})$ for all $i = 1, \dots, N$ and $j = 1, \dots, n$
3:  Calculate: $\hat{\sigma}_j^2 = \frac{1}{N}\sum_{i=1}^{N}\big(\mu_j^2(x^{(i)}) + \sigma_j^2(x^{(i)})\big)$
4:  for $j = 1$ to $n$ do
5:     $\hat{I}(x; z_j) = \frac{1}{N}\sum_{i=1}^{N} D_{KL}\big(\mathcal{N}(\mu_j(x^{(i)}), \sigma_j^2(x^{(i)}))\,\|\,\mathcal{N}(0, \hat{\sigma}_j^2)\big)$.
6:  end for
7:  Calculate: $\hat{I}(x; z) = \sum_{j=1}^{n} \hat{I}(x; z_j)$
8:  
9:  Output: $\hat{I}(x; z)$, $\{\hat{I}(x; z_j)\}_{j=1}^{n}$, $\{\hat{\sigma}_j^2\}_{j=1}^{n}$
Algorithm 1 Mutual Information Estimation
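The following is a NumPy sketch of Algorithm 1 under the assumptions above (diagonal-Gaussian encoder, zero-mean Gaussian estimate $\hat{r}(z_j)$ of each aggregate posterior); the function and array names are illustrative.

```python
# Estimate the per-factor mutual information I(x; z_j) from encoder outputs.
import numpy as np

def estimate_mutual_information(mu, var):
    """mu, var: arrays of shape (N, n_factors) with the parameters of
    q(z_j | x_i) = N(mu[i, j], var[i, j]) returned by the encoder."""
    # Step 3: zero-mean Gaussian fit to each factor's aggregate posterior.
    var_hat = np.mean(mu ** 2 + var, axis=0)                # shape (n_factors,)
    # Steps 4-6: closed-form KL( N(mu, var) || N(0, var_hat) ), averaged over samples.
    kl = 0.5 * (np.log(var_hat) - np.log(var)
                + (var + mu ** 2) / var_hat - 1.0)          # shape (N, n_factors)
    mi_per_factor = kl.mean(axis=0)
    # Step 7: under the separation assumption (Theorem 3), the sum estimates I(x; z).
    return mi_per_factor.sum(), mi_per_factor, var_hat
```

Sorting `mi_per_factor` in decreasing order gives the ranking of influential factors used in the experiments below.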

The following definition and theorem clarify the consistency of the estimation on mutual information.

Definition 3 (Consistency).

The estimator $\hat{I}$ is consistent with $I$ if and only if: for all $\epsilon > 0$ and all $\delta > 0$, there exists $N_0$ such that for all $N > N_0$, with probability greater than $1 - \delta$, we have

$|\hat{I} - I| < \epsilon.$ (19)
Theorem 6.

The estimator $\hat{I}(x; z_j)$ is consistent with $I(x; z_j)$. That is, if the choice of $\hat{r}(z_j)$ satisfies $D_{KL}(q(z_j)\,\|\,\hat{r}(z_j)) \to 0$, then for all $\epsilon > 0$ and all $\delta > 0$, there exists $N_0$ such that for all $N > N_0$, with probability greater than $1 - \delta$, we have

$\big|\hat{I}(x; z_j) - I(x; z_j)\big| < \epsilon.$ (20)
Proof.

Let $\hat{I}_N(x; z_j)$ denote the estimate computed from $N$ samples. According to the law of large numbers, for all $\epsilon > 0$ and all $\delta > 0$, there exists $N_0$ such that for all $N > N_0$, with probability greater than $1 - \delta$, we have

(21)
(22)

This theorem suggests that, with high probability, the estimate can be arbitrarily close to the real mutual information, provided that $\hat{r}(z_j)$ is arbitrarily close to the learned aggregate posterior $q(z_j)$ and the number of samples is large enough. Besides, the minimization of $D_{KL}(q(z_j)\,\|\,\hat{r}(z_j))$ in Theorem 6 inspires the derivation of $\hat{r}(z_j)$.

4 Related Work

There are not many works on designing such indicators to discover the influential factors in a VAE. A general and easy approach for determining a VAE factor's influence is intuitive visual (Goodfellow, Bengio, and Courville (2016); Higgins et al. (2016)) or aural (Hsu, Zhang, and Glass (2017)) observation. However, it can be labor-intensive to select factors for later tasks in this way.

In Alemi et al. (2016), the posterior distributions are visualized by plotting their 95% confidence intervals as ellipses to supervise the behavior of the network, which reflects the factor influence directly. However, it still needs a human to interpret the plot.

In classical PCA, it is common to select factors with high variance, and Higgins et al. (2017b) suggest that the variance of a factor may indicate its usage. However, the variance cannot always represent the absolute statistical relationship between the factors and the data, as can be easily observed in Fig. (1) and Fig. (2).

Our work emphasizes mutual information, which conveys the absolute statistical relationship between the factors and the data, and uses it as an indicator to find the influential factors, substantiated by the relationship between the total information of the selected factors and the reconstruction, and by the relationship between mutual information and classification. All our experiments substantiate that the designed indicator can discover the influential factors that are significantly relevant for data representation.

5 Experimental Results

5.1 Datasets

MNIST is a database of handwritten digits (Lécun et al. (1998)). We estimate the mutual information of all factors learned from it and then use different ratios of top influential factors for the subsequent generation task.

CelebA (Liu et al. (2015)) is a large-scale celebrity face attributes dataset; we only use its images to support influential factor discovery.

DEAP is a well-known public multi-modality emotion recognition dataset proposed by Koelstra et al. (2012). We transform its EEG signals into video sequences and use different ratios of top influential factors for 4-class emotion prediction and for extracting emotion-relevant influential factors.

More details are presented in Appendix B.

5.2 Influential Factor Discovery Tests

According to Fig. (1), the proposed mutual information estimator effectively determines both the influential and the non-influential factors. Factors with small estimated mutual information are found to have little effect on generation, while factors with large mutual information are found to have influential generation effects. Comparatively, it can be observed that the variance used in classical methods cannot significantly indicate the usage of the factors.

Figure 3: CelebA: generating factor traversals of β(=40)-VAE for Factors 7, 8 and 12, the first three influential factors determined by estimated mutual information. The whole set of influential factor traversals is listed in Appendix B.5.

In order to substantiate the validity of our mutual information estimator, we use it to automatically select influential factors of CelebA by their estimated mutual information, shown in Fig. (B.5); many of them possess interpretable variants such as background color, smile and face angle. This verifies that mutual information is an effective indicator for automatically determining the influential factors in the VAE setting.

5.3 Generation Capability Test for Discovered Factors

Estimated mutual information can guide the subsequent generation task with few but influential factors, as sketched in the code below. We select different ratios of the top influential factors according to the quantity of mutual information to generate the images. The factors are sorted according to the values of their mutual information indicators, and the other, non-influential factors estimated by the indicator are constantly set to zero in the generating process.
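A sketch of this masking step, assuming `mi_per_factor` comes from the estimator sketched earlier and `mu` holds the inferred posterior means; `decoder` is a placeholder for the trained decoder. The same masking is used for the classification test in Section 5.4.

```python
# Keep only the top-ratio factors ranked by estimated mutual information;
# all other factors are set to zero before decoding.
import numpy as np

def keep_top_factors(mu, mi_per_factor, ratio=0.1):
    n_factors = mu.shape[1]
    k = max(1, int(round(ratio * n_factors)))
    top = np.argsort(mi_per_factor)[::-1][:k]   # indices of the k most informative factors
    z = np.zeros_like(mu)
    z[:, top] = mu[:, top]
    return z

# x_recon = decoder(keep_top_factors(mu, mi_per_factor, ratio=0.1))
```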

Figure 4: Generation with different ratios of top factors.

According to Fig. (4), we find that by using only a small ratio of the top influential factors discovered by the proposed algorithm, the VAE model can still generate images almost identical to those reconstructed using all factors.

Table 1 shows the total information and the reconstruction error corresponding to different ratios of factors. The top factors contain almost the whole information, and therefore their reconstructions have almost the same reconstruction error as using all the factors. As suggested by the information-reconstruction relationship, the less information the used factors contain, the higher the lower bound on the reconstruction error becomes.

Top factors used (%):      100   20    10    7     5     4     3     2     1     0
Total mutual information:  24.3  24.3  24.3  19.6  16.5  14.7  10.6  8.4   5.8   0
Mean square error:          5.6   5.6   5.6  13.4  15.0  18.9  27.6  31.3  44.4  71.7
Table 1: Mutual information and reconstruction error for different ratios of top factors.

5.4 Classification Capability Test by Discovered Factors

Estimated mutual information can guide the subsequent classification task with few but influential factors. We select different ratios of the top influential factors according to the quantity of mutual information to predict emotions. The factors are sorted according to their mutual information, and the estimated non-influential factors are constantly set to zero in the prediction procedure.

Top factors used (%):      100   50    10    7     5     4     3     2     1     0
Total mutual information:  53.8  53.5  38.3  28.0  22.5  19.6  13.5  10.2  7.0   0
Mean test accuracy:         0.53  0.52  0.46  0.32  0.34  0.36  0.29  0.29  0.3   0.23
Table 2: Mutual information and EEG-emotion classification accuracy with β-VAE.

According to Table 2, by using only half of the factors the model still achieves a similar prediction accuracy. Besides, the estimated mutual information also helps us determine several variants that are relevant to the emotion classification, as shown in Fig. (5).

Figure 5: Emotion-relevant factor discovery: Factor 1 (C4-T8-P4-P8 block), Factor 26 (PO3-O1-Oz and Fp1-Fp2-AF4-AF3 blocks) and Factor 31 (PO3-O1-Oz block). We present 3 influential factors determined by estimated mutual information. The whole set of influential factor traversals is listed in Fig. (B.5) in the appendix.

6 Conclusion

This paper explains the necessity of using the mutual information between the input data and each factor as the indicator to estimate the intrinsic influence of a factor on representing data in the VAE model. Mutual information reflects the absolute statistical dependence. The second term in the VAE objective, together with excess pre-set factors, inclines to induce mutual information sparsity and thus yields influential as well as ignored factors in the VAE. We have also proved that mutual information enters the lower bounds of the mean square error of the reconstruction and of the prediction error of the classification. We design a feasible algorithm to estimate the mutual information for all factors in the VAE and prove its consistency. The experiments show that both the influential and the non-influential factors can be found automatically and effectively. The interpretability of the discovered factors is substantiated intuitively, and the generation and classification capability of these factors has also been verified; in particular, some variants relevant to classification are found. The experiments also inspire the idea that we can use a small number of top influential factors for later data processing tasks, including generation and classification, while still keeping the performance of all factors, similar to the dimensionality reduction capability of classical PCA, ICA and so on.

The VAE combined with the mutual information indicator helps extract knowledge from the data and may benefit a wide range of later applications including blind source separation, interpretable feature learning, information bottleneck and data bias elimination. We will investigate these issues in our future research.

7 Acknowledgments

We would like to thank Zilu Ma and Tao Yu for discussing the information conservation theorems. We would like to thank Lingjiang Xie and Rui Qin for EEG data processing.


References

Appendix A

Theorem 1 (Information Conservation) Suppose that $Z = (z_1, \dots, z_n)$ and $W = (w_1, \dots, w_m)$ are sets of $n$ and $m$ ($n \neq m$) independent unit Gaussian random variables, respectively. Then these two sets of random variables cannot be the generating factors of each other; that is, there are no continuous functions $f$ and $g$ such that $W = f(Z)$ and $Z = g(W)$.

Proof.

We prove Theorem 1 by contradiction. Suppose the two functions $f$ and $g$ exist; we will show that they are inverse mappings of each other and thus give a homeomorphism between $\mathbb{R}^n$ and $\mathbb{R}^m$. Since $\mathbb{R}^n$ and $\mathbb{R}^m$ have different topological structures ($n \neq m$), such a homeomorphism cannot exist.

Since both $f$ and $g$ are continuous, there would be a homeomorphism between $\mathbb{R}^n$ and $\mathbb{R}^m$, which leads to the contradiction. ∎

Theorem 2 (Objective Decomposition) If $q_\phi(z|x) \ll p(z)$, i.e., the support of $q_\phi(z|x)$ is contained in the support of $p(z)$, for any $x$, and $q_\phi(z|x)$ factorizes over the dimensions of $z$, then the following decomposition holds:

  • $\ell_1$-norm expression of the KL-divergence term in VAE:

    $\mathbb{E}_{p(x)}\big[D_{KL}(q_\phi(z|x)\,\|\,p(z))\big] = \Big\|\Big(\mathbb{E}_{p(x)}\big[D_{KL}(q_\phi(z_j|x)\,\|\,p(z_j))\big]\Big)_{j=1}^{n}\Big\|_1$ (23)
  • Further decomposition of an entry in the norm expression:

    $\mathbb{E}_{p(x)}\big[D_{KL}(q_\phi(z_j|x)\,\|\,p(z_j))\big] = I(x; z_j) + D_{KL}\big(q(z_j)\,\|\,p(z_j)\big)$ (24)
Proof.

The $\ell_1$-norm expression follows directly from the factorization of $q_\phi(z|x)$ and the non-negativity of each KL term. We prove the further decomposition of an entry in the norm expression:

(25)

Theorem 3 (Mutual Information Separation) Let $z_1, \dots, z_n$ be independent unit Gaussian random variables that are also conditionally independent given $x$. Then

$I(x; z) = \sum_{j=1}^{n} I(x; z_j).$ (26)
Proof.

Appendix B

B.1 Mnist

We split 7000 data points by a fixed ratio into training, validation and testing sets. The estimated mutual information and variance are calculated on 10000 data points in the testing set. Seed images from the testing set are used to infer the factor values and draw the traversals.

In the traversal figures, each block corresponds to the traversal of a single factor over a fixed range while keeping the other factors fixed to their inferred values (by β-VAE or VAE). Each row is generated with a different seed image.

The setting of β for β-VAE is enumerated from a list of candidate values.

B.2 CelebA

We randomly split roughly 200000 data points by a fixed ratio into training, validation (unused) and testing sets.

The estimated mutual information and variance are calculated on 10000 data points in the testing set. Seed images from the testing set are used to infer the factor values and draw the traversals.

In the traversal figures, each block corresponds to the traversal of a single factor over a fixed range while keeping the other factors fixed to their inferred values (by β-VAE or VAE). Each row is generated with a different seed image.

The setting of β for β-VAE is enumerated from a list of candidate values.

B.3 Deap

DEAP is a well-known public multi-modality (e.g. EEG, video, etc.) dataset proposed by Koelstra et al. (2012). The EEG signals are recorded from 32 channels by 32 participants, each watching 40 videos for 63 seconds. The EEG data were preprocessed by down-sampling to 128 Hz and band-pass filtering to the 4-45 Hz range. Following the transformation idea of Bashivan et al. (2015), we apply a fast Fourier transform (FFT) to each 1-second EEG segment and convert it to an image. In this experiment, the alpha (8-13 Hz), beta (13-30 Hz) and gamma (30-45 Hz) bands are extracted as the frequency bands representing the activities related to the emergence of brain emotion. The next step is similar to the work of Bashivan et al. (2015): by Azimuthal Equidistant Projection (AEP) and the Clough-Tocher scheme, three 32x32 topographical activity maps, corresponding to the three frequency bands, are obtained and combined as an RGB plot. The transformation yields a total of 1280 EEG videos, each with 63 frames. The two emotional dimensions are arousal and valence, labeled on a scale of 1-9. For each of them, we use 5 as the boundary separating high and low levels to generate 4 classes: high-arousal (HA), high-valence (HV), low-arousal (LA) and low-valence (LV). In this paper we perform the same 4-class classification task as in [baseline paper].
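A minimal sketch of the per-second band-power step described above, assuming one second of 32-channel EEG sampled at 128 Hz; the AEP/Clough-Tocher interpolation that turns the per-channel values into 32x32 maps is omitted, and the names are illustrative.

```python
# Compute alpha/beta/gamma band powers for one second of EEG via the FFT.
import numpy as np

def band_powers(segment, fs=128):
    """segment: array of shape (n_channels, fs) holding one second of EEG."""
    spectrum = np.abs(np.fft.rfft(segment, axis=-1)) ** 2
    freqs = np.fft.rfftfreq(segment.shape[-1], d=1.0 / fs)
    bands = {"alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}
    # Sum the power spectrum inside each band, per channel.
    return {name: spectrum[:, (freqs >= lo) & (freqs < hi)].sum(axis=-1)
            for name, (lo, hi) in bands.items()}
```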

We randomly split roughly 1280 samples by the ratio [0.8 : 0.1 : 0.1] into training, validation and testing sets. A β(=6)-VAE is trained on each frame, and an LSTM is used to combine all the frames of each video.

The estimated mutual information is calculated image-wise on 100×63 data points (100 videos) in the testing set. Seed images from the testing set are used to infer the factor values and draw the traversals.

In the traversal figures, each block corresponds to the traversal of a single factor over a fixed range while keeping the other factors fixed to their inferred values (by β-VAE). Each row is generated with a different seed image.

B.4 Network Structure

Dataset: Mnist. Optimiser: Adam. Epochs: 200.
  Input: 28x28x1.
  Encoder: Conv 32x4x4, 32x4x4 (stride 2). FC 256. ReLU activation.
  Latents: 128.
  Decoder: FC 256. Linear. Deconv reverse of encoder. ReLU activation. Gaussian.

Dataset: CelebA. Optimiser: Adam. Epochs: 20.
  Input: 64x64x3.
  Encoder: Conv 32x4x4, 32x4x4, 64x4x4, 64x4x4 (stride 2). FC 256. ReLU activation.
  Latents: 128/32.
  Decoder: FC 256. Linear. Deconv reverse of encoder. ReLU activation. Mixture of 2 Gaussians.

Dataset: Deap. Optimiser: Adam. Epochs: 300.
  Input: 32x32x3.
  Encoder: Conv 32x4x4, 32x4x4, 64x4x4, 64x4x4 (stride 2). FC 256. ReLU activation.
  Latents: 128/32.
  Decoder: FC 256. Linear. Deconv reverse of encoder. ReLU activation. Gaussian.
  Classifier input: 63x128.
  Recurrent: LSTM dim 128. Time-step 63.
  Predictor: FC 4. ReLU activation.


B.5 Experiment Plots

In the following subsection, we present the influential factor traversals, mutual information and variance plots of the different datasets.

Figure 6: Mutual information sparsity in CelebA: generating factor traversals of β(=40)-VAE for factors 7, 8, 12, 13, 19, 26, 28, 31, 37, 40, 45, 48, 56, 63, 73, 77, 81, 82, 88, 90, 96, 102, 110, 118 and 120.

Figure 7: Mutual information sparsity in MNIST: generating factor traversals of β(=10)-VAE for factors 16, 23, 24, 39, 52, 56, 57, 60, 65, 74, 83, 91, 93, 121 and 126.

Figure 8: Mutual information sparsity in Deap: generating factor traversals of β(=6)-VAE for factors 1, 26, 31, 36, 40, 59, 60, 61, 64, 65, 69, 83, 86, 88, 100, 104, 113, 114, 116 and 117.