Disentangled Variational Auto-Encoder for Semi-supervised Learning

09/15/2017 · by Yang Li, et al.

In this paper, we develop a novel approach for semi-supervised VAE that does not require a classifier. Specifically, we propose a new model called SDVAE, which encodes the input data into a disentangled representation and a non-interpretable representation; the category information is then directly utilized to regularize the disentangled representation via an equation constraint. To further enhance the feature learning ability of the proposed VAE, we incorporate reinforcement learning to alleviate the lack of labeled data. The framework is capable of dealing with both image and text data with corresponding encoder and decoder networks. Extensive experiments on image and text datasets demonstrate the effectiveness of the proposed framework.


1 Introduction

The abundant data generated online every day has greatly advanced the machine learning, data mining, and computer vision communities. However, manually labeling large datasets is very time- and labor-consuming, and sometimes it even requires domain knowledge. As a result, the majority of the data has limited labels. Therefore, semi-supervised learning, which utilizes both labeled and unlabeled data for model training, is attracting increasing attention [4, 22, 14, 33]. Existing semi-supervised models can be generally categorized into discriminative models, generative models, graph-based models, and models that combine these categories [29, 5, 9, 23, 31].

Among the various semi-supervised models proposed, semi-supervised generative models based on the variational auto-encoder (VAE) have shown strong performance in image classification [14, 19] and text classification [33]. The effectiveness of VAE for semi-supervised learning comes from its efficiency in posterior distribution estimation and its powerful ability to extract features from text data [2] and image data [14, 19]. To adapt VAE for semi-supervised learning, semi-supervised VAEs are typically composed of three main components: an encoder network $q_\phi(z|x)$, a decoder $p_\theta(x|z)$, and a classifier $q_\phi(y|x)$. In practice, the encoder, decoder, and classifier can be implemented with various models, e.g., MLP or CNN networks [19, 34]. Though the classifier plays a vital role in achieving the semi-supervised goal, it introduces extra parameters to learn. With limited labeled data, introducing more parameters to VAE for semi-supervised learning may not be an optimal choice, because the model may memorize the limited data with its large number of parameters, namely overfitting.

Therefore, in this paper, we investigate whether we can directly incorporate the limited label information into VAE without introducing a classifier, so as to achieve the goal of semi-supervised learning and, at the same time, reduce the number of parameters to be learned. In particular, we investigate the following two challenges: (1) Without introducing a classifier, how do we incorporate the label information into VAE for semi-supervised learning? and (2) How can we effectively use the label information for the representation learning of VAE? In an attempt to solve these two challenges, we propose a novel semi-supervised learning model named Semi-supervised Disentangled Variational Auto-Encoder (SDVAE). SDVAE adopts the VAE with KKT conditions as it has better representation learning ability than the plain VAE. Unlike existing semi-supervised VAEs that utilize classifiers, SDVAE encodes the input data into a disentangled representation and a non-interpretable representation, and the category information is directly utilized to regularize the disentangled representation as an equation constraint. As the labeled data is limited, the label information may not affect the model much. To remedy this, we further change the equation constraint into a reinforcement learning format, which helps the objective gain the category information heuristically. Inverse autoregressive flow (IAF) is also applied to improve the latent variable learning. The proposed framework is flexible in that it can deal with both image and text data by choosing corresponding encoder and decoder networks. The main contributions of the paper are:

  • Propose a novel semi-supervised framework which directly exploits the label information to regularize the disentangled representation with reinforcement learning;

  • Directly extract from the data a disentangled variable for classification and a non-interpretable variable for reconstruction; and

  • Conduct extensive experiments on image and text datasets to demonstrate the effectiveness of the proposed SDVAE.

2 Preliminaries

In this section, we introduce preliminaries that will be useful to understand our model.

2.1 Variational Auto-Encoder

Variational Auto-Encoders (VAEs) have emerged as one of the most popular deep generative models. One key step of VAE is to evaluate $\log p_\theta(x)$, which can be interpreted as

$$\log p_\theta(x) = D_{KL}\big(q_\phi(z|x)\,\|\,p_\theta(z|x)\big) + \mathcal{L}(\theta, \phi; x) \qquad (1)$$

where $D_{KL}(Q\|P)$ is the Kullback-Leibler divergence between two distributions $Q$ and $P$, and $\mathcal{L}(\theta, \phi; x)$ is the evidence lower bound (ELBO), defined as

$$\mathcal{L}(\theta, \phi; x) = \mathbb{E}_{q_\phi(z|x)}\big[\log p_\theta(x, z) - \log q_\phi(z|x)\big] \qquad (2)$$

The term $q_\phi(z|x)$ extracts latent features from the observed data and is generally called the encoder. By minimizing the KL divergence, we try to find a $q_\phi(z|x)$ that approximates the true posterior distribution $p_\theta(z|x)$. Because $D_{KL}\big(q_\phi(z|x)\,\|\,p_\theta(z|x)\big)$ is non-negative and $\log p_\theta(x)$ is fixed, minimizing the KL divergence is equivalent to maximizing $\mathcal{L}(\theta, \phi; x)$. We can rewrite $\mathcal{L}(\theta, \phi; x)$ as

$$\mathcal{L}(\theta, \phi; x) = \mathbb{E}_{q_\phi(z|x)}\big[\log p_\theta(x|z)\big] - D_{KL}\big(q_\phi(z|x)\,\|\,p_\theta(z)\big) \qquad (3)$$

where the first term on the RHS of Eq.(3) is the reconstruction error (RCE), and the second term on the RHS is the KL divergence between the prior and the posterior (KLD). These two terms play different roles during the approximation. We introduce them in detail in the next subsection.
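As a concrete illustration, the following is a minimal sketch (not the authors' implementation) of how the two terms of Eq.(3) are typically computed for a Gaussian-posterior, Bernoulli-likelihood VAE; the function and variable names are placeholders.

import torch
import torch.nn.functional as F

def elbo(x, x_recon, mu, logvar):
    # RCE: Bernoulli reconstruction log-likelihood (binary cross entropy), summed over pixels.
    rce = -F.binary_cross_entropy(x_recon, x, reduction="sum")
    # KLD: closed form for a diagonal Gaussian posterior q(z|x) vs. a standard normal prior p(z).
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rce - kld  # the quantity maximized in Eq.(3)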

2.2 VAE with KKT Conditions

In practice, we find that the RCE is usually the dominant error, while the KLD term is regarded as a regularization that enforces $q_\phi(z|x)$ to be close to $p_\theta(z)$ and is relatively small. If we constrain the KL divergence term to be smaller than a small constant $\epsilon$ so as to gain a tighter lower bound, the goal is transformed into maximizing the RCE [6]. The objective function then becomes a problem with an inequality constraint:

$$\max_{\theta, \phi}\ \mathbb{E}_{q_\phi(z|x)}\big[\log p_\theta(x|z)\big] \quad \text{s.t.}\ D_{KL}\big(q_\phi(z|x)\,\|\,p_\theta(z)\big) < \epsilon \qquad (4)$$

Using the KKT conditions [1], Eq.(4) can be rewritten as follows:

$$\mathcal{F}(\theta, \phi, \beta; x) = \mathbb{E}_{q_\phi(z|x)}\big[\log p_\theta(x|z)\big] - \beta\big(D_{KL}\big(q_\phi(z|x)\,\|\,p_\theta(z)\big) - \epsilon\big) \qquad (5)$$

where $\beta$ is the Lagrangian multiplier, which penalizes the deviation from the constraint $D_{KL}\big(q_\phi(z|x)\,\|\,p_\theta(z)\big) < \epsilon$. Given that $\beta \geq 0$ and $\epsilon \geq 0$, we have

$$\mathcal{F}(\theta, \phi, \beta; x) \geq \mathbb{E}_{q_\phi(z|x)}\big[\log p_\theta(x|z)\big] - \beta\, D_{KL}\big(q_\phi(z|x)\,\|\,p_\theta(z)\big) \qquad (6)$$

If $\beta = 1$, Eq.(6) reduces to the original variational auto-encoder problem proposed by Kingma [16]. However, if $0 < \beta < 1$, then $\mathcal{F}(\theta, \phi, \beta; x) \geq \mathcal{L}(\theta, \phi; x)$, which is closer to the target $\log p_\theta(x)$. This is the mathematical description of the fact that the more information the latent variable $z$ carries, the tighter the lower bound is. Through the KKT conditions, a looser constraint over the decoder is introduced. Empirical results show that VAE with the KKT conditions performs better than the original VAE. Thus, in this paper, we use VAE with the KKT conditions as our basic model.
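A minimal sketch of the resulting objective, assuming the ELBO-style terms sketched above (the constant $\beta\epsilon$ term of Eq.(5) is dropped since it does not affect the gradients):

def kkt_objective(rce, kld, beta=0.5):
    # rce: reconstruction log-likelihood; kld: KL(q(z|x) || p(z)).
    # beta = 1 recovers the standard VAE; 0 < beta < 1 gives the looser KL constraint of Eq.(6).
    return rce - beta * kld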

2.3 Semi-supervised VAE

When there is label information $y$ in the observed data, it is easy to extend Eq.(6) to include the label information as follows [14]:

$$\mathcal{L}(\theta, \phi; x, y) = \mathbb{E}_{q_\phi(z|x, y)}\big[\log p_\theta(x|z, y)\big] - \beta\, D_{KL}\big(q_\phi(z|x, y)\,\|\,p_\theta(z)\big) \qquad (7)$$

To achieve semi-supervised learning, [14] introduce a classifier $q_\phi(y|x)$ into Eq.(7), which, for data whose label is unobserved, results in

$$\mathcal{U}(\theta, \phi; x) = \sum_{y} q_\phi(y|x)\, \mathcal{L}(\theta, \phi; x, y) + \mathcal{H}\big(q_\phi(y|x)\big) \qquad (8)$$

Apart from Eq.(7) and Eq.(8), a classification loss over the label information is added to the objective function when facing labeled data. In this paper, by contrast, the discriminative information is added from scratch and an equation-constrained VAE is proposed, in order to highlight the contribution of the labeled data.
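For reference, the following is a minimal sketch of the unlabeled-data bound in Eq.(8) in the style of [14]; the tensor names and shapes are assumptions made for illustration.

import torch

def unlabeled_bound(log_q_y_given_x, elbo_per_class):
    # log_q_y_given_x: (batch, C) log-probabilities from the classifier q(y|x).
    # elbo_per_class: (batch, C) value of Eq.(7) evaluated with each candidate label y.
    q = log_q_y_given_x.exp()
    entropy = -(q * log_q_y_given_x).sum(dim=1)       # H(q(y|x))
    return (q * elbo_per_class).sum(dim=1) + entropy  # marginalize the missing label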

3 The Proposed Framework

In this section, we introduce the details of the proposed framework. Instead of using a classifier to incorporate the label information, we seek to directly use label information to regularize the latent representation so as to reduce the number of parameters.

3.1 Disentangled Representation

In order to incorporate the label information into the latent representation, we assume that the latent representation can be divided into two parts, i.e., the disentangled variable and the non-interpretable variable. The disentangled variable captures the categorical information, which can be used for the prediction task; therefore, we can use the label information to constrain the disentangled variable. The non-interpretable variable can be a vector of any dimension that captures the other, uncertain information in the data. For simplicity of notation, we use $v$ to denote the disentangled variable and $u$ to denote the non-interpretable representation. With $u$ and $v$, the encoder can be rewritten as $q_\phi(u, v|x)$. We further assume that the disentangled variable and the non-interpretable variable are conditionally independent given $x$, i.e.,

$$q_\phi(u, v|x) = q_\phi(u|x)\, q_\phi(v|x) \qquad (9)$$

This is a reasonable assumption because, given $x$, the categorical information depends only on $x$, and thus $v$, which captures the categorical information, is independent of $u$ given $x$. This also means that there is little categorical information left in $u$, which is validated in the experiment section.

Now $q_\phi(u|x)$ is the encoder for the non-interpretable representation, and $q_\phi(v|x)$ is the encoder for the disentangled representation. Based on these assumptions, Eq.(7) can be written as:

$$\mathcal{L}(\theta, \phi; x) = \mathbb{E}_{q_\phi(u|x) q_\phi(v|x)}\big[\log p_\theta(x|u, v)\big] - \beta\, D_{KL}\big(q_\phi(u|x)\,\|\,p(u)\big) - \beta\, D_{KL}\big(q_\phi(v|x)\,\|\,p(v)\big) \qquad (10)$$

where the first term represents the reconstruction error given the variables $u$ and $v$, and the two KL terms are the divergences for $q_\phi(u|x)$ and $q_\phi(v|x)$, respectively. From the above equation, we can see that the categorical information is extracted from the data, i.e., captured in the disentangled variable $v$. Now, if partial labels are given, we can directly use the label information to regularize $q_\phi(v|x)$.
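A minimal sketch of an encoder that realizes the factorization in Eq.(9) is shown below; for brevity it uses a small fully connected backbone, whereas the experiments in Section 4 use convolutional encoders, and all layer sizes are illustrative assumptions.

import torch
import torch.nn as nn

class DisentangledEncoder(nn.Module):
    def __init__(self, in_dim=784, feat_dim=256, u_dim=50, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.u_mu = nn.Linear(feat_dim, u_dim)            # mean of q(u|x)
        self.u_logvar = nn.Linear(feat_dim, u_dim)        # log-variance of q(u|x)
        self.v_logits = nn.Linear(feat_dim, num_classes)  # logits of q(v|x)

    def forward(self, x):
        h = self.backbone(x)
        # u and v are produced by separate heads, matching q(u,v|x) = q(u|x) q(v|x).
        return self.u_mu(h), self.u_logvar(h), torch.softmax(self.v_logits(h), dim=1)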

With $v$ capturing the categorical information, there are many ways to regularize $q_\phi(v|x)$. Inspired by the work of [6], we add an equation constraint on $v$ over the ELBO, where the constraint enforces the disentangled representation $v$ to be close to the label information $y$. In this work, we consider two ways to add the constraint over the ELBO, as discussed below.

3.2 SDVAE-I

The first way we consider is the cross entropy between $y$ and $q_\phi(v|x)$, i.e.,

$$\mathcal{C}(x, y) = -\sum_{c=1}^{C} y_c \log q_\phi(v = c\,|\,x) \qquad (11)$$

where $y$ is the observed label information and $q_\phi(v|x)$ is the encoder for the disentangled variable $v$. This is a popular loss function for supervised learning and does not introduce any new parameters. Therefore, we choose it as the loss function for regularizing the disentangled variable $v$. We name this method Semi-supervised Disentangled VAE (SDVAE-I). By adding this loss function to Eq.(10), the objective function of SDVAE-I is given as:

$$\mathcal{J}_{I}(\theta, \phi; x, y) = \mathcal{L}(\theta, \phi; x) - \alpha\, \mathcal{C}(x, y) \qquad (12)$$

where $\alpha$ is the weight parameter. When there is no labeled data, the equation constraint term $\mathcal{C}(x, y)$ is set to 0.
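A minimal sketch of the SDVAE-I objective in Eq.(12), assuming the ELBO-style quantities sketched earlier; `alpha` and the numerical constant are illustrative.

import torch
import torch.nn.functional as F

def sdvae_i_loss(elbo, v_probs, y=None, alpha=1.0):
    loss = -elbo                                   # maximize the ELBO of Eq.(10)
    if y is not None:                              # labeled batch: add the constraint of Eq.(11)
        loss = loss + alpha * F.nll_loss(torch.log(v_probs + 1e-8), y, reduction="sum")
    return loss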

3.3 SDVAE-II

The drawback of SDVAE-I is obvious: the training results depend heavily on the amount of labeled data, while for semi-supervised learning usually only a small amount of labeled data is available. Thus, it is hard for the disentangled variable to capture the category information. To remedy this, we take inspiration from the idea in [33]: the equation constraint can be expressed in a reinforcement learning form, in which the ELBO is regarded as the reward of the equation constraint, and the disentangled variable acts as the agent that decides the output category information, as can be seen in Eq.(14). Finally, a constant number $b$ is added to act as the bias. The constraint term is changed into the following:

$$\mathcal{C}'(x, y) = \big(\mathcal{L}(\theta, \phi; x) - b\big) \log q_\phi(v = y\,|\,x) \qquad (13)$$

The partial update for this term is as follows:

$$\nabla_{\phi}\, \mathcal{C}'(x, y) = \big(\mathcal{L}(\theta, \phi; x) - b\big)\, \nabla_{\phi} \log q_\phi(v = y\,|\,x) \qquad (14)$$

where $q_\phi(v = y\,|\,x)$ denotes the probability that the encoder assigns to the observed label $y$.

However, these terms only take effect on the labeled data. To make up for this drawback, another term, the expectation of the log-likelihood of the disentangled variable (i.e., its information entropy), is added to Eq.(12); it is calculated on both the labeled and the unlabeled data and helps to reduce the large variance of the disentangled information. The objective function in Eq.(12) is then changed into Eq.(15):

$$\mathcal{J}_{II}(\theta, \phi; x, y) = \mathcal{L}(\theta, \phi; x) + \beta_1 \big(\mathcal{L}(\theta, \phi; x) - b\big) \log q_\phi(v = y\,|\,x) + \beta_2\, \mathbb{E}_{q_\phi(v|x)}\big[\log q_\phi(v|x)\big] \qquad (15)$$

where $y$ is the label information, $\beta_1$ and $\beta_2$ are the coefficient parameters, and we name this model SDVAE-II.
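The sketch below illustrates one way to compute the extra terms of Eq.(15) on top of the ELBO, treating the ELBO as the reward of a one-step policy $q_\phi(v|x)$; the sign conventions, the baseline value, and the coefficient names are assumptions made for illustration, not the authors' exact implementation.

import torch

def sdvae_ii_extra_terms(elbo, v_probs, y=None, b=1.0, beta1=1.0, beta2=0.1):
    # elbo: per-example ELBO values of Eq.(10); v_probs: (batch, C) probabilities q(v|x).
    log_v = torch.log(v_probs + 1e-8)
    # Expected log-likelihood of v, computed on labeled and unlabeled data alike.
    expected_loglik = (v_probs * log_v).sum(dim=1).mean()
    extra = beta2 * expected_loglik
    if y is not None:                                     # labeled data only
        reward = elbo.detach() - b                        # ELBO acts as the reward, b as the bias
        picked = log_v.gather(1, y.unsqueeze(1)).squeeze(1)
        extra = extra + beta1 * (reward * picked).mean()  # reward-weighted label log-likelihood
    return extra                                          # added to the maximized objective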

3.4 With Inverse Autoregressive Flow

Because the two different latent variables are extracted from the data directly, to make the posterior inference more flexible and to enhance the ability of disentangled representation in a high-dimensional space, inverse autoregressive flow (IAF) [35] is applied to SDVAE-I and SDVAE-II. The chain is initialized with the outputs $\mu_0$ and $\sigma_0$ of the encoder; together with a random sample $\epsilon \sim \mathcal{N}(0, I)$, the initial latent variable is calculated as $z_0 = \mu_0 + \sigma_0 \odot \epsilon$. The IAF chain is updated in the same way as an LSTM, as shown in Eq.(16).

$$z_t = \sigma_t \odot z_{t-1} + (1 - \sigma_t) \odot \mu_t \qquad (16)$$

where $\mu_t$ and $\sigma_t$ are the outputs of the autoregressive neural networks whose input is the previous latent variable $z_{t-1}$, and $t = 1, \dots, T$ with $T$ the flow length.
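A minimal sketch of one IAF step in the form of Eq.(16); the gating and the log-determinant bookkeeping follow the standard IAF construction and are assumptions about details not spelled out above.

import torch

def iaf_step(z_prev, m_t, s_t):
    # m_t, s_t: outputs of the autoregressive network given z_{t-1} (and, optionally, a context h).
    gate = torch.sigmoid(s_t)                      # sigma_t in Eq.(16)
    z_t = gate * z_prev + (1.0 - gate) * m_t       # LSTM-like gated update
    log_det = torch.log(gate + 1e-8).sum(dim=1)    # contribution to log q(z_T|x)
    return z_t, log_det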

3.5 Training of SDVAE

The models can be trained end-to-end using mini-batches with the ADAM optimizer [13]. The training algorithm is summarized in Algorithm 1. In Line 1, we initialize the parameters. From Line 3 to Line 5, we sample a mini-batch and encode the input data into $u$ and $v$. From Line 6 to Line 10, we apply IAF. We then compute the objective and update the parameters from Line 11 to Line 13.

1:  Initialize the encoder and decoder parameters $\phi$, $\theta$
2:  repeat
3:      Sample a mini-batch $X$ from the data points
4:      Sample $\epsilon$ from the noise distribution
5:      Encode $X$ into $\mu$, $\sigma$, and $v$; set $u = \mu + \sigma \odot \epsilon$
6:     if IAF then
7:        for $t = 1, \dots, T$ do
8:           Update $u$ with one IAF step as in Eq.(16)
9:        end for
10:     end if
11:     Decode $u$ and $v$ and compute the objective
12:      Calculate the gradients of Eq.(15) for SDVAE-II, and Eq.(12) for SDVAE-I
13:     Update $\phi$, $\theta$ with the gradients
14:  until model convergence
Algorithm 1 Training algorithm of the proposed models.
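For concreteness, the following is a compact PyTorch-style rendering of Algorithm 1, assuming the encoder, IAF step, and loss sketches given earlier in this section; interfaces such as `model.encode`, `model.iaf_step`, `model.decode`, and `model.loss` are placeholders rather than the authors' actual code.

import torch

def train(model, loader, optimizer, use_iaf=False, flow_length=1, epochs=10):
    for _ in range(epochs):
        for x, y in loader:                          # y is None for unlabeled batches
            u_mu, u_logvar, v_probs = model.encode(x)
            eps = torch.randn_like(u_mu)             # random sample from the noise distribution
            u = u_mu + torch.exp(0.5 * u_logvar) * eps
            if use_iaf:
                for t in range(flow_length):         # IAF chain over the latent u
                    u, _ = model.iaf_step(u, t)
            x_recon = model.decode(u, v_probs)
            loss = model.loss(x, x_recon, u_mu, u_logvar, v_probs, y)  # Eq.(12) or Eq.(15)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()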

3.6 Discussion

The differences between the previous works [14, 33] and our work are discussed in this section.

Firstly, the assumptions are different. In this work, we assume that the disentangled variable and the non-interpretable variable are extracted simultaneously from both the labeled data and the unlabeled data, and we further assume that these two variables are conditionally independent. This is not the case in the previous works, which only extract a single latent variable $z$ from the data; when there is no label information, the label variable is inferred from $z$ with parameters shared with the encoder, or inferred from $x$ directly.

Then, based on the different assumptions, there are mathematical differences from the previous works. The ELBO with two independent latent-variable inferences is written as Eq.(10), which differs from Eq.(7), where only one latent variable is inferred. Furthermore, if we ignore the difference in assumptions, the objective function used by previous works on labeled data is a special case of Eq.(15).

When the label is missing, previous works apply marginal posterior inference over the label information, as shown in Eq.(8). In this paper, the inference is performed over both latent variables $u$ and $v$, as shown in Eq.(17).

(17)

4 Experimental Results

In this section, we conduct experiments to validate the effectiveness of the proposed framework. Specifically, we want to answer the following questions: (1) Is the disentangled representation able to capture the categorical information? (2) Is the non-interpretable variable helpful for the data reconstruction? (3) Is the proposed framework effective for semi-supervised learning? To answer the above questions, we conduct experiments on image and text datasets, respectively.

4.1 Experiments on Image Datasets

4.1.1 Datasets Description

For the image domain, we choose two widely used benchmark datasets to evaluate the effectiveness of SDVAE, i.e., MNIST [18] and SVHN [21]. MNIST has 55,000 samples in the training set and 10,000 samples in the test set. SVHN has 73,257 samples in the training set and 26,032 samples in the test set. Both datasets contain 10 categories.

Models 600 1000 3000
NN ([14]) 11.44 10.07 6.04
CNN 7.68 6.45 3.35
TSVM 6.16 5.38 3.45
CAE 6.3 4.77 3.22
MTC 5.13 3.64 2.57
AtlasRBF - 3.68(0.12) -
Semi-VAE(M1)+TSVM 5.72(0.05) 4.24(0.07) 3.49(0.04)
Semi-VAE(M2) 4.94(0.13) 3.60(0.56) 3.92(0.63)
Semi-VAE(M1+M2) 2.59(0.05) 2.40(0.02) 2.18(0.04)
SDVAE-I 2.75(0.11) 2.42(0.08) 1.70(0.09)
SDVAE-I&IAF 2.74(0.06) 2.24(0.08) 1.33(0.09)
SDVAE-II 2.49(0.10) 1.96(0.09) 1.58(0.09)
SDVAE-II&IAF 1.97(0.14) 1.29(0.11) 1.00(0.05)
Table 1: Classification errors (%) on MNIST with 600, 1000, and 3000 labeled samples (standard deviations in parentheses).

4.1.2 Model Structure

For the image data, the encoder is a deep network composed of two convolutional layers followed by two fully connected layers. The convolutional layers extract features from the images, while the fully connected layers convert the features into the non-interpretable variable and the disentangled variable. The decoder is a network composed of two fully connected layers that map the latent features back to images. Dropout [26] is applied to both the encoder and decoder networks.
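The sketch below shows one possible realization of these networks; all layer sizes, kernel sizes, and dropout rates are illustrative assumptions rather than the exact configuration used in the paper.

import torch
import torch.nn as nn

class ConvEncoder(nn.Module):
    """Two conv layers followed by fully connected layers (sizes illustrative)."""
    def __init__(self, u_dim=50, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 256), nn.ReLU(), nn.Dropout(0.5),
        )
        self.u_head = nn.Linear(256, 2 * u_dim)        # mean and log-variance of u
        self.v_head = nn.Linear(256, num_classes)      # logits of the disentangled variable v

    def forward(self, x):
        h = self.features(x)
        u_mu, u_logvar = self.u_head(h).chunk(2, dim=1)
        return u_mu, u_logvar, torch.softmax(self.v_head(h), dim=1)

class FCDecoder(nn.Module):
    """Two fully connected layers mapping (u, v) back to a flattened image."""
    def __init__(self, u_dim=50, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(u_dim + num_classes, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, 28 * 28), nn.Sigmoid(),
        )

    def forward(self, u, v):
        return self.net(torch.cat([u, v], dim=1))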

4.1.3 Disentangled Representation

The first experiment explores how the disentangled variable $v$ and the non-interpretable variable $u$ perform in image reconstruction. The experiment is conducted on the MNIST dataset. From the training data, we randomly select 3000 samples as labeled data and treat the remaining samples as unlabeled. The dimension of the disentangled variable $v$ is 10, the same as the number of categories, and the dimension of $u$ is 50.

We first train the model to learn the parameters. Then we use the trained model to learn latent representations of the test data. After learning the representations, we mask $u$ and $v$ in turn to see how they affect the reconstruction of the input image. Two sample results are shown in Fig.1. We also use t-SNE [28] to visualize the disentangled variable $v$ of the test data. The results from the four models (SDVAE-I, SDVAE-I&IAF, SDVAE-II, and SDVAE-II&IAF) are shown in Fig.2.

Figure 1: The first rows of the left and right figures are the reconstructed images with one of the two latent variables masked, respectively; the images in the second row of both figures are the original test images.
Figure 2: The t-SNE distribution of the latent variable from the proposed models; different categories are shown in different colors with numbers.

From Fig.1 and Fig.2, we can see that the disentangled variable mainly captures the categorical information and has little influence on the reconstruction task. More specifically, from Fig.2, we can see that images of the same class are clustered together, implying that the disentangled representation captures the categorical information. In addition, we find that SDVAE-I gives the worst visualization, as its clusters intersect, while SDVAE-I&IAF and SDVAE-II&IAF give better visualizations, which suggests that they are better at capturing the categorical information.

From Fig.1, we can also see that when the disentangled variable is masked, the non-interpretable variable $u$ still reconstructs the input image well, indicating that $u$ is appropriate for reconstruction. To explore how $u$ takes effect in image reconstruction, we vary a certain dimension of $u$ from -2 to 2 for a specific labeled image; selected results are shown in Fig.3.

Figure 3: The reconstructed images obtained by varying a certain dimension of $u$.

From these images, we can see that different dimensions of $u$ control different properties of the reconstructed image, such as the italic style, boldness, transformation, and writing style; these can be seen in Fig.3 from left to right.
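The latent traversal described above can be sketched as follows, assuming a decoder like the one outlined earlier; the dimension index, the range, and the number of steps are the ones reported above or illustrative defaults.

import torch

def traverse(decoder, u, v, dim, steps=9):
    # Sweep one dimension of the non-interpretable variable u from -2 to 2,
    # keeping v and the remaining dimensions of u fixed, and decode each setting.
    images = []
    for val in torch.linspace(-2.0, 2.0, steps):
        u_mod = u.clone()
        u_mod[:, dim] = val
        images.append(decoder(u_mod, v))
    return images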

4.1.4 Semi-Supervised Learning

Furthermore, we conduct experiments to test the proposed models in semi-supervised learning on MNIST. We randomly select a number of points from the training set as labeled data, varying this number among 600, 1000, and 3000, and use the rest of the training data as unlabeled data. We compare with the state-of-the-art supervised and semi-supervised classification algorithms used in [14]. The experiments are conducted 10 times, and the average errors with standard deviations are shown in Table 1. Note that the performance figures of the compared methods are also taken from [14]. From this table, we can see that the proposed model SDVAE-II&IAF performs best in classification and makes the fewest errors with only a small portion of labeled data. Although SDVAE-I does not perform as well as the other proposed models, it still achieves results comparable to the state of the art.

To further validate the observation, we also conduct semi-supervised learning on SVHN, another popularly used dataset. SVHN has 73,257 training samples and 26,032 test samples. Among the training data, we randomly select 1000 samples as labeled data and use the rest as unlabeled data. The results are shown in Table 2. Similarly, we observe that SDVAE-II&IAF gives the best performance.

Method Test error rate
KNN ([14]) 77.93% (0.08)
TSVM 66.55% (0.10)
Semi-VAE(M1)+KNN 65.63% (0.15)
Semi-VAE(M1)+TSVM 54.33% (0.11)
Semi-VAE(M1+M2) 36.02% (0.10)
SDVAE-I 47.32% (0.13)
SDVAE-I&IAF 46.92% (0.12)
SDVAE-II 44.16% (0.14)
SDVAE-II&IAF 34.25% (0.13)
Table 2: Test error rates on the SVHN dataset with 1000 labeled samples (standard deviations in parentheses).

4.2 Experiments on Text Dataset

4.2.1 Dataset Description

To test the model on text data, we use the IMDB dataset [20], which contains 25,000 training samples and 25,000 test samples in two categories.

4.2.2 Model Structure

For the text data, the encoder is also a convolutional neural network, but different from the image case, two convolutional networks following [12] are parallelized: one extracts features at the word level, and the other extracts features at the character level. As for the decoder, we apply the conditional LSTM [32], which is given as follows:

(18)

The conditional LSTM is the same as the vanilla LSTM except that the current input is replaced by the concatenation of the word representation and the latent variables $u$ and $v$. Dropout [26] and batch normalization [10] are both utilized in the encoder and decoder networks.
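A minimal sketch of such a conditional LSTM decoder is given below, assuming the convention just described (the latent code is concatenated to the word embedding at every step); all sizes and names are illustrative.

import torch
import torch.nn as nn

class ConditionalLSTMDecoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, latent_dim=60, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.cell = nn.LSTMCell(emb_dim + latent_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, latent):
        # tokens: (batch, T) word ids; latent: (batch, latent_dim), the concatenation of u and v.
        batch = tokens.size(0)
        h = torch.zeros(batch, self.cell.hidden_size, device=tokens.device)
        c = torch.zeros_like(h)
        logits = []
        for t in range(tokens.size(1)):
            step_in = torch.cat([self.embed(tokens[:, t]), latent], dim=1)
            h, c = self.cell(step_in, (h, c))          # vanilla LSTM cell on the conditioned input
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)              # (batch, T, vocab_size)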

4.2.3 Disentangled Representation

We randomly select 20k samples from the training set as labeled data, and the rest are treated as unlabeled during training. Similarly, we use t-SNE to visualize the disentangled variable $v$ and the non-interpretable variable $u$ learned by the proposed model on the test data and the unlabeled data. The results are shown in Fig.4.

(a) Unlabeled Data
(b) Test Data
Figure 4: The left figure is the t-SNE distribution of the non-interpretable variable $u$; the right figure is the t-SNE distribution of the disentangled variable $v$. Different categories are shown in different colors with numbers.

From Fig.4, we can see that the disentangled representation clearly separates the positive and negative samples, while the non-interpretable representation cannot, i.e., its data points from the two classes are interleaved with each other. This suggests that the disentangled representation captures the categorical information well and that there is little categorical information in the non-interpretable variable.

4.2.4 Semi-Supervised Learning

We further conduct semi-supervised classification on the text dataset, using the representations learned in the previous experiment and fine-tuning the model. Similarly, we compare with state-of-the-art semi-supervised learning algorithms. The average test error rates are reported in Table 3. From the results, we can see that: (i) SDVAE-II&IAF outperforms the compared methods, which implies the effectiveness of the proposed framework for semi-supervised learning; and (ii) as we add reinforcement learning and IAF, the performance increases, which suggests that the two components both contribute to the model.

Method Test error rate
LSTM ([4]) 13.50%
Full+Unlabeled+BoW ([20]) 11.11%
WRRBM+BoW ([20]) 10.77%
NBSVM-bi ([30]) 8.78%
seq2-bown-CNN ([11]) 7.67%
Paragraph Vectors ([17]) 7.42%
LM-LSTM ([4]) 7.64%
SA-LSTM ([4]) 7.24%
SSVAE-II&LM ([33]) 7.23%
SDVAE-I 12.56%
SDVAE-I&IAF 11.60%
SDVAE-II 7.37%
SDVAE-II&IAF 7.18%
Table 3: Test error rates on the IMDB dataset.

4.3 Parameters Analysis

Several important parameters need to be tuned for the model, i.e., $\beta$, $\beta_1$, $\beta_2$, and the length of the IAF chain. In this section, we conduct experiments to analyze the sensitivity of the model to these parameters.

4.3.1 Effects of $\beta$ and the IAF Length

We first evaluate $\beta$ and the length of the IAF chain, which are proposed in the works of $\beta$-VAE [6] and IAF [15], respectively. These experiments are conducted on the MNIST training dataset.

The objective function used for finding a proper $\beta$ is the one in Eq.(6). Results with different $\beta$ values are shown in Fig.5(a). From the results, we can see that it is better for $\beta$ to have a small value, which not only leads to richer information in the latent variable but also yields a better reconstruction error. As described before, a large value of the KL divergence can also be the cause of overfitting or underfitting for the model. With a small $\beta$, however, the reconstruction error is low, which is a sign of good performance.

The model structure of the IAF chain is built according to Eq.(16), and the results with different lengths are shown in Fig.5(b). From the figure, we can see that it is not good to set the chain too long: with a long IAF chain, the RCE and the KLD are both poor, and the latent variable is very unstable. On the contrary, there is a stable increase in the KL divergence and a stable decrease in the reconstruction error when the length of the IAF chain is set to a small value. This means that, under a good reconstruction, the latent variable captures more useful information. This is also validated by the results of SDVAE-I&IAF and SDVAE-II&IAF. Thus, in the experiments involving IAF, the chain length is set to this small value by default.

(a) Validating $\beta$
(b) Validating the length of IAF
Figure 5: The left y-axis in each figure is the reconstruction error (BCE), corresponding to the solid lines, and the right y-axis is the KL divergence (KLD), corresponding to the dashed lines.

4.3.2 Effects of $\beta_1$ and $\beta_2$

To decide the parameters $\beta_1$ and $\beta_2$ in SDVAE-II, we perform a grid search on both the text data and the image data. For the image data, the experiment is conducted on the SVHN dataset with 1000 labeled samples, and the results are shown in Fig.6(a). For the text data, the experiment is conducted on the IMDB data with 20,000 labeled samples, and the results are shown in Fig.6(b).

(a) The Image data
(b) The Text data
Figure 6: The grid-search results for finding proper $\beta_1$ and $\beta_2$.

From Fig.6(a), we can see that an acceptable range for one coefficient on the image data is [0.1, 100], and [0.01, 10] for the other; the best result is achieved within these ranges.

For the text data, the results in Fig.6(b) show that the accuracy is not sensitive to one of the two coefficients; however, when the other is small, the results are more precise. In conclusion, it is better to set that coefficient to 0.1, while the other can be set freely.

5 Related Works

Semi-supervised VAE Semi-supervised learning is attracting increasing attention, and many works have been proposed [33, 8, 22, 31, 23, 14, 27, 5]. These works can be divided into discriminative models [29, 4], generative models [33, 8, 22], graph-based models [27], and combinations of these [5]. Because of the effectiveness of deep generative models in capturing the data distribution, semi-supervised models based on deep generative models such as the generative adversarial network [25] and the variational auto-encoder (VAE) [14] have become popular. Semi-VAE [14] incorporates the learned latent variable into a classifier and improves the performance greatly. SSVAE [33] extends Semi-VAE to sequence data and also demonstrates its effectiveness in semi-supervised learning on text data. The aforementioned semi-supervised VAEs all use a parametric classifier, which increases the burden of learning more parameters given the limited labeled data. The proposed framework incorporates the label information directly into the disentangled representation and thus avoids the parametric classifier.

Variants of VAE Because of the great potential of VAE in image and text mining, various models based on VAE have been proposed to further improve its performance [16, 6, 7, 15]. For example, [6] apply the KKT conditions to the VAE, which gives a tighter lower bound. Similarly, [3] introduce importance weighting to VAE, which also aims at a tighter bound. [24] consider Stein-based sampling to minimize the KL divergence. [7] rewrite the evidence lower bound objective by decomposition and give a clear explanation of each term. To make the posterior inference more flexible, IAF is introduced in [15], which improves VAE considerably.

6 Conclusions

In this work, we propose models that extract a disentangled variable and a non-interpretable variable from the data at the same time. The disentangled variable is designed to capture the category information and thus removes the need for a classifier in semi-supervised learning. The non-interpretable variable is designed to reconstruct the data; experiments show that it can even reflect certain writing features, such as the italic style, boldness, transformation, and writing style, in the handwritten digit data during reconstruction. These two variables cooperate well, and each performs its own function in SDVAE. IAF effectively improves the model on the basis of SDVAE-I and SDVAE-II. In particular, SDVAE-II&IAF achieves state-of-the-art results on both image and text data in the semi-supervised learning tasks.

References