1 Introduction
The abundant data generated online every day has greatly advanced the machine learning, data mining and computer vision communities. However, manually labeling large datasets is time- and labor-consuming, and sometimes requires domain knowledge. As a result, the majority of data has limited labels. Therefore, semi-supervised learning, which utilizes both labeled and unlabeled data for model training, is attracting increasing attention
[4, 22, 14, 33]. Existing semi-supervised models can be generally categorized into discriminative models, generative models, graph-based models, and combinations of these [29, 5, 9, 23, 31]. Among the various semi-supervised models proposed, semi-supervised generative models based on the variational autoencoder (VAE) have shown strong performance in image classification [14, 19] and text classification [33]
. The effectiveness of VAE for semi-supervised learning comes from its efficiency in posterior distribution estimation and its powerful ability to extract features from text data
[2] and image data [14, 19]. To adapt VAE for semi-supervised learning, semi-supervised VAEs are typically composed of three main components: an encoder network, a decoder network, and a classifier. In applications, the encoder, decoder and classifier can be implemented using various models, e.g., MLP or CNN networks [19, 34]. Though the classifier plays a vital role in achieving the semi-supervised goal, it introduces extra parameters of its own to learn. With limited labeled data, introducing more parameters to VAE for semi-supervised learning may not be an optimal choice, because a model with a large number of parameters may simply memorize the limited labeled data, i.e., overfit.

Therefore, in this paper, we investigate whether we can directly incorporate the limited label information into VAE without introducing a classifier, so as to achieve the goal of semi-supervised learning while reducing the number of parameters to be learned. In particular, we investigate the following two challenges: (1) Without introducing a classifier, how do we incorporate the label information into VAE for semi-supervised learning? and (2) How can we effectively use the label information for representation learning in VAE? In an attempt to solve these two challenges, we propose a novel semi-supervised learning model named Semi-supervised Disentangled Variational AutoEncoder (SDVAE). SDVAE adopts the VAE with KKT conditions, as it has better representation learning ability than the plain VAE. Unlike existing semi-supervised VAEs that utilize classifiers, SDVAE encodes the input data into a disentangled representation and a non-interpretable representation, and the category information is directly utilized to regularize the disentangled representation as an equation constraint. As the labeled data is limited, the label information may not affect the model much.
To remedy this, we further cast the equation constraint in a reinforcement learning format, which helps the objective gain the category information heuristically. Inverse autoregressive flow (IAF) is also applied to improve the latent variable learning. The proposed framework is flexible in that it can deal with both image and text data by choosing corresponding encoder and decoder networks. The main contributions of the paper are:

We propose a novel semi-supervised framework which directly exploits the label information to regularize the disentangled representation with reinforcement learning;

We extract the disentangled variable for classification and the non-interpretable variable for reconstruction from the data directly; and

We conduct extensive experiments on image and text datasets to demonstrate the effectiveness of the proposed SDVAE.
2 Preliminaries
In this section, we introduce preliminaries that will be useful to understand our model.
2.1 Variational AutoEncoder
Variational AutoEncoders (VAEs) have emerged as one of the most popular deep generative models. One key step of VAE is to evaluate log p_θ(x), which can be interpreted as

log p_θ(x) = D_KL(q_φ(z|x) || p_θ(z|x)) + L(θ, φ; x)   (1)

where D_KL(Q || P) is the Kullback-Leibler divergence between two distributions Q and P, and L(θ, φ; x) is the evidence lower bound (ELBO), defined as

L(θ, φ; x) = E_{q_φ(z|x)}[log p_θ(x, z) − log q_φ(z|x)]   (2)

The term q_φ(z|x) extracts latent features from the observed data and is generally called the encoder. By minimizing the KL divergence, we try to find a q_φ(z|x) that approximates the true posterior distribution p_θ(z|x). Because D_KL(q_φ(z|x) || p_θ(z|x)) is non-negative and log p_θ(x) is fixed, minimizing the KL divergence is equivalent to maximizing L(θ, φ; x). We can rewrite L(θ, φ; x) as

L(θ, φ; x) = E_{q_φ(z|x)}[log p_θ(x|z)] − D_KL(q_φ(z|x) || p_θ(z))   (3)

where the first term on the RHS of Eq.(3) is the reconstruction error (RCE), and the second term is the KL divergence between the prior and the posterior (KLD). These two terms play different roles during the approximation, which we detail in the next section.
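To make the two terms of Eq.(3) concrete, the following minimal numpy sketch (not the paper's implementation; the fake image and decoder output are random placeholders) computes a one-sample ELBO estimate for a diagonal-Gaussian posterior q(z|x) = N(mu, exp(log_var)) with prior N(0, I) and a Bernoulli decoder:

```python
import numpy as np

def kl_diag_gaussian(mu, log_var):
    """Closed-form D_KL( N(mu, diag(exp(log_var))) || N(0, I) ), the KLD term."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def bernoulli_log_likelihood(x, x_recon):
    """One-sample Monte Carlo estimate of E_q[log p(x|z)], the RCE term."""
    eps = 1e-7
    return np.sum(x * np.log(x_recon + eps) + (1 - x) * np.log(1 - x_recon + eps))

rng = np.random.default_rng(0)
x = (rng.random(784) > 0.5).astype(float)             # fake binarized image
mu, log_var = rng.normal(size=20), np.zeros(20)       # encoder outputs for q(z|x)
z = mu + np.exp(0.5 * log_var) * rng.normal(size=20)  # reparameterization trick
x_recon = 1 / (1 + np.exp(-rng.normal(size=784)))     # stand-in decoder output p(x|z)

elbo = bernoulli_log_likelihood(x, x_recon) - kl_diag_gaussian(mu, log_var)
```

In a trained model, `x_recon` would of course come from decoding `z`; the point here is only the split of the ELBO into RCE minus KLD.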
2.2 VAE with KKT Conditions
In practice, we find that the RCE is usually the dominant error term, while the KLD term acts as a regularizer that enforces q_φ(z|x) to be close to p_θ(z) and is relatively small. If we constrain the KL divergence term to a small value δ to gain a tighter lower bound, the goal is transformed into maximizing the RCE [6]. The objective function then comes with an inequality constraint:

max_{θ, φ} E_{q_φ(z|x)}[log p_θ(x|z)]   s.t.   D_KL(q_φ(z|x) || p_θ(z)) < δ   (4)

Using the KKT conditions [1], Eq.(4) can be rewritten as follows:

F(θ, φ, λ; x) = E_{q_φ(z|x)}[log p_θ(x|z)] − λ (D_KL(q_φ(z|x) || p_θ(z)) − δ)   (5)

where λ is the Lagrangian multiplier, which penalizes the deviation of the constraint D_KL(q_φ(z|x) || p_θ(z)) < δ. Given that λ ≥ 0 and δ ≥ 0, we have

F(θ, φ, λ; x) ≥ E_{q_φ(z|x)}[log p_θ(x|z)] − λ D_KL(q_φ(z|x) || p_θ(z))   (6)

If λ = 1, Eq.(6) reduces to the original variational autoencoder problem proposed by Kingma [16]. However, if 0 ≤ λ < 1, then F(θ, φ, λ; x) ≥ L(θ, φ; x), which is closer to the target log p_θ(x). This is just the mathematical description of the fact that the more information the latent variable z carries, the tighter the lower bound is. Through the KKT conditions, a loose constraint over the decoder is introduced. Empirical results show that VAE with KKT conditions performs better than the original VAE; thus, in this paper, we use VAE with KKT conditions as our basic model.
2.3 Semisupervised VAE
When there is label information y in the observed data, it is easy to extend Eq.(6) to include the label information as follows [14]:

L(θ, φ; x, y) = E_{q_φ(z|x,y)}[log p_θ(x|y, z) + log p_θ(y)] − λ D_KL(q_φ(z|x, y) || p_θ(z))   (7)

To achieve semi-supervised learning, [14] introduce a classifier q_φ(y|x) to Eq.(7), which for the unlabeled data results in

U(x) = Σ_y q_φ(y|x) L(θ, φ; x, y) + H(q_φ(y|x))   (8)

In addition to Eq.(7) and Eq.(8), a classification loss over the label information is added to the objective function when facing labeled data. In this paper, by contrast, the discriminative information is incorporated from scratch and an equation-constrained VAE is proposed, in order to highlight the contribution of the labeled data.
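The classifier-based treatment of an unlabeled example in [14] marginalizes the per-label bound over all classes and adds the classifier's entropy. The numpy sketch below illustrates only that marginalization; the per-label ELBO values are arbitrary placeholders, not outputs of a trained model:

```python
import numpy as np

def unlabeled_bound(class_probs, elbo_per_label):
    """U(x) = sum_y q(y|x) * L(x, y) + H(q(y|x))."""
    entropy = -np.sum(class_probs * np.log(class_probs))
    return np.sum(class_probs * elbo_per_label) + entropy

q_y = np.array([0.7, 0.2, 0.1])              # classifier output q(y|x)
elbo_y = np.array([-90.0, -95.0, -110.0])    # placeholder L(x, y) for each class
u = unlabeled_bound(q_y, elbo_y)
```

Because the entropy term is non-negative, the unlabeled bound is never below the expected per-label bound; this is the extra cost the classifier-based approach pays in parameters and computation that SDVAE avoids.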
3 The Proposed Framework
In this section, we introduce the details of the proposed framework. Instead of using a classifier to incorporate the label information, we seek to directly use label information to regularize the latent representation so as to reduce the number of parameters.
3.1 Disentangled Representation
In order to incorporate the label information into the latent representation, we assume that the latent representation can be divided into two parts, i.e., the disentangled variable and the non-interpretable variable. The disentangled variable captures the categorical information and can be used for the prediction task; therefore, we can use the label information to constrain it. The non-interpretable variable can be a vector of any dimension that captures the remaining information in the data. For simplicity of notation, we use v to denote the disentangled variable and u to denote the non-interpretable representation. With u and v, the encoder can be rewritten as q_φ(u, v|x). We further assume that the disentangled variable and the non-interpretable variable are conditionally independent given x, i.e.,

q_φ(u, v|x) = q_φ(u|x) q_φ(v|x)   (9)

This is a reasonable assumption: given x, the categorical information is only dependent on x, and thus v, which captures the categorical information, is independent of u given x. This also means there is little categorical information in u, which is validated in the experiment section.

Now q_φ(u|x) is the encoder for the non-interpretable representation, and q_φ(v|x) is the encoder for the disentangled representation. Based on these assumptions, Eq.(7) is written as:

L(θ, φ; x) = E_{q_φ(u|x) q_φ(v|x)}[log p_θ(x|u, v)] − λ D_KL(q_φ(u|x) || p_θ(u)) − λ D_KL(q_φ(v|x) || p_θ(v))   (10)

where the first term represents the reconstruction error given the variables u and v, and the two KL terms regularize q_φ(u|x) and q_φ(v|x) towards their priors p_θ(u) and p_θ(v), respectively. From the above equation, we can see that the categorical information extracted from the data is captured in the disentangled variable v. Now, if partial labels are given, we can directly use the label information to regularize q_φ(v|x).
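A minimal sketch of this factorized encoder (calling the disentangled variable v and the non-interpretable variable u): one shared feature extractor with two heads, a softmax head for v and a Gaussian head for u. The linear maps and weights below are random stand-ins for the paper's MLP/CNN encoders:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def encode(x, W_feat, W_v, W_mu, W_logvar):
    """q(u, v | x) = q(u | x) q(v | x): two independent heads on shared features."""
    h = np.tanh(W_feat @ x)                # shared feature extractor
    v = softmax(W_v @ h)                   # disentangled variable: class probabilities
    mu, log_var = W_mu @ h, W_logvar @ h   # Gaussian parameters of q(u | x)
    u = mu + np.exp(0.5 * log_var) * rng.normal(size=mu.shape)  # reparameterized sample
    return u, v

x = rng.normal(size=784)                   # a flattened input
u, v = encode(x,
              0.01 * rng.normal(size=(64, 784)),  # random placeholder weights
              rng.normal(size=(10, 64)),
              rng.normal(size=(50, 64)),
              0.01 * rng.normal(size=(50, 64)))
```

The two heads share features but produce independent distributions, matching the conditional-independence assumption in Eq.(9).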
With v capturing the categorical information, there are many ways to regularize q_φ(v|x). Inspired by the work of [6], we add an equation constraint on v over the ELBO, where the constraint enforces the disentangled representation v to be close to the label information y. In this work, we consider two ways to add this constraint, as discussed below.
3.2 SDVAE-I
The first way we consider is the cross entropy between q_φ(v|x) and y, i.e.,

C(x, y) = − Σ_i y_i log q_φ(v|x)_i   (11)

where y is the observed (one-hot) label information and q_φ(v|x) is the encoder for the disentangled variable v. This is a popular loss function for supervised learning, and it does not introduce any new parameters; therefore, we choose it as the loss function for regularizing the disentangled variable. We name this method Semi-supervised Disentangled VAE I (SDVAE-I). Adding this loss to Eq.(10), the objective function of SDVAE-I is:

J = L(θ, φ; x) − α C(x, y)   (12)

where α is the weight parameter. When there is no labeled data, the equation condition C(x, y) is set to 0.
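The constraint in Eq.(11)-(12) can be sketched as below: the cross-entropy between the one-hot label and the encoder's class-probability output is weighted by alpha for labeled examples and contributes zero for unlabeled ones. The function name and values are illustrative only:

```python
import numpy as np

def label_constraint(v_probs, y_onehot, alpha=1.0):
    """alpha * cross-entropy(y, q(v|x)); returns 0.0 when no label is given."""
    if y_onehot is None:                   # unlabeled example: constraint vanishes
        return 0.0
    return -alpha * float(np.sum(y_onehot * np.log(v_probs + 1e-12)))

v = np.array([0.1, 0.8, 0.1])              # q(v|x) from the encoder
y = np.array([0.0, 1.0, 0.0])              # observed one-hot label
penalty = label_constraint(v, y, alpha=2.0)  # subtracted from the ELBO in Eq.(12)
```

Note that no new parameters are introduced: the penalty acts directly on the encoder output that already exists.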
3.3 SDVAE-II
The drawback of SDVAE-I is obvious: its training results depend heavily on the amount of labeled data, while for semi-supervised learning only a small set of labeled data is usually available. It is therefore hard for the disentangled variable to capture the categorical information. To remedy this, we take inspiration from [33] and express the equation constraint in a reinforcement learning format, in which the ELBO can be seen as the reward of the equation constraint and the disentangled variable acts as the agent that decides the output category, as shown in Eq.(14). Finally, a constant number b is added to act as a bias, and the parameter α is changed into α' as follows:

α' = α (L(θ, φ; x) + b)   (13)

The partial derivative used to update this part is:

∂J/∂φ = α' ∂ log q_φ(v = y|x)/∂φ   (14)

where y denotes the observed label.

However, these terms only take effect on the labeled data. To make up for this drawback, another term, the expectation of the log-likelihood of the disentangled variable, is added to Eq.(12) as an information entropy term; it is calculated on both the labeled and the unlabeled data and helps reduce the large variance of the disentangled information. The objective function in Eq.(12) is then changed into Eq.(15):

J = L(θ, φ; x) − α' C(x, y) + γ E_{q_φ(v|x)}[log q_φ(v|x)]   (15)

where y is the label information, and α' and γ are the coefficient parameters. We name this model SDVAE-II.
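Under the reinforcement-learning reading described above, the ELBO (shifted by a constant bias) acts as a reward that scales the gradient of log q(v = y|x), in the style of REINFORCE. The sketch below is an illustrative reconstruction under that reading, not the authors' exact code; for softmax logits, the gradient of log q(y) is onehot(y) minus the probability vector:

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def reinforce_style_grad(logits, y_index, elbo, alpha=1.0, bias=0.0):
    """Gradient of alpha' * log q(v = y | x) w.r.t. the softmax logits,
    with the reward-scaled weight alpha' = alpha * (elbo + bias).
    For softmax logits, d log q(y) / d logits = onehot(y) - q."""
    q = softmax(logits)
    alpha_prime = alpha * (elbo + bias)
    grad = -q
    grad[y_index] += 1.0
    return alpha_prime * grad

logits = np.array([0.5, 2.0, -1.0])        # encoder logits for q(v | x)
g = reinforce_style_grad(logits, y_index=1, elbo=-80.0, alpha=0.5, bias=100.0)
```

When the reward is positive, the update pushes probability mass toward the labeled class; a well-chosen bias keeps the reward's sign and scale reasonable.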
3.4 With Inverse Autoregressive Flow
Because the two different latent variables are extracted from the data directly, to make the posterior inference more flexible and to enhance the disentangled representation ability in high-dimensional space, inverse autoregressive flow (IAF) [15] is applied to SDVAE-I and SDVAE-II. The chain is initialized with the outputs μ and σ of the encoder. Together with a random sample ε ∼ N(0, I), the latent variable is calculated as z_0 = μ + σ ⊙ ε. The IAF chain is updated in the same way as the gates in an LSTM, as shown in Eq.(16):

[m_t, s_t] = AutoregressiveNN_t(z_{t−1}),   z_t = sigmoid(s_t) ⊙ z_{t−1} + (1 − sigmoid(s_t)) ⊙ m_t   (16)

where m_t and s_t are the outputs of the autoregressive neural networks, whose input is the previous latent variable z_{t−1}, and t = 1, …, T with T the flow length.

3.5 Training of SDVAE
The models can be trained end-to-end using mini-batch gradient descent with the ADAM optimizer [13]. The training algorithm is summarized in Algorithm 1. In Line 1, we initialize the parameters. From Line 3 to Line 5, we sample a mini-batch and encode the input data into the latent representations. From Line 6 to Line 10, we apply IAF. We then update the parameters from Line 11 to Line 13.
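The IAF portion of the procedure can be sketched in numpy as follows (random stand-in weights; the name `iaf_step` is ours, not the paper's). The strictly lower-triangular masks keep each map autoregressive, and the sigmoid gate mirrors an LSTM-style update:

```python
import numpy as np

rng = np.random.default_rng(0)

def iaf_step(z, W_m, W_s):
    """One IAF step: masked maps make m and s depend only on earlier dimensions."""
    mask = np.tril(np.ones((z.size, z.size)), k=-1)  # strictly lower-triangular
    m = (W_m * mask) @ z
    s = (W_s * mask) @ z
    gate = 1.0 / (1.0 + np.exp(-s))                  # sigmoid gate
    return gate * z + (1.0 - gate) * m

d, T = 4, 3                                          # latent dimension, flow length
mu, log_sigma = rng.normal(size=d), 0.1 * rng.normal(size=d)
eps = rng.normal(size=d)
z = mu + np.exp(log_sigma) * eps                     # z_0 from the encoder outputs
for t in range(T):                                   # run the IAF chain
    z = iaf_step(z, rng.normal(size=(d, d)), rng.normal(size=(d, d)))
```

In the full model, each step's weights would be trained autoregressive networks rather than random matrices, and the flow's log-determinant would enter the KL term.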
3.6 Discussion
First, the assumptions are different. In this work, we assume that the disentangled variable and the non-interpretable variable are extracted simultaneously from both the labeled and the unlabeled data, and we further assume that these two variables are independent given the input. Previous works instead extract a single latent variable from the data; when there is no label information, the label variable is inferred from the latent variable with shared parameters, or inferred from the data directly.

Second, based on these different assumptions, our work differs from previous works mathematically. The ELBO with two independent latent variable inferences is written as Eq.(10), which is different from Eq.(7), which has only one latent variable inference. Furthermore, if we ignore the difference in assumptions, the objective functions of previous works on labeled data are a special case of Eq.(15) with particular coefficient settings.
4 Experimental Results
In this section, we conduct experiments to validate the effectiveness of the proposed framework. Specifically, we want to answer the following questions: (1) Is the disentangled representation able to capture the categorical information? (2) Is the noninterpretable variable helpful for the data reconstruction? (3) Is the proposed framework effective for semisupervised learning? To answer the above questions, we conduct experiments on image and text datasets, respectively.
4.1 Experiments on Image Datasets
4.1.1 Datasets Description
For the image domain, we choose two widely used benchmark datasets to evaluate the effectiveness of SDVAE, i.e., MNIST [18] and SVHN [21]. MNIST has 55,000 samples in the training set and 10,000 samples in the test set. SVHN has 73,257 samples in the training set and 26,032 samples in the test set. Both datasets contain 10 categories.
Table 1: Test error rates (%) on MNIST with 600, 1000, and 3000 labeled samples (standard deviations in parentheses).

Models             | 600        | 1000       | 3000
NN [14]            | 11.44      | 10.07      | 6.04
CNN                | 7.68       | 6.45       | 3.35
TSVM               | 6.16       | 5.38       | 3.45
CAE                | 6.3        | 4.77       | 3.22
MTC                | 5.13       | 3.64       | 2.57
AtlasRBF           | -          | 3.68(0.12) | -
SemiVAE(M1)+TSVM   | 5.72(0.05) | 4.24(0.07) | 3.49(0.04)
SemiVAE(M2)        | 4.94(0.13) | 3.60(0.56) | 3.92(0.63)
SemiVAE(M1+M2)     | 2.59(0.05) | 2.40(0.02) | 2.18(0.04)
SDVAE-I            | 2.75(0.11) | 2.42(0.08) | 1.70(0.09)
SDVAE-I&IAF        | 2.74(0.06) | 2.24(0.08) | 1.33(0.09)
SDVAE-II           | 2.49(0.10) | 1.96(0.09) | 1.58(0.09)
SDVAE-II&IAF       | 1.97(0.14) | 1.29(0.11) | 1.00(0.05)
4.1.2 Model Structure
For the image data, the encoder is a deep network composed of two convolutional layers followed by two fully connected layers. The convolutional layers extract features from the images, while the fully connected layers convert the features into the non-interpretable variable and the disentangled variable. The decoder is a network composed of two fully connected layers that map the latent features back to images. Dropout [26] is applied to both the encoder and decoder networks.
4.1.3 Disentangled Representation
The first experiment explores how the disentangled variable v and the non-interpretable variable u perform in image reconstruction. The experiment is conducted on the MNIST dataset. From the training data, we randomly select 3000 samples as labeled data, and the remainder is unlabeled. The dimension of the disentangled variable v is 10, the same as the number of categories, and the dimension of u is 50.
We first train the model to learn the parameters, and then use the trained model to learn latent representations on the test data. After learning the representations, we mask u and v in turn to see how they affect the reconstruction of the input image. Two sample results are shown in Fig.1. We also use t-SNE [28] to visualize v on the test data. The results from the four models (SDVAE-I, SDVAE-I&IAF, SDVAE-II and SDVAE-II&IAF) are shown in Fig.2.
From Fig.1 and Fig.2, we can see that the disentangled variable v mainly captures the categorical information and has little influence on the reconstruction task. More specifically, from Fig.2, we can see that images of the same class are clustered together, implying that the disentangled representation captures the categorical information. In addition, we find that SDVAE-I gives the worst visualization, as its clusters intersect, while SDVAE-I&IAF and SDVAE-II&IAF give better visualizations, which suggests that they are better at capturing the categorical information.
From Fig.1, we can see that when v is masked, u still reconstructs the input image well, indicating that u is appropriate for reconstruction. To explore how u takes effect in image reconstruction, we vary a certain dimension of u from −2 to 2 on a specific labeled image; the selected results are shown in Fig.3.
From the figure, we can see that different dimensions of u control different properties in image reconstruction, such as italic, boldness, transformation, and style. These can be seen in the images of Fig.3 from left to right.
4.1.4 SemiSupervised Learning
Furthermore, we conduct experiments to test the proposed models for semi-supervised learning on MNIST. We randomly select N points from the training set as labeled data, where N is varied over {600, 1000, 3000}; the rest of the training data is used as unlabeled data. We compare with state-of-the-art supervised and semi-supervised classification algorithms used in [14]. The experiments are conducted 10 times, and the average error rate with standard deviation is shown in Table 1. Note that the performances of the compared methods are also taken from [14]. From this table, we can see that the proposed model SDVAE-II&IAF performs best and makes the fewest classification errors with a small amount of labeled data. Although SDVAE-I does not perform as well as the other proposed models, it still achieves state-of-the-art results.

To further validate this observation, we also conduct semi-supervised learning on SVHN, another popular dataset, which has 73,257 training samples and 26,032 test samples. Among the training data, we randomly select 1000 samples as labeled data and use the rest as unlabeled data. The results are shown in Table 2. Similarly, we observe that SDVAE-II&IAF gives the best performance.
4.2 Experiments on Text Dataset
4.2.1 Dataset Description
To test the model on text data, the IMDB data [20] is used. This dataset contains 25,000 train samples and 25,000 test samples in two categories.
4.2.2 Model Structure
For the text data, the encoder is also a convolutional neural network, but different from the image case, two convolutional networks following [12] are parallelized: one extracts features at the word level, and the other at the character level. For the decoder, we apply the conditional LSTM [32], which is given as follows:

h_t = LSTM([w_t; u; v], h_{t−1})   (18)

The conditional LSTM is the same as the vanilla LSTM except that the current input variable w_t is replaced by its concatenation with the latent variables u and v. Dropout [26] and batch normalization [10] are both utilized in the encoder and decoder networks.

4.2.3 Disentangled Representation
We randomly select 20k samples from the training set as labeled data; the others are unlabeled during training. Similarly, we use t-SNE to visualize the disentangled variable and the non-interpretable variable produced by the proposed model on the test data and the unlabeled data. Results are shown in Fig.4.
From the left figure in Fig.4, we can see that the disentangled representation clearly separates the positive and negative samples, while the non-interpretable representation cannot, i.e., its data points from the two classes are interleaved. This suggests that the disentangled representation captures the categorical information well, and there is little categorical information in the non-interpretable variable.
4.2.4 SemiSupervised Learning
We further conduct semi-supervised classification on the text dataset, using the representation learned in the previous experiment and fine-tuning the model. Similarly, we compare with state-of-the-art semi-supervised learning algorithms. The average test error rate is reported in Table 3. From the results, we can see that: (i) SDVAE-II&IAF outperforms the compared methods, which implies the effectiveness of the proposed framework for semi-supervised learning; and (ii) as we add reinforcement learning and IAF, the performance increases, which suggests that both components contribute to the model.
Table 3: Test error rates on the IMDB dataset.

Method                      | Test error rate
LSTM [4]                    | 13.50%
Full+Unlabeled+BoW [20]     | 11.11%
WRRBM+BoW [20]              | 10.77%
NBSVM-bi [30]               | 8.78%
seq2-bown-CNN [11]          | 7.67%
Paragraph Vectors [17]      | 7.42%
LM-LSTM [4]                 | 7.64%
SA-LSTM [4]                 | 7.24%
SSVAE-II&LM [33]            | 7.23%
SDVAE-I                     | 12.56%
SDVAE-I&IAF                 | 11.60%
SDVAE-II                    | 7.37%
SDVAE-II&IAF                | 7.18%
4.3 Parameters Analysis
There are several important parameters that need to be tuned for the model, i.e., λ, α, γ, and the length of the IAF chain. In this section, we conduct experiments to analyze the sensitivity of the model to these parameters.
4.3.1 Effects of λ and the IAF Length
We first evaluate λ and the length of the IAF chain, which were introduced in the works on β-VAE [6] and IAF [15]. These experiments are conducted on the MNIST training dataset.
The objective function used for finding a proper λ is depicted in Eq.(6). Results with different values of λ are shown in Fig.5(a). From the results, we can see that it is better for λ to have a small value, which not only leads to richer information in the latent variable but also a better reconstruction error. As described before, a large KL divergence can be a cause of overfitting or underfitting for the model, whereas with a small λ there is a low reconstruction error, which is a sign of good performance.
The model structure for the IAF chain is built according to Eq.(16), and the results with different lengths are shown in Fig.5(b). From the figure, we can see that it is not good to set the chain too long: with a long IAF chain, the RCE and KLD are both poor, and the latent variable is very unstable. On the contrary, there is a stable increase in the KL divergence and a stable decrease in the reconstruction error when the length of the IAF chain is set to a small value. This means that, under good reconstruction, the latent variable captures more useful information, which is also validated in the results of SDVAE-I&IAF and SDVAE-II&IAF. Thus, in the experiments involving IAF, a short chain length is used by default.
4.3.2 Effects of α and γ
To decide the parameters α and γ in SDVAE-II, we perform a grid search on both the text data and the image data. For the image data, the experiment is conducted on the SVHN dataset with 1000 labeled samples; experimental results for different values of α and γ are shown in Fig.6(a). For the text data, the experiment is conducted on the IMDB data with 20,000 labeled samples; the results are shown in Fig.6(b).
From Fig.6(a), we can see that an acceptable range for α on the image data is [0.1, 100], and [0.01, 10] for γ; the best result is achieved within these ranges.
For the text data, the results in Fig.6(b) show that the accuracy is not sensitive to γ; however, when α is small, the result is more precise. In conclusion, it is better to set α to 0.1, while γ can be set freely.
5 Related Works
Semi-supervised VAE. Semi-supervised learning is attracting increasing attention, and many works have been proposed [33, 8, 22, 31, 23, 14, 27, 5]. These works can be divided into discriminative models [29, 4], generative models [33, 8, 22], graph-based models [27], and combinations of these [5]. Because of the effectiveness of deep generative models in capturing the data distribution, semi-supervised models based on deep generative models such as the generative adversarial network [25] and the variational autoencoder (VAE) [14] have become popular. SemiVAE [14] incorporates the learned latent variable into a classifier and improves the performance greatly. SSVAE [33] extends SemiVAE to sequence data and also demonstrates its effectiveness in semi-supervised learning on text data. The aforementioned semi-supervised VAEs all use a parametric classifier, which increases the burden of learning more parameters given the limited labeled data. The proposed framework incorporates the label information directly into the disentangled representation and thus avoids the parametric classifier.
Variants of VAE. Because of the great potential of VAE in image and text mining, various models based on VAE have been proposed to further improve its performance [16, 6, 7, 15]. For example, [6] apply the KKT conditions to the VAE, which gives a tighter lower bound. Similarly, [3] introduce importance weighting to VAE, which also aims at a tighter bound. [24] consider Stein-based sampling to minimize the KL divergence. [7] rewrite the evidence lower bound objective by decomposition and give a clear explanation of each term. To make the posterior inference more flexible, IAF is introduced [15], which improves VAE considerably.
6 Conclusions
In this work, we propose models that extract the disentangled variable and the non-interpretable variable from data at the same time. The disentangled variable is designed to capture the categorical information and thus removes the need for a classifier in semi-supervised learning, while the non-interpretable variable is designed to reconstruct the data. Experiments show that the latter can even reflect certain writing features, such as italic, boldness, transformation, and style in the handwritten digit data during reconstruction. The two variables cooperate well, each performing its own function in SDVAE. IAF improves the model effectively on the basis of SDVAE-I and SDVAE-II; in particular, SDVAE-II&IAF achieves state-of-the-art results on both image data and text data in semi-supervised learning tasks.
References
 [1] Dimitri P Bertsekas. Nonlinear programming. Athena scientific Belmont, 1999.
 [2] Samuel R Bowman, Luke Vilnis, Oriol Vinyals, and et al. Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349, 2015.
 [3] Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. arXiv preprint arXiv:1509.00519, 2015.
 [4] Andrew M Dai and Quoc V Le. Semi-supervised sequence learning. In Proceedings of the NIPS, pages 3079–3087, 2015.
 [5] Jingrui He, Jaime G Carbonell, and Yan Liu. Graph-based semi-supervised learning as a generative model. In Proceedings of the IJCAI, volume 7, pages 2492–2497, 2007.
 [6] Irina Higgins, Loic Matthey, Arka Pal, and et al. beta-VAE: Learning basic visual concepts with a constrained variational framework. In Proceedings of the ICLR, 2017.

 [7] Matthew D Hoffman and Matthew J Johnson. Elbo surgery: yet another way to carve up the variational evidence lower bound. In Proceedings of the NIPS, Workshop in Advances in Approximate Bayesian Inference, 2016.
 [8] Zhiting Hu, Zichao Yang, Xiaodan Liang, and et al. Toward controlled generation of text. In Proceedings of the ICML, pages 1587–1596, 2017.
 [9] Gao Huang, Shiji Song, Jatinder ND Gupta, and Cheng Wu. Semi-supervised and unsupervised extreme learning machines. IEEE Transactions on Cybernetics, 44(12):2405–2417, 2014.
 [10] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the ICML, pages 448–456, 2015.
 [11] Rie Johnson and Tong Zhang. Effective use of word order for text categorization with convolutional neural networks. arXiv preprint arXiv:1412.1058, 2014.
 [12] Yoon Kim. Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882, 2014.
 [13] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
 [14] Diederik P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In Proceedings of the NIPS, pages 3581–3589, 2014.
 [15] Diederik P Kingma, Tim Salimans, Rafal Jozefowicz, and et al. Improved variational inference with inverse autoregressive flow. In Proceedings of the NIPS, pages 4743–4751, 2016.
 [16] Diederik P Kingma and Max Welling. Autoencoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
 [17] Quoc Le and Tomas Mikolov. Distributed representations of sentences and documents. In Proceedings of the ICML, pages 1188–1196, 2014.
 [18] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86:2278–2324, 1998.
 [19] Lars Maaløe, Casper Kaae Sønderby, Søren Kaae Sønderby, and Ole Winther. Auxiliary deep generative models. arXiv preprint arXiv:1602.05473, 2016.

 [20] Andrew L Maas, Raymond E Daly, Peter T Pham, and et al. Learning word vectors for sentiment analysis. In Proceedings of the ACL: Human Language Technologies - Volume 1, pages 142–150, 2011.
 [21] Yuval Netzer, Tao Wang, Adam Coates, and et al. Reading digits in natural images with unsupervised feature learning. In Proceedings of the NIPS, Workshop on deep learning and unsupervised feature learning, volume 2011, page 5, 2011.
 [22] Augustus Odena. Semi-supervised learning with generative adversarial networks. arXiv preprint arXiv:1606.01583, 2016.
 [23] Yong Peng, Bao-Liang Lu, and Suhang Wang. Enhanced low-rank representation via sparse manifold adaption for semi-supervised learning. Neural Networks, 65:1–17, 2015.
 [24] Yunchen Pu, Zhe Gan, Ricardo Henao, and et al. Stein variational autoencoder. arXiv preprint arXiv:1704.05155, 2017.
 [25] Jost Tobias Springenberg. Unsupervised and semi-supervised learning with categorical generative adversarial networks. arXiv preprint arXiv:1511.06390, 2015.
 [26] Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, and et al. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958, 2014.

 [27] Amarnag Subramanya and Partha Pratim Talukdar. Graph-based semi-supervised learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, 8(4):1–125, 2014.
 [28] Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9:2579–2605, 2008.

 [29] V Vapnik and A Sterin. On structural risk minimization or overall risk in a problem of pattern recognition. Automation and Remote Control, 10(3):1495–1503, 1977.
 [30] Sida Wang and Christopher D Manning. Baselines and bigrams: Simple, good sentiment and topic classification. In Proceedings of the ACL: Short Papers - Volume 2, pages 90–94, 2012.
 [31] Suhang Wang, Jiliang Tang, Charu Aggarwal, and Huan Liu. Linked document embedding for classification. In Proceedings of the 25th ACM International on Conference on Information and Knowledge Management, pages 115–124. ACM, 2016.
 [32] Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, and et al. Semantically conditioned LSTM-based natural language generation for spoken dialogue systems. arXiv preprint arXiv:1508.01745, 2015.

 [33] Weidi Xu, Haoze Sun, Chao Deng, and Ying Tan. Variational autoencoder for semi-supervised text classification. In Proceedings of the AAAI, pages 3358–3364, 2017.
 [34] Xinchen Yan, Jimei Yang, Kihyuk Sohn, and Honglak Lee. Attribute2image: Conditional image generation from visual attributes. In Proceedings of the ECCV, pages 776–791, 2016.
 [35] Zichao Yang, Zhiting Hu, Ruslan Salakhutdinov, and Taylor BergKirkpatrick. Improved variational autoencoders for text modeling using dilated convolutions. arXiv preprint arXiv:1702.08139, 2017.