1 Introduction
Modern machine learning techniques usually require large sets of fully observed and well-labelled training data, which is often unrealistic in real-world applications. The co-occurrence of missing features and partially observed labels is a widely witnessed issue in industrial data analytics. On the one hand, training features can be incomplete or heavily corrupted due to unpredictable sensor or end-device failures, database crashes and noisy communication channels. On the other hand, human and automatic annotators cannot reliably label incomplete feature profiles, so it is safer for them to leave the incomplete training instances unlabelled and provide labels only for the fully observed instances with noise-free feature profiles. Consequently, a machine learning system must tolerate the co-occurrence of missing features and partially observed labels to be robust in practical use.
We study both transductive and inductive multi-output classification. Multi-output classification, which includes multi-class and multi-label learning tasks, outputs class labels with a higher-dimensional representation and thus defines a more sophisticated learning scenario than binary classification. Multi-class learning associates an input instance with one class from a finite class set, while multi-label learning allows one instance to be associated with multiple labels simultaneously. In the transductive case, the learning objective is to infer the missing features and labels from the observed ones (no separate test set is required). In the inductive case, the model is trained over a set of incomplete training instances, and classification is performed by inferring the labels associated with incomplete test instances. Though the problem of multi-output learning with semi-supervised labels has been discussed in previous works [Sun, Y.Zhang, and Zhou2010, Bucak, Jin, and Jain2011], these all assume the features of both training and test instances to be fully observed and free from noise. In our work, we consider both features and labels to be missing completely at random, meaning that the mask of missing information is assumed to be statistically independent of the data distribution.
State-of-the-art accuracies for classification with incomplete or noise-corrupted features and partially observed labels are given by CLE [Han et al.2018], NoisyIMC [Chiang, Hsieh, and S.Dhillon2015] and MC1 [Cabral et al.2015]. In particular, CLE and NoisyIMC can conduct both transductive and inductive multi-label classification, while MC1 is a transductive-only method. It is worth noting that CLE was originally proposed to handle weak labeling in multi-label learning, where only positive labels are observed, and that NoisyIMC and MC1 can work in the semi-supervised learning scenario. However, none of the three methods can handle all the challenges raised concurrently in our work. First, all of them assume the test instances to have fully observed feature profiles; they do not cope with incomplete test instances by design. Second, all of them are designed specifically for multi-label learning, and adapting them to multi-class classification is not straightforward.
More recently, methods based on Deep Latent Variable Models (DLVM) have been proposed to deal with missing data. In [Mattei and Frellsen2019], the Variational Autoencoder [Kingma and Welling2014] has been adapted to be trained with missing data, together with a sampling algorithm for data imputation. Other approaches, based on the Generative Adversarial Networks (GAN) of [Goodfellow et al.2014], are proposed in [Yoon, Jordon, and van der Schaar2018] and [Li, Jiang, and Marlin2019]. These models display impressive results on image datasets, at the price of a rather high model complexity and the need for a large training set. In addition, these works focus on feature reconstruction, and additional specifications and fine-tuning are required to take partially observed labels into account. The model specifications are quite involved, and any new specificity of the dataset may increase both the cost and the difficulty of training (especially for the GAN-based approaches).
In this paper we address this problem in a more economical and robust manner. We consider the old and simple architecture of the Restricted Boltzmann Machine and adapt it to the multi-output learning context with missing data (RBM-MO). The RBM-MO method is a generative model that collaboratively learns the marginal distribution of features and label assignments of the input data instances, despite the incomplete observations. Building on the ideas expressed in [Nijman and Kappen1994, Ghahramani and Jordan1994], we adapt the approach to the more effective contrastive divergence training procedure [Hinton2002] and provide results on various real-world datasets. The advantage of the RBM-MO model is that it provides a robust and flexible way to deal with missing data, with little additional complexity with respect to the classic RBM. Indeed, the trained model can be naturally applied to both transductive and inductive scenarios, achieving better multi-output classification performance than state-of-the-art baselines. Moreover, it works seamlessly with multi-class and multi-label tasks, providing a unified framework for multi-output learning.
In Section 2 we present a brief overview of the RBM. In Sections 3 and 4 we show how the RBM can be adapted to work with missing data and present an effective imputation procedure. In Section 5 we present the results of our experimental study: in addition to results on some commonly used public datasets, we introduce a dataset for security in the Internet-of-Things, which serves as a typical example of multi-output classification with highly incomplete data [anonymous2018], and evaluate to what extent the proposed RBM-MO method can improve the quality of security services.
2 Overview of Restricted Boltzmann Machines
An RBM is a Markov random field with pairwise interactions defined on a bipartite graph formed by two layers of non-interacting variables: the visible nodes represent instances of the input data, while the hidden nodes provide a latent representation of the data instances. v and h will denote respectively the sets of visible and hidden variables. In our setting, the visible variables further split into two subsets x and y, corresponding respectively to features and labels, such that v = (x, y). The visible variables form an explicit representation of the data, while the hidden nodes serve to approximate the underlying dependencies among the visible units.
In this paper, we work with binary hidden nodes h_j ∈ {0, 1}. The variables x_i corresponding to the visible features will be either real-valued with a Gaussian prior or binary, depending on the data to model, and the label variables y_c will always be binary. The energy function of the RBM is defined as
E(v, h) = - Σ_i a_i v_i - Σ_j b_j h_j - Σ_{ij} v_i W_{ij} h_j    (1)
where a = (a_i) and b = (b_j) are biases acting respectively on the visible and hidden units, and
W = (W_{ij}) is the weight matrix that couples visible and hidden nodes. The joint probability distribution over the nodes is then given by the Boltzmann measure
p(v, h) = (1/Z) p_0(v) e^{-E(v, h)}    (2)
where p_0(v) = Π_i p_0(v_i) is in product form and encodes the nature of each visible variable, either with a Gaussian prior p_0(v_i) ∝ e^{-v_i²/2} or a binary prior v_i ∈ {0, 1}; Z is the partition function. The classical training method consists in maximizing the marginal likelihood p(v) over the visible nodes by tuning the RBM parameters via gradient ascent of the log-likelihood L = ⟨log p(v)⟩_data. The update rules for the weights (and similarly for the fields) are
W_{ij} ← W_{ij} + η ∂L/∂W_{ij}    (3)
with the parameter η corresponding to the learning rate.
The tractability of the method relies heavily on the fact that the conditional probabilities p(v | h) and p(h | v) are given in closed form. In our case these read:
p(v_i = 1 | h) = sigm( a_i + Σ_j W_{ij} h_j )    (4)
p(h_j = 1 | v) = sigm( b_j + Σ_i W_{ij} v_i )    (5)
where sigm(x) = 1/(1 + e^{-x}) is the logistic function (for Gaussian visible units, (4) is replaced by a unit-variance Gaussian centred at a_i + Σ_j W_{ij} h_j). The gradient of the likelihood w.r.t. the weights (and similarly w.r.t. the fields a and b) is given by
∂L/∂W_{ij} = ⟨v_i h_j⟩_data - ⟨v_i h_j⟩_model    (6)
where the brackets ⟨·⟩_data and ⟨·⟩_model respectively indicate the average over the data and over the distribution (2). The positive term is directly linked to the data and can be estimated exactly with (5), while the negative term is intractable. Many strategies are used to compute this last term: the contrastive divergence (CD) approach [Hinton2002] estimates the term over a finite number of Gibbs sampling steps, starting from a data point and making alternate use of (4) and (5); in its persistent version (PCD) [Tieleman2008], the chain is maintained over subsequent mini-batches; with the mean-field approximation [Marylou, Tramel, and Krzakala2015], the term is computed by means of a low-couplings expansion.
3 Learning RBM with incomplete data
The RBM is a generative model able to learn the joint distribution of some empirical data given as input. As such, it is intrinsically able to encode the relevant statistical properties found in the training data instances that relate features and labels, and this makes the RBM particularly suitable to be used in the multioutput setting in the presence of incomplete observations. In this sense, the most natural way to deal with incomplete observations is to marginalize over the missing variables; in this section we show how the contrastive divergence algorithm can be adapted to compute such marginals.
Given a partially observed instance, we have a new partition of the visible space v = (v_O, v_M), where v_O is the subset of observed values of v (which can correspond both to features and labels) and v_M denotes the missing ones; O and M are the corresponding index sets. Let us define the quantity
p̃(v_O; θ) = Σ_{v_M} Σ_h p_0(v) e^{-E(v, h)}
corresponding to the marginalization of the numerator of the probability distribution (2) over the missing visible variables, with θ = (W, a, b) representing the parameters of the model. The probability of the observed variables is then given by
p(v_O) = p̃(v_O; θ) / Z.
Taking the log-likelihood and computing the gradient with respect to the weight matrix element W_{ij} (and similarly for the fields a and b), we obtain two different expressions, for i ∈ O and i ∈ M:
∂ log p̃(v_O) / ∂W_{ij} = v_i ⟨h_j⟩_{v_O},   i ∈ O    (7)
∂ log p̃(v_O) / ∂W_{ij} = ⟨v_i h_j⟩_{v_O},   i ∈ M    (8)
where ⟨·⟩_{v_O} denotes the average conditioned on the observed values. The gradient of the LL over the weights (6) now reads
∂L/∂W_{ij} = ⟨ 1_O(i) v_i ⟨h_j⟩_{v_O} + (1 - 1_O(i)) ⟨v_i h_j⟩_{v_O} ⟩_data - ⟨v_i h_j⟩_model    (9)
where 1_O is the indicator function of the sample-dependent set O of observed variables. The observed variables are pinned to the values given by the training samples. In terms of our model, the pinned variables play the role of an additional bias on the hidden variables of an RBM whose ensemble of visible variables is reduced to the missing ones. This results in the following shift of the bias of the hidden nodes:
b_j → b_j + Σ_{i ∈ O} W_{ij} v_i    (10)
With respect to the non-lossy case, where ⟨v_i h_j⟩_data is given in closed form, here we need to sum over the missing variables in order to estimate (7) and (8). This means that the positive term of the gradient (6) is now intractable as well, and we need to approximate it. For CD training, we can simply perform Gibbs sampling over the missing variables (keeping the observed variables fixed). The resulting training algorithm (Alg. 1) alternates this constrained sampling for the positive term with the usual CD estimate of the negative term.
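As an illustration, one such update can be sketched in numpy as follows. This is a hypothetical, minimal rendition for a binary RBM and a single sample, not the paper's exact algorithm: the function names and the 0.5 initialization of missing entries are our own choices.

```python
import numpy as np

def sigm(x):
    return 1.0 / (1.0 + np.exp(-x))

def lossy_cd1_update(W, a, b, v_obs, mask, lr=0.001, rng=None):
    """One Lossy-CD-1-style update on a single partially observed sample.

    v_obs: (n_v,) visible values, arbitrary where mask is False.
    mask:  (n_v,) boolean, True for observed entries.
    """
    rng = rng or np.random.default_rng(0)

    # Positive phase: Gibbs sampling over the missing variables only.
    # The pinned observed entries act as the hidden-bias shift of eq. (10).
    v = np.where(mask, v_obs, 0.5)                 # arbitrary init of missing entries
    ph = sigm(b + v @ W)                           # p(h|v), eq. (5)
    h = (rng.random(ph.shape) < ph).astype(float)
    pv = sigm(a + W @ h)                           # p(v|h), eq. (4)
    v_pos = np.where(mask, v_obs, (rng.random(pv.shape) < pv).astype(float))
    ph_pos = sigm(b + v_pos @ W)

    # Negative phase: one unconstrained Gibbs step (standard CD-1),
    # starting from the completed sample.
    h_neg = (rng.random(ph_pos.shape) < ph_pos).astype(float)
    pv_neg = sigm(a + W @ h_neg)
    v_neg = (rng.random(pv_neg.shape) < pv_neg).astype(float)
    ph_neg = sigm(b + v_neg @ W)

    # Gradient step, eqs. (3) and (9): positive minus negative correlations.
    W = W + lr * (np.outer(v_pos, ph_pos) - np.outer(v_neg, ph_neg))
    a = a + lr * (v_pos - v_neg)
    b = b + lr * (ph_pos - ph_neg)
    return W, a, b
```

In practice the update would be averaged over a mini-batch; the single-sample form above only illustrates how the pinned observed entries constrain the positive phase.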
We note that the extra computational burden of Lossy-CD with respect to standard CD is due only to the extra Gibbs sampling steps in the positive term. Given that the observed variables strongly bias the sampling procedure, speeding up convergence, only a few sampling steps are needed to compute this term. Indeed, in our experiments we observed that a single sampling step (Lossy-CD-1) is enough, making the additional complexity minimal. Finally, we note that the same method can be applied to the PCD and mean-field training procedures. In the first case, it is sufficient to keep track of an additional persistent chain, which requires little extra memory and no extra computational complexity. In the second case, we only need to substitute Gibbs sampling with iterative mean-field equations.
4 Meanfield based imputation with RBM
As a generative model, the trained RBM can be used to sample new data. For the imputation of missing features and labels, we just need to use the observed portions of the data to bias the sampling procedure, in the same way as for the computation of the positive term in Alg. 1. Namely, we estimate the missing values by pinning the observed variables and iterating CD/PCD or mean-field updates to approximate the equilibrium values of the missing variables. In case of a high percentage of missing observations, however, the observed variables may be correlated with many different equilibrium configurations, so that the sampling could be biased towards the wrong sample. This effect is also present during training, but there it is mitigated by the average over the mini-batch, while for imputation over a single data instance it becomes relevant. To overcome this problem, we simply average over multiple imputations for each incomplete data instance. We highlight that the equilibrium configurations of the RBM are weighted according to the empirical data distribution; as a consequence, a bias toward an incorrect sample is easily discarded, and generally a small number of different imputations needs to be averaged to obtain the correct result. To further reduce the number of imputations, we employ mean-field: a good equilibrium configuration could require more CD/PCD samples (simply due to the sampling noise), while we expect a non-extensive number of fixed points related to meaningful (equilibrium) configurations to be present [Decelle, Fissore, and Furtlehner2018], and these are directly obtained by iteration.
In more detail, let τ_i and σ_j be the marginal probabilities of, respectively, the visible label variables and the hidden variables being activated, and let m_i be the marginal expectation of the visible feature variables. Mean-field equations at lowest order (O(1/N), N being the size of the system) express self-consistent relations among these quantities:
σ_j = sigm( b_j + Σ_{i ∈ features} W_{ij} m_i + Σ_{i ∈ labels} W_{ij} τ_i )    (11)
τ_i = sigm( a_i + Σ_j W_{ij} σ_j )    (12)
m_i = a_i + Σ_j W_{ij} σ_j    (13)
Higher-order terms corresponding to the TAP equations are discarded [Mézard2017]. These equations can be efficiently solved by iteration, starting from random configurations, until a fixed point is reached. Observed variables are simply introduced by pinning their corresponding probabilities (τ_i = 0 or 1 for label variables) or their marginal expectation (for feature variables) to the observed values. In practice we run these fixed-point equations several times, from different random initializations, and the imputations are obtained by a simple average of the resulting fixed points.
In the multi-label setting, the predictor is the indicator function 1(τ_i > θ), where the threshold θ is learned by maximizing the accuracy on the known labels; for class labels, the predicted class is the one with the largest magnetization τ_i.
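The fixed-point iteration and the predictors above can be sketched as follows. This is a minimal numpy illustration for an all-binary visible layer, with our own function names; Gaussian features would instead use eq. (13), i.e. no sigmoid, for their magnetizations.

```python
import numpy as np

def sigm(x):
    return 1.0 / (1.0 + np.exp(-x))

def meanfield_impute(W, a, b, v_obs, mask, n_iter=10, n_restarts=5, rng=None):
    """Iterate the mean-field equations with observed variables pinned,
    averaging the fixed points reached from several random restarts."""
    rng = rng or np.random.default_rng(0)
    avg = np.zeros_like(v_obs, dtype=float)
    for _ in range(n_restarts):
        m = np.where(mask, v_obs, rng.random(v_obs.shape))  # random init of missing
        for _ in range(n_iter):
            s = sigm(b + m @ W)                         # hidden magnetizations, eq. (11)
            m = np.where(mask, v_obs, sigm(a + W @ s))  # eq. (12), observed pinned
        avg += m
    return avg / n_restarts

def predict_multilabel(tau, theta=0.5):
    """Multi-label predictor 1(tau_i > theta); theta is tuned on known labels."""
    return (tau > theta).astype(int)

def predict_multiclass(tau):
    """Multi-class predictor: class with the largest magnetization."""
    return int(np.argmax(tau))
```

Note that when all entries are observed the iteration leaves the instance unchanged, since every coordinate is pinned.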
Table 1: Transductive results on MNIST (multi-class).

| Model | RMSE 30% | RMSE 50% | RMSE 80% | Avg. AUC 30% | Avg. AUC 50% | Avg. AUC 80% | Accuracy 30% | Accuracy 50% | Accuracy 80% |
|---|---|---|---|---|---|---|---|---|---|
| RBM-MO (50%) | 0.183 | 0.182 | 0.185 | 0.969 | 0.971 | 0.929 | 0.950 | 0.912 | 0.822 |
| CLE (50%) | 0.195 | 0.195 | 0.195 | 0.686 | 0.718 | 0.742 | 0.256 | 0.232 | 0.282 |
| NoisyIMC (50%) | 0.209 | 0.210 | 0.210 | 0.621 | 0.578 | 0.552 | 0.225 | 0.232 | 0.192 |
| MC1 (50%) | 0.334 | 0.335 | 0.337 | 0.495 | 0.493 | 0.500 | 0.110 | 0.111 | 0.112 |
| RBM-MO (80%) | 0.209 | 0.213 | 0.211 | 0.938 | 0.932 | 0.906 | 0.920 | 0.852 | 0.733 |
| CLE (80%) | 0.206 | 0.208 | 0.206 | 0.673 | 0.678 | 0.625 | 0.230 | 0.215 | 0.220 |
| NoisyIMC (80%) | 0.212 | 0.211 | 0.213 | 0.652 | 0.577 | 0.537 | 0.230 | 0.217 | 0.210 |
| MC1 (80%) | 0.334 | 0.334 | 0.335 | 0.500 | 0.501 | 0.500 | 0.112 | 0.110 | 0.110 |
Table 2: Transductive results on Scene (multi-label).

| Model | RMSE 30% | RMSE 50% | RMSE 80% | Micro-AUC 30% | Micro-AUC 50% | Micro-AUC 80% | Hamming-acc. 30% | Hamming-acc. 50% | Hamming-acc. 80% |
|---|---|---|---|---|---|---|---|---|---|
| RBM-MO (50%) | 0.131 | 0.137 | 0.123 | 0.943 | 0.934 | 0.888 | 0.919 | 0.907 | 0.873 |
| CLE (50%) | 0.130 | 0.130 | 0.131 | 0.905 | 0.893 | 0.898 | 0.885 | 0.871 | 0.878 |
| NoisyIMC (50%) | 0.132 | 0.133 | 0.133 | 0.865 | 0.863 | 0.858 | 0.845 | 0.841 | 0.848 |
| MC1 (50%) | 0.258 | 0.255 | 0.267 | 0.522 | 0.528 | 0.527 | 0.826 | 0.817 | 0.824 |
| RBM-MO (80%) | 0.160 | 0.158 | 0.158 | 0.875 | 0.867 | 0.826 | 0.856 | 0.858 | 0.832 |
| CLE (80%) | 0.129 | 0.129 | 0.128 | 0.913 | 0.897 | 0.899 | 0.889 | 0.875 | 0.876 |
| NoisyIMC (80%) | 0.133 | 0.134 | 0.134 | 0.853 | 0.857 | 0.849 | 0.839 | 0.835 | 0.826 |
Table 3: Transductive results on EventCat (multi-label).

| Model | RMSE 30% | RMSE 50% | RMSE 80% | Micro-AUC 30% | Micro-AUC 50% | Micro-AUC 80% | Hamming-acc. 30% | Hamming-acc. 50% | Hamming-acc. 80% |
|---|---|---|---|---|---|---|---|---|---|
| RBM-MO (50%) | 2.316 | 2.427 | 2.463 | 0.842 | 0.852 | 0.794 | 0.972 | 0.956 | 0.912 |
| CLE (50%) | 9.543 | 8.434 | 10.902 | 0.872 | 0.837 | 0.739 | 0.900 | 0.870 | 0.800 |
| NoisyIMC (50%) | 8.443 | 8.792 | 9.850 | 0.835 | 0.817 | 0.799 | 0.820 | 0.801 | 0.738 |
| MC1 (50%) | 2.220 | 2.180 | 2.128 | 0.590 | 0.607 | 0.573 | 0.697 | 0.700 | 0.693 |
| RBM-MO (80%) | 3.235 | 3.128 | 3.238 | 0.843 | 0.782 | 0.746 | 0.972 | 0.945 | 0.910 |
| CLE (80%) | 15.265 | 15.989 | 14.559 | 0.832 | 0.784 | 0.785 | 0.885 | 0.757 | 0.729 |
| NoisyIMC (80%) | 14.155 | 14.289 | 14.395 | 0.852 | 0.822 | 0.803 | 0.895 | 0.807 | 0.735 |
| MC1 (80%) | 2.776 | 2.423 | 2.476 | 0.536 | 0.540 | 0.536 | 0.691 | 0.688 | 0.687 |
Table 4: Inductive results (multi-class).

| Model | Avg. AUC 30% | Avg. AUC 50% | Avg. AUC 80% | Accuracy 30% | Accuracy 50% | Accuracy 80% |
|---|---|---|---|---|---|---|
| RBM-MO (50%) | 0.887 | 0.914 | 0.910 | 0.533 | 0.673 | 0.660 |
| CLE (50%) | 0.785 | 0.791 | 0.791 | 0.297 | 0.256 | 0.268 |
| NoisyIMC (50%) | 0.780 | 0.771 | 0.781 | 0.302 | 0.272 | 0.265 |
| RBM-MO (80%) | 0.891 | 0.909 | 0.889 | 0.562 | 0.682 | 0.647 |
| CLE (80%) | 0.768 | 0.664 | 0.622 | 0.271 | 0.200 | 0.176 |
| NoisyIMC (80%) | 0.748 | 0.687 | 0.615 | 0.264 | 0.220 | 0.178 |
Table 5: Inductive results on EventCat (multi-label).

| Model | Micro-AUC 30% | Micro-AUC 50% | Micro-AUC 80% | Hamming-acc. 30% | Hamming-acc. 50% | Hamming-acc. 80% |
|---|---|---|---|---|---|---|
| RBM-MO (50%) | 0.839 | 0.826 | 0.793 | 0.970 | 0.970 | 0.965 |
| CLE (50%) | 0.705 | 0.707 | 0.706 | 0.700 | 0.724 | 0.719 |
| NoisyIMC (50%) | 0.704 | 0.702 | 0.700 | 0.710 | 0.717 | 0.718 |
| RBM-MO (80%) | 0.759 | 0.791 | 0.766 | 0.964 | 0.964 | 0.967 |
| CLE (80%) | 0.693 | 0.688 | 0.694 | 0.718 | 0.706 | 0.718 |
| NoisyIMC (80%) | 0.689 | 0.688 | 0.685 | 0.705 | 0.704 | 0.704 |
5 Experimental Study
5.1 Experimental configuration
To evaluate the efficiency of RBM-MO, we compare its performance against CLE, NoisyIMC and MC1, which provide state-of-the-art baselines.
For the transductive experiments, we randomly hide features and labels of the whole dataset to generate incomplete data for training, and we compute appropriate scores for the reconstruction of the missing features and labels. In the inductive test, instead, we split the whole dataset into non-overlapping training and test sets. For the training set, the same protocol is used as in the transductive test; for the test set, the difference is that all labels are now hidden. Once the classifier is trained, it is applied to the test set to predict the labels. We still randomly hide entries of the test feature vectors, so as to form an incomplete test set. In the split, we use 70% of the data instances for training and the remaining 30% for testing.
We denote by r_f, r_l and r_c the percentages of masked features, labels and class labels, respectively. Note that masking a class label means that all binary variables encoding the class of a given instance are masked together. These masking rates are kept identical in the training and test sets.
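The masking protocol can be sketched as follows; this is a small numpy illustration with our own function names, showing both entry-wise missing-completely-at-random masks and the block masking of class labels.

```python
import numpy as np

def mcar_mask(shape, rate, rng=None):
    """Missing-completely-at-random mask (True = observed): each entry is
    hidden independently with probability `rate`, regardless of the data."""
    rng = rng or np.random.default_rng(0)
    return rng.random(shape) >= rate

def mask_class_labels(n_instances, n_classes, rate, rng=None):
    """Class-label masking: one decision per instance, so all binary
    variables encoding the class of an instance are masked together."""
    rng = rng or np.random.default_rng(0)
    keep = rng.random(n_instances) >= rate   # one draw per instance
    return np.repeat(keep[:, None], n_classes, axis=1)
```

Features and labels receive independent masks drawn at their respective rates, which matches the statistical independence of the missingness mask from the data distribution.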
In the transductive test, we compute the Root Mean Squared Error (RMSE) to measure the reconstruction accuracy on the missing feature values, defined over the dataset as
RMSE = sqrt( (1/|Ω|) Σ_{(i,j) ∈ Ω} ( X_{ij} - X̂_{ij} )² )
where X and X̂ are the original and reconstructed feature matrices and Ω denotes the set of unobserved feature entries. Furthermore, for the reconstructed labels we calculate Micro-AUC scores and Hamming-accuracy [G.Madjarov, Kocev, and Gjorgjevikj2012] in the multi-label scenario, and Averaged AUC plus Accuracy [Li and Ye2015] in the multi-class case. In the tables, we define Hamming-accuracy as 1 - Hamming loss, to keep a consistent variation tendency with the AUC scores. In the inductive test we only compute the scores on the reconstructed labels, since reconstructing missing features is not the goal of inductive classification.
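For concreteness, the two reconstruction scores can be computed as follows (a small numpy sketch with hypothetical function names):

```python
import numpy as np

def masked_rmse(X, X_hat, omega):
    """RMSE restricted to the set omega (boolean mask of hidden feature
    entries), as used for the transductive reconstruction score."""
    diff = (X - X_hat)[omega]
    return float(np.sqrt(np.mean(diff ** 2)))

def hamming_accuracy(Y, Y_hat):
    """Hamming-accuracy = 1 - Hamming loss over binary label matrices,
    i.e. the fraction of label entries predicted correctly."""
    return float((Y == Y_hat).mean())
```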
We run each test several times with different realizations of the missing features and labels. The average and standard deviation of the computed scores are recorded to compare the overall performances. In the tables, we use red fonts to denote the best reconstruction and classification performances among all the algorithms involved in the empirical study, and bold black font to highlight the performance of the proposed RBM-MO method. For the baselines, we used grid search to choose the optimal parameter combination, following the suggested parameter ranges in [Han et al.2018].
The RBM-MO is trained following the guidelines in [Hinton2010]. We always use binary variables for the hidden layer, while in the visible layer we use binary variables for MNIST and Gaussian variables for the other datasets. In all the simulations, we fix the number of hidden nodes to 100, the learning rate to 0.001 and the size of the mini-batches to 10. During training, a single Gibbs sampling step is used (Lossy-CD-1), while for imputation we iterate the mean-field equations 10 times. As a stopping condition, we consider the degradation of the transductive AUC scores with a look-ahead of 500 epochs. Full code with instructions for reproducibility and extensions is released (link hidden for blind review).
5.2 Summary of datasets
We consider 4 publicly available datasets from diverse disciplines, such as image processing and biology. These datasets cover both multi-label and multi-class learning tasks and are popular benchmarks in multi-output learning research.
Table 6: Summary of the datasets.

| Dataset | No. of Instances | No. of Features | No. of Labels | No. of Classes |
|---|---|---|---|---|
| Yeast | 2,417 | 103 | 14 | – |
| Scene | 2,407 | 294 | 6 | – |
| Pendigits | 10,992 | 16 | – | 10 |
| MNIST | 70,000 | 784 | – | 10 |
| EventCat | 5,93 | 72 | 6 | – |
In addition, we consider the challenging scenario of abnormality detection on IoT devices. The relevant dataset consists of security telemetry data collected from various network appliances (e.g. smart watches, smartphones, driving-assistance systems), each reporting a feature vector whose entries indicate the occurrence frequency of a specific type of alert (e.g. downloading suspicious files, login failures, unfixed vulnerabilities). Multiple labels are assigned to each device in the collected dataset, corresponding to a variety of categories of security threats. The learning problem is cast as multi-label classification, as the same telemetry features can be related to multiple threats (e.g. scanning activity and data breaching can occur simultaneously), and the collected records for training and testing are usually highly incomplete due to various causes. For example, customers can set up privacy-control policies that limit the coverage of the telemetry data shared with security vendors, or deployed sensors can unexpectedly fail. At the same time, it is safer for human analysts to label only the most typical and clearly defined cyber menaces, as mislabeling can reduce the usability of the provided protection service by causing false alarms and/or false negatives. In the following, we test our method on a complete realization of such a dataset, which we refer to as EventCat.
Some details about the datasets are reported in Table 6.
5.3 Qualitative results on MNIST
A qualitative evaluation of the performance of the RBM-MO model is given by looking at feature reconstruction on the MNIST dataset, as reported in Fig. 1. The model at hand was trained on a dataset in which 50% of the features were missing. To assess the robustness of the method, we computed the reconstructions in the highly challenging case in which 80% of the features are missing. Apart from some smoothing due to the use of mean-field imputations, the reconstructed samples look reasonably realistic. In general, from a qualitative point of view, the results are comparable to those obtained with more complex and expensive DLVMs such as MIWAE and MisGAN [Mattei and Frellsen2019, Li, Jiang, and Marlin2019].
5.4 Empirical study of transductive learning on public datasets
The transductive results for the MNIST (multi-class) and Scene (multi-label) datasets are reported in Tables 1 and 2, while the full results for the other datasets are reported in the supplementary material. Globally, the results show better performance of RBM-MO compared to the baseline methods CLE, NoisyIMC and MC1. Going into the details, we first observe that RBM-MO is more efficient by a large margin than all of the baselines for the inference of class labels (see Table 1 and the Pendigits table in the supplementary material), probably because it is able to encode more complex statistical properties. Among the baseline methods, CLE performs best: by explicitly enforcing a predictive constraint on the subspace representation of features and labels, it gains more robustness against missing information.
On the multi-label problems, the situation is still in favour of RBM-MO, but with a smaller margin (Table 2 and the Yeast table in the supplementary material), in particular at larger percentages of missing features.
Now if we look at the reconstruction error on these datasets, we observe that RBM-MO generally achieves higher reconstruction accuracy than its opponents, especially on the MNIST dataset. These results empirically support the basic motivation for using a generative model such as the RBM in this challenging learning scenario: incomplete features and labels provide complementary information to each other, which helps recover the missing elements. The variance of the results is omitted in the tables for lack of space. For RBM-MO, the standard deviation of the derived RMSE, AUC and accuracy scores remains small across the different datasets. Although the RMSE scores reported by the baseline methods look comparable to the RBM-MO ones, and in certain cases are better, they also come with a slightly higher variance, so RBM-MO appears more efficient and robust for feature reconstruction.
5.5 Empirical study of inductive learning on public datasets
Except for MC1, all the baseline methods can be used for inductive learning. As in the transductive test, we show only the means of the derived metrics in the tables; the variance ranges are similar to those reported for the transductive test. Clearly, RBM-MO is much better adapted to this setting than the baseline methods, both for multi-class (Table 4) and multi-label learning (Yeast and Scene tables in the supplementary material). With a few exceptions at large missing rates, the results of RBM-MO are distinctly better. The baseline inductive methods CLE and NoisyIMC are designed specifically for multi-label learning; in the multi-class scenario, unlabeled instances cannot be treated properly. If a data instance is unlabeled, the whole corresponding row in the label matrix is considered missing. These structured missing patterns violate the assumption of randomly missing entries in semi-supervised learning, which leads to performance deterioration of the baseline methods. By comparison, RBM-MO adapts seamlessly to multi-class and multi-label learning, producing consistently good performance.
5.6 Transductive and inductive evaluation on EventCat data
Transductive results are reported in Table 3. Compared to the three baseline approaches, RBM-MO shows an overall improvement in the combined feature and label reconstruction accuracy for all combinations of feature and label masking rates. When not the best (at low missing percentages), RBM-MO reaches feature reconstruction errors comparable to the best baseline, while its classification performance is consistently better, with generally better AUC scores and noticeably higher Hamming-accuracy scores than the baseline methods. Note that MC1 achieves the best feature imputation results in several transductive settings (see Table 3); however, RBM-MO achieves a similar feature imputation accuracy while producing significantly higher label recovery precision.
As seen in Table 5, the results of the inductive test of RBM-MO are equally encouraging. Even with highly incomplete training data, RBM-MO produces the best predictions over partially observed test instances. In the extreme situation where 80% of both features and labels are masked, the performances of all methods are relatively low. Although the algorithms cannot work on a standalone basis in this situation, their output can guide human analysts to effectively narrow down hypotheses about the correlation between event profiles and threat types. Even in this difficult learning scenario, RBM-MO still achieves the highest precision, which indicates its applicability for helping human analysts identify useful heuristic rules for detecting malicious events.
6 Conclusion
Machine learning is witnessing a race towards high-complexity models eager for large data and computational power. In the context of multi-output classification in a challenging scenario, namely (i) learning with highly incomplete features and partially observed labels and (ii) applying the learnt classifier to incomplete test instances, we advocate instead for simple, probabilistic and interpretable models. After refining the learning procedure of the RBM, we give empirical evidence that it can be efficiently adapted to this context on a great variety of datasets. Experiments are conducted on both public databases and a real-world IoT security dataset, covering various sizes of training sets and of feature and label vectors. Our approach consistently outperforms state-of-the-art robust multi-class and multi-label learning approaches with imperfect training data, indicating good usability for practical applications.
References
[anonymous2018] anonymous. 2018. Hidden for blind review.
[Bucak, Jin, and Jain2011] Bucak, S. S.; Jin, R.; and Jain, A. K. 2011. Multi-label learning with incomplete class assignments. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2011, 2801–2808.
[Cabral et al.2015] Cabral, R.; De la Torre, F.; Costeira, J. P.; and Bernardino, A. 2015. Matrix completion for weakly-supervised multi-label image classification. IEEE Transactions on Pattern Analysis and Machine Intelligence 37(1):121–135.
[Chiang, Hsieh, and S.Dhillon2015] Chiang, K.-Y.; Hsieh, C.-J.; and Dhillon, I. S. 2015. Matrix completion with noisy side information. In Proceedings of the 28th International Conference on Neural Information Processing Systems, NIPS'15, 3447–3455. Cambridge, MA, USA: MIT Press.
 [Decelle, Fissore, and Furtlehner2018] Decelle, A.; Fissore, G.; and Furtlehner, C. 2018. Thermodynamics of restricted Boltzmann machines and related learning dynamics. Journal of Statistical Physics 172(6):1576–1608.
 [Ghahramani and Jordan1994] Ghahramani, Z., and Jordan, M. I. 1994. Supervised learning from incomplete data via an em approach. In Advances in Neural Information Processing Systems 6, 120–127. Morgan Kaufmann.
[G.Madjarov, Kocev, and Gjorgjevikj2012] Madjarov, G.; Kocev, D.; and Gjorgjevikj, D. 2012. An extensive experimental comparison of methods for multi-label learning. Pattern Recognition 3083–3104.
 [Goodfellow et al.2014] Goodfellow, I.; PougetAbadie, J.; Mirza, M.; Xu, B.; WardeFarley, D.; Ozair, S.; Courville, A.; and Bengio, Y. 2014. Generative adversarial nets. In Ghahramani, Z.; Welling, M.; Cortes, C.; Lawrence, N. D.; and Weinberger, K. Q., eds., Advances in Neural Information Processing Systems 27. Curran Associates, Inc. 2672–2680.
 [Han et al.2018] Han, Y.; Sun, G.; Shen, Y.; and Zhang, X. 2018. Multilabel learning with highly incomplete data via collaborative embedding. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’18, 1494–1503.
 [Hinton2002] Hinton, G. E. 2002. Training products of experts by minimizing contrastive divergence. Neural computation 14:1771–1800.
 [Hinton2010] Hinton, G. 2010. A practical guide to training restricted Boltzmann machines. Momentum.
 [Kingma and Welling2014] Kingma, D. P., and Welling, M. 2014. Autoencoding variational bayes. In International Conference on Learning Representations (ICLR), 4413–4423.
[Li and Ye2015] Li; Lin, Y. W.; and Ye, M. 2015. Experimental comparisons of multi-class classifiers. Informatica.
 [Li, Jiang, and Marlin2019] Li, S. C.X.; Jiang, B.; and Marlin, B. 2019. Learning from incomplete data with generative adversarial networks. In International Conference on Learning Representations.
 [Marylou, Tramel, and Krzakala2015] Marylou, G.; Tramel, E.; and Krzakala, F. 2015. Training restricted Boltzmann machines via the ThoulessAndersonPalmer free energy. In Proceedings of the 28th International Conference on Neural Information Processing Systems, NIPS’15, 640–648.
 [Mattei and Frellsen2019] Mattei, P.A., and Frellsen, J. 2019. MIWAE: Deep generative modelling and imputation of incomplete data sets. In Chaudhuri, K., and Salakhutdinov, R., eds., Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, 4413–4423. Long Beach, California, USA: PMLR.
 [Mézard2017] Mézard, M. 2017. Meanfield messagepassing equations in the Hopfield model and its generalizations. Phys. Rev. E 95:022117.
[Nijman and Kappen1994] Nijman, M. J., and Kappen, H. J. 1994. Using Boltzmann machines to fill in missing values.
[Sun, Y.Zhang, and Zhou2010] Sun, Y.; Zhang, Y.; and Zhou, Z. 2010. Multi-label learning with weak label. In Proceedings of the 24th AAAI Conference on Artificial Intelligence, AAAI-10, 593–598.
[Tieleman2008] Tieleman, T. 2008. Training restricted Boltzmann machines using approximations to the likelihood gradient. In Proceedings of the 25th International Conference on Machine Learning, ICML '08, 1064–1071.
 [Yoon, Jordon, and van der Schaar2018] Yoon, J.; Jordon, J.; and van der Schaar, M. 2018. GAIN: Missing data imputation using generative adversarial nets. In Dy, J., and Krause, A., eds., Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, 5689–5698. Stockholmsmässan, Stockholm Sweden: PMLR.