A century after its inception [1, 2, 3], parameter estimation through maximum likelihood (ML) is still one of the most widely used statistical estimation techniques. In a more rudimentary form, maximum likelihood can even be traced back as far as the 18th century. ML estimation is employed in fields as diverse as genealogy, imaging, genetics, astrophysics, physiology, and quantum communication, as is illustrated by many recent research works such as [5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17]. Moreover, new tools and techniques based on or related to ML are still being developed within modern statistics and related fields. Some recent examples are [18, 19, 20, 21, 22, 23]. A satisfactory approach to ML-based estimation for semi-supervised classifiers, however, has not been developed so far.
In general, the aim of semi-supervised learning is to improve supervised classifiers by exploiting additional, typically easier to obtain, unlabeled data [24, 25]. Up to now, however, the literature has reported mixed results when it comes to such improvements; it is not always the case that semi-supervision leads to lower expected error rates or the like. On the contrary, severely deteriorated performances have been observed in empirical studies and theory shows that improvement guarantees can often only be provided under rather stringent conditions on the data we are dealing with [26, 27, 28, 29, 30].
In this work, we demonstrate when and how ML estimators for classification can be improved in the semi-supervised setting. We show that semi-supervised estimates can be constructed that are essentially closer to the estimates that would be obtained if the labels of all unlabeled data were also available in the training phase. That is, the semi-supervised estimates are closer to the estimates obtained with all labels available than are the supervised estimates, which rely on the same labeled instances as semi-supervision does, but which do not use the additional unlabeled data set. A crucial difference between the theory in this work and theories from, for instance, [26, 27, 28, 29, 30] is that the former can do without strict assumptions on the data or on the relation between the data and the classifier considered. In fact, as we will see, Theorem 2 in Section 4 relies on assumptions that are minimal and can be readily checked on the data at hand. Other results in semi-supervised learning resort to premises that generally cannot be conclusively tested for.
In order to show the potential improvements semi-supervised classifiers can deliver, we introduce a novel, generally applicable estimation principle that extends likelihood estimation to the semi-supervised case in a consistent way. In particular, our method is contrastive, which refers to the fact that the objective function takes the original supervised solution into account in an explicit way. This enables the semi-supervised solution to explicitly control the potential improvements over the supervised solution. In addition, our method is pessimistic, which refers to the fact that the unlabeled data is treated as if it behaves in the worst possible way, i.e., such that the semi-supervised estimates benefit the least from it. This makes the estimates conservative, but resilient to any possible state in which the unlabeled data can be encountered. We refer to this principle as maximum contrastive pessimistic likelihood estimation or MCPL estimation for short.
In Section 3, the main theory is introduced, contrast and pessimism are further elucidated, and our core, general estimation principle, MCPL, is presented. In that same section, we also sketch the possibility of improved semi-supervised estimation by means of MCPL. Sections 4 and 5
provide a worked-out illustration and a further specification of our theory. The former section introduces the MCPL-based version of LDA, proves in what way the semi-supervised LDA parameters are expected to really improve over the regular supervised ones, and sketches the heuristic employed to tackle the related optimization problem. The latter section, Section 5, provides extensive results on a range of data sets, comparing regular supervised LDA and an earlier proposed semi-supervised approach to LDA with the novel semi-supervised LDA introduced here. Section 6 puts the results in a somewhat broader perspective and raises some open issues. Finally, Section 7 concludes. To begin with, however, we put our work in context, provide some preliminaries, introduce ML estimation and LDA, give an overview of the principal related works, and discuss related earlier findings.
2 Background and Preliminaries
The log-likelihood objective function for a -class supervised classification problem takes on the general form
where class contains a total of samples, is the total number of samples,
is the set of all labeled training pairs with
-dimensional feature vectors (as is also common in many mathematical statistics and analysis textbooks, plain italic lowercase letters may indicate vectors and not only scalars), and
are their corresponding labels. Denoted with is the th sample from class . Here, every model parameter—specific to a particular class or not—is absorbed in . The set contains all parameter settings possible, thus defining the full class of models under consideration. Now, the supervised ML estimate, , maximizes the above criterion:
What follows is an overview of the main approaches to semi-supervised learning, with a particular focus on likelihood-based methods. Specific attention will furthermore be given to semi-supervised approaches to LDA. For broader and more extensive literature reviews, we refer to the existing surveys.
2.1 Self-Learning and Expectation Maximization
With the current work, we in essence revisit a problem in ML estimation that has already been considered as early as the late 1960s. In 1968, Hartley and Rao sketched a general way of exploiting unlabeled data
in likelihood estimation of model parameters for the analysis of variance. The basic idea is to consider all possible labelings that the unlabeled data could have and choose that labeling that achieves the largest log-likelihood. As such, this procedure still relies on ML estimation, but where the fully supervised model would merely optimize the log-likelihood of the parameters of the model, here the unobserved labels
of the unlabeled data in are considered parameters over which the likelihood is maximized as well:
Clearly, as the number of possible labelings grows exponentially with the number of unlabeled data points, even for fairly small sample sizes this procedure is generally intractable.
A learning strategy that is often referred to as self-learning or self-teaching approaches the problem in a similar though greedy way. In its simplest form, the classifier of choice is trained on the available labeled data in an initial step. Using this trained classifier, all unlabeled data, or part of it, are assigned a label. Then, in a next step, this now labeled data is added to the training set and the classifier is retrained with this enlarged set. Given the newly trained classifier, one can relabel the initially unlabeled data and retrain the classifier again with these updated labels. This process is iterated until convergence, i.e., until the labeling of the initially unlabeled data remains unchanged.
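The iteration just described can be sketched as follows; `fit` and `predict` are placeholders for any classifier's training and prediction routines, and the nearest-mean classifier at the bottom is purely illustrative:

```python
import numpy as np

def self_learn(fit, predict, X_lab, y_lab, X_unl, max_iter=100):
    """Simplest form of self-learning: train on the labeled data, label
    the unlabeled data, retrain on the union, and iterate until the
    imputed labels no longer change."""
    model = fit(X_lab, y_lab)
    y_unl = predict(model, X_unl)
    for _ in range(max_iter):
        model = fit(np.vstack([X_lab, X_unl]),
                    np.concatenate([y_lab, y_unl]))
        y_new = predict(model, X_unl)
        if np.array_equal(y_new, y_unl):   # labeling unchanged: converged
            break
        y_unl = y_new
    return model, y_unl

# Illustrative two-class nearest-mean classifier:
def nm_fit(X, y):
    return np.array([X[y == k].mean(axis=0) for k in (0, 1)])

def nm_predict(means, X):
    d = np.linalg.norm(X[:, None, :] - means[None], axis=2)
    return d.argmin(axis=1)
```

On well-separated data this loop typically converges in a handful of iterations; as discussed below, however, convergence says nothing about whether the imputed labels actually help.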
McLachlan, in 1975, was probably the first to apply this procedure and indeed suggested it as a computationally more tractable alternative to the one above. Similar procedures have been reintroduced throughout the last couple of decades (see, for instance, [35, 36, 37]). Outside of the literature on likelihood estimation, a procedure reminiscent of McLachlan’s had already been proposed. In 1966, while dealing with an issue slightly different from semi-supervised learning, Nagy and Shelton proposed a general technique similar to self-learning. One of the crucial differences is that the labeled data is only used to train the initial classifier; it does not play a role in any of the subsequent self-learning iterations. This procedure, too, has been reconsidered many years after it was initially suggested.
Possibly the best known semi-supervised likelihood-based approach treats the absence of labels as a classical missing-data problem and integrates out these nuisance parameters to come to a new, full model likelihood [39, 40, 41]
Its maximization over
typically relies on the classical technique of expectation maximization (EM), in which the estimates are not updated on the basis of hard labels, but rather using posterior probabilities, which can equivalently be thought of as soft labels or assignments. In 1973, two works were possibly the first to consider this specific problem explicitly, though such a formulation had already been employed in applied work in 1972. A more modern overview of EM approaches to partial classification is also available.
At first glance, self-learning and EM may seem different ways of tackling the semi-supervised classification problem, but there are clear parallels. Indeed, where EM provides soft class assignments to all unlabeled data, self-learning assigns every such instance in a hard way to one unique class in every iteration. In fact, it has effectively been shown that self-learners optimize the same objective as EM does, and similar observations have been made elsewhere.
The major problem with the aforementioned methods is that they can suffer from severely deteriorated performance with increasing numbers of unlabeled samples. This behavior, already extensively studied [48, 49, 31, 50], is often caused by model misspecification, i.e., the statistical class of models is not able to properly fit the actual data distribution. We note that this is in contrast with the supervised setting, where most classifiers are capable of handling mismatched data assumptions rather well and adding more labeled data typically improves performance. The latter is in line with the behavior many misspecified likelihood models display.
2.2 Density-Ratio Correction
A rather different approach to semi-supervised estimation for likelihood-based models treats the problem of semi-supervised learning basically as one of learning under covariate shift. Covariate shift is the setting in which the posterior distribution of the labels given the data remains the same, while the marginal distribution might change when going from the training to the testing phase. The main idea is that the marginal distribution over the feature space can be better estimated based on all data, both labeled and unlabeled. Subsequently, the density ratio between this estimate and the marginal estimate based on labeled data only can be exploited to weight the training data by means of their importance.
In their work, the authors prove that, asymptotically, this semi-supervised learning procedure works better than its regular, supervised counterpart. Apart from the fact that the results hold only asymptotically, the behavior of this semi-supervised learner seems to depend strongly on the way the density ratio is determined. In the finite sample setting, one may run into similar kinds of problems as those sketched in the previous subsection: choosing an incorrect model for estimating the density ratio of the marginal feature distributions could lead to deteriorated performance instead of performance improvements. Experimental results seem to reflect this.
2.3 Intrinsically Constrained Estimation
In recent years, the author proposed an essentially different take on semi-supervised learning [55, 56]. On a conceptual level, the idea is that the available unlabeled data indirectly restricts the set of possible parameters, i.e., it basically allows us to look at a parameter set that is smaller than the initial one. A first operationalization of this idea has been studied for the simple nearest mean classifier (NMC). It exploits constraints that are known to hold for this classifier, defining relationships between the class-specific parameters and certain statistics that are independent of the specific labeling. In particular, for the NMC the following constraint can be exploited:
with the estimated overall sample mean of the data, the sample means of the classes, and the estimates of the class priors. In the supervised setting this constraint is automatically fulfilled. Its benefit only becomes apparent, therefore, with the arrival of unlabeled data that can be used to improve the label-independent estimate of the overall mean. Using this more accurate estimate results in a violation of the constraint. Fixing the violation by properly adjusting the class-specific estimates makes these label-dependent estimates more accurate as well.
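On a fully labeled sample, this constraint holds exactly by construction: the overall sample mean equals the prior-weighted combination of the class means. A quick numerical check, with purely illustrative data and names of our own:

```python
import numpy as np

# Arbitrary labeled two-class sample:
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = np.repeat([0, 1], 100)

# Overall sample mean versus the prior-weighted class means:
m_total = X.mean(axis=0)
recomposed = sum((y == k).mean() * X[y == k].mean(axis=0) for k in (0, 1))
assert np.allclose(m_total, recomposed)   # Equation (4) on labeled data
```

The identity fails only once the overall mean is re-estimated from labeled and unlabeled data together, which is precisely the violation the constrained NMC exploits.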
Supervised LDA can be improved in a similar way. The same constraint in Equation (4) holds, but for LDA additional ones involving the class-conditional covariance matrix apply. Notably, we have that the covariance matrix of all the data, the total covariance , equals the sum of the covariance between the class means, the between-class covariance , and the class-conditional covariance matrix (which is also referred to as the within-class covariance) :
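This decomposition (total covariance = between-class covariance + within-class covariance, with all covariances in their ML form, i.e., normalized by the total sample size) is an exact identity on any labeled sample. A numerical check with our own, illustrative variable names:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))
y = np.repeat([0, 1, 2], 100)
m = X.mean(axis=0)

T = (X - m).T @ (X - m) / len(X)               # total covariance
B = np.zeros_like(T)                           # between-class covariance
W = np.zeros_like(T)                           # within-class covariance
for k in np.unique(y):
    Xk = X[y == k]
    pk, mk = len(Xk) / len(X), Xk.mean(axis=0)
    B += pk * np.outer(mk - m, mk - m)
    W += pk * (Xk - mk).T @ (Xk - mk) / len(Xk)
assert np.allclose(T, B + W)                   # Equation (5)
```

As with the mean constraint, the total covariance can be estimated without labels, which is what makes this identity exploitable in the semi-supervised setting.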
The aforementioned works enforce the constraints imposed in a rather ad hoc way. A somewhat more principled constrained likelihood approach is suggested in [58, 59]. Generally, given any constraint that the parameters of the semi-supervised classifier should comply with, the idea is to maximize the original likelihood from Equation (1)—as in Equation (2), but subject to the constraint, i.e., we solve
Reference  shows, for instance, how to formulate the constrained NMC in this way. A major shortcoming of this approach is that such constraints must have been identified in the first place. For this reason, its applicability to other classifiers is currently limited.
A second and more recent instantiation of our general idea coined in  does allow for broader applicability [60, 61]. The optimization suggests to find those parameters that maximize the likelihood on the labeled data set , but only allows solutions that can be achieved with a data set that includes labeled versions of the initially unlabeled instances as well. In terms of a likelihood formulation, what it suggests to solve is the following:
The first important ingredient is the set , which is the labeled data set augmented with the unlabeled data combined with the labels in . So
is a fully labeled data set for all . The second important ingredient is the set , which typically is a proper subset of the original parameter set . This set contains all possible classifier parameters that are obtained by training classifiers on all of the possible fully labeled data sets . As we need to consider all possible labelings for the unlabeled data, this brings us back to Hartley and Rao’s intractable method . In  and , this problem is overcome by introducing the possibility of fractional or soft labels, resulting in a well-behaved quadratic programming problem for the case of the least squares classifier.
Putting our earlier work further in the appropriate context, we should finally mention  and , where likelihood-based semi-supervised learning guided by particular constraints is considered as well. The crucial difference is that the constraints proposed in these works are typically derived from domain knowledge and very task specific. If these a priori constraints are correct, a learner can obviously benefit from them, even in the supervised case. If they are incorrect they may lead to severely deteriorated performance. So where these constraints are classifier-extrinsically motivated, any other method in this subsection relies on intrinsically motivated constraints, which are fixed as soon as the data is available and the choice of classifier is made.
2.4 Supervised and Semi-Supervised LDA
As our worked-out example in Sections 4 and 5 concerns LDA, this subsection turns to its associated likelihood and the specific semi-supervised solutions that have been proposed for this classical technique.
where the parameters are the class priors, the class means, and the class-conditional covariance matrix. The density on the last line denotes the normal (or Gaussian) probability density function. Of course, to find the supervised solution, we solve the maximization already noted in Equation (2), which leads to the well-known ML estimates of the parameters of regular supervised LDA.
Semi-supervised LDA has been considered in both theoretical and methodological work. The main example in Hartley and Rao’s work treats univariate LDA in the semi-supervised setting. McLachlan also focuses on LDA. Following these contributions, other early studies of the use of unlabeled data in LDA can be found in [65, 40, 41]. Self-learned and intrinsically constrained versions of LDA have also been compared.
Let us finally remark that various contributions from a large number of disciplines still employ classical, supervised LDA as their decision rule of choice. A handful of recent examples from the applied and natural sciences can be found in some of the earlier-mentioned references: [5, 6, 7, 8, 9]. Semi-supervised versions of LDA, however, have not been widely applied. The general shortcoming mentioned in Subsection 2.1, the fact that self-learned and EM versions can give sharply inferior performance, probably contributes to this.
3 Contrastive Pessimistic ML
For none of the aforementioned semi-supervised learning schemes and classifiers are there currently any generally applicable guarantees when it comes to performance improvements, unless one makes strong assumptions about the data. The learning strategy that we devise in this section does allow for such a guarantee on the training set in a strict way, as we will show in Section 4. The main, general theory is provided in the current section.
Consider the fully labeled data set
It is similar to considered in Subsection 2.3, but we now assume that contains the true labels belonging to the feature vectors in . Define
which gives the classifier’s parameter estimates on the full training set in which also the unlabeled data is labeled. With respect to this enlarged training set , the estimate is optimal by construction and cannot be improved upon. As the supervised parameters in are estimated merely on a subset of , we have
In the semi-supervised setting, both and are at our disposal, but has not been observed. We have more information than in the supervised setting, but less than in the optimal, fully labeled case. The principal result obtained in this section is that, for likelihood-based classifiers, semi-supervised parameter estimates obtained by means of MCPL are essentially in between the corresponding supervised and the optimal estimates:
In itself, this result might not seem all too helpful as we can easily come up with a semi-supervised parameter estimate for which these inequalities are trivially fulfilled: take to equal . However, we first want to clarify that the inequality holds generally for MCPL before we proceed and make the claim that strict improvements by means of MCPL over regular supervised estimation can be expected. That is, we argue, at least for particular classifiers, that
i.e., the log-likelihood on the fully labeled set obtained by the semi-supervised estimates is strictly larger than that obtained under supervision. For LDA, this is proven in Section 4.
3.1 Contrast and Pessimism
To be able to construct a semi-supervised learner that improves upon its supervised counterpart, we take the supervised estimate into account explicitly and consider the difference in loss incurred by and .
Before doing so, however, we first introduce some notation. We define to be the hypothetical posterior of observing a particular label given the feature vector . We may interpret the as soft labels for every and will also refer to them as such. This respects the fact that classes may be overlapping and not every can be assigned unambiguously to a single class. By definition, . More precisely, we can state that the -dimensional vector is an element of the -simplex in :
Provided that these posteriors are given, we can express the log-likelihood on the complete data set for any as
in which the dependence on the s is explicitly indicated also on the left-hand side by means of the variable . Note that use of these soft labels in allows more flexibility than just using a set of hard labels , such as was for instance done in Equations (3) and (6).
For a given , the relative improvement of any semi-supervised estimate over the supervised solution can now be expressed as follows:
This contrasts the semi-supervised solution with the regular supervised solution obtained on the data set , enabling us to explicitly check to what extent semi-supervised improvements are possible in terms of log-likelihood. As we are dealing with a semi-supervised problem, is unknown and we cannot use Equation (9) directly for optimization. The choice we make now is the most pessimistic one: we are going to assume that the true (soft) labeling is most adverse against any semi-supervised approach and consider the that minimizes the gain in likelihood. That is, our objective function becomes
where the minimization is over the Cartesian product of simplices, one for each unlabeled instance.
3.2 MCPL Estimation
We are now ready to define MCPL estimation, which extends general likelihood estimation for supervised learners to the general semi-supervised case.
Definition 1 (MCPL).
Let be the supervised ML estimate maximizing and let be a set of unlabeled data. A maximum contrastive pessimistic likelihood estimate, , is an estimate that maximizes the criterion in Equation (10), i.e.,
Maximizing the objective function for leads to a rather conservative estimate, because of the pessimistic choice of . But we need this choice, in combination with the contrastive nature of the objective function, to be able to guarantee that the following holds.
To see that the lemma indeed holds, consider Equation (11). Because we can take , 0 is always among the minimizers in this equation. As a consequence, the maximum will never be smaller than 0:
Looking at Equation (9), this means that the difference between the semi-supervised and the supervised log-likelihood is at least 0, and as this holds even for the worst choice of soft labeling, it must also hold for the true hard labeling. From this, the first inequality in Equation (12) follows, which shows the lemma to hold.
3.3 Prospects of Improved Estimates
If we can show for a classifier that we can expect the inequalities in Lemma 1 to be strict, then we can conclude that the semi-supervised parameter estimates are essentially better than those obtained under supervision. When can we expect this to happen? There are at least two different ways.
Firstly, a semi-supervised classifier can be better if the true underlying soft labeling is less adversarial than the worst case that is considered in MCPL estimation. Even though we cannot give any general quantitative statement on how often this happens, we can imagine that it is quite likely. Secondly, we can expect improvements in case the set of feature vectors of the labeled instances is a poor representation of the complete set of labeled and unlabeled data. It is clear that nothing can be gained in the other extreme, where the unlabeled feature vectors are just exact copies of the labeled ones. In that case, MCPL estimation would simply recover the supervised estimate. In the next section, we use such a poor-representation argument to show that semi-supervised LDA typically outperforms its supervised counterpart.
4 MCPL Version of LDA
Combining MCPL estimation as defined in Subsection 3.2 with the log-likelihood formulation of regular supervised LDA from Equation (7) leads to our proposal of a proper semi-supervised version of LDA. Following the previous section, we have
Here and in what follows, the subscripted LDA makes explicit that we are specifically considering this classifier. Subsection 4.3 briefly presents the heuristic we used to carry out the necessary maximinimization to actually obtain . But first, in the next two subsections, we demonstrate that we can expect improved semi-supervised estimation.
As the set of normal densities makes up an exponential family, it can be reparameterized into a so-called canonical parametrization such that it is concave in its parameters [67, 68]. Denote this reparametrization by . For fixed , is also concave. Now, by definition of the MCPL estimate
From this, it is not difficult to see that for fixed , is concave in and for fixed , is linear in . So is in fact concave-convex on . In addition, is compact and so we can invoke the important minimax corollary by Sion that allows us to interchange the maximization and minimization, which in turn means that the solution to the above maximinimization is a saddle point. Moreover, the estimate is unique if is strictly concave in . This is ensured if is positive definite. From Equation (14) in Subsection 4.2, it follows that this holds, for instance, if is positive definite. Equivalently, we will assume the supervised estimation problem to be well-posed.
For normal distributions, both the standard parametrization and the canonical parametrization are complete parameterizations. We have: , where returns the upper triangular part of the square matrix . As we consider well-posed estimation problems, is invertible and so the mapping between the two parameterizations is a bijection. So coming back from the canonical parametrization to our original parameters, we see that the maximinimization also leads to a unique solution. This will be important in what follows.
4.2 Semi-Supervised Improvements
We consider , which is Equation (9) with the particular choice of the likelihood from Equation (7). Leaving fixed, we saw that there is a unique maximizer for . Fixing , the supervised part of the contrastive likelihood does not play an essential role in the objective function. It merely provides an offset, and the maximizer of is equal to the maximizer of . Now, the latter is a weighted version of standard LDA—the weights are provided by —and it is not difficult to show that, for every class , the optimal ML parameter estimates are given by
while the estimate of the average class-conditional covariance matrix becomes
Note that the total data mean equals
which is independent of the soft labels . We now additionally note that also for weighted LDA, for any choice of , the constraint in Equation (4) holds. The MCPL solution will have corresponding pessimistic soft labels and therefore satisfies the constraint as well: .
Now, if semi-supervised learning does not improve over the supervised estimate, should equal the initial supervised solution , because the estimate is unique (see Subsection 4.1). This, in turn, implies that we also have . But as the supervised solution is trained on only, it should simultaneously fulfil the constraint in Equation (4) with the total data mean equal to
i.e., the sample average of . We therefore have:
If the feature vectors of our classification problem come from a continuous distribution then, unless is empty, the probability that equals is zero. This, in turn, implies that we can expect to be different from and, therefore, improve upon it. With this, we have proven our first main result concerning semi-supervised LDA.
If the supervised estimation problem is well-posed, , and if the feature vectors are continuously distributed, the strict inequality
holds almost surely.
We should note that if the feature distribution is discrete, the inequality holds with a probability smaller than one. Nonetheless, when either the number of discrete elements of the distribution, the number of labeled points, or the number of unlabeled feature vectors is large, the probability that the inequality is strict typically gets close to one. We dare to conjecture that Theorem 1 will be accurate for many practical purposes, even in the discrete case.
What we can say in the discrete case is that the probability that does not equal is nonzero and, therefore, we at least have strict improvement in expectation.
If the supervised estimation problem is well-posed and , we have
where the expectation is taken over .
Hence, LDA parameter estimation by means of MCPL is, on average, always better than classical supervised log-likelihood estimation.
4.3 Solving the Maximinimization
As was already discussed in Subsection 4.1, the objective function, as provided by Equation (9), is linear in the soft labeling and strictly concave in the parameters. As a result, we know that we are looking for a saddle point solution with a unique optimizer for the parameters. Moreover, we know there are no other local saddle point solutions for this maximinimization problem. The basis of our heuristic to come to an MCPL estimate for the parameters of semi-supervised LDA is the following two steps, between which the optimization alternates.
Given LDA parameters , the gradient for is calculated, and is changed to , with the step size. The following should be noted:
is not guaranteed to be in , so we project back into this set in every iteration;
the objective function is linear in , so the gradient is easily obtained:
we want to minimize for , so we change its value in the direction opposite of the gradient, i.e., with .
In our experiments in Section 5, the step size is decreased as one over the number of iterations. Furthermore, we limit the maximum number of iterations to 1000. In addition, if the maximin objective does not change more than a small tolerance in one iteration, the optimization is halted. With these settings, in our experiments, the maximum number of iterations is seldom reached (in fewer than one in every thousand cases).
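The projection back onto the feasible set mentioned above can be implemented with the standard sort-based Euclidean projection onto the probability simplex, applied row-wise to the soft-label matrix. A sketch (the function name is ours):

```python
import numpy as np

def project_simplex(V):
    """Euclidean projection of every row of V onto the probability
    simplex, via the classical sort-and-threshold algorithm."""
    U = np.sort(V, axis=1)[:, ::-1]               # rows sorted descending
    css = np.cumsum(U, axis=1) - 1.0
    idx = np.arange(1, V.shape[1] + 1)
    cond = U - css / idx > 0
    rho = cond.cumsum(axis=1).argmax(axis=1)      # last index where cond holds
    theta = css[np.arange(len(V)), rho] / (rho + 1)
    return np.maximum(V - theta[:, None], 0.0)
```

After each gradient step on the soft labels, every row is mapped back to the nearest point of its simplex; rows that already lie on the simplex are left untouched.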
Finally, we remark that care should be taken when calculating the necessary log-likelihoods or any of the related quantities. For example, the logarithm of the determinant of the average class covariance matrix can, especially for moderate- and high-dimensional problems, easily result in numerical infinities. Fairly reliable results can, in this instance, be obtained by determining the singular values of the covariance matrix through an SVD and taking the sum of the logarithms of these values.
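In code, the SVD route amounts to the following; for a symmetric positive definite covariance matrix the singular values coincide with the eigenvalues, so their log-sum is exactly the log-determinant. NumPy's `slogdet` offers essentially the same safeguard:

```python
import numpy as np

def stable_logdet(S):
    """Log-determinant via singular values; avoids the overflow or
    underflow that det() itself suffers in higher dimensions."""
    s = np.linalg.svd(S, compute_uv=False)
    return np.sum(np.log(s))

# Sanity check on a small, well-conditioned SPD matrix:
rng = np.random.default_rng(2)
A = rng.normal(size=(5, 5))
S = A @ A.T + np.eye(5)
assert np.isclose(stable_logdet(S), np.log(np.linalg.det(S)))
```

For larger dimensions the direct `det` call over- or underflows while the singular-value route remains finite, which is precisely the situation described above.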
5 Experiments and Results with LDA
Having presented the specific theory for semi-supervised LDA and a heuristic approach to find its MCPL parameters in Section 4, there are four main issues we want to investigate experimentally. To start with, the theory states that semi-supervised LDA estimates are better on the training data at hand given the log-likelihood as the performance measure. This raises two questions: firstly, how do these estimates compare to the supervised estimates on new and previously unseen test data? And secondly, how do they perform and compare in terms of the 0-1 loss, i.e., the classification error? Concerning the second point, we remark that the relation between likelihood and error rate is not necessarily monotonic and a higher likelihood does not necessarily lead to a lower error. It is only in recent years that considerable effort has been spent on understanding the nontrivial relationship between the criterion a classifier optimizes (here the likelihood) and how that classifier performs in terms of any other criterion of interest (here the error rate); refer, for instance, to [73, 74, 75, 76, 77, 78]. Thirdly, we measure the log-likelihood for the various parameter estimates also on the training set. This gives us a basic check on the performance of our optimization heuristic: we should find that the semi-supervised solutions never deteriorate the supervised solution and typically even improve upon it. The final, fourth point is to compare our theoretically underpinned method to the earlier semi-supervised LDA technique, which enforced the constraints in Equations (4) and (5) in an ad hoc way. It puts our novel method in a broader perspective, as the earlier method has been studied extensively already. Among others, this constrained LDA has been shown to perform much better than self-learning or EM approaches to LDA and to be competitive with the transductive SVM and even entropy regularized logistic regression, especially in the small sample setting.
5.1 Data Sets and Preprocessing
|full data set name||abbreviated|
|climate model simulation||climate|
|first-order theorem proving||first-order|
|gas sensor array drift||gas|
|low resolution spectrometer||low|
|magic gamma telescope||magic|
|optical recognition of handwritten digits||optical|
|pen-based recognition of handwritten digits||pen-based|
We chose 16 data sets from the UCI Machine Learning Repository to perform our experiments on. The full names can be found in Table 1. The same table contains abbreviated names that we use to refer to these sets in other tables and throughout the text.
A main criterion for choosing these particular data sets was their size. We wanted to be able to easily generate labeled and unlabeled training sets from them, plus independent test sets, and we wanted especially the last two sets to have a fair size. In addition, we wanted to limit the computational burden and therefore did not choose very high-dimensional sets. Moreover, in order to rid ourselves of potential problems with singular class-conditional covariance matrices (which would leave the supervised estimation problem ill-posed), or numerical challenges related to this, the complete data sets were preprocessed in the following way. In a first step, the variance of every individual feature was normalized to one. A feature was removed altogether if its variance was numerically zero. In a second step, PCA was applied to the full sets and of the variance was retained in order to remove linearly dependent features. We note that reducing the dimensionality essentially changes the likelihood of a data set, but that any nonsingular linear transformation merely offsets the log-likelihood attained by LDA.
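The two preprocessing steps can be sketched as follows; since the exact retained-variance fraction is not specified above, the value 0.99 below is a placeholder assumption, as are the function and parameter names:

```python
import numpy as np

def preprocess(X, retain=0.99):
    """Sketch of the two-step preprocessing: unit-variance scaling, then
    PCA keeping a given fraction of total variance (0.99 is a placeholder,
    not the value used in the experiments)."""
    # Step 1: normalize each feature to unit variance; drop features whose
    # variance is numerically zero.
    std = X.std(axis=0)
    keep = std > 1e-12
    X = X[:, keep] / std[keep]
    # Step 2: PCA on the full set, retaining components until the desired
    # fraction of variance is reached (removes linearly dependent features).
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = s**2 / (s**2).sum()
    k = int(np.searchsorted(np.cumsum(var), retain)) + 1
    return Xc @ Vt[:k].T
```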
Table 2 provides various statistics for the 16 data sets. It also indicates, in the last column, which 6 of the 16 data sets consist purely of discrete feature values. The fourth-to-last through second-to-last columns in the table give the different sizes of the labeled (), unlabeled (), and test sets we used in every run of our experiments. We do not expect much gain from employing unlabeled data if the number of labeled points is large. We therefore kept the labeled set small, choosing a size of twice the dimensionality plus once the number of classes: . We also took care that every class has at least one labeled instance in the training set. The remainder of the data was then randomly divided into two roughly equally sized sets that make up the unlabeled and test sets, respectively.
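This sampling scheme can be sketched as follows; the function name and the use of NumPy's random generator are our own choices:

```python
import numpy as np

def split_data(X, y, rng):
    """Draw a labeled set of size 2*d + K, guaranteeing at least one
    instance per class, then halve the remainder into unlabeled and
    test index sets."""
    n, d = X.shape
    classes = np.unique(y)
    n_lab = 2 * d + len(classes)
    # One guaranteed labeled instance per class first.
    lab = [rng.choice(np.flatnonzero(y == c)) for c in classes]
    rest = np.setdiff1d(np.arange(n), lab)
    rng.shuffle(rest)
    lab = np.concatenate([lab, rest[: n_lab - len(classes)]])
    rest = rest[n_lab - len(classes):]
    half = len(rest) // 2
    return lab, rest[:half], rest[half:]  # labeled, unlabeled, test
```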
Table 2 (column headers): |data set (abbr.)||#objects||dim.||PCA/||largest||(%)||smallest||(%)||#test||discr.|
5.2 Performance Criteria and Results
Table 3 (log-likelihoods): |data set||estimated on test||estimated on full train||% test wins||% trn. wins|
Table 4 (error rates): |data set||estimated on test||estimated on full trn.||% test wins||% trn. wins|
Table 5 (constrained LDA): |data set||test||trn.||test||trn.||win test lik.||win trn. lik.||win test err.||win trn. err.|
With the labeled, unlabeled, and test sets as described above, we determined , , and . In addition, we calculated , which are the parameters of the constrained LDA estimated by means of the more ad hoc procedure in . For , we of course had to use the true labels belonging to the unlabeled data. The parameters in can be estimated in closed form. For details, we refer to the original work in .
For every data set the experiments were repeated 1000 times. Using the estimates , , and , we calculated the following twelve criteria based on the log-likelihood for Table 3: the three average log-likelihoods (denoted , , and ) on the independent test data; the same three average log-likelihoods on the labeled plus unlabeled data, i.e., the training data ; the percentage of times that the log-likelihood of the semi-supervised learner is strictly larger than that of the supervised learner (, read: semi-supervised over supervised); the percentage of times that the log-likelihood of the optimal classifier is strictly larger than the semi-supervised one (this number, denoted , as well as the previously defined , is calculated both on the test and the training set); and, finally, the relative improvement of the semi-supervised approach over the supervised approach in comparison with the optimal estimates, expressed by . Again, this is done both on the test and the training set. The same quantities are also calculated for the corresponding error rates , , and (see Table 4), with the only difference that we check whether numbers are strictly smaller, instead of larger, to determine and . Finally, Table 5 contains averaged log-likelihoods and error rates , both on training and test sets, for the more ad hoc semi-supervised approach. Similar to Tables 3 and 4, the last four columns contain comparisons to the corresponding log-likelihoods and classification errors of the supervised and our novel semi-supervised approach.
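To make the bookkeeping concrete, the win percentages and the relative-improvement criterion can be computed as sketched below; the exact form of the relative improvement, (mean semi − mean sup) / (mean opt − mean sup), is our reading of the text, and the symbol names are ours:

```python
import numpy as np

def compare_runs(L_sup, L_semi, L_opt):
    """Summarize per-run log-likelihoods over repeated experiments.

    The relative-improvement ratio below, (semi - sup) / (opt - sup),
    is one natural reading of the criterion described in the text.
    """
    L_sup, L_semi, L_opt = map(np.asarray, (L_sup, L_semi, L_opt))
    return {
        # % of runs in which semi-supervised strictly beats supervised.
        "semi_over_sup_pct": 100.0 * np.mean(L_semi > L_sup),
        # % of runs in which the optimal estimate strictly beats semi.
        "opt_over_semi_pct": 100.0 * np.mean(L_opt > L_semi),
        "rel_improvement": (L_semi.mean() - L_sup.mean())
                           / (L_opt.mean() - L_sup.mean()),
    }
```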
A permutation test on all different paired results , both for the four log-likelihoods , , , and and the four errors , , , and , showed that in almost all cases we cannot retain the hypothesis that their averages are the same (at ). There are a few exceptions though. For the test error rates and on spectf, we cannot reject the null hypothesis of equal expectations (at ). On optical and qsar there is no statistically significant difference between and for the test log-likelihoods (at and , respectively). Finally, and are, both in training and testing, not significantly different on shuttle (at and ) and spambase (at and ), while and are not significantly different on skin (at and ). For easy reference, the related performance numbers are underlined in the respective result tables.
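For paired results such as these, a sign-flip permutation test is the standard construction; the sketch below is a generic illustration (names, number of permutations, and two-sidedness are our assumptions), not necessarily the exact procedure used for the tables:

```python
import numpy as np

def paired_permutation_test(a, b, n_perm=10000, seed=0):
    """Two-sided sign-flip permutation test for equal means of paired
    samples: randomly flip the sign of each paired difference and compare
    the resulting mean against the observed one."""
    rng = np.random.default_rng(seed)
    d = np.asarray(a, float) - np.asarray(b, float)
    obs = abs(d.mean())
    signs = rng.choice([-1.0, 1.0], size=(n_perm, d.size))
    perm = np.abs((signs * d).mean(axis=1))
    # Add-one smoothing so the p-value is never exactly zero.
    return (1 + np.sum(perm >= obs)) / (n_perm + 1)
```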
6.1 Guarantees on the Training Set
The results in Table 3 show that, on the training set, MCPL-based semi-supervised LDA is in between the regular supervised and the optimal estimate. That this happens to be the case in a strict sense, in all experiments we carried out, can be most readily deduced from the values under and on the training set. These numbers equal in all cases. This, in turn, indicates that in all of the 16,000 experiments we ran, the strict inequality from Theorem 1 was satisfied. Even for the discrete data sets this holds true, which was to be expected, given the number of different discrete vectors these data sets take on. Spectf has the smallest number, 267, implying that every feature vector in spectf is unique. With 267 distinct values, chances are indeed very small that the means from Equation (15) and (16) coincide.
6.2 Likelihood Behavior on the Test Set
The aforementioned guarantees hold on the training set, which includes the unlabeled samples in , but of course we are interested in the performance on independent test data as well. We are unaware of any theoretical results for the log-likelihood that provide a precise connection between performance on the training set and the test set, though we do expect that with more training data the likelihood of the supervised model on the test set becomes better in expectation. We need to consider such improvement in expectation, simply because, for a single instantiation of a classification problem, we might be unlucky in our draw of the training or test set. In contrast with the situation in the training phase, we can therefore only expect improvements on average. Comparing the test log-likelihood in Table 3 for the supervised method with the one for the semi-supervised approach, we see the same as on the training data: for every data set, is smaller than . Also, if we look at , we see that there are only two cases out of 16,000 in which the supervised estimate was better: we find a percentage of instead of on miniboone.
The story is different, however, if we compare the semi-supervised and the optimal estimates. First of all, indicates that, on the independent test set, the semi-supervised estimate is better than the optimal one in about 5% of the cases. In itself, this does not have to be at odds with what we expect for the likelihood, as it concerns the number of wins or losses and not the average log-likelihood. Our results on gas, optical, and qsar, however, indicate that also when it comes to the expected log-likelihood, may outperform . Only the result on gas is statistically significant though. Moreover, the differences are relatively small in any case, as the second-to-last column in Table 3 also illustrates, where we find values basically equal to 1 for these sets.
Regarding the log-likelihood, we generally note the following. Overall, the relative improvements, as provided in the last two columns of Table 3, are considerable, sometimes even enormous. None of them is lower than 0.9 and many are virtually 1. This shows that the semi-supervised log-likelihood is, relative to the supervised value, very close to the optimal estimate. The immense improvements are probably explained by the fact that the averaged class-conditional covariance matrix is estimated much more stably in the case of semi-supervision. The supervised estimate relies on samples, while the semi-supervised estimate, as can readily be seen from Equation (14), is based on all in the training set. In our experiments, is considerably larger than . The latter is only slightly larger than twice the dimensionality, resulting in unstable covariance estimates. Clearly, the extreme difference in behavior for the various estimates will disappear with increasing numbers of labeled data.
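The instability argument is easy to reproduce in isolation. The following toy experiment (dimensions and sample sizes are illustrative, not the paper's) compares the conditioning of a sample covariance estimated from roughly 2d samples with one estimated from far more samples:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 20  # illustrative dimensionality

def cond_of_sample_cov(n):
    """Condition number of the sample covariance of n draws from N(0, I);
    for the identity population covariance the ideal value is 1."""
    X = rng.standard_normal((n, d))
    s = np.linalg.svd(np.cov(X, rowvar=False), compute_uv=False)
    return s.max() / s.min()

small_n = cond_of_sample_cov(2 * d + 2)  # roughly the labeled-set size
large_n = cond_of_sample_cov(100 * d)    # as when unlabeled data is added
# small_n is far larger than large_n: the few-sample estimate is unstable.
```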
6.3 Error Rates
Unlike the log-likelihood, the 0-1 loss is bounded and the differences and relative improvements stated in Table 4 are not that large. In almost all cases, is smaller than , and is smaller than in turn. On the test set, the maximum relative improvement reported is 0.426 on optical, with a close second of 0.415 on shuttle.
There are three settings, however, in which no improvements of semi-supervised over supervised learning are attained: the first is on the training set for low and the other two are in the training and test phase for spectf. In all cases, is better than . So we have the possibly somewhat counterintuitive behavior that the estimates improve in terms of the expected log-likelihood, but that the expected error rate still deteriorates. Similar phenomena for other classifiers have been described in [74, 75], where simple artificial examples are provided of how such behavior can be realized. It is a glimpse of the difficult interrelationship that two different performance criteria can display [73, 76, 77, 78], which we already alluded to in Section 5. We checked the learning curves for low and spectf and they simply showed the regular behavior: with increasing labeled sample sizes, the expected error rate of the supervised classifier decreases.
Finally, we remark that the increase in error rate going from the training to the test set is smaller for the semi-supervised classifier than for the supervised one. This indicates that the semi-supervised classifier overfits the training set less than supervised LDA does.
6.4 Comparison to Constrained LDA
Looking at Table 5, we see that the ad hoc approach can also work well. Especially when looking at the likelihood and comparing it to the supervised estimates, we see that, both on the training and the test set, the estimated likelihood is often better than the one obtained by the regular supervised parameters. The reason the constrained approach is often so much better than the supervised approach is probably similar to the explanation in Subsection 6.2 of why the new approach comes so close to the optimal log-likelihoods: the averaged class-conditional covariance matrix is estimated much more stably in the case of semi-supervision. The estimated covariance matrix might still not be very good, but at least it is substantially better than the volatile and not so well-conditioned supervised estimate. Nonetheless, the novel approach clearly outperforms the more ad hoc technique in most of the cases where the likelihood is concerned. In fact, compared to the constrained approach, MCPL provides the best average test log-likelihood on all data sets. The only expected log-likelihood that is worse during training is the one for spectf.
Looking at the error rate, we see that the ad hoc procedure performs very badly on optical and shuttle (the reason for this remains as yet unclear). Still, leads to the best test error rate on seven data sets. On the other nine data sets, turns out to be preferred.
6.5 MCPL for Other Classifiers
MCPL is proposed as a general estimation principle, which delivers semi-supervised estimates that are at least as good as the regular supervised parameter estimates for any log-likelihood based classifier. To come to results such as Theorems 1 and 2, additional knowledge about the class-conditional distributions is needed. Because they are very similar to LDA and the same kind of mean constraints hold, classifiers for which it is almost immediate that strict or expected improvements can be obtained through semi-supervision are the nearest mean classifier (NMC), quadratic discriminant analysis (QDA), and all kinds of kernelized or flexibilized versions of NMC, LDA, and QDA . We speculate that many classifiers constructed on the basis of exponential families [67, 68] also allow for theorems making equivalent statements. These include, for instance, the Bernoulli, multinomial, and exponential densities.
Another interesting group of classifiers to study in the context of MCPL is that in which every class may consist of a mixture model. As the analysis of mixture models is in itself already rather difficult (for one, the likelihood function is not concave), such classifiers may be outside the reach of any helpful theoretical analysis. We do, however, expect to benefit, if only from the regularizing effect our semi-supervised approach has, similar to the situation mentioned at the end of Subsection 6.2. What does still seem a problem is to find an appropriate solution to the optimization that needs to be carried out in order to find an MCPL estimate. It seems worthwhile, though, to try to get to the nearest saddle point that can be found by means of a combined gradient ascent (in ) and descent (in ).
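Such combined ascent-descent can be illustrated on a toy minimax problem; everything below (the objective, step size, and iteration count) is illustrative and not the MCPL optimization itself:

```python
def ascent_descent(grad_t, grad_q, t, q, lr=0.1, steps=200):
    """Simultaneous gradient ascent in t and descent in q, aiming at a
    saddle point of max_t min_q f(t, q). Toy illustration only."""
    for _ in range(steps):
        t, q = t + lr * grad_t(t, q), q - lr * grad_q(t, q)
    return t, q

# The toy objective f(t, q) = -t**2 + q**2 has its saddle point at (0, 0):
# ascent in t and descent in q both contract toward it.
t_star, q_star = ascent_descent(lambda t, q: -2 * t,
                                lambda t, q: 2 * q, t=1.0, q=-1.0)
```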
Finally, we could try to extend our work to classifiers that do not rely on likelihood models. One possible path may be through , which presents a decision-theoretic interpretation of maximum entropy and considers generalized concepts of entropy that relate to a much broader class of loss functions than merely the (negative) log-likelihood. Though the link with this work is certainly not one-to-one, it may be possible to interpret our contrastive loss as a form of relative entropy and to make use of the results in .
We presented a well-founded approach to likelihood-based semi-supervised learning. Our principle of maximum contrastive pessimistic likelihood (MCPL) estimation is generally applicable to supervised classifiers whose parameters are estimated by means of a maximization of the likelihood. Moreover, under certain concavity assumptions, improvements of the semi-supervised estimates can be expected and, in particular cases, even be guaranteed. A worked-out illustration based on classical LDA demonstrates the significant improvements that can be obtained by our novel approach.
Marleen de Bruijne (Erasmus MC and KU) is wholeheartedly acknowledged for scrutinizing an initial version of this article beginning to end. Jesse H. Krijthe (LUMC and TU Delft) and David M. J. Tax (TU Delft) are kindly thanked for their proofreading of parts of the text. Joris Mooij (UvA) is acknowledged for inviting me to give a talk that, eventually, triggered insights into a simplification and generalization of the theory. Are C. Jensen (UiO) is warmly thanked for all the semi-supervised inspiration he provided me with. Thanks also to Mads Nielsen (KU) who gave me some great opportunities throughout the past decade. Finally, I would like to thank the anonymous reviewers for their critical appraisal. This work has benefitted from all the input received.
-  Ronald A. Fisher. An absolute criterion for fitting frequency curves. Messenger of Mathematics, 41:155–160, 1912.
-  Ronald A. Fisher. On the mathematical foundations of theoretical statistics. Philosophical Transactions of the Royal Society of London. Series A, Containing Papers of a Mathematical or Physical Character, 222:309–368, 1922.
-  Ronald A. Fisher. Theory of statistical estimation. In Mathematical Proceedings of the Cambridge Philosophical Society, volume 22, pages 700–725. Cambridge University Press, 1925.
-  Stephen M. Stigler. The epic story of maximum likelihood. Statistical Science, 22(4):598–620, 2007.
-  Markus Ackermann, M. Ajello, A. Allafort, L. Baldini, J. Ballet, G. Barbiellini, et al. Detection of the characteristic pion-decay signature in supernova remnants. Science, 339(6121):807–811, 2013.
-  Jenny Allen, Mason Weinrich, Will Hoppitt, and Luke Rendell. Network-based diffusion analysis reveals cultural transmission of lobtail feeding in humpback whales. Science, 340(6131):485–488, 2013.
-  Hoi Sung Chung and William A. Eaton. Single-molecule fluorescence probes dynamics of barrier crossing. Nature, 2013.
-  Bingni W. Brunton, Matthew M. Botvinick, and Carlos D. Brody. Rats and humans can optimally accumulate evidence for decision-making. Science, 340(6128):95–98, 2013.
-  Dana C. Price, Cheong Xin Chan, Hwan Su Yoon, Eun Chan Yang, Huan Qiu, et al. Cyanophora paradoxa genome elucidates origin of photosynthesis in algae and plants. Science, 335(6070):843–847, 2012.
-  Hu Cang, Anna Labno, Changgui Lu, Xiaobo Yin, Ming Liu, Christopher Gladden, Yongmin Liu, and Xiang Zhang. Probing the electromagnetic field of a 15-nanometre hotspot by single molecule imaging. Nature, 469(7330):385–388, 2011.
-  Angélique D’Hont, France Denoeud, Jean-Marc Aury, Franc-Christophe Baurens, Françoise Carreel, et al. The banana (Musa acuminata) genome and the evolution of monocotyledonous plants. Nature, 488(7410):213–217, 2012.
-  Yuannian Jiao, Norman J Wickett, Saravanaraj Ayyampalayam, André S Chanderbali, Lena Landherr, et al. Ancestral polyploidy in seed plants and angiosperms. Nature, 473(7345):97–100, 2011.
-  Lauri Nummenmaa, Enrico Glerean, Riitta Hari, and Jari K Hietanen. Bodily maps of emotions. Proceedings of the National Academy of Sciences, 111(2):646–651, 2014.
-  E. Saglamyurek, N. Sinclair, J. Jin, J. A. Slater, D. Oblak, F. Bussières, M. George, R. Ricken, W. Sohler, and W. Tittel. Broadband waveguide quantum memory for entangled photons. Nature, 469(7331):512, 2011.
-  Koichiro Tamura, Daniel Peterson, Nicholas Peterson, Glen Stecher, Masatoshi Nei, and Sudhir Kumar. Mega5: molecular evolutionary genetics analysis using maximum likelihood, evolutionary distance, and maximum parsimony methods. Molecular Biology and Evolution, 28(10):2731–2739, 2011.
-  J. Wang. An improvement on the maximum likelihood reconstruction of pedigrees from marker data. Heredity, 2013.
-  Ziheng Yang and Bruce Rannala. Molecular phylogenetics: principles and practice. Nature Reviews Genetics, 13(5):303–314, 2012.
-  Jacob Bien and Robert J. Tibshirani. Sparse estimation of a covariance matrix. Biometrika, 98(4):807–820, 2011.
-  Madeleine Cule, Richard Samworth, and Michael Stewart. Maximum likelihood estimation of a multi-dimensional log-concave density. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 72(5):545–607, 2010.
-  Yeojin Chung, Sophia Rabe-Hesketh, Vincent Dorie, Andrew Gelman, and Jingchen Liu. A nondegenerate penalized likelihood estimator for variance parameters in multilevel models. Psychometrika, pages 1–25, 2013.
-  Ted A. Laurence and Brett A. Chromy. Efficient maximum likelihood estimator fitting of histograms. Nature Methods, 7(5):338–339, 2010.
-  Jason D. Lee and Trevor J. Hastie. Learning mixed graphical models. arXiv preprint arXiv:1205.5012, 2012.
-  N. Simon and R. J. Tibshirani. Discriminant analysis with adaptively pooled covariance. arXiv preprint arXiv:1111.1687, 2011.
-  O. Chapelle, B. Schölkopf, and A. Zien. Semi-Supervised Learning. MIT Press, Cambridge, MA, 2006.
-  X. Zhu and A. B. Goldberg. Introduction to Semi-Supervised Learning. Morgan & Claypool Publishers, 2009.
-  Maria-Florina Balcan and Avrim Blum. A discriminative model for semi-supervised learning. Journal of the ACM, 57(3):19, 2010.
-  V. Castelli and T. M. Cover. On the exponential value of labeled samples. Pattern Recognition Letters, 16(1):105–111, 1995.
-  S. Ben-David, T. Lu, and D. Pál. Does unlabeled data provably help? worst-case analysis of the sample complexity of semi-supervised learning. In Proceedings of COLT 2008, pages 33–44, 2008.
-  J. Lafferty and L. Wasserman. Statistical analysis of semi-supervised regression. In Advances in Neural Information Processing Systems, volume 20, pages 801–808, 2007.
-  A. Singh, R. Nowak, and X. Zhu. Unlabeled data: Now it helps, now it doesn’t. In Advances in Neural Information Processing Systems, volume 21, 2008.
-  M. Loog. Semi-supervised linear discriminant analysis through moment-constraint parameter estimation. Pattern Recognition Letters, 37(1):24–31, 2014.
-  X. Zhu. Semi-supervised learning literature survey. Computer Sciences TR 1530, University of Wisconsin, 2008.
-  H. O. Hartley and J. N. K. Rao. Classification and estimation in analysis of variance problems. Review of the International Statistical Institute, 36(2):141–147, 1968.
-  G. J. McLachlan. Iterative reclassification procedure for constructing an asymptotically optimal rule of allocation in discriminant analysis. Journal of the American Statistical Association, 70(350):365–369, 1975.
-  S. Basu, A. Banerjee, and R. Mooney. Semi-supervised clustering by seeding. In Proceedings of the Nineteenth International Conference on Machine Learning, pages 19–26, 2002.
-  J. N. Vittaut, M. R. Amini, and P. Gallinari. Learning classification with both labeled and unlabeled data. In Machine Learning: ECML 2002, pages 69–78, 2002.
-  D. Yarowsky. Unsupervised word sense disambiguation rivaling supervised methods. In Proceedings of the 33rd annual meeting on Association for Computational Linguistics, pages 189–196, 1995.
-  G. Nagy and G.L. Shelton. Self-corrective character recognition system. IEEE Transactions on Information Theory, 12(2):215–222, 1966.
-  K. Nigam, A. McCallum, S. Thrun, and T. Mitchell. Learning to classify text from labeled and unlabeled documents. In Proceedings of the Fifteenth National Conference on Artificial Intelligence, pages 792–799, 1998.
-  T. J. O’Neill. Normal discrimination with unclassified observations. Journal of the American Statistical Association, pages 821–826, 1978.
-  D. M. Titterington. Updating a diagnostic system using unconfirmed cases. Journal of the Royal Statistical Society. Series C (Applied Statistics), 25(3):238–247, 1976.
-  N. P. Dick and D. C. Bowden. Maximum-likelihood estimation for mixtures of two normal distributions. Biometrics, 29:781–791, 1973.
-  D. W. Hosmer Jr. A comparison of iterative maximum likelihood estimates of the parameters of a mixture of two normal distributions under three different types of sample. Biometrics, pages 761–770, 1973.
-  W. Y. Tan and W. C. Chang. Convolution approach to genetic analysis of quantitative characters of self-fertilized population. Biometrics, 28:1073–1090, 1972.
-  G. J. McLachlan. Discriminant analysis and statistical pattern recognition. John Wiley & Sons, 1992.
-  S. Abney. Understanding the Yarowsky algorithm. Computational Linguistics, 30(3):365–395, 2004.
-  G. Haffari and A. Sarkar. Analysis of semi-supervised learning with the Yarowsky algorithm. In Proceedings of the 23rd Conference on Uncertainty in Artificial Intelligence, 2007.
-  I. Cohen, F. G. Cozman, N. Sebe, M. C. Cirelo, and T. S. Huang. Semisupervised learning of classifiers: Theory, algorithms, and their application to human-computer interaction. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 1553–1567, 2004.
-  F. Cozman and I. Cohen. Risks of semi-supervised learning. In Semi-Supervised Learning, chapter 4. MIT Press, 2006.
-  Ting Yang and Carey E Priebe. The effect of model misspecification on semi-supervised classification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(10):2093–2103, 2011.
-  Halbert White. Maximum likelihood estimation of misspecified models. Econometrica, 50(1):1–25, 1982.
-  Masanori Kawakita and Jun'ichi Takeuchi. Safe semi-supervised learning based on weighted likelihood. Neural Networks, 2014.
-  Hidetoshi Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of statistical planning and inference, 90(2):227–244, 2000.
-  Nataliya Sokolovska, Olivier Cappé, and François Yvon. The asymptotics of semi-supervised learning in discriminative probabilistic models. In Proceedings of the 25th International Conference on Machine learning, pages 984–991. ACM, 2008.
-  M. Loog. Constrained parameter estimation for semi-supervised learning: the case of the nearest mean classifier. In Machine Learning and Knowledge Discovery in Databases (ECML PKDD 2010), volume 6322 of LNAI, pages 291–304. Springer, 2010.
-  M. Loog. Semi-supervised linear discriminant analysis using moment constraints. In Partially Supervised Learning (PSL 2011), volume 7081 of LNAI, pages 32–41. Springer, 2012.
-  K. Fukunaga. Introduction to Statistical Pattern Recognition. Academic Press, 1990.
-  M. Loog and A. C. Jensen. Constrained log-likelihood-based semi-supervised linear discriminant analysis. In Structural, Syntactic, and Statistical Pattern Recognition, volume 7626 of LNCS, pages 327–335. Springer, 2012.
-  M. Loog and A. C. Jensen. Semi-supervised nearest mean classification through a constrained log-likelihood. IEEE Transactions on Neural networks and Learning Systems, accepted, 2014.
-  J. H. Krijthe and M. Loog. Implicitly constrained semi-supervised least squares classification. submitted November 2013, available through http://www.jessekrijthe.com/papers/krijthe2013.pdf, 2013.
-  J. H. Krijthe and M. Loog. Implicitly constrained semi-supervised linear discriminant analysis. In Proceedings of the 22nd International Conference on Pattern Recognition, volume 22, pages —, Stockholm, Sweden, accepted, 2014.
-  Ming-Wei Chang, Lev Ratinov, and Dan Roth. Guiding semi-supervision with constraint-driven learning. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 280–287, Prague, Czech Republic, 2007.
-  G.S. Mann and A. McCallum. Generalized expectation criteria for semi-supervised learning with weakly labeled data. The Journal of Machine Learning Research, 11:955–984, 2010.
-  Brian D. Ripley. Pattern recognition and neural networks. Cambridge University Press, 1996.
-  G. J. McLachlan. Estimating the linear discriminant function from initial samples containing a small number of unclassified observations. Journal of the American Statistical Association, 72(358):403–406, 1977.
-  G. J. McLachlan and S. Ganesalingam. Updating a discriminant function on the basis of unclassified data. Communications in Statistics - Simulation and Computation, 11(6):753–767, 1982.
-  Lawrence D. Brown. Fundamentals of Statistical Exponential Families, volume 9 of Lecture Notes–Monograph Series. Institute of Mathematical Statistics, 1986.
-  P. J. Bickel and K. A. Doksum. Mathematical Statistics, volume 1. Prentice-Hall, Inc., second edition, 2001.
-  M. Sion. On general minimax theorems. Pacific Journal of Mathematics, 8:171–176, 1958.
-  M. Dresher. Games of Strategy. Prentice-Hall Inc., 1961.
-  Sebastian Thrun, Wolfram Burgard, and Dieter Fox. Probabilistic Robotics. MIT Press, 2006.
-  Nelson Maculan and Geraldo Galdino de Paula Jr. A linear-time median-finding algorithm for projecting a vector on the simplex of . Operations Research Letters, 8(4):219–222, 1989.
-  Peter L. Bartlett, Michael I. Jordan, and Jon D. McAuliffe. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 101(473):138–156, 2006.
-  Shai Ben-David, David Loker, Nathan Srebro, and Karthik Sridharan. Minimizing the misclassification error rate using a surrogate convex loss. In Proceedings of the 29th Annual International Conference on Machine Learning, 2012.
-  M. Loog and R. P. W. Duin. The dipping phenomenon. In Structural, Syntactic, and Statistical Pattern Recognition, volume 7626 of LNCS, pages 310–317. Springer, 2012.
-  Mark D. Reid and Robert C. Williamson. Composite binary losses. The Journal of Machine Learning Research, 11:2387–2422, 2010.
-  Mark D. Reid and Robert C. Williamson. Information, divergence and risk for binary experiments. The Journal of Machine Learning Research, 12:731–817, 2011.
-  Tong Zhang. Statistical behavior and consistency of classification methods based on convex risk minimization. Annals of Statistics, pages 56–85, 2004.
-  T. Joachims. Transductive inference for text classification using support vector machines. In Proceedings of the 16th International Conference on Machine Learning, pages 200–209, 1999.
-  Y. Grandvalet and Y. Bengio. Semi-supervised learning by entropy minimization. Advances in Neural Information Processing Systems, 17:529–536, 2004.
-  K. Bache and M. Lichman. UCI Machine Learning Repository, 2013.
-  D. D. Lucas, R. Klein, J. Tannahill, D. Ivanova, S. Brandon, D. Domyancic, and Y. Zhang. Failure analysis of parameter-induced simulation crashes in climate models. Geoscientific Model Development Discussions, 6(1):585–623, 2013.
-  J. P. Bridge, S. B. Holden, and L. C. Paulson. Machine learning for first-order theorem proving: learning to select a good heuristic. submitted, 2013.
-  Alexander Vergara, Shankar Vembu, Tuba Ayhan, Margaret A Ryan, Margie L Homer, and Ramón Huerta. Chemical gas sensor drift compensation using classifier ensembles. Sensors and Actuators B: Chemical, 166:320–329, 2012.
-  Kamel Mansouri, Tine Ringsted, Davide Ballabio, Roberto Todeschini, and Viviana Consonni. Quantitative structure–activity relationship models for ready biodegradability of chemicals. Journal of chemical information and modeling, 53(4):867–878, 2013.
-  R. Bhatt and A. Dhall. Skin segmentation dataset.
-  P. I. Good. Permutation Tests. Springer, 2000.
-  T. Hastie, R. Tibshirani, and J.H. Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer Verlag, 2001.
-  E. L. Lehmann and G. Casella. Theory of point estimation. Springer-Verlag, second edition, 1998.
-  Peter D. Grünwald and A. Philip Dawid. Game theory, maximum entropy, minimum discrepancy and robust Bayesian decision theory. Annals of Statistics, pages 1367–1433, 2004.