1 Introduction
A century after its inception [1, 2, 3], parameter estimation through maximum likelihood (ML) is still one of the most widely used statistical estimation techniques. In a more rudimentary form, maximum likelihood can even be traced back as far as the 18th century [4]. ML estimation is employed in fields as diverse as genealogy, imaging, genetics, astrophysics, physiology, and quantum communication, as is illustrated by many recent research works such as [5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17]. Moreover, new tools and techniques based on or related to ML are still being developed within modern statistics and related fields. Some recent examples are [18, 19, 20, 21, 22, 23]. A satisfactory approach to ML-based estimation for semi-supervised classifiers, however, has not been developed so far.
In general, the aim of semi-supervised learning is to improve supervised classifiers by exploiting additional, typically easier to obtain, unlabeled data [24, 25]. Up to now, however, the literature has reported mixed results when it comes to such improvements; it is not always the case that semi-supervision leads to lower expected error rates or the like. On the contrary, severely deteriorated performances have been observed in empirical studies, and theory shows that improvement guarantees can often only be provided under rather stringent conditions on the data we are dealing with [26, 27, 28, 29, 30].
In this work, we demonstrate when and how ML estimators for classification can be improved in the semi-supervised setting. We show that semi-supervised estimates can be constructed that are essentially closer to the estimates that would be obtained if the labels of all unlabeled data were also available in the training phase. That is, the semi-supervised estimates are closer to the estimates obtained with all labels available than the supervised estimates that rely on the same labeled instances as semi-supervision does, but that do not use the additional unlabeled data set. A crucial difference between the theory in this work and theories from, for instance, [26, 27, 28, 29, 30] is that the former can do without strict assumptions on the data or the relation between the data and the classifier considered. In fact, as we will see, Theorem 2 in Section 4 relies on assumptions that are minimal and can be readily checked on the data at hand. Other results in semi-supervised learning resort to premises that generally cannot be conclusively tested for.
In order to show the potential improvements semi-supervised classifiers can deliver, we introduce a novel, generally applicable estimation principle that extends likelihood estimation to the semi-supervised case in a consistent way. In particular, our method is contrastive, which refers to the fact that the objective function takes into account the original supervised solution in an explicit way. This enables the semi-supervised solution to explicitly control the potential improvements over the supervised solution. In addition, our method is pessimistic, which refers to the fact that the unlabeled data is treated as if it behaves in the worst possible way, i.e., such that the semi-supervised estimates benefit the least from it. It makes the estimates conservative, but resilient to any possible state in which the unlabeled data can be encountered. We refer to this principle as maximum contrastive pessimistic likelihood estimation, or MCPL estimation for short.
1.1 Outline
In Section 3, the main theory is introduced, contrast and pessimism are further elucidated, and our core, general estimation principle, MCPL, is presented. In that same section, we also sketch the possibility of improved semi-supervised estimation by means of MCPL. Sections 4 and 5 provide a worked-out illustration and a further specification of our theory. The former section introduces the MCPL-based version of LDA, proves in what way the semi-supervised LDA parameters are expected to really improve over the regular supervised ones, and sketches the heuristic employed to tackle the related optimization problem. The latter section, Section 5, provides extensive results on a range of data sets, comparing regular supervised LDA and an earlier proposed semi-supervised approach to LDA [31] with the novel semi-supervised LDA introduced here. Section 6 puts the results in a somewhat broader perspective and raises some open issues. Finally, Section 7 concludes. To begin with, however, we put our work in context, provide some preliminaries, introduce ML estimation and LDA, give an overview of the principal related works, and discuss related earlier findings.

2 Background and Preliminaries
The log-likelihood objective function for a $K$-class supervised classification problem takes on the general form

$$L(\theta \,|\, D) = \sum_{k=1}^{K} \sum_{i=1}^{N_k} \log p(x_{ki}, k \,|\, \theta), \qquad (1)$$

where class $k$ contains a total of $N_k$ samples, $N = \sum_{k=1}^{K} N_k$ is the total number of samples, $D = \{(x_i, y_i)\}_{i=1}^{N}$ is the set of all labeled training pairs with $d$-dimensional feature vectors $x_i \in \mathbb{R}^d$,¹ and the $y_i \in \{1, \ldots, K\}$ are their corresponding labels. Denoted with $x_{ki}$ is the $i$th sample from class $k$. Here, every model parameter—specific to a particular class or not—is absorbed in $\theta$. The set $\Theta$ contains all parameter settings possible, thus defining the full class of models under consideration. Now, the supervised ML estimate, $\hat{\theta}_{\text{sup}}$, maximizes the above criterion:

$$\hat{\theta}_{\text{sup}} = \operatorname*{arg\,max}_{\theta \in \Theta} L(\theta \,|\, D). \qquad (2)$$

¹As is also common in many mathematical statistics and analysis textbooks, plain italic lowercase letters may indicate vectors and not only scalars.
What follows is an overview of the main approaches to semi-supervised learning with a particular focus on likelihood-based methods. Specific attention will furthermore be given to semi-supervised approaches to LDA. For broader and more extensive literature reviews, we refer to [24] and [32].
2.1 Self-Learning and Expectation Maximization
With the current work, we in essence revisit a problem in ML estimation that had already been considered as early as the late 1960s. In 1968, Hartley and Rao sketched a general way of exploiting unlabeled data in likelihood estimation of model parameters for the analysis of variance [33]. The basic idea is to consider all possible labelings that the unlabeled data could have and choose the labeling that achieves the largest log-likelihood. As such, this procedure still relies on ML estimation, but where the fully supervised model would merely optimize the log-likelihood over the parameters of the model, here the unobserved labels $y$ of the unlabeled data in $U = \{u_i\}_{i=1}^{M}$ are considered parameters over which the likelihood is maximized as well:

$$(\hat{\theta}, \hat{y}) = \operatorname*{arg\,max}_{\theta \in \Theta,\; y \in \{1,\ldots,K\}^M} L(\theta \,|\, D \cup U_y), \qquad (3)$$

where $U_y$ denotes the unlabeled set $U$ labeled according to $y$. Clearly, as the number of possible labelings grows exponentially with the number of unlabeled data points, even for fairly small sample sizes this procedure is generally intractable.
A learning strategy that is often referred to as self-learning or self-teaching approaches the problem in a similar though greedy way. In its simplest form, the classifier of choice is trained on the available labeled data in an initial step. Using this trained classifier, all unlabeled data, or part of it, are assigned a label. Then, in a next step, this now labeled data is added to the training set and the classifier is retrained with this enlarged set. Given the newly trained classifier, one can relabel the initially unlabeled data and retrain the classifier again with these updated labels. This process is iterated until convergence, i.e., until the labeling of the initially unlabeled data remains unchanged.
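The iterative procedure described above can be sketched generically (a minimal sketch, not the exact routine of any cited work; `fit` and `predict` stand for an arbitrary classifier's training and prediction functions and are assumptions of ours):

```python
import numpy as np

def self_learn(fit, predict, X_lab, y_lab, X_unl, max_iter=100):
    """Generic self-learning loop: train on labeled data, pseudo-label the
    unlabeled data, retrain on the enlarged set, and iterate until the
    pseudo-labels no longer change."""
    model = fit(X_lab, y_lab)                      # initial supervised step
    y_unl = predict(model, X_unl)                  # label the unlabeled data
    for _ in range(max_iter):
        # retrain on the labeled data plus the current pseudo-labels
        model = fit(np.vstack([X_lab, X_unl]),
                    np.concatenate([y_lab, y_unl]))
        y_new = predict(model, X_unl)              # relabel
        if np.array_equal(y_new, y_unl):           # labeling unchanged: stop
            break
        y_unl = y_new
    return model, y_unl
```

Any classifier with a train/predict interface can be plugged in; with a nearest mean classifier this reduces to a hard-label analogue of the EM iterations discussed next.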
McLachlan [34], in 1975, was probably the first to apply this procedure and indeed suggested it as a computationally more tractable alternative to the one in [33]. Similar procedures have been reintroduced throughout the last couple of decades (see, for instance, [35, 36, 37]). Outside of the literature on likelihood estimation, a procedure reminiscent of McLachlan's had already been proposed. In 1966, while dealing with an issue slightly different from semi-supervised learning, Nagy and Shelton proposed a general technique similar to self-learning [38]. One of the crucial differences is that the labeled data is only used to train the initial classifier. It does not play a role in any of the subsequent self-learning iterations. Also this procedure has been reconsidered many years after it was initially suggested, e.g., in [35].

Possibly the best-known semi-supervised likelihood-based approach treats the absence of labels as a classical missing-data problem and integrates out these nuisance parameters to come to a new, full model likelihood [39, 40, 41]. Its maximization over $\theta$ typically relies on the classical technique of expectation maximization (EM), in which the estimates are not updated on the basis of hard labels, but rather using posterior probabilities, which can equivalently be thought of as soft labels or assignments. In 1973, [42] and [43] were possibly the first to consider this specific problem explicitly, though [44] had already employed such a formulation in its applied work in 1972. A more modern overview of EM approaches to partial classification can be found in [45].

At first glance, self-learning and EM may seem different ways of tackling the semi-supervised classification problem, but there are clear parallels. Indeed, where EM provides soft class assignments to all unlabeled data, self-learning just assigns every such instance in a hard way to one unique class in every iteration. In fact, [35] effectively shows that self-learners optimize the same objective as EM does. Similar observations have been made in [46] and [47].
The major problem with the aforementioned methods is that they can suffer from severely deteriorated performance with increasing numbers of unlabeled samples. This behavior, already studied extensively [48, 49, 31, 50], is often caused by model misspecification, i.e., the statistical class of models parameterized by $\Theta$ is not able to properly fit the actual data distribution. We note that this is in contrast with the supervised setting, where most classifiers are capable of handling mismatched data assumptions rather well and adding more labeled data typically improves performance. The latter is in line with the behavior many misspecified likelihood models display [51].
2.2 Density-Ratio Correction
A rather different approach to semi-supervised estimation for likelihood-based models is offered in [52], in which the problem of semi-supervised learning is basically treated as one of learning under covariate shift [53]. Covariate shift is the setting in which the posterior distribution of the labels given the data, $p(y \,|\, x)$, remains the same, while the marginal $p(x)$ might change when going from the training to the testing phase. Following [54], the main idea in [52] is that the marginal distribution over the feature space can be better estimated based on all data, both labeled and unlabeled. Subsequently, the density ratio between this estimate and the marginal estimate based on labeled data only can be exploited to weight the training data by means of their importance, as generally suggested in [53].
In their work, the authors prove that, asymptotically, this semi-supervised learning procedure works better than its regular, supervised counterpart. Apart from the fact that the results hold only asymptotically, the behavior of this semi-supervised learner seems to depend strongly on the way the density ratio is determined. In the finite-sample setting, one may run into similar kinds of problems as those sketched in the previous subsection: choosing an incorrect model for estimating the density ratio of the marginal feature distributions could lead to deteriorated performance instead of performance improvements. Experimental results in both [52] and [54] seem to reflect this.
2.3 Intrinsically Constrained Estimation
In recent years, the author proposed an essentially different take on semi-supervised learning [55, 56]. On a conceptual level, the idea is that the available unlabeled data indirectly puts restrictions on the parameters possible, i.e., it basically allows us to look at a set that is smaller than the initial set $\Theta$. A first operationalization of this idea has been studied for the simple nearest mean classifier (NMC, [55]). It exploits constraints that are known to hold for this classifier, defining relationships between the class-specific parameters and certain statistics that are independent of the specific labeling. In particular, for the NMC the following constraint can be exploited:

$$\hat{m} = \sum_{k=1}^{K} \hat{\pi}_k \hat{\mu}_k, \qquad (4)$$

with $\hat{m}$ the estimated overall sample mean of the data, $\hat{\mu}_k$ the sample means of the classes, and $\hat{\pi}_k$ the estimates of the class priors. In the supervised setting this constraint is automatically fulfilled [57]. Its benefit only becomes apparent, therefore, with the arrival of unlabeled data that can be used to improve the label-independent estimate $\hat{m}$. Using this more accurate estimate results in a violation of the constraint. Fixing it by properly adjusting the $\hat{\mu}_k$s and $\hat{\pi}_k$s, these label-dependent estimates become more accurate as well.
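That the constraint of Equation (4) holds automatically for fully supervised estimates is a matter of direct computation; a small numeric check (our own illustration, with made-up data) makes this concrete:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))             # 20 labeled points, 3 features
y = np.array([0] * 12 + [1] * 8)         # binary labels

m = X.mean(axis=0)                                        # overall sample mean
pi = np.array([(y == k).mean() for k in (0, 1)])          # class prior estimates
mu = np.array([X[y == k].mean(axis=0) for k in (0, 1)])   # class sample means

# the supervised estimates fulfil m = sum_k pi_k mu_k exactly
assert np.allclose(m, pi @ mu)
```

Replacing `m` with a mean computed over labeled and unlabeled data together breaks this identity, which is exactly the violation that the constrained approach exploits.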
Supervised LDA can be improved in a similar way. The same constraint in Equation (4) holds, but for LDA additional ones involving the class-conditional covariance matrix apply. Notably, we have that the covariance matrix of all the data, the total covariance $\Sigma_T$, equals the sum of the covariance between the class means, the between-class covariance $\Sigma_B$, and the class-conditional covariance matrix $\Sigma_W$ (which is also referred to as the within-class covariance) [57]:

$$\Sigma_T = \Sigma_B + \Sigma_W. \qquad (5)$$

These additional constraints further restrict the possible semi-supervised solutions, allowing for more significant improvements over the regular supervised classifier [56, 31].
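The covariance decomposition in Equation (5) is the law of total covariance and holds exactly for the ML (1/N-normalized) estimates; a short numeric verification (our own illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 2))
y = np.array([0] * 10 + [1] * 20)
m = X.mean(axis=0)

def scatter(Z, c):
    """ML-style (1/N) covariance of the rows of Z around center c."""
    D = Z - c
    return D.T @ D / len(Z)

S_T = scatter(X, m)                                 # total covariance
S_B = sum((y == k).mean()
          * np.outer(X[y == k].mean(0) - m, X[y == k].mean(0) - m)
          for k in (0, 1))                          # between-class covariance
S_W = sum((y == k).mean() * scatter(X[y == k], X[y == k].mean(0))
          for k in (0, 1))                          # within-class covariance

assert np.allclose(S_T, S_B + S_W)                  # Equation (5)
```

The identity fails for the unbiased (1/(N-1)) estimates, which is why the ML normalization is the natural one in this likelihood-based context.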
The aforementioned works enforce the constraints imposed in a rather ad hoc way. A somewhat more principled constrained likelihood approach is suggested in [58, 59]. Generally, given any constraint that the parameters of the semi-supervised classifier should comply with, the idea is to maximize the original likelihood from Equation (1), as in Equation (2), but subject to the constraint, i.e., we solve

$$\operatorname*{arg\,max}_{\theta \in \Theta} L(\theta \,|\, D) \quad \text{subject to the constraint imposed.}$$

Reference [59] shows, for instance, how to formulate the constrained NMC from [55] in this way. A major shortcoming of this approach is that such constraints must have been identified in the first place. For this reason, its applicability to other classifiers is currently limited.
A second and more recent instantiation of our general idea coined in [55] does allow for broader applicability [60, 61]. The optimization suggested finds those parameters that maximize the likelihood on the labeled data set $D$, but only allows solutions that can be achieved with a data set that also includes labeled versions of the initially unlabeled instances. In terms of a likelihood formulation, what it suggests to solve is the following:

$$\hat{\theta} = \operatorname*{arg\,max}_{\theta \in \Theta_U} L(\theta \,|\, D). \qquad (6)$$

The first important ingredient is the set $D \cup U_y$, which is the labeled data set augmented with the unlabeled data combined with the labels in $y$. So $D \cup U_y$ is a fully labeled data set for every choice of $y$. The second important ingredient is the set $\Theta_U$, which typically is a proper subset of the original parameter set $\Theta$. This set contains all possible classifier parameters that are obtained by training classifiers on all of the possible fully labeled data sets $D \cup U_y$. As we need to consider all possible labelings for the unlabeled data, this brings us back to Hartley and Rao's intractable method [33]. In [60] and [61], this problem is overcome by introducing the possibility of fractional or soft labels, resulting in a well-behaved quadratic programming problem for the case of the least squares classifier.
Putting our earlier work further in the appropriate context, we should finally mention [62] and [63], where likelihood-based semi-supervised learning guided by particular constraints is considered as well. The crucial difference is that the constraints proposed in these works are typically derived from domain knowledge and very task specific. If these a priori constraints are correct, a learner can obviously benefit from them, even in the supervised case. If they are incorrect, they may lead to severely deteriorated performance. So where these constraints are classifier-extrinsically motivated, the other methods in this subsection rely on intrinsically motivated constraints, which are fixed as soon as the data is available and the choice of classifier is made.
2.4 Supervised and Semi-Supervised LDA
As our worked-out example in Sections 4 and 5 concerns LDA, this subsection turns to its associated likelihood and the specific semi-supervised solutions that have been proposed for this classical technique.

Compared to Equation (1), the log-likelihood objective function for $K$-class LDA takes on a more specific form. We can write [64]

$$L(\theta \,|\, D) = \sum_{k=1}^{K} \sum_{i=1}^{N_k} \log \bigl( \pi_k \, \mathcal{N}(x_{ki} \,|\, \mu_k, \Sigma) \bigr), \qquad (7)$$

where the $\pi_k$ are the class priors, the $\mu_k$ are the class means, and $\Sigma$ is the class-conditional covariance matrix. The $\mathcal{N}$, on the last line, denotes the normal (or Gaussian) probability density function. Of course, to find the supervised solution, we solve the maximization already noted in Equation (2), which leads to the well-known ML estimates of the parameters of regular supervised LDA.

Semi-supervised LDA has been considered both in theoretical and methodological work. The main example in Hartley and Rao's work [33] treats univariate LDA in the semi-supervised setting. Also McLachlan [34] focuses on LDA. Following these contributions, other early studies of the use of unlabeled data in LDA can be found in [65, 40, 41] and [66]. Self-learned and intrinsically constrained versions of LDA have been compared in [56] and [31].
Let us finally remark that various contributions from a large number of disciplines still employ classical, supervised LDA as their decision rule of choice. A handful of recent examples from the applied and natural sciences can be found in some of the earlier-mentioned references: [5, 6, 7, 8, 9]. Semi-supervised versions of LDA, however, have not been widely applied. The general shortcoming mentioned in Subsection 2.1, the fact that self-learned and EM versions can give sharply inferior performance, probably contributes to this.
3 Contrastive Pessimistic ML
For none of the aforementioned semi-supervised learning schemes and classifiers are there currently any generally applicable guarantees when it comes to performance improvements, unless one makes strong assumptions about the data. The learning strategy that we devise in this section does allow for such a guarantee on the training set in a strict way. This we will show in Section 4. The main, general theory is provided in the current section.
Consider the fully labeled data set $D \cup U_{y^*}$. It is similar to the sets $D \cup U_y$ considered in Subsection 2.3, but we now assume that $y^*$ contains the true labels belonging to the feature vectors in $U$. Define

$$\hat{\theta}_{\text{opt}} = \operatorname*{arg\,max}_{\theta \in \Theta} L(\theta \,|\, D \cup U_{y^*}),$$

which gives the classifier's parameter estimates on the full training set in which also the unlabeled data is labeled. With respect to this enlarged training set $D \cup U_{y^*}$, the estimate $\hat{\theta}_{\text{opt}}$ is optimal by construction and cannot be improved upon. As the supervised parameters in $\hat{\theta}_{\text{sup}}$ are estimated merely on a subset of $D \cup U_{y^*}$, we have

$$L(\hat{\theta}_{\text{sup}} \,|\, D \cup U_{y^*}) \le L(\hat{\theta}_{\text{opt}} \,|\, D \cup U_{y^*}).$$

In the semi-supervised setting, both $D$ and $U$ are at our disposal, but $y^*$ has not been observed. We have more information than in the supervised setting, but less than in the optimal, fully labeled case. The principal result obtained in this section is that, for likelihood-based classifiers, semi-supervised parameter estimates obtained by means of MCPL are essentially in between the corresponding supervised and the optimal estimates:

$$L(\hat{\theta}_{\text{sup}} \,|\, D \cup U_{y^*}) \le L(\hat{\theta}_{\text{semi}} \,|\, D \cup U_{y^*}) \le L(\hat{\theta}_{\text{opt}} \,|\, D \cup U_{y^*}).$$

In itself, this result might not seem all too helpful, as we can easily come up with a semi-supervised parameter estimate for which these inequalities are trivially fulfilled: take $\hat{\theta}_{\text{semi}}$ to equal $\hat{\theta}_{\text{sup}}$. However, we first want to clarify that the inequality holds generally for MCPL before we proceed and make the claim that strict improvements by means of MCPL over regular supervised estimation can be expected. That is, we argue, at least for particular classifiers, that

$$L(\hat{\theta}_{\text{sup}} \,|\, D \cup U_{y^*}) < L(\hat{\theta}_{\text{semi}} \,|\, D \cup U_{y^*}),$$

i.e., the log-likelihood on the fully labeled set obtained by the semi-supervised estimates is strictly larger than that obtained under supervision. For LDA, this is proven in Section 4.
3.1 Contrast and Pessimism
To be able to construct a semi-supervised learner that improves upon its supervised counterpart, we take the supervised estimate $\hat{\theta}_{\text{sup}}$ into account explicitly and consider the difference in loss incurred by $\hat{\theta}_{\text{semi}}$ and $\hat{\theta}_{\text{sup}}$.

Before doing so, however, we first introduce some notation. We define $q_{ik}$ to be the hypothetical posterior of observing a particular label $k$ given the feature vector $u_i$. We may interpret the $q_{ik}$ as soft labels for every $u_i \in U$ and will also refer to them as such. This respects the fact that classes may be overlapping and not every $u_i$ can be assigned unambiguously to a single class. By definition, $\sum_{k=1}^{K} q_{ik} = 1$. More precisely, we can state that the $K$-dimensional vector $q_i = (q_{i1}, \ldots, q_{iK})$ is an element of the simplex $\Delta_{K-1}$ in $\mathbb{R}^K$:

$$\Delta_{K-1} = \Bigl\{ q \in \mathbb{R}^K \;\Big|\; q_k \ge 0, \; \sum_{k=1}^{K} q_k = 1 \Bigr\}.$$

Provided that these posteriors are given, we can express the log-likelihood on the complete data set for any $\theta \in \Theta$ as

$$L(\theta \,|\, D, U, q) = L(\theta \,|\, D) + \sum_{i=1}^{M} \sum_{k=1}^{K} q_{ik} \log p(u_i, k \,|\, \theta), \qquad (8)$$

in which the dependence on the $q_{ik}$s is explicitly indicated also on the left-hand side by means of the variable $q$. Note that the use of these soft labels in $q$ allows more flexibility than just using a set of hard labels $y$, such as was for instance done in Equations (3) and (6).

For a given $q$, the relative improvement of any semi-supervised estimate $\theta$ over the supervised solution $\hat{\theta}_{\text{sup}}$ can now be expressed as follows:

$$CL(\theta, \hat{\theta}_{\text{sup}} \,|\, D, U, q) = L(\theta \,|\, D, U, q) - L(\hat{\theta}_{\text{sup}} \,|\, D, U, q). \qquad (9)$$

This contrasts the semi-supervised solution with the regular supervised solution obtained on the data set $D$, enabling us to explicitly check to what extent semi-supervised improvements are possible in terms of log-likelihood. As we are dealing with a semi-supervised problem, $q$ is unknown and we cannot use Equation (9) directly for optimization. The choice we make now is the most pessimistic one: we are going to assume that the true (soft) labeling is most adverse against any semi-supervised approach and consider the $q$ that minimizes the gain in likelihood. That is, our objective function becomes

$$CPL(\theta, \hat{\theta}_{\text{sup}} \,|\, D, U) = \min_{q \in \Delta_{K-1}^M} CL(\theta, \hat{\theta}_{\text{sup}} \,|\, D, U, q), \qquad (10)$$

where $\Delta_{K-1}^M = \Delta_{K-1} \times \cdots \times \Delta_{K-1}$, the $M$-fold Cartesian product of simplices.
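Because the criterion of Equation (9) is linear in $q$, the inner minimum of Equation (10) is attained at a vertex of each individual simplex, so the pessimistic value can be computed in closed form. A minimal NumPy sketch (all function and argument names are ours; `logp_unl[i, k]` is assumed to hold $\log p(u_i, k \mid \theta)$ for the candidate parameters, and `logp_unl_sup` the same quantity for the supervised fit):

```python
import numpy as np

def soft_loglik(loglik_lab, logp_unl, q):
    """L(theta | D, U, q) of Equation (8): labeled log-likelihood plus the
    q-weighted unlabeled log-likelihoods."""
    return loglik_lab + np.sum(q * logp_unl)

def contrastive(loglik_lab, logp_unl, loglik_lab_sup, logp_unl_sup, q):
    """CL of Equation (9): gain of the candidate over the supervised fit."""
    return (soft_loglik(loglik_lab, logp_unl, q)
            - soft_loglik(loglik_lab_sup, logp_unl_sup, q))

def pessimistic(loglik_lab, logp_unl, loglik_lab_sup, logp_unl_sup):
    """CPL of Equation (10): linear in q, so the minimizing soft labeling
    puts all mass, per unlabeled point, on the least favorable class."""
    gain = logp_unl - logp_unl_sup                # per-point, per-class gain
    q = np.zeros_like(gain)
    q[np.arange(len(gain)), gain.argmin(axis=1)] = 1.0
    return contrastive(loglik_lab, logp_unl, loglik_lab_sup, logp_unl_sup, q)
```

By construction, for any admissible soft labeling the contrastive value is never below the pessimistic one, and at $\theta = \hat\theta_{\text{sup}}$ the pessimistic value is exactly zero.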
3.2 MCPL Estimation
We are now ready to define MCPL estimation, which extends general likelihood estimation for supervised learners to the general semisupervised case.
Definition 1 (Mcpl).
Let be the supervised ML estimate maximizing and let be a set of unlabeled data. A maximum contrastive pessimistic likelihood estimate, , is an estimate that maximizes the criterion in Equation (10), i.e.,
(11) 
Maximizing the objective function for leads to a rather conservative estimate, because of the pessimistic choice of . But we need this choice, in combination with the contrastive nature of the objective function, to be able to guarantee that the following holds.
Lemma 1.
(12) 
To see that the lemma indeed holds, consider Equation (11). Because we can take , 0 is always among the minimizers in this equation. As a consequence, the maximum will never be smaller than 0:
Looking at Equation (9), this means that the difference between the semisupervised and the supervised loglikelihood is larger than 0, but as this holds even for the worst choice of , it must also hold for the true hard labeling considered in . From this, the first inequality follows in Equation (12), which shows the lemma to hold.
3.3 Prospects of Improved Estimates
If we can show for a classifier that we can expect the inequalities in Lemma 1 to be strict, then we can conclude that the semi-supervised parameter estimates are essentially better than those obtained under supervision. When can we expect this to happen? There are at least two different ways.

Firstly, a semi-supervised classifier can be better if the true underlying soft labeling is less adversarial than the worst case that is considered in MCPL estimation. Even though we cannot give any general quantitative statement on how often this happens, we can imagine that this is quite likely. Secondly, we can expect improvements in case the set of feature vectors of the labeled instances is an ill representation of the complete set of labeled and unlabeled data. It is clear that nothing can be gained in the other extreme, where the unlabeled feature vectors are just exact copies of the labeled ones. In that case, MCPL estimation would just recover the supervised estimate. In the next section, we use such an ill-representation argument to show that semi-supervised LDA typically outperforms its supervised counterpart.
4 MCPL Version of LDA
Combining MCPL estimation as defined in Subsection 3.2 with the log-likelihood formulation of regular supervised LDA from Equation (7) leads to our proposal of a proper semi-supervised version of LDA. Following the previous section, we have

$$\hat{\theta}_{\text{mcpl,LDA}} = \operatorname*{arg\,max}_{\theta \in \Theta} \; \min_{q \in \Delta_{K-1}^M} CL_{\text{LDA}}(\theta, \hat{\theta}_{\text{sup}} \,|\, D, U, q).$$

Here and in what follows, the subscripted LDA makes explicit that we are specifically considering this classifier. Subsection 4.3 briefly presents the heuristic we used to carry out the necessary maximinimization to actually obtain $\hat{\theta}_{\text{mcpl,LDA}}$. But first, in the next two subsections, we demonstrate that we can expect improved semi-supervised estimation.
4.1 Preliminaries
As the set of normal densities makes up an exponential family, it can be reparameterized into a so-called canonical parametrization such that the log-likelihood is concave in its parameters [67, 68]. Denote this reparametrization of $\theta$ by $\phi$. For fixed $q$, $CL_{\text{LDA}}(\phi, \hat{\phi}_{\text{sup}} \,|\, D, U, q)$ is then also concave in $\phi$. Now, by definition of the MCPL estimate,

$$\hat{\phi}_{\text{mcpl}} = \operatorname*{arg\,max}_{\phi} \; \min_{q \in \Delta_{K-1}^M} CL_{\text{LDA}}(\phi, \hat{\phi}_{\text{sup}} \,|\, D, U, q).$$

From this, it is not difficult to see that for fixed $q$, the objective is concave in $\phi$ and for fixed $\phi$, the objective is linear in $q$. So $CL_{\text{LDA}}$ is in fact concave-convex on the product of the canonical parameter space and $\Delta_{K-1}^M$. In addition, $\Delta_{K-1}^M$ is compact and so we can invoke the important minimax corollary by Sion [69] that allows us to interchange the maximization and minimization, which in turn means that the solution to the above maximinimization is a saddle point [70]. Moreover, the estimate is unique if the objective is strictly concave in $\phi$ [70]. This is ensured if the covariance estimate $\hat{\Sigma}$ is positive definite. From Equation (14) in Subsection 4.2, it follows that this holds, for instance, if the supervised estimate of $\Sigma$ is positive definite. Equivalently, we will assume the supervised estimation problem to be well-posed.

For normal distributions, both the standard parametrization and the canonical parametrization are complete parameterizations. We have [67]: $\phi = \bigl( \Sigma^{-1}\mu, \operatorname{triu}(\Sigma^{-1}) \bigr)$, where $\operatorname{triu}(A)$ returns the upper triangular part of the square matrix $A$. As we consider well-posed estimation problems, $\hat{\Sigma}$ is invertible and so the mapping between $\theta$ and $\phi$ is a bijection (cf. [71]). So coming back from the canonical parametrization to our original $\theta$, we see that the maximinimization also leads to a unique solution for $\theta$. This will be important in what follows.

4.2 Semi-Supervised Improvements
We consider $CL_{\text{LDA}}(\theta, \hat{\theta}_{\text{sup}} \,|\, D, U, q)$, which is Equation (9) with the particular choice of the likelihood from Equation (7). Leaving $q$ fixed, we saw that there is a unique maximizer for $\theta$. Fixing $q$, the supervised part of the contrastive likelihood does not play an essential role in the objective function. It merely provides an offset, and the maximizer of $CL_{\text{LDA}}(\theta, \hat{\theta}_{\text{sup}} \,|\, D, U, q)$ is equal to the maximizer of $L(\theta \,|\, D, U, q)$. Now, the latter is a weighted version of standard LDA—the weights are provided by $q$—and it is not difficult to show that, for every class $k$, the optimal ML parameter estimates are given by

$$\hat{\pi}_k = \frac{N_k + \sum_{i=1}^{M} q_{ik}}{N + M}, \qquad \hat{\mu}_k = \frac{\sum_{i=1}^{N_k} x_{ki} + \sum_{i=1}^{M} q_{ik} u_i}{N_k + \sum_{i=1}^{M} q_{ik}}, \qquad (13)$$

while the estimate of the average class-conditional covariance matrix becomes

$$\hat{\Sigma} = \frac{1}{N + M} \sum_{k=1}^{K} \Biggl( \sum_{i=1}^{N_k} (x_{ki} - \hat{\mu}_k)(x_{ki} - \hat{\mu}_k)^\top + \sum_{i=1}^{M} q_{ik} (u_i - \hat{\mu}_k)(u_i - \hat{\mu}_k)^\top \Biggr). \qquad (14)$$

Note that the total data mean equals

$$\hat{m} = \frac{1}{N + M} \Biggl( \sum_{i=1}^{N} x_i + \sum_{i=1}^{M} u_i \Biggr), \qquad (15)$$

which is independent of the soft labels $q$. We now additionally note that also for weighted LDA, for any choice of $q$, the constraint in Equation (4) holds. The MCPL solution will have corresponding pessimistic soft labels and therefore satisfies the constraint as well: $\hat{m} = \sum_{k=1}^{K} \hat{\pi}_k \hat{\mu}_k$.
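The weighted estimates of Equations (13) and (14), and the fact that they satisfy the constraint of Equation (4) with the $q$-independent total mean of Equation (15), can be verified numerically (a minimal sketch of ours; the function name and interface are not from the cited works):

```python
import numpy as np

def weighted_lda_estimates(X, y, U, q, K):
    """ML estimates of q-weighted LDA, Equations (13) and (14); q[i, k] are
    the soft labels of the unlabeled points in U."""
    N, M = len(X), len(U)
    pi, mu = np.empty(K), []
    for k in range(K):
        Nk, wk = (y == k).sum(), q[:, k].sum()
        pi[k] = (Nk + wk) / (N + M)                              # Eq. (13), priors
        mu.append((X[y == k].sum(0) + q[:, k] @ U) / (Nk + wk))  # Eq. (13), means
    mu = np.array(mu)
    Sigma = np.zeros((X.shape[1], X.shape[1]))
    for k in range(K):
        D = X[y == k] - mu[k]
        E = U - mu[k]
        Sigma += D.T @ D + (q[:, k] * E.T) @ E                   # Eq. (14) numerator
    return pi, mu, Sigma / (N + M)
```

Because every row of `q` sums to one, the prior-weighted mean $\sum_k \hat\pi_k \hat\mu_k$ always equals the overall mean of the pooled labeled and unlabeled data, whatever `q` is.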
Now, if semi-supervised learning does not improve over the supervised estimate, $\hat{\theta}_{\text{mcpl}}$ should equal the initial supervised solution $\hat{\theta}_{\text{sup}}$, because the estimate is unique (see Subsection 4.1). This, in turn, implies that $\hat{\theta}_{\text{sup}}$ would also fulfil the constraint in Equation (4) with the total data mean $\hat{m}$ from Equation (15). But as the supervised solution is trained on $D$ only, it simultaneously fulfils the constraint in Equation (4) with the total data mean equal to

$$\hat{m}_D = \frac{1}{N} \sum_{i=1}^{N} x_i, \qquad (16)$$

i.e., the sample average of $D$. We would therefore need $\hat{m} = \hat{m}_D$. If the feature vectors of our classification problem come from a continuous distribution then, unless $U$ is empty, the probability that $\hat{m}_D$ equals $\hat{m}$ is zero. This, in turn, implies that we can expect $\hat{\theta}_{\text{mcpl}}$ to be different from $\hat{\theta}_{\text{sup}}$ and, therefore, to improve upon it. With this, we have proven our first main result concerning semi-supervised LDA.
Theorem 1.
If the supervised estimation problem is well-posed, $U \neq \emptyset$, and if the feature vectors are continuously distributed, the strict inequality

$$L(\hat{\theta}_{\text{sup}} \,|\, D \cup U_{y^*}) < L(\hat{\theta}_{\text{mcpl}} \,|\, D \cup U_{y^*})$$

holds almost surely.
We should note that if the feature distribution is discrete, the inequality holds with a probability smaller than one. Nonetheless, when either the number of discrete elements of the distribution, the number of labeled points, or the number of unlabeled feature vectors is large, the probability that the inequality is strict typically gets close to one. We dare to conjecture that Theorem 1 will be accurate for many practical purposes, even in the discrete case.
What we can say in the discrete case is that the probability that $\hat{m}_D$ does not equal $\hat{m}$ is nonzero and, therefore, we at least have strict improvement in expectation.
Theorem 2.
If the supervised estimation problem is well-posed and $U \neq \emptyset$, we have

$$\mathbb{E}\bigl[ L(\hat{\theta}_{\text{sup}} \,|\, D \cup U_{y^*}) \bigr] < \mathbb{E}\bigl[ L(\hat{\theta}_{\text{mcpl}} \,|\, D \cup U_{y^*}) \bigr],$$

where the expectation is taken over the training data $D \cup U_{y^*}$.

Hence, LDA parameter estimation by means of MCPL is, on average, always better than classical supervised log-likelihood estimation.
4.3 Solving the Maximinimization
As was discussed in Subsection 4.1 already, the objective function, as provided by Equation (9), is linear in $q$ and strictly concave in (the canonical parametrization of) $\theta$. As a result, we know that we are looking for a saddle point solution with a unique optimizer for $\theta$. Moreover, we know there are no other local saddle point solutions for this maximinimization problem [70]. The basis of our heuristic to come to an MCPL estimate for the parameters of semi-supervised LDA are the following two steps, between which the optimization alternates.

1. Given LDA parameters $\theta$, the gradient with respect to $q$ is calculated, and $q$ is changed to $q - \alpha \nabla_q CL_{\text{LDA}}$, with $\alpha$ the step size. The following should be noted:
   - $q - \alpha \nabla_q CL_{\text{LDA}}$ is not guaranteed to be in $\Delta_{K-1}^M$, so we project it back into this set in every iteration [72];
   - the objective function is linear in $q$, so the gradient is easily obtained: $\frac{\partial}{\partial q_{ik}} CL_{\text{LDA}} = \log p(u_i, k \,|\, \theta) - \log p(u_i, k \,|\, \hat{\theta}_{\text{sup}})$;
   - we want to minimize for $q$, so we change its value in the direction opposite of the gradient, i.e., with $\alpha > 0$.
2. Given the soft labels $q$, the maximizing LDA parameters $\theta$ are obtained directly through the weighted estimates in Equations (13) and (14).

In our experiments in Section 5, the step size is decreased as one over the number of iterations. Furthermore, we limit the maximum number of iterations to 1000. In addition, if the maximin objective changes less than a fixed tolerance in one iteration, the optimization is halted. With these settings, in our experiments, the maximum number of iterations is seldom reached (in less than one in every thousand cases).
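The $q$-update of step 1 can be sketched as follows (a minimal NumPy sketch of ours, not the original implementation; the row-wise simplex projection uses the standard sorting-based algorithm, and `logp_unl[i, k]` is assumed to hold $\log p(u_i, k \mid \theta)$):

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of each row of v onto the probability simplex
    (the projection step referred to in [72])."""
    n, K = v.shape
    u = -np.sort(-v, axis=1)                         # sort rows descending
    css = np.cumsum(u, axis=1) - 1.0
    idx = np.arange(1, K + 1)
    cond = u - css / idx > 0
    rho = K - 1 - np.argmax(cond[:, ::-1], axis=1)   # last index where cond holds
    theta = css[np.arange(n), rho] / (rho + 1)
    return np.maximum(v - theta[:, None], 0.0)

def pessimistic_q_step(q, logp_unl, logp_unl_sup, alpha):
    """One minimization step in q: the objective is linear in q with gradient
    log p(u_i, k | theta) - log p(u_i, k | theta_sup), so we move against the
    gradient and project back onto the product of simplices."""
    grad = logp_unl - logp_unl_sup
    return project_simplex(q - alpha * grad)
```

Alternating this step with the closed-form weighted LDA update of step 2 yields the heuristic; with a sufficiently large step, mass moves toward the class for which the candidate parameters gain the least over the supervised fit.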
Finally, we remark that care should be taken when calculating the necessary log-likelihoods or any of the related quantities. For example, the logarithm of the determinant of the average class covariance matrix can, especially for moderate- and high-dimensional problems, easily result in numerical infinities. Fairly reliable results can, in this instance, be obtained by determining the singular values of the covariance matrix through an SVD and taking the sum of the logarithms of these values.
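A NumPy illustration of the issue and the SVD-based remedy (our own example; `numpy.linalg.slogdet` is an equally valid alternative):

```python
import numpy as np

d = 300
A = np.diag(np.full(d, 1e-3))        # well-conditioned covariance matrix

# det(A) = (1e-3)^300 underflows to 0.0, so the naive log-determinant is -inf
with np.errstate(divide='ignore'):
    naive = np.log(np.linalg.det(A))

# stable alternative: sum of the logs of the singular values (for a symmetric
# positive definite covariance these coincide with the eigenvalues)
s = np.linalg.svd(A, compute_uv=False)
stable = np.sum(np.log(s))

assert np.isneginf(naive)
assert np.isclose(stable, d * np.log(1e-3))
```

The same trick applies in reverse for determinants that overflow, since the logs of the singular values are summed instead of the values themselves being multiplied.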
5 Experiments and Results with LDA
Having presented the specific theory for semi-supervised LDA and a heuristic approach to find its MCPL parameters in Section 4, there are four main issues we want to investigate experimentally. To start with, the theory states that semi-supervised LDA estimates are better on the training data at hand given the log-likelihood as the performance measure. The two questions this raises are, firstly, how do these estimates compare to the supervised estimates on new and previously unseen test data? And secondly, how do they perform and compare in terms of the 0-1 loss, i.e., the classification error? Concerning the second point, we remark that the relation between likelihood and error rate is not necessarily monotonic, and a higher likelihood does not necessarily lead to a lower error. It is only in recent years that considerable effort has been spent on understanding the nontrivial relationship between the criterion a classifier optimizes (here the likelihood) and how that classifier performs in terms of any other criterion of interest (here the error rate). Refer, for instance, to [73, 74, 75, 76, 77, 78]. Thirdly, we measure the log-likelihood for the various parameter estimates also on the training set. This gives us a basic check on the performance of our optimization heuristic: we should find that the semi-supervised solutions never deteriorate the supervised solution and typically even improve upon it. The final, fourth point is to compare our theoretically underpinned method to the semi-supervised LDA technique from [31], which enforced the constraints in Equations (4) and (5) in an ad hoc way. This puts our novel method in a broader perspective, as the earlier method has been studied extensively already. Among others, this constrained LDA has been shown to perform much better than self-learning or EM approaches to LDA and to be competitive with the transductive SVM [79] and even entropy-regularized logistic regression [80], especially in the small-sample setting.

5.1 Data Sets and Preprocessing
Table 1. The 16 UCI data sets used in the experiments, with the abbreviated names employed throughout the text and, where available, additional citations.

full data set name | abbreviated | cit.
banknote authentication | banknote |
climate model simulation crashes | climate | [82]
firstorder theorem proving | firstorder | [83]
gas sensor array drift | gas | [84]
landsat satellite | landsat |
letter recognition | letter |
low resolution spectrometer | low |
magic gamma telescope | magic |
miniboone particle identification | miniboone |
optical recognition of handwritten digits | optical |
penbased recognition of handwritten digits | penbased |
qsar biodegradation | qsar | [85]
shuttle | shuttle |
skin segmentation | skin | [86]
spambase | spambase |
spectf heart | spectf |
We chose 16 data sets from the UCI Machine Learning Repository [81] to perform our experiments on. The full names can be found in Table 1. The same table contains the abbreviated names that we use to refer to these sets in other tables and throughout the text. A main criterion for choosing these particular data sets was their size. We wanted to be able to easily generate labeled and unlabeled training sets from them, plus independent test sets, and we wanted especially the last two sets to have a fair size. In addition, we wanted to limit the computational burden and therefore did not choose too highdimensional sets. Moreover, in order to rid ourselves of potential problems with singular classconditional covariance matrices (which would leave the supervised estimation problem illposed), or numerical challenges related to this, the complete data sets were preprocessed in the following way. In a first step, the variance of every individual feature was normalized to one. A feature was removed altogether if its variance was numerically zero. In a second step, PCA was applied to the full sets and a fraction of the variance was retained in order to remove linearly dependent features. We note that reducing the dimensionality essentially changes the likelihood of a data set, but that any nonsingular linear transformation merely offsets the loglikelihood attained by LDA.
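The two preprocessing steps can be sketched as follows; this is a minimal illustration in which the retained variance fraction is left as a free parameter (the exact fraction used in the experiments is not restated here), and the function and variable names are ours:

```python
import numpy as np

def preprocess(X, var_fraction=0.99):
    """Sketch of the two preprocessing steps described above.

    Step 1: scale every feature to unit variance, dropping features whose
    variance is numerically zero.  Step 2: PCA on the full set, keeping
    enough components to retain a given fraction of the total variance.
    """
    v = X.var(axis=0)
    keep = v > 1e-12                        # remove numerically constant features
    X = X[:, keep] / np.sqrt(v[keep])
    Xc = X - X.mean(axis=0)
    # PCA via an SVD of the centered data matrix
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = (s ** 2) / np.sum(s ** 2)
    k = int(np.searchsorted(np.cumsum(explained), var_fraction) + 1)
    return Xc @ Vt[:k].T                    # scores on the first k components
```

Applied to a data matrix with a constant column and an exactly linearly dependent column, the first step removes the former and the PCA step discards the direction carrying (numerically) no variance, leaving full-rank data.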
Table 2 provides various statistics for the 16 data sets. It also indicates, in the last column, which 6 of the 16 data sets consist purely of discrete feature values. The fourthtolast to secondtolast columns in the table give the sizes of the labeled, unlabeled, and test sets we used in every run of our experiments. We do not expect much gain from employing unlabeled data if the number of labeled points is large. We therefore kept the labeled set small, choosing a size of twice the dimensionality plus once the number of classes, i.e., 2d + K labeled samples. We also took care that every class has at least one labeled instance in the training set. The remainder of the data was then randomly divided into two more or less equally sized sets that make up the unlabeled and test sets, respectively.
Table 2. Statistics for the 16 data sets: total number of objects, original dimensionality, dimensionality after PCA, number of classes, largest and smallest class (absolute and in %), the sizes of the labeled, unlabeled, and test sets, and whether the features are purely discrete.

data set (abbr.) | #objects | dim. | dim. PCA | #classes | largest (%) | smallest (%) | #lab. | #unlab. | #test | discr.
banknote | 1372 | 4 | 4 | 2 | 762 (55.5) | 610 (44.5) | 10 | 681 | 681 | no
climate | 540 | 18 | 18 | 2 | 494 (91.5) | 46 (8.5) | 38 | 251 | 251 | no
firstorder | 6118 | 51 | 41 | 6 | 2554 (41.7) | 486 (7.9) | 88 | 3015 | 3015 | no
gas | 13910 | 128 | 60 | 6 | 3009 (21.6) | 1641 (11.8) | 126 | 6892 | 6892 | no
landsat | 6435 | 36 | 33 | 6 | 1533 (23.8) | 626 (9.7) | 72 | 3182 | 3181 | yes
letter | 20000 | 16 | 16 | 26 | 813 (4.1) | 734 (3.7) | 58 | 9971 | 9971 | yes
low | 531 | 93 | 70 | 10 | 90 (16.9) | 4 (0.8) | 150 | 191 | 190 | no
magic | 19020 | 10 | 10 | 2 | 12332 (64.8) | 6688 (35.2) | 22 | 9499 | 9499 | no
miniboone | 130064 | 50 | 11 | 2 | 93565 (71.9) | 36499 (28.1) | 24 | 65020 | 65020 | no
optical | 5620 | 64 | 61 | 10 | 572 (10.2) | 554 (9.9) | 132 | 2744 | 2744 | yes
penbased | 10992 | 16 | 16 | 10 | 1144 (10.4) | 1055 (9.6) | 42 | 5475 | 5475 | yes
qsar | 1055 | 41 | 38 | 2 | 699 (66.3) | 356 (33.7) | 78 | 489 | 488 | no
shuttle | 58000 | 9 | 6 | 7 | 45586 (78.6) | 10 (0.0) | 19 | 28991 | 28990 | yes
skin | 245057 | 3 | 3 | 2 | 194198 (79.2) | 50859 (20.8) | 8 | 122525 | 122524 | no
spambase | 4601 | 57 | 56 | 2 | 2788 (60.6) | 1813 (39.4) | 114 | 2244 | 2243 | no
spectf | 267 | 44 | 43 | 2 | 212 (79.4) | 55 (20.6) | 88 | 90 | 89 | yes
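The splitting procedure described above can be sketched as follows (the function and variable names are ours):

```python
import numpy as np

def split_data(X, y, rng):
    """Split into labeled, unlabeled, and test index sets as described:
    the labeled set has size 2*d + K with at least one instance per
    class; the remainder is halved into unlabeled and test sets."""
    n, d = X.shape
    classes = np.unique(y)
    n_lab = 2 * d + len(classes)
    # reserve one labeled instance per class ...
    lab = np.array([rng.choice(np.flatnonzero(y == c)) for c in classes])
    # ... and fill the labeled set up with random remaining points
    rest = rng.permutation(np.setdiff1d(np.arange(n), lab))
    lab = np.concatenate([lab, rest[: n_lab - len(classes)]])
    rest = rest[n_lab - len(classes):]
    half = len(rest) // 2
    return lab, rest[:half], rest[half:]  # labeled, unlabeled, test indices
```

For banknote, for instance, this yields 2 * 4 + 2 = 10 labeled points and splits the remaining 1362 objects into sets of 681 and 681, matching Table 2.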
5.2 Performance Criteria and Results
Table 3. Average loglikelihoods of the supervised (sup.), semisupervised (semi.), and optimal (opt.) estimates, on the test set and on the full training set, together with the win percentages (semi. over sup. and opt. over semi., on test and training set) and the relative improvements on test and training set.

data set (abbr.) | test (sup. / semi. / opt.) | trn. (sup. / semi. / opt.) | % test wins (semi. / opt.) | % trn. wins (semi. / opt.) | rel. impr. (test / trn.)
banknote | 11.7 / 4.72 / 4.51 | 11.5 / 4.69 / 4.48 | 100.0 / 98.4 | 100.0 / 100.0 | 0.971 / 0.970
climate | 34.1 / 26.5 / 26.2 | 32.6 / 25.8 / 25.5 | 100.0 / 100.0 | 100.0 / 100.0 | 0.964 / 0.961
firstorder | 1.88e+03 / 62.6 / 60.3 | 1.78e+03 / 40.4 / 39.2 | 100.0 / 100.0 | 100.0 / 100.0 | 0.999 / 0.999
gas | 4.46e+04 / 4.4e+03 / 4.41e+03 | 4.37e+04 / 13.1 / 12.4 | 100.0 / 44.8 | 100.0 / 100.0 | 1.000 / 1.000
landsat | 33.2 / 4.64 / 3.73 | 32.4 / 4.35 / 3.42 | 100.0 / 100.0 | 100.0 / 100.0 | 0.969 / 0.968
letter | 63.6 / 22.3 / 18.4 | 63.3 / 22.2 / 18.3 | 100.0 / 100.0 | 100.0 / 100.0 | 0.914 / 0.913
low | 90.1 / 19.8 / 17.6 | 37.8 / 11.7 / 13.9 | 100.0 / 99.9 | 100.0 / 100.0 | 0.969 / 0.957
magic | 30.6 / 11.7 / 11.1 | 30.6 / 11.6 / 11.1 | 100.0 / 100.0 | 100.0 / 100.0 | 0.974 / 0.974
miniboone | 2.2e+09 / 7.17e+07 / 6.93e+07 | 2.42e+09 / 9.75 / 9.48 | 99.8 / 93.1 | 100.0 / 100.0 | 0.999 / 1.000
optical | 6.24e+15 / 5.66e+12 / 6.35e+12 | 6.06e+15 / 61.1 / 60.1 | 100.0 / 83.8 | 100.0 / 100.0 | 1.000 / 1.000
penbased | 45.2 / 15.9 / 13.5 | 44.9 / 15.8 / 13.5 | 100.0 / 100.0 | 100.0 / 100.0 | 0.927 / 0.926
qsar | 4.02e+14 / 1.02e+03 / 1.03e+03 | 3.36e+14 / 37.2 / 36.9 | 100.0 / 99.7 | 100.0 / 100.0 | 1.000 / 1.000
shuttle | 5.42e+07 / 9.81 / 9.24 | 6.8e+07 / 9.37 / 8.76 | 100.0 / 96.9 | 100.0 / 100.0 | 1.000 / 1.000
skin | 125 / 3.84 / 3.45 | 125 / 3.84 / 3.45 | 100.0 / 100.0 | 100.0 / 100.0 | 0.997 / 0.997
spambase | 1.09e+16 / 81.6 / 81.3 | 9.76e+15 / 73.7 / 73.4 | 100.0 / 100.0 | 100.0 / 100.0 | 1.000 / 1.000
spectf | 78.6 / 53.6 / 53.1 | 54.5 / 36.8 / 36.5 | 100.0 / 97.5 | 100.0 / 100.0 | 0.982 / 0.985
Table 4. Error rates of the supervised (sup.), semisupervised (semi.), and optimal (opt.) estimates, on the test set and on the full training set, together with the win percentages (now counting strictly smaller errors: semi. over sup. and opt. over semi.) and the relative improvements on test and training set.

data set (abbr.) | test (sup. / semi. / opt.) | trn. (sup. / semi. / opt.) | % test wins (semi. / opt.) | % trn. wins (semi. / opt.) | rel. impr. (test / trn.)
banknote | 0.061 / 0.052 / 0.025 | 0.061 / 0.052 / 0.024 | 69.7 / 89.7 | 70.5 / 89.3 | 0.254 / 0.240
climate | 0.150 / 0.143 / 0.053 | 0.133 / 0.129 / 0.034 | 63.9 / 99.8 | 56.0 / 100.0 | 0.071 / 0.033
firstorder | 0.666 / 0.658 / 0.529 | 0.652 / 0.650 / 0.514 | 75.9 / 100.0 | 55.3 / 100.0 | 0.055 / 0.015
gas | 0.141 / 0.134 / 0.085 | 0.139 / 0.133 / 0.082 | 68.5 / 99.9 | 65.7 / 99.8 | 0.134 / 0.105
landsat | 0.291 / 0.251 / 0.161 | 0.285 / 0.247 / 0.153 | 100.0 / 100.0 | 99.9 / 100.0 | 0.312 / 0.286
letter | 0.618 / 0.599 / 0.299 | 0.615 / 0.595 / 0.294 | 97.5 / 100.0 | 97.1 / 100.0 | 0.061 / 0.060
low | 0.763 / 0.747 / 0.696 | 0.475 / 0.501 / 0.334 | 70.0 / 91.5 | 2.2 / 100.0 | 0.233 / -0.181
magic | 0.317 / 0.303 / 0.216 | 0.316 / 0.303 / 0.216 | 90.3 / 100.0 | 89.4 / 99.8 | 0.136 / 0.134
miniboone | 0.246 / 0.229 / 0.159 | 0.246 / 0.229 / 0.159 | 83.6 / 99.9 | 83.7 / 99.9 | 0.198 / 0.197
optical | 0.161 / 0.113 / 0.049 | 0.154 / 0.111 / 0.042 | 100.0 / 100.0 | 100.0 / 100.0 | 0.426 / 0.385
penbased | 0.280 / 0.243 / 0.124 | 0.278 / 0.241 / 0.122 | 99.6 / 100.0 | 100.0 / 100.0 | 0.238 / 0.234
qsar | 0.257 / 0.247 / 0.154 | 0.229 / 0.226 / 0.132 | 65.7 / 100.0 | 53.1 / 100.0 | 0.089 / 0.031
shuttle | 0.134 / 0.103 / 0.059 | 0.134 / 0.103 / 0.059 | 82.1 / 83.7 | 81.7 / 83.7 | 0.415 / 0.413
skin | 0.098 / 0.087 / 0.068 | 0.098 / 0.087 / 0.068 | 79.8 / 55.9 | 79.8 / 56.0 | 0.365 / 0.365
spambase | 0.195 / 0.185 / 0.112 | 0.189 / 0.182 / 0.108 | 76.2 / 99.8 | 70.7 / 100.0 | 0.117 / 0.086
spectf | 0.325 / 0.325 / 0.260 | 0.203 / 0.210 / 0.131 | 41.7 / 85.7 | 21.6 / 100.0 | -0.006 / -0.108
Table 5. Average loglikelihoods and error rates, on test and training set, for the constrained (cstr.) semisupervised LDA from [31]. In each win column, the first number is the percentage of times the constrained estimate beats the supervised one and the second the percentage of times our MCPL semisupervised estimate beats the constrained one.

data set (abbr.) | loglik. (test / trn.) | error (test / trn.) | win test lik. | win trn. lik. | win test err. | win trn. err.
banknote | 9.38 / 9.29 | 0.087 / 0.086 | 73.8 / 96.5 | 74.0 / 96.6 | 30.1 / 76.2 | 30.6 / 75.2
climate | 27 / 26.2 | 0.117 / 0.102 | 100.0 / 93.7 | 100.0 / 93.3 | 79.9 / 22.4 | 81.1 / 17.5
firstorder | 68 / 43.7 | 0.626 / 0.616 | 100.0 / 100.0 | 100.0 / 100.0 | 96.8 / 7.6 | 95.0 / 5.8
gas | 5.66e+03 / 21.1 | 0.145 / 0.143 | 100.0 / 99.9 | 100.0 / 100.0 | 44.7 / 68.3 | 42.9 / 67.9
landsat | 16.8 / 16.2 | 0.308 / 0.302 | 99.4 / 100.0 | 99.5 / 100.0 | 29.8 / 98.6 | 27.9 / 98.0
letter | 53.1 / 52.9 | 0.625 / 0.622 | 99.8 / 100.0 | 99.7 / 100.0 | 33.2 / 92.4 | 32.2 / 92.9
low | 27.9 / 9.42 | 0.744 / 0.485 | 100.0 / 100.0 | 100.0 / 100.0 | 74.9 / 39.3 | 26.1 / 16.4
magic | 12.4 / 12.4 | 0.292 / 0.292 | 100.0 / 80.7 | 100.0 / 80.7 | 74.0 / 37.8 | 74.3 / 38.9
miniboone | 7.65e+07 / 10.8 | 0.218 / 0.218 | 99.7 / 96.1 | 100.0 / 98.3 | 73.1 / 41.3 | 72.6 / 40.7
optical | 7.74e+15 / 7.48e+15 | 0.900 / 0.900 | 29.5 / 99.0 | 32.7 / 100.0 | 0.0 / 100.0 | 0.0 / 100.0
penbased | 35.4 / 35 | 0.299 / 0.297 | 98.9 / 100.0 | 99.1 / 100.0 | 24.5 / 98.7 | 24.8 / 98.5
qsar | 1.51e+13 / 1.1e+13 | 0.229 / 0.209 | 100.0 / 93.2 | 100.0 / 96.6 | 86.9 / 16.1 | 83.8 / 14.9
shuttle | 5.51e+05 / 5.82e+05 | 0.822 / 0.822 | 1.6 / 100.0 | 1.6 / 100.0 | 1.6 / 99.1 | 1.6 / 99.1
skin | 40.4 / 40.4 | 0.102 / 0.102 | 94.7 / 95.2 | 94.7 / 95.4 | 40.1 / 71.2 | 40.6 / 71.1
spambase | 1.66e+16 / 8.65e+15 | 0.310 / 0.307 | 85.1 / 100.0 | 85.1 / 100.0 | 51.3 / 51.0 | 51.8 / 48.4
spectf | 53.8 / 36.8 | 0.293 / 0.182 | 100.0 / 74.2 | 100.0 / 42.3 | 71.0 / 17.8 | 78.4 / 8.0
With the labeled, unlabeled, and test sets as described above, we determined the supervised, the semisupervised, and the optimal parameter estimates. In addition, we calculated the parameters of the constrained LDA estimated by means of the more ad hoc procedure in [31]. For the optimal estimates, we of course had to use the true labels belonging to the unlabeled data. The parameters of the constrained LDA can be estimated in closed form. For details, we refer to the original work in [31].
For every data set the experiments were repeated 1000 times. Using the supervised, semisupervised, and optimal estimates, we calculated the following twelve criteria based on the loglikelihood for Table 3: the three average loglikelihoods on the independent test data; the same three average loglikelihoods on the labeled plus unlabeled data, i.e., the training data; the percentage of times that the loglikelihood of the semisupervised learner is strictly larger than the loglikelihood of the supervised learner (read: semisupervised over supervised); the percentage of times that the loglikelihood of the optimal classifier is strictly larger than the semisupervised one (both win percentages are calculated on the test as well as on the training set); and finally the relative improvement of the semisupervised approach over the supervised approach in comparison with the optimal estimates, i.e., the difference between the average semisupervised and supervised loglikelihoods divided by the difference between the average optimal and supervised loglikelihoods. Again this is done both on the test and the training set. The same quantities are also calculated for the corresponding error rates (see Table 4), with the only difference that we check numbers to be strictly smaller, instead of larger, to determine the win percentages. Finally, Table 5 contains averaged loglikelihoods and error rates, both on training and test sets, for the more ad hoc semisupervised approach. Similar to those in Tables 3 and 4, in the last four columns, comparisons to the corresponding loglikelihoods and classification errors of the supervised and our novel semisupervised approach are made.
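The per-data-set criteria can be sketched as follows. The relative improvement formula below is a reconstruction, but it is consistent with the tabulated values; e.g., for banknote on the test set, (11.7 − 4.72)/(11.7 − 4.51) ≈ 0.971, matching Table 3:

```python
import numpy as np

def summarize(v_sup, v_semi, v_opt, larger_is_better=True):
    """Per-data-set summary over repetitions: the three averages, the two
    win percentages (semi. over sup. and opt. over semi.), and the relative
    improvement of semi. over sup. in comparison with opt.

    Inputs are arrays with one entry (loglikelihood or error rate) per
    repetition; for error rates, wins count strictly smaller values.
    """
    win = np.greater if larger_is_better else np.less
    pct_semi_over_sup = 100.0 * np.mean(win(v_semi, v_sup))
    pct_opt_over_semi = 100.0 * np.mean(win(v_opt, v_semi))
    rel_impr = (np.mean(v_semi) - np.mean(v_sup)) / (np.mean(v_opt) - np.mean(v_sup))
    return (np.mean(v_sup), np.mean(v_semi), np.mean(v_opt),
            pct_semi_over_sup, pct_opt_over_semi, rel_impr)
```

For Table 4 the same function would be called with `larger_is_better=False`; the relative improvement is invariant to the sign convention, as it is a ratio of differences of the averages.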
A permutation test on all different paired results [87], both for the four loglikelihoods and the four error rates, showed that for almost all cases we cannot retain the hypothesis that their averages are the same. There are a few exceptions though. For the supervised and semisupervised test error rates on spectf, we cannot reject the null hypothesis of equality of expectation. On optical and qsar, there is no statistically significant difference between the semisupervised and the optimal test loglikelihoods. Finally, two further pairs of estimates are not significantly different, both in training and testing, on shuttle and on spambase, and one further pair is not significantly different on skin. For easy reference, the related performance numbers are underlined in the respective result tables.
6 Discussion
6.1 Guarantees on the Training Set
The results in Table 3 show that, on the training set, MCPLbased semisupervised LDA is in between the regular supervised and the optimal estimate. That this happens to be the case in a strict sense, in all experiments we carried out, can be most readily deduced from the two win percentages on the training set in Table 3, i.e., semisupervised over supervised and optimal over semisupervised: these numbers equal 100.0 in all cases. This, in turn, indicates that in all of the 16,000 experiments we ran, the strict inequality from Theorem 1 was satisfied. Even for the discrete data sets this holds true, which was to be expected, given the number of different discrete vectors these data sets take on. Spectf has the smallest number, 267, implying that every feature vector in spectf is unique. With 267 distinct values, chances are indeed very small that the means from Equations (15) and (16) coincide.
6.2 Likelihood Behavior on the Test Set
The aforementioned guarantees are on the training set that includes the unlabeled samples, but of course we are interested in the performance on independent test data as well. We are unaware of any theoretical results for the loglikelihood that provide a precise connection between performance on the training set and the test set, though we do expect that, with more training data, the likelihood of the supervised model on the test set becomes better in expectation. We need to consider such improvement in expectation, simply because, for a single instantiation of a classification problem, we might be unlucky in our draw of training or test set. In contrast with the situation in the training phase, we can therefore only get improvements on average. Comparing the test loglikelihoods in Table 3 for the supervised method with those for the semisupervised approach, we see the same as on the training data: for every data set, the semisupervised estimate attains the better average loglikelihood. Also if we look at the win percentages, we see that there are only two cases out of 16,000 in which the supervised estimate was better: we find a percentage of 99.8 instead of 100.0 on miniboone.
The story is different, however, if we compare the semisupervised and the optimal estimates. First of all, the corresponding test set win percentages indicate that, on the independent test set, the semisupervised estimate is better than the optimal one in about 5% of the cases. In itself, this does not have to be at odds with what we expect for the likelihood, as it concerns the number of wins or losses and not the average loglikelihood. Our results on gas, optical, and qsar, however, indicate that also when it comes to the expected loglikelihood, the semisupervised estimate may outperform the optimal one. Only the result on gas is statistically significant though. Moreover, the differences are anyway relatively small, as the secondtolast column in Table 3 also illustrates, where we find values basically equal to 1 for these sets.

Regarding the loglikelihood, we generally note the following. Overall, the relative improvements, as provided in the last two columns of Table 3, are considerable, sometimes even enormous. None of them is lower than 0.9 and many are virtually 1. This shows that the semisupervised loglikelihood is, relative to the supervised value, very close to the optimal estimate. The immense improvements are probably explained by the fact that the averaged classconditional covariance matrix is much more stably estimated in case of semisupervision. The supervised estimate relies on the labeled samples only, while the semisupervised estimate, as can be readily seen from Equation (14), is based on all samples in the training set. In our experiments, the total number of training samples is considerably larger than the number of labeled ones, which in turn is only slightly larger than twice the dimensionality, resulting in unstable supervised covariance estimates. Clearly, the extreme difference in behavior for the various estimates will disappear with increasing numbers of labeled data.
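This stability argument can be illustrated with a quick numerical experiment (ours, not from the paper): a sample covariance of standard-normal data estimated from barely more samples than twice the dimensionality is far worse conditioned than one estimated from many samples.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 30

# roughly the labeled-sample regime: only a little over 2*d samples
few = np.cov(rng.standard_normal((2 * d + 2, d)), rowvar=False)
# roughly the pooled regime: all training samples contribute
many = np.cov(rng.standard_normal((5000, d)), rowvar=False)

cond_few = np.linalg.cond(few)
cond_many = np.linalg.cond(many)
# cond_few is typically an order of magnitude or more larger than cond_many,
# even though both estimate the same identity covariance
```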
6.3 Error Rates
Unlike the loglikelihood, the 01 loss is bounded, and the differences and relative improvements stated in Table 4 are not that large. In almost all cases, the semisupervised error rate is smaller than the supervised one, and the optimal error rate is smaller than the semisupervised one in turn. On the test set, the maximum relative improvement reported is 0.426 on optical, with a close second of 0.415 on shuttle.
There are three settings, however, in which no improvements of semisupervised over supervised learning are attained: the first one is on the training set for low and the two others are in the training and test phase for spectf. In all of these cases, the supervised estimate performs at least as well as the semisupervised one in terms of error rate. So we have the, possibly, somewhat counterintuitive behavior that the estimates improve in terms of the expected loglikelihood, but that the expected error rate still deteriorates. Similar phenomena for other classifiers have been described in [74, 75], where simple artificial examples are provided of how such behavior can be realized. It is a glimpse of the difficult interrelationship that two different performance criteria can display [73, 76, 77, 78], which we already alluded to in Section 5. We checked the learning curves for low and spectf and they show just the regular behavior: with increasing labeled sample sizes, the expected error rate of the supervised classifier decreases.
Finally, we remark that the increase in error rate going from the training to the test set is less for the semisupervised classifier than for the supervised one. This shows that the semisupervised classifier is less overtrained on the training set than supervised LDA.
6.4 Comparison to Constrained LDA
Looking at Table 5, we see that the ad hoc approach can also work well. Especially when comparing its likelihood to the supervised estimates, we see that, both on the training and the test set, the estimated likelihood is often better than the one obtained with the regular supervised parameters. The reason for the constrained approach to often be so much better than the supervised approach is probably similar to the one given in Subsection 6.2 to explain why the new approach comes so close to the optimal loglikelihoods: the large improvements are probably due to the fact that the averaged classconditional covariance matrix is much more stably estimated in case of semisupervision. The estimated covariance matrix might still not be very good, but at least it is substantially better than the volatile and not so well conditioned supervised estimate. Nonetheless, the novel approach clearly outperforms the more ad hoc technique in most of the cases where the likelihood is concerned. In fact, compared to the constrained approach, MCPL provides the best average test loglikelihood on all data sets. The only expected loglikelihood that is worse during training is the one for spectf.
Looking at the error rate, we see that the ad hoc procedure does very badly on optical and shuttle (the reason for this remains as yet unclear). Still, the constrained approach leads to the best error rate on the test set on seven data sets. On the other nine data sets, our MCPLbased approach turns out to be preferred.
6.5 MCPL for Other Classifiers
MCPL is proposed as a general estimation principle, which delivers semisupervised estimates that are at least as good as the regular supervised parameter estimates for any loglikelihoodbased classifier. To come to results such as Theorems 1 and 2, additional knowledge about the classconditional distributions is needed. Because they are very similar to LDA and the same kind of mean constraints hold, classifiers for which it is almost immediate that strict or expected improvements can be obtained through semisupervision are the nearest mean classifier (NMC), quadratic discriminant analysis (QDA), and all kinds of kernelized or flexibilized versions of NMC, LDA, and QDA [88]. We speculate that many classifiers constructed on the basis of exponential families [67, 68] also allow for theorems making equivalent statements. These include, for instance, classifiers based on the Bernoulli, multinomial, and exponential densities.
Another interesting group of classifiers to study in the context of MCPL is that for which every class may consist of a mixture model. As the analysis of mixture models is in itself already rather difficult [89] (for one, the likelihood function is not concave), such classifiers may be outside the reach of any helpful theoretical analysis. We do, however, expect semisupervision to be beneficial, if only because of the regularizing effect our semisupervised approach has, similar to the situation mentioned at the end of Subsection 6.2. What does seem a problem still is to find an appropriate solution to the optimization that needs to be carried out in order to find an MCPL estimate. It seems worthwhile, though, to try to get to the nearest saddle point by means of a combined gradient ascent and descent.
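As a toy sketch of such a combined scheme (illustrative only: it runs on a simple concave-convex function rather than on the MCPL objective), one ascends in the maximization variable and descends in the minimization variable simultaneously:

```python
def grad_ascent_descent(x, y, lr=0.1, steps=500):
    """Simultaneous gradient ascent in x and descent in y on the toy
    function f(x, y) = -x**2 + x*y + y**2, which is concave in x and
    convex in y with its unique saddle point at (0, 0)."""
    for _ in range(steps):
        gx = -2.0 * x + y    # df/dx, followed uphill
        gy = x + 2.0 * y     # df/dy, followed downhill
        x, y = x + lr * gx, y - lr * gy
    return x, y

x, y = grad_ascent_descent(2.0, -1.5)
# the iterates spiral in toward the saddle point (0, 0)
```

For this particular function the coupled update is a linear map with spectral radius below one, so the iterates converge to the saddle point; in general, such simultaneous schemes need care (step sizes, averaging) to avoid cycling.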
Finally, we could try to extend our work to classifiers that do not rely on likelihood models. One possible path may be through [90], which presents a decisiontheoretic interpretation of maximum entropy and considers generalized concepts of entropy that relate to a much broader class of loss functions than merely the (negative) loglikelihood. Though the link with this work is certainly not onetoone, it may be possible to interpret our contrastive loss as a form of relative entropy and to make use of the results in [90].
7 Conclusion
We presented a wellfounded approach to likelihoodbased semisupervised learning. Our principle of maximum contrastive pessimistic likelihood (MCPL) estimation is generally applicable to supervised classifiers whose parameters are estimated by means of a maximization of the likelihood. Moreover, under certain concavity assumptions, improvements of the semisupervised estimates can be expected and, in particular cases, even be guaranteed. A workedout illustration based on classical LDA demonstrates the significant improvements that can be obtained by our novel approach.
Acknowledgments
Marleen de Bruijne (Erasmus MC and KU) is wholeheartedly acknowledged for scrutinizing an initial version of this article beginning to end. Jesse H. Krijthe (LUMC and TU Delft) and David M. J. Tax (TU Delft) are kindly thanked for their proofreading of parts of the text. Joris Mooij (UvA) is acknowledged for inviting me to give a talk that, eventually, triggered insights into a simplification and generalization of the theory. Are C. Jensen (UiO) is warmly thanked for all the semisupervised inspiration he provided me with. Thanks also to Mads Nielsen (KU) who gave me some great opportunities throughout the past decade. Finally, I would like to thank the anonymous reviewers for their critical appraisal. This work has benefitted from all the input received.
References
 [1] Ronald A. Fisher. An absolute criterion for fitting frequency curves. Messenger of Mathematics, 41:155–160, 1912.
 [2] Ronald A. Fisher. On the mathematical foundations of theoretical statistics. Philosophical Transactions of the Royal Society of London. Series A, Containing Papers of a Mathematical or Physical Character, 222:309–368, 1922.
 [3] Ronald Aylmer Fisher. Theory of statistical estimation. In Mathematical Proceedings of the Cambridge Philosophical Society, volume 22, pages 700–725. Cambridge Univ Press, 1925.
 [4] Stephen M. Stigler. The epic story of maximum likelihood. Statistical Science, 22(4):598–620, 2007.
 [5] Markus Ackermann, M. Ajello, A. Allafort, L. Baldini, J. Ballet, G. Barbiellini, et al. Detection of the characteristic piondecay signature in supernova remnants. Science, 339(6121):807–811, 2013.
 [6] Jenny Allen, Mason Weinrich, Will Hoppitt, and Luke Rendell. Networkbased diffusion analysis reveals cultural transmission of lobtail feeding in humpback whales. Science, 340(6131):485–488, 2013.
 [7] Hoi Sung Chung and William A Eaton. Singlemolecule fluorescence probes dynamics of barrier crossing. Nature, 2013.
 [8] Bingni W. Brunton, Matthew M. Botvinick, and Carlos D. Brody. Rats and humans can optimally accumulate evidence for decisionmaking. Science, 340(6128):95–98, 2013.
 [9] Dana C. Price, Cheong Xin Chan, Hwan Su Yoon, Eun Chan Yang, Huan Qiu, et al. Cyanophora paradoxa genome elucidates origin of photosynthesis in algae and plants. Science, 335(6070):843–847, 2012.
 [10] Hu Cang, Anna Labno, Changgui Lu, Xiaobo Yin, Ming Liu, Christopher Gladden, Yongmin Liu, and Xiang Zhang. Probing the electromagnetic field of a 15nanometre hotspot by single molecule imaging. Nature, 469(7330):385–388, 2011.
 [11] Angélique D’Hont, France Denoeud, JeanMarc Aury, FrancChristophe Baurens, Françoise Carreel, et al. The banana (Musa acuminata) genome and the evolution of monocotyledonous plants. Nature, 488(7410):213–217, 2012.
 [12] Yuannian Jiao, Norman J Wickett, Saravanaraj Ayyampalayam, André S Chanderbali, Lena Landherr, et al. Ancestral polyploidy in seed plants and angiosperms. Nature, 473(7345):97–100, 2011.
 [13] Lauri Nummenmaa, Enrico Glerean, Riitta Hari, and Jari K Hietanen. Bodily maps of emotions. Proceedings of the National Academy of Sciences, 111(2):646–651, 2014.
 [14] E. Saglamyurek, N. Sinclair, J. Jin, J. A. Slater, D. Oblak, F. Bussières, M. George, R. Ricken, W. Sohler, and W. Tittel. Broadband waveguide quantum memory for entangled photons. Nature, 469(7331):512, 2011.
 [15] Koichiro Tamura, Daniel Peterson, Nicholas Peterson, Glen Stecher, Masatoshi Nei, and Sudhir Kumar. Mega5: molecular evolutionary genetics analysis using maximum likelihood, evolutionary distance, and maximum parsimony methods. Molecular Biology and Evolution, 28(10):2731–2739, 2011.
 [16] J. Wang. An improvement on the maximum likelihood reconstruction of pedigrees from marker data. Heredity, 2013.
 [17] Ziheng Yang and Bruce Rannala. Molecular phylogenetics: principles and practice. Nature Reviews Genetics, 13(5):303–314, 2012.
 [18] Jacob Bien and Robert J. Tibshirani. Sparse estimation of a covariance matrix. Biometrika, 98(4):807–820, 2011.
 [19] Madeleine Cule, Richard Samworth, and Michael Stewart. Maximum likelihood estimation of a multidimensional logconcave density. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 72(5):545–607, 2010.
 [20] Yeojin Chung, Sophia RabeHesketh, Vincent Dorie, Andrew Gelman, and Jingchen Liu. A nondegenerate penalized likelihood estimator for variance parameters in multilevel models. Psychometrika, pages 1–25, 2013.
 [21] Ted A. Laurence and Brett A. Chromy. Efficient maximum likelihood estimator fitting of histograms. Nature Methods, 7(5):338–339, 2010.
 [22] Jason D. Lee and Trevor J. Hastie. Learning mixed graphical models. arXiv preprint arXiv:1205.5012, 2012.
 [23] N. Simon and R. J. Tibshirani. Discriminant analysis with adaptively pooled covariance. arXiv preprint arXiv:1111.1687, 2011.
 [24] O. Chapelle, B. Schölkopf, and A. Zien. SemiSupervised Learning. MIT Press, Cambridge, MA, 2006.
 [25] X. Zhu and A. B. Goldberg. Introduction to SemiSupervised Learning. Morgan & Claypool Publishers, 2009.
 [26] MariaFlorina Balcan and Avrim Blum. A discriminative model for semisupervised learning. Journal of the ACM, 57(3):19, 2010.
 [27] V. Castelli and T. M. Cover. On the exponential value of labeled samples. Pattern Recognition Letters, 16(1):105–111, 1995.
 [28] S. BenDavid, T. Lu, and D. Pál. Does unlabeled data provably help? worstcase analysis of the sample complexity of semisupervised learning. In Proceedings of COLT 2008, pages 33–44, 2008.
 [29] J. Lafferty and L. Wasserman. Statistical analysis of semisupervised regression. In Advances in Neural Information Processing Systems, volume 20, pages 801–808, 2007.
 [30] A. Singh, R. Nowak, and X. Zhu. Unlabeled data: Now it helps, now it doesn’t. In Advances in Neural Information Processing Systems, volume 21, 2008.

 [31] Marco Loog. Semisupervised linear discriminant analysis through momentconstraint parameter estimation. Pattern Recognition Letters, 37(1):24–31, 2014.
 [32] X. Zhu. Semisupervised learning literature survey. Computer Sciences TR 1530, University of Wisconsin, 2008.
 [33] H. O. Hartley and J. N. K. Rao. Classification and estimation in analysis of variance problems. Review of the International Statistical Institute, 36(2):141–147, 1968.
 [34] G. J. McLachlan. Iterative reclassification procedure for constructing an asymptotically optimal rule of allocation in discriminant analysis. Journal of the American Statistical Association, 70(350):365–369, 1975.
 [35] S. Basu, A. Banerjee, and R. Mooney. Semisupervised clustering by seeding. In Proceedings of the Nineteenth International Conference on Machine Learning, pages 19–26, 2002.
 [36] J. N. Vittaut, M. R. Amini, and P. Gallinari. Learning classification with both labeled and unlabeled data. In Machine Learning: ECML 2002, pages 69–78, 2002.
 [37] D. Yarowsky. Unsupervised word sense disambiguation rivaling supervised methods. In Proceedings of the 33rd annual meeting on Association for Computational Linguistics, pages 189–196, 1995.
 [38] G. Nagy and G.L. Shelton. Selfcorrective character recognition system. IEEE Transactions on Information Theory, 12(2):215–222, 1966.

 [39] K. Nigam, A. McCallum, S. Thrun, and T. Mitchell. Learning to classify text from labeled and unlabeled documents. In Proceedings of the Fifteenth National Conference on Artificial Intelligence, pages 792–799, 1998.
 [40] T. J. O’Neill. Normal discrimination with unclassified observations. Journal of the American Statistical Association, pages 821–826, 1978.
 [41] D. M. Titterington. Updating a diagnostic system using unconfirmed cases. Journal of the Royal Statistical Society. Series C (Applied Statistics), 25(3):238–247, 1976.
 [42] N. P. Dick and D. C. Bowden. Maximumlikelihood estimation for mixtures of two normal distributions. Biometrics, 29:781–791, 1973.
 [43] D. W. Hosmer Jr. A comparison of iterative maximum likelihood estimates of the parameters of a mixture of two normal distributions under three different types of sample. Biometrics, pages 761–770, 1973.
 [44] W. Y. Tan and W. C. Chang. Convolution approach to genetic analysis of quantitative characters of selffertilized population. Biometrics, 28:1073–1090, 1972.
 [45] G. J. McLachlan. Discriminant analysis and statistical pattern recognition. John Wiley & Sons, 1992.
 [46] S. Abney. Understanding the Yarowsky algorithm. Computational Linguistics, 30(3):365–395, 2004.
 [47] G. Haffari and A. Sarkar. Analysis of semisupervised learning with the Yarowsky algorithm. In Proceedings of the 23rd Conference on Uncertainty in Artificial Intelligence, 2007.
 [48] I. Cohen, F. G. Cozman, N. Sebe, M. C. Cirelo, and T. S. Huang. Semisupervised learning of classifiers: Theory, algorithms, and their application to humancomputer interaction. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 1553–1567, 2004.
 [49] F. Cozman and I. Cohen. Risks of semisupervised learning. In SemiSupervised Learning, chapter 4. MIT Press, 2006.
 [50] Ting Yang and Carey E Priebe. The effect of model misspecification on semisupervised classification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(10):2093–2103, 2011.
 [51] Halbert White. Maximum likelihood estimation of misspecified models. Econometrica, 50(1):1–25, 1982.
 [52] Masanori Kawakita and Jun ichi Takeuchi. Safe semisupervised learning based on weighted likelihood. Neural Networks, 2014.
 [53] Hidetoshi Shimodaira. Improving predictive inference under covariate shift by weighting the loglikelihood function. Journal of statistical planning and inference, 90(2):227–244, 2000.
 [54] Nataliya Sokolovska, Olivier Cappé, and François Yvon. The asymptotics of semisupervised learning in discriminative probabilistic models. In Proceedings of the 25th International Conference on Machine learning, pages 984–991. ACM, 2008.
 [55] M. Loog. Constrained parameter estimation for semisupervised learning: the case of the nearest mean classifier. In Machine Learning and Knowledge Discovery in Databases (ECML PKDD 2010), volume 6322 of LNAI, pages 291–304. Springer, 2010.
 [56] M. Loog. Semisupervised linear discriminant analysis using moment constraints. In Partially Supervised Learning (PSL 2011), volume 7081 of LNAI, pages 32–41. Springer, 2012.
 [57] K. Fukunaga. Introduction to Statistical Pattern Recognition. Academic Press, 1990.
 [58] M. Loog and A. C. Jensen. Constrained loglikelihoodbased semisupervised linear discriminant analysis. In Structural, Syntactic, and Statistical Pattern Recognition, volume 7626 of LNCS, pages 327–335. Springer, 2012.
 [59] M. Loog and A. C. Jensen. Semisupervised nearest mean classification through a constrained loglikelihood. IEEE Transactions on Neural Networks and Learning Systems, accepted, 2014.
 [60] J. H. Krijthe and M. Loog. Implicitly constrained semisupervised least squares classification. submitted November 2013, available through http://www.jessekrijthe.com/papers/krijthe2013.pdf, 2013.
 [61] J. H. Krijthe and M. Loog. Implicitly constrained semisupervised linear discriminant analysis. In Proceedings of the 22nd International Conference on Pattern Recognition, volume 22, pages —, Stockholm, Sweden, accepted, 2014.
 [62] MingWei Chang, Lev Ratinov, and Dan Roth. Guiding semisupervision with constraintdriven learning. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 280–287, Prague, Czech Republic, 2007.
 [63] G.S. Mann and A. McCallum. Generalized expectation criteria for semisupervised learning with weakly labeled data. The Journal of Machine Learning Research, 11:955–984, 2010.
 [64] Brian D. Ripley. Pattern recognition and neural networks. Cambridge University Press, 1996.
 [65] G. J. McLachlan. Estimating the linear discriminant function from initial samples containing a small number of unclassified observations. Journal of the American Statistical Association, 72(358):403–406, 1977.
 [66] G. J. McLachlan and S. Ganesalingam. Updating a discriminant function on the basis of unclassified data. Communications in Statistics – Simulation and Computation, 11(6):753–767, 1982.
 [67] Lawrence D. Brown. Fundamentals of Statistical Exponential Families, volume 9 of Lecture Notes–Monograph Series. Institute of Mathematical Statistics, 1986.
 [68] P. J. Bickel and K. A. Doksum. Mathematical Statistics, volume 1. Prentice-Hall, Inc., second edition, 2001.
 [69] M. Sion. On general minimax theorems. Pacific Journal of Mathematics, 8:171–176, 1958.
 [70] M. Dresher. Games of Strategy. Prentice-Hall, Inc., 1961.
 [71] Sebastian Thrun, Wolfram Burgard, and Dieter Fox. Probabilistic Robotics. MIT Press, 2006.
 [72] Nelson Maculan and Geraldo Galdino de Paula Jr. A linear-time median-finding algorithm for projecting a vector on the simplex of ℝⁿ. Operations Research Letters, 8(4):219–222, 1989.
 [73] Peter L. Bartlett, Michael I. Jordan, and Jon D. McAuliffe. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 101(473):138–156, 2006.
 [74] Shai Ben-David, David Loker, Nathan Srebro, and Karthik Sridharan. Minimizing the misclassification error rate using a surrogate convex loss. In Proceedings of the 29th Annual International Conference on Machine Learning, 2012.
 [75] M. Loog and R. P. W. Duin. The dipping phenomenon. In Structural, Syntactic, and Statistical Pattern Recognition, volume 7626 of LNCS, pages 310–317. Springer, 2012.
 [76] Mark D. Reid and Robert C. Williamson. Composite binary losses. The Journal of Machine Learning Research, 11:2387–2422, 2010.
 [77] Mark D. Reid and Robert C. Williamson. Information, divergence and risk for binary experiments. The Journal of Machine Learning Research, 12:731–817, 2011.
 [78] Tong Zhang. Statistical behavior and consistency of classification methods based on convex risk minimization. Annals of Statistics, pages 56–85, 2004.

 [79] T. Joachims. Transductive inference for text classification using support vector machines. In Proceedings of the 16th International Conference on Machine Learning, pages 200–209, 1999.
 [80] Y. Grandvalet and Y. Bengio. Semi-supervised learning by entropy minimization. Advances in Neural Information Processing Systems, 17:529–536, 2004.
 [81] K. Bache and M. Lichman. UCI Machine Learning Repository, 2013.
 [82] D. D. Lucas, R. Klein, J. Tannahill, D. Ivanova, S. Brandon, D. Domyancic, and Y. Zhang. Failure analysis of parameterinduced simulation crashes in climate models. Geoscientific Model Development Discussions, 6(1):585–623, 2013.
 [83] J. P. Bridge, S. B. Holden, and L. C. Paulson. Machine learning for first-order theorem proving: learning to select a good heuristic. submitted, 2013.
 [84] Alexander Vergara, Shankar Vembu, Tuba Ayhan, Margaret A. Ryan, Margie L. Homer, and Ramón Huerta. Chemical gas sensor drift compensation using classifier ensembles. Sensors and Actuators B: Chemical, 166:320–329, 2012.
 [85] Kamel Mansouri, Tine Ringsted, Davide Ballabio, Roberto Todeschini, and Viviana Consonni. Quantitative structure–activity relationship models for ready biodegradability of chemicals. Journal of Chemical Information and Modeling, 53(4):867–878, 2013.
 [86] R. Bhatt and A. Dhall. Skin segmentation dataset.
 [87] P. I. Good. Permutation Tests. Springer, 2000.
 [88] T. Hastie, R. Tibshirani, and J.H. Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer Verlag, 2001.
 [89] E. L. Lehmann and G. Casella. Theory of Point Estimation. Springer-Verlag, second edition, 1998.
 [90] Peter D. Grünwald and A. Philip Dawid. Game theory, maximum entropy, minimum discrepancy and robust Bayesian decision theory. Annals of Statistics, pages 1367–1433, 2004.