Learning Using Privileged Information: SVM+ and Weighted SVM

06/13/2013 ∙ by Maksim Lapin, et al. ∙ Max Planck Society, Universität Saarland

Prior knowledge can be used to improve predictive performance of learning algorithms or reduce the amount of data required for training. The same goal is pursued within the learning using privileged information paradigm which was recently introduced by Vapnik et al. and is aimed at utilizing additional information available only at training time -- a framework implemented by SVM+. We relate the privileged information to importance weighting and show that the prior knowledge expressible with privileged features can also be encoded by weights associated with every training example. We show that a weighted SVM can always replicate an SVM+ solution, while the converse is not true and we construct a counterexample highlighting the limitations of SVM+. Finally, we touch on the problem of choosing weights for weighted SVMs when privileged features are not available.


1 Introduction: prior knowledge, privileged information, and instance weights

Classification is a well-studied problem in machine learning; nonetheless, learning remains a challenging task when the amount of training data is limited. Hence, information available in addition to the training sample – the prior knowledge – is a crucial factor in achieving further performance improvement.

Prior knowledge comes in different forms and its incorporation into the learning problem depends on a particular setting as well as the algorithm. This paper focuses on introducing prior knowledge into a support vector machine (SVM) for binary classification.

Lauer & Bloch (2008) provide a review of different ways to incorporate prior knowledge into SVMs and give a categorization of the reviewed methods based on the type of prior knowledge they assume; see also (Schölkopf & Smola, 2002). We will mainly consider the scenario where the additional information is about the training data rather than about the target function. A loosely related setting is the semi-supervised learning approach (Chapelle et al., 2006), where unlabeled data carries certain information about the marginal distribution in the input space.

Recently, Vapnik & Vashist (2009) introduced the learning using privileged information (LUPI) paradigm, which aims at improving predictive performance of learning algorithms and reducing the amount of required training data. The additional information in this framework comes in the form of privileged features, which are available at training time, but not at test time. These features are used to parametrize the upper bound on the loss function and, essentially, are used to estimate the loss of an optimal classifier on the given training sample. Higher loss may be seen as an indication that a given point is likely to be an outlier, and, hence, should be treated differently than a non-outlier. This simple idea has been extensively explored in the literature and we give a few pointers in Section 1.2. The additional information about which training examples are likely to be outliers can be encoded via instance weights; one can therefore already anticipate a close relation between the LUPI framework and importance weighting, which is discussed next.

In the weighted learning scenario, each training example comes with a non-negative weight which is used in the loss function to balance the cost of errors. A typical example where instance weights appear naturally is cost-sensitive learning (Elkan, 2001). If classes are unbalanced or different misclassification errors incur different penalties, one can encode that prior knowledge in the form of instance weights. Assigning high weight to a data point suggests that the learning algorithm should try to classify that point correctly, possibly at the cost of misclassifying "less important" points. In this paper, however, we do not make the cost-sensitive assumption, i.e., we do not assume that different errors incur different costs on the test set. Instead, we decouple importance weighting on the training and on the test sets, and we only focus on the former. This allows us, in particular, to also assign a high weight to an outlier if that ultimately leads to a better model.
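To make the cost-sensitive case concrete, here is a minimal numpy sketch of inverse-class-frequency instance weights. This is our own illustration of one standard weighting scheme, not a construction from the paper; the normalization so that the weights average to one is an arbitrary choice.

```python
import numpy as np

def inverse_frequency_weights(y):
    """Assign each example a weight inversely proportional to its class
    frequency; one standard cost-sensitive weighting scheme."""
    y = np.asarray(y)
    classes, counts = np.unique(y, return_counts=True)
    freq = dict(zip(classes, counts / y.size))
    k = classes.size
    # weight 1 / (k * p(class)) so that the weights average to one
    return np.array([1.0 / (k * freq[label]) for label in y])

y = np.array([+1, +1, +1, -1])          # unbalanced sample: 3 positives, 1 negative
w = inverse_frequency_weights(y)        # the minority class gets the larger weight
```

With three positives and one negative, the positives receive weight 2/3 and the negative receives weight 2, so "errors" on the rare class cost three times as much.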

As mentioned above, there are different forms of prior knowledge that can be encoded differently. In this paper, we show that instance weights can express the same type of prior knowledge that is encoded via privileged features. In particular, this allows one to interpret the effect of privileged features in terms of the incurred importance weights. Remarkably, the resulting weights do emphasize outliers, which also happen to be support vectors in SVMs.

Our focus in this work is on the study of the SVM+ algorithm, which is an extension of the support vector machine to the LUPI framework (Vapnik & Vashist, 2009). Using basic tools of convex analysis, we investigate uniqueness of the SVM+ solution and its relation to solutions of the weighted SVM (WSVM). It turns out there is a simple connection between an SVM+ solution and WSVM instance weights; moreover, that relation can be used to better understand the SVM+ algorithm and to study its limitations. Having realized that instance weights in WSVMs can serve the same purpose as privileged features in SVM+, we also turn to the problem of choosing weights when privileged features are not available.

1.1 Our contributions

Below is a summary of contributions of this work.

  • We show that any non-trivial SVM+ solution is unique (in the primal), which is a stronger result than the one available for (W)SVMs, where the offset may not be unique.

  • By reformulating the SVM+ dual optimization problem, we reveal its close connection to the WSVM algorithm. In particular, we show that any SVM+ dual solution can be used to construct weights for the WSVM that will yield the same primal solution up to the non-uniqueness of the offset. This implies that a WSVM with appropriately chosen weights can mimic SVM+ and that it is always possible to go from an SVM+ solution to a WSVM solution.

  • We also study whether it is always possible to go in the opposite direction (which would imply that the two algorithms are equivalent). We give the necessary and sufficient condition for such an equivalence to hold and reveal that the SVM+ solutions are a strict subset of the WSVM solutions. We construct a simple counterexample where a WSVM solution cannot be found by SVM+, no matter which privileged features are used or which values the hyper-parameters take.

  • Finally, we turn to the problem of choosing weights in the absence of privileged features. We show that the weights can be learned directly from data by minimizing an estimate of the risk, similar to standard procedures of hyper-parameter tuning. In the idealized setting, where the estimate is computed on a large validation set, we show that the WSVM with learned weights outperforms both the SVM and the SVM+. This highlights the potential of weighted learning and should motivate further work on the choice of weights.

1.2 Related work

We now briefly discuss related work on learning using privileged information and weighted learning.

Since the introduction of the new learning paradigm and the corresponding SVM+ algorithm in (Vapnik, 2006) and later in (Vapnik et al., 2009; Vapnik & Vashist, 2009), there is a growing body of work on theoretical analysis (Pechyony & Vapnik, 2010), implementation (Pechyony & Vapnik, 2011), and application of the proposed framework to various machine learning settings. Liang & Cherkassky (2008) and Liang et al. (2009) study the relation between the SVM+ approach and the multi-task learning scenario, Fouad et al. (2012) apply the SVM+ idea to metric learning, and Chen et al. (2012) extend it to boosting algorithms. Feyereisl & Aickelin (2012) use privileged information for data clustering, and Wolf & Levy (2013) propose an SVM− method to compute similarity scores in video face recognition. Note, however, that the latter method is not related to the SVM− algorithm we have in mind in Section 4.5. In particular, their SVM− reduces to SVM with a pre-processing step, similar to (Schölkopf et al., 1998), while in our case the optimization problem as well as the motivation are entirely different.

Instance weighting has been widely used in various machine learning settings and the topic is too vast to cover all of the related work here. We only give a few pointers to papers on cost-sensitive learning (Margineantu, 2002; Zadrozny et al., 2003), sample bias correction (Heckman, 1979; Cortes et al., 2010), domain adaptation (Shimodaira, 2000; Sugiyama & Müller, 2005), online learning (Dredze et al., 2008), and active learning (Beygelzimer et al., 2009). Perhaps the most related in terms of the learning algorithm (SVM) and the interpretation of instance weights are the works on fuzzy SVM (Lin & Wang, 2002), where each data point has a fuzzy class membership represented by a weight between 0 and 1; weighted margin SVM (Wu & Srihari, 2004), where again each label has a confidence score between 0 and 1; and weighted SVM with an outlier detection pre-processing step (Yang et al., 2005), where a kernel-based clustering algorithm is used to generate instance weights.

1.3 Organization

The rest of the paper is organized as follows. In Section 2 we introduce the SVM+ and the weighted SVM (WSVM) algorithms. In Section 3 we study basic properties of these algorithms, namely, uniqueness of their solutions. In Section 4 we present our main result, which consists of four parts. Theorem 4.1 shows that any SVM+ solution is also a WSVM solution with appropriately chosen weights, Theorem 4.3 gives the necessary and sufficient condition for equivalence between the SVM+ and WSVM problems, and Section 4.4 presents an example where a WSVM solution cannot be found by SVM+, no matter which privileged features are used. Finally, Section 4.5 discusses whether it is possible to complement SVM+ with an SVM−.

Section 5 is concerned with the problem of choosing weights, where we propose a weight learning method in Section 5.3. Lastly, Section 6 presents experimental results on a number of publicly available data sets and Section 7 gives some concluding remarks.

All proofs are moved to the Appendix to enhance readability.

2 Preliminaries

In this section we describe the necessary background. Our results are based on basic notions from convex analysis (Boyd & Vandenberghe, 2004) and, in particular, on the Karush-Kuhn-Tucker (KKT) conditions. For convenience, the latter are provided in Appendix A for both of the optimization problems studied below.

2.1 The setting and notation

We consider a binary classification problem with an instance space X and the label set Y = {−1, +1}. Let {(x_i, y_i)}_{i=1}^n be a training sample drawn i.i.d. from an unknown distribution on X × Y, and let ℓ be a convex loss function, e.g., the hinge loss ℓ(y, f(x)) = max{0, 1 − y f(x)}. The task is to learn a function f from a set of hypotheses F that yields the label prediction sign(f(x)) and achieves the lowest expected loss E[ℓ(Y, f(X))].

We use X* to denote the space of privileged information used in the SVM+, while a bar over a symbol is reserved to indicate a solution to an optimization problem.

In the non-linear setting, the input data is first mapped into a feature space endowed with an inner product. The decision space X is mapped into H via a feature map φ : X → H, and the correcting space X* is mapped into H* via φ* : X* → H*. It is known (Schölkopf et al., 2001) that inner products correspond to positive definite kernel functions (a function which, for all finite sets of points, gives rise to a positive definite kernel matrix is called a positive definite kernel) as follows: k(x, x') = ⟨φ(x), φ(x')⟩ (and similarly k* for φ*), which allows one to formulate algorithms with general kernels in mind. Since the corresponding space should be clear from the context, we omit the subscripts when dealing with inner products and the induced norms.

Unless transposed (denoted by ⊤), all vectors are column vectors denoted by lower case bold letters, matrices are denoted by capital bold letters, and random variables are denoted by capital letters. The kernel matrices K and K* are defined entrywise via K_ij = k(x_i, x_j) and K*_ij = k*(x_i*, x_j*), where i, j ∈ {1, …, n}. We also introduce index sets over the training examples, together with a shorthand for them, that are used in the proofs. Finally, N(A) and R(A) stand correspondingly for the null space and the column space of a matrix A, N(A)⊥ is the orthogonal complement of N(A), and 0 (respectively 1) is the vector of all zeros (ones).

2.2 The SVM+ optimization problem

In the framework of learning using privileged information (LUPI), the decision space is augmented with a correcting space of privileged features that are available at training time only and are essentially used to estimate the loss of an optimal classifier on the given training sample. The SVM+ algorithm (Pechyony & Vapnik, 2011) is a generalization of the support vector machine that implements the LUPI paradigm. The slack variables are parametrized as a function of the privileged features,

ξ_i = ⟨w*, φ*(x_i*)⟩ + b*,

where w* and b* are the additional parameters to be learned. The following optimization problem defines the SVM+ algorithm.

min over (w, b, w*, b*) of  (1/2)⟨w, w⟩ + (γ/2)⟨w*, w*⟩ + C Σ_{i=1}^n (⟨w*, φ*(x_i*)⟩ + b*)
subject to  y_i (⟨w, φ(x_i)⟩ + b) ≥ 1 − (⟨w*, φ*(x_i*)⟩ + b*),  i = 1, …, n,
            ⟨w*, φ*(x_i*)⟩ + b* ≥ 0,  i = 1, …, n.    (2.1)

Note that there are two hyper-parameters, C and γ, that control the trade-off between the three terms of the objective, where the second term limits the capacity of the set of correcting functions x* ↦ ⟨w*, φ*(x*)⟩ + b*.

2.3 The WSVM optimization problem

The weighted support vector machine (WSVM) is a well-known generalization of the standard SVM. Each instance is assigned an importance weight c_i ≥ 0 and, in place of the standard empirical risk estimator, its weighted version is employed:

(1/n) Σ_{i=1}^n c_i ℓ(y_i, f(x_i)).

The WSVM optimization problem is given below.

min over (w, b, ξ) of  (1/2)⟨w, w⟩ + C Σ_{i=1}^n c_i ξ_i
subject to  y_i (⟨w, φ(x_i)⟩ + b) ≥ 1 − ξ_i,  ξ_i ≥ 0,  i = 1, …, n.    (2.2)

At first glance, it may appear that the two generalizations of the SVM are unrelated. As will become clear in the following, however, there is a relation between the two, and the solution space of WSVMs includes the SVM+ solutions. This is not very surprising as soon as one realizes that re-weighting allows one to alter the loss function to a large extent and, in particular, one can mimic the effect of privileged features. The close relationship can already be seen when comparing the corresponding dual problems.
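The weighted empirical risk above can be sketched in a few lines of numpy. This is our own illustration; the 1/n normalization and the use of the hinge loss are assumptions consistent with the surrounding text, not a quotation of the paper's formulas.

```python
import numpy as np

def hinge(margins):
    """Hinge loss max(0, 1 - m), applied elementwise to margins m = y * f(x)."""
    return np.maximum(0.0, 1.0 - np.asarray(margins, float))

def weighted_risk(c, y, scores):
    """Weighted empirical risk (1/n) * sum_i c_i * hinge(y_i * f(x_i))."""
    c, y, scores = (np.asarray(a, float) for a in (c, y, scores))
    return float(np.mean(c * hinge(y * scores)))

y      = np.array([+1.0, -1.0, +1.0])
scores = np.array([0.5, -2.0, -1.0])   # f(x_i); the third point is misclassified
c      = np.array([1.0, 1.0, 3.0])     # upweight the hard third point
```

Upweighting the misclassified third point triples its contribution, so the weighted risk exceeds the unweighted one; this is exactly the kind of re-weighting the equivalence results below are about.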

2.4 The dual optimization problems

Let α and δ be the Lagrange dual variables of the SVM+ or the WSVM problem corresponding respectively to the first and the second inequality constraints (Schölkopf & Smola, 2002; Vapnik et al., 2009). Define β = α + δ, where for the WSVM the stationarity conditions fix β = Cc, and note that δ can be eliminated, leading to the constraint α ≤ β. Let

h(α) = Σ_{i=1}^n α_i − (1/2) Σ_{i,j=1}^n α_i α_j y_i y_j k(x_i, x_j).

It is not hard to see that the following optimization problem is equivalent to the dual of the SVM+ problem (2.1).

max over (α, β) of  h(α) − (1/(2γ)) (β − C1)⊤ K* (β − C1)
subject to  y⊤α = 0,  1⊤(β − C1) = 0,  0 ≤ α ≤ β.    (2.3)

Likewise, the problem below is equivalent to the dual of the WSVM problem (2.2).

max over α of  h(α)
subject to  y⊤α = 0,  0 ≤ α ≤ Cc.    (2.4)

Note that the constraint involving the correcting kernel K* is the crucial part of the SVM+ problem, as it introduces the coupling between the decision space X and the correcting space X*. Recall from the representer theorem (Schölkopf et al., 2001) that an SVM solution has the form f(x) = Σ_i α_i y_i k(x_i, x) + b. Correcting features thus control the maximum influence a data point can have on the resulting classifier, just like the weights in WSVMs.
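To build intuition for the box constraint in the WSVM dual, the following sketch runs projected gradient ascent on an offset-free variant: this is our own illustration, not the paper's solver, and it deliberately drops the equality constraint that the bias term induces so that only the clipping to [0, C·c_i] remains.

```python
import numpy as np

def wsvm_dual_no_offset(K, y, c, C=1.0, lr=0.01, steps=4000):
    """Maximize h(a) = sum(a) - 0.5 * a^T Q a  subject to  0 <= a_i <= C * c_i,
    with Q_ij = y_i * y_j * K_ij.  Offset-free WSVM dual: the equality
    constraint y^T a = 0 coming from the bias term is intentionally dropped."""
    Q = (y[:, None] * y[None, :]) * K
    a = np.zeros(len(y))
    for _ in range(steps):
        grad = 1.0 - Q @ a                       # gradient of h at a
        a = np.clip(a + lr * grad, 0.0, C * c)   # ascent step, then project on the box
    return a

# toy 1-D problem with a linear kernel
X = np.array([-2.0, -1.0, 1.0, 2.0])
y = np.array([-1.0, -1.0, 1.0, 1.0])
c = np.ones(4)                 # uniform weights recover the standard SVM dual
K = np.outer(X, X)
alpha = wsvm_dual_no_offset(K, y, c)
w = np.sum(alpha * y * X)      # primal weight recovered from the dual variables
```

On this separable toy set the inner points at ±1 end up as support vectors while the easy outer points receive zero dual weight, illustrating how the upper bounds C·c_i cap each point's influence.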

3 Uniqueness results

The connection between SVM+ and WSVM explored in Section 4 relies on the analysis of the uniqueness of their solutions. Effectively, the statements can only be made with respect to the classes of equivalent solutions and equivalent weights; hence, it is imperative to first obtain a better understanding of the different sources of non-uniqueness in the aforementioned problems.

In this section, we show that every non-trivial SVM+ solution is unique, unlike WSVM solutions, which may have a non-unique offset b. Furthermore, we describe a set of equivalent weights that yield the same WSVM solutions. The latter will be used to prove equivalence between the SVM+ and the WSVM algorithms under additional constraints.

3.1 Uniqueness of WSVM and SVM+ solutions

We begin with a known result due to Burges & Crisp (1999) that characterizes uniqueness of the weighted SVM solution. Essentially, it states that if there is an equilibrium between the instance weights of the support vectors, then the separating hyperplane can be shifted within a certain range without altering the total cost in the WSVM problem. In that case, a WSVM solver has to choose a value for the offset using some heuristic, e.g., it can choose the middle point in the allowed range of b.

The solution to the problem (2.2) is unique in w. It is not unique in b and ξ iff one of the following two conditions holds:

Note that in practice it may happen that one of the two conditions holds, in which case the WSVM problem (2.2) does not have a unique solution. This is not the case for the SVM+, as shown next.

The solution to the problem (2.1) is unique in w and w* for any C > 0, γ > 0. If there is a support vector, then b is unique as well, otherwise:

This result is interesting on its own, since it shows that the SVM+ is formulated in a way that privileged features always give enough information to choose the unique solution (if there are no support vectors, then a constant classifier is returned, with the sign depending on the class balance).

Results concerning uniqueness of dual solutions are more technical and are deferred to the Appendix.

3.2 Equivalent weights

Apart from the conditions discussed in the previous section, another source of non-uniqueness is that any given WSVM solution corresponds, in general, to multiple weight vectors c. In this section, we give a characterization of all such vectors.

A family of equivalent weights is defined for a given WSVM solution as follows:

where ℓ_i = max{0, 1 − y_i f(x_i)} denotes the hinge loss at the point x_i.

The following simple statement shows that the set defined above contains all weights that correspond to a given WSVM solution.

Let a primal-dual optimal point for the WSVM problem (2.2) be given. The point is primal optimal for any weight vector from the family defined above, and all weights for which it is primal optimal are contained in that family.

There always exists an equivalent weight vector whose mass is concentrated on the support vectors, the weights of all remaining points being zero.

It is not surprising that, a posteriori, all weight could be concentrated on the support vectors, as suggested by Corollary 3.2. As will become clear in the following, this is close to what the SVM+ algorithm is constrained to do.

4 Relation between SVM+ and WSVM

In this section, we present our main theoretical result on the conditions under which the SVM+ and the WSVM are equivalent. Section 4.1 shows that it is always possible to construct weights from an SVM+ solution such that the WSVM will have the same solution. Section 4.2 discusses when it is possible to go in the opposite direction and reveals a fundamental constraint of the SVM+ algorithm. Finally, Section 4.3 states the necessary and sufficient condition for their equivalence. Furthermore, we present a counterexample violating that condition in Section 4.4 and discuss SVM− in Section 4.5.

4.1 SVM+ solutions are also WSVM solutions

The following theorem shows that any SVM+ solution is also a solution to the WSVM problem with appropriately chosen weights, and that such a choice of weights can always be given by the SVM+ dual variables.

Let a primal-dual optimal point for the SVM+ problem be given, with dual variables α and δ. There exists a choice of weights c, given by the sum of the SVM+ dual variables (c proportional to α + δ), such that the same point is a primal-dual optimal point for the WSVM problem.

Note that a direct corollary of this result is that, just like a good choice of privileged features leads to improved predictive performance of the SVM+ (Pechyony & Vapnik, 2010), a good choice of weights leads to improved performance of the WSVM. This claim is verified empirically in the experimental Section 6 when weights are learned in an idealized setting, which is close to the Oracle SVM setting of Vapnik & Vashist (2009).

Figure 1: An example of equivalence between SVM+ (top) and WSVM (bottom). The privileged features coincide with the optimal slack variables, as motivated by the LUPI paradigm, and the instance weights are given by the sum of the SVM+ dual variables (Theorem 4.1). Note that whenever a WSVM solution is constructed from an SVM+ solution, as in this case, the weighted average loss is greater than the non-weighted one (Theorem 4.3).

Figure 1 shows a toy example where an SVM+ solution is used to compute weights that force the WSVM to find exactly the same solution. Note that the outliers (points 3 and 4) receive relatively high weight, so that the weighted average loss is greater than the non-weighted one. See Section 4.3 for further details.

4.2 Which WSVM solutions are SVM+ solutions?

We now consider the opposite direction and characterize the SVM+ solutions in terms of the induced instance weights. The following Lemma 4.2 highlights the bias of the SVM+ algorithm, as it establishes that every solution must satisfy a certain relation between the dual variables (respectively, the weights) and the loss on the training sample. This is the key to showing that the SVM+ and the WSVM algorithms are not equivalent, and that the latter is strictly more general as it does not impose that additional constraint.

Assume any given C > 0 and γ > 0, and let a primal-dual optimal point for the SVM+ problem (2.1) be given, with dual variables α and δ; then the following holds:

(1/(Cn)) Σ_{i=1}^n (α_i + δ_i) ℓ_i ≥ (1/n) Σ_{i=1}^n ℓ_i,    (4.1)

where ℓ_i is the hinge loss at the point x_i. Under an additional condition on the optimal point, (4.1) is satisfied with equality.

Taking into account that the corresponding weights in the WSVM are given by the sum of the SVM+ dual variables, the above inequality can be re-written in a more compact form.

[The Necessary Condition] Assume the setting of Theorem 4.1; then the induced weights satisfy Σ_i c_i = n and

Σ_{i=1}^n c_i ℓ_i ≥ Σ_{i=1}^n ℓ_i.

Proof.

Follows from Theorem 4.1 and Lemma 4.2. ∎

Note that this result suggests a simple way to interpret the effect of privileged features – they impose a re-weighting of the input training data. Moreover, at the end of training, more emphasis will be on points with positive loss and less on easy points; in particular, the non-support vectors may end up with zero weight.

4.3 SVM+ and WSVM equivalence

We now state the main result of this paper, which gives the necessary and sufficient condition for the equivalence between the SVM+ and the WSVM.

Let a primal-dual optimal point for the WSVM problem with instance weights c, not all zero, be given. There exists a choice of C > 0, γ > 0, and correcting features such that the same point is optimal for the SVM+ problem iff

Σ_{i=1}^n c'_i ℓ_i ≥ (1/n) (Σ_{i=1}^n c'_i)(Σ_{i=1}^n ℓ_i)    (4.2)

for some weight vector c' equivalent to c, where ℓ_i = max{0, 1 − y_i f(x_i)}. If the losses are not all zero, one such possible choice takes the correcting features proportional to the optimal slack variables,    (4.3)

and the optimal C and γ in that case follow in closed form.    (4.4)

Let us make a few remarks. First, condition (4.2) can be rewritten in terms of averages as

Σ_{i=1}^n c̄_i ℓ_i ≥ (1/n) Σ_{i=1}^n ℓ_i,    (4.5)

where c̄_i = c_i / Σ_j c_j is the normalized weight. Hence, any SVM+ solution has an equivalent WSVM setting that puts more weight on hard examples, i.e., the points with higher loss.

Further, it is clear from Definition 3.2 that the weight of points with y_i f(x_i) > 1 can be changed arbitrarily without altering the solution, since in that case α_i = 0 and ℓ_i = 0, i.e., these points are not support vectors and they have no influence on the final classifier. Hence, their weight – the upper bound on the influence – does not matter.

This reasoning leads us to a condition that is much easier to check in practice than the one in Theorem 4.3. Note that condition (4.2) involves the set of equivalent weights, and it is possible to check it directly using the definition of that set, as will be discussed below. However, if the kernel matrix K is non-singular, as is often the case with the Gaussian kernel, then one can simply take the given weights c and check (4.2) for that particular weight vector only.

Let a primal-dual optimal point for the WSVM problem with instance weights c, not all zero, be given. If the kernel matrix K is non-singular, then there exists a choice of C > 0, γ > 0, and correcting features such that the same point is optimal for the SVM+ problem iff

Σ_{i=1}^n c_i ℓ_i ≥ (1/n) (Σ_{i=1}^n c_i)(Σ_{i=1}^n ℓ_i).    (4.6)

Intuitively, the SVM+ algorithm maximizes the margin by minimizing the norm of w, as in the standard SVM, and also gradually shifts focus to hard examples by minimizing the correcting term. As long as there are sufficiently many points on the "right" side of the margin, (4.5) can be achieved by reducing the weight of such non-support vectors, and so the SVM+ solution space is as rich as that of the WSVM. In general, however, (4.5) may not be attainable without altering the solution, as demonstrated by the counterexample below.
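The averaged form of the condition is straightforward to check numerically. A small sketch of our own, assuming the reading stated above: the weighted average loss must be at least the unweighted one.

```python
import numpy as np

def weighted_avg_at_least_unweighted(c, losses, tol=1e-12):
    """Check sum_i c_bar_i * l_i >= (1/n) * sum_i l_i with c_bar = c / sum(c);
    WSVM weights that fail this test are out of reach for SVM+."""
    c = np.asarray(c, float)
    losses = np.asarray(losses, float)
    c_bar = c / c.sum()
    return bool(c_bar @ losses >= losses.mean() - tol)

# weights that emphasize the one hard (high-loss) point: condition holds
ok = weighted_avg_at_least_unweighted([1.0, 1.0, 3.0], [0.0, 0.0, 2.0])
# weights that de-emphasize the hard point: condition fails,
# mirroring the counterexample of Section 4.4
bad = weighted_avg_at_least_unweighted([3.0, 3.0, 1.0], [0.0, 0.0, 2.0])
```

The second call fails precisely because the misclassified point carries low weight, which is the situation constructed in the counterexample below.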

4.4 WSVM solution not found by SVM+

We now consider the case when misclassified training points have low weight, i.e., when the weighted average loss falls below the unweighted one, and give an example where SVM+ fails to find the corresponding WSVM solution.

Consider the training sample below (Figure 2):

The corresponding primal-dual optimal point is

Since the weighted average loss is smaller than the unweighted one, this solution does not correspond to any of the SVM+ solutions (Lemma 4.2). Note that one can easily verify that the set of equivalent weights contains only the given weight vector; hence, Proposition 4.3 already completes the claim. Similarly, one can show using Definition 3.2 that any other equivalent weights could only increase the weight of points 1 and 2, which would only decrease the weighted average loss further. Therefore, there is no equivalent weight vector for which (4.2) holds and, by Theorem 4.3, there is no correcting space that would make this point an SVM+ solution.

Figure 2: An example of a WSVM solution (bottom) that cannot be found by SVM+ (top). The instance weights are chosen in a way to avoid a zero-norm constant classifier. The resulting weighted average loss is less than the non-weighted one; hence the SVM+ cannot find this solution. Computing the privileged features as in (4.3) leads to an SVM+ solution with the opposite prediction and a higher value of the weighted average loss.

Figure 2 shows the learned WSVM and SVM+ models. A different choice of C and γ can make SVM+ return a constant classifier, which is the solution of the standard SVM, but there is no setting that would make it return the WSVM solution.

Note that in this example an even stronger result can be shown: SVM+ cannot reproduce the same type of dichotomy, i.e., even if we allowed it to return a line with any negative slope going through the same point, the SVM+ would still fail. This shows that there are settings where the WSVM performs significantly better than the SVM+ due to a fundamental constraint of the latter.

4.5 Is there an SVM−?

We have seen that the SVM+ has a more constrained solution space than the weighted SVM. Lemma 4.2 gives the exact characterization of that constraint in terms of the relation between the SVM+ dual variables and the incurred loss on the training sample. The WSVM solution space can thus be partitioned into solutions that can be found by SVM+ and the rest. We are now interested in whether there is a modification of the SVM+ algorithm that would yield solutions from that second part.

Theorem 4.3 suggests that the correcting term enters the SVM+ objective with a plus sign; so, intuitively, if we now require the reverse inequality in (4.2), the corresponding term has to enter with a minus:

(4.7)

This problem is clearly non-convex, as the objective is now a difference of convex functions. If there was a finite (local) minimizer, the KKT conditions would still hold (Borwein & Lewis, 2000, Theorem 2.3.8) for some Lagrange multiplier vector, and one could show a result similar to Lemma 4.2, but with the reverse inequality.

Unfortunately, however, the problem (4.7) is unbounded below, which is easy to see: the subtracted quadratic term grows faster than the linear term and the feasible set is unbounded. This shows that it is not trivial to modify the SVM+ algorithm to obtain solutions from its complement, and it is an open question whether such a modification (with non-degenerate solutions) exists at all.

The phenomenon we observe here is that some of the WSVM solutions (those satisfying the necessary condition) can be computed easily within the LUPI framework, while others (those violating it) may be completely out of reach. What are the implications of this observation in terms of learning a classifier?

Consider any training sample of size n for a given learning problem. Let f_c be a classifier constructed by the WSVM with weights c, and let ℓ(c) be the corresponding loss vector. The set of all admissible weights is partitioned into two subsets, depending on the sign of the difference between the weighted and the unweighted average loss. Define the "best" weight vector in each of the two classes as the one whose classifier attains the lowest risk. If the overall best classifier corresponds to the weights that are out of reach for the SVM+, then there are no privileged features that will yield an SVM+ classifier as good as it.

This reasoning motivated us to consider weight generation schemes that are unrelated to SVM+, which are discussed next.

5 How to choose the weights

Recall that we are interested in ways of incorporating prior knowledge about the training data. In the SVM+ approach, the role of additional information is played by the privileged features, which are used to estimate the loss on the training sample. The same effect, as we have established, can be achieved by importance weighting. Taking into account the vast amount of work on weighted learning, re-weighting of misclassification costs appears to be a very powerful method of incorporating prior knowledge. We would like to stress, however, that a critical difference from, e.g., cost-sensitive learning is that we are ultimately interested in minimizing the non-weighted expected loss; the weights are only used to impose a bias on the learning algorithm.

We also note that even though the SVM+ solutions are contained within the WSVM solutions, there is no implication that either of the two algorithms is "better". If privileged features are available, then SVM+ is a reasonable choice. On the other hand, if there are no privileged features, or if one has the concerns outlined at the end of Section 4.5, then one may want to consider the more general WSVM with some problem-specific scheme for computing weights.

In the following, we investigate two approaches that make different assumptions about what is additionally available to the learning algorithm at training time. The methods operate in a somewhat idealized setting and are mainly aimed at motivating further research on how to choose the weights. They may be thought of as the empirical counterparts of a more theoretical discussion involving the Oracle SVM in (Vapnik & Vashist, 2009). In particular, the weight learning method of Section 5.3 can be thought of as a way of extracting additional information about the given training sample from a validation sample which is used as a reference.
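In the spirit of learning weights against a validation reference, here is a deliberately crude caricature of our own: random search over weight vectors, each scored on a held-out validation set. The paper's actual procedure is the one in Section 5.3; the data and search scheme below are invented for illustration, using scikit-learn's per-example sample_weight as the WSVM interface.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(1)

def make_data(n):
    """Synthetic toy task: two overlapping Gaussian blobs (our own data)."""
    y = rng.choice([-1, 1], size=n)
    X = rng.randn(n, 2) + 1.5 * y[:, None]
    return X, y

X_tr, y_tr = make_data(60)       # small training sample
X_val, y_val = make_data(400)    # large validation set used as a reference

best_acc, best_w = -1.0, None
for _ in range(20):              # random search over instance-weight vectors
    w = rng.uniform(0.1, 1.0, size=len(y_tr))
    clf = SVC(kernel="linear", C=1.0).fit(X_tr, y_tr, sample_weight=w)
    acc = clf.score(X_val, y_val)
    if acc > best_acc:
        best_acc, best_w = acc, w
```

Random search is, of course, far weaker than a gradient-based or structured procedure; the point is only that validation risk gives a usable signal for choosing weights when no privileged features exist.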

5.1 Why is instance weighting important?

Let us first motivate why instance weighting can be very important in certain problems.

Figure 3: Illustration of the effect of instance weighting on a toy problem in 2D. Even though the problem is (almost) linearly separable, the two outliers in the training set cause the SVM to have a near chance level performance (horizontal line). Assigning zero weight to the outliers allows the WSVM to recover a near optimal solution (vertical line).

Consider the toy problem shown in Figure 3. The data comes from two linearly separable blobs, so it is possible to achieve zero test error on them. However, the training sample has been contaminated with two outliers that lie extremely far from the optimal decision boundary. Since the SVM uses a surrogate loss and not the 0-1 loss, the further a point lies on the wrong side of the margin, the higher its cost. Hence, the SVM "prefers" to keep the two outliers close to the decision boundary, which leads to a near chance level performance on this data set. Instance weighting, on the other hand, allows one to alter the cost of each point. In particular, if the two outliers are assigned zero weight, then the WSVM is able to find a near optimal classifier.
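The experiment in Figure 3 is easy to reproduce in spirit with scikit-learn, whose SVC accepts a per-example sample_weight (a standard WSVM interface). The data below is synthetic and our own, not the paper's; near-zero weights effectively remove the outliers.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)
# two linearly separable blobs centered at (+3, 0) and (-3, 0)
X_pos = rng.randn(20, 2) + [3.0, 0.0]
X_neg = rng.randn(20, 2) + [-3.0, 0.0]
# two extreme outliers with flipped labels, far from the true boundary
X_out = np.array([[40.0, 0.0], [-40.0, 0.0]])
X = np.vstack([X_pos, X_neg, X_out])
y = np.array([1] * 20 + [-1] * 20 + [-1, 1])

weights = np.ones(len(y))
weights[-2:] = 1e-6                       # effectively remove the outliers
clf = SVC(kernel="linear", C=1.0)
clf.fit(X, y, sample_weight=weights)
acc_clean = clf.score(X[:-2], y[:-2])     # accuracy on the two clean blobs
```

With the outliers downweighted, the linear decision boundary falls between the blobs and the clean points are classified correctly.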

Figure 4: Importance weighting leads to a more stable estimate of the decision boundary in a non-linear 2D problem. The size of a data point corresponds to its weight, which is computed from an estimate of the conditional probability P(Y = 1 | X = x) shown in the background. The WSVM (solid line) is less influenced by outliers than the SVM (dashed line), since the outliers are downweighted, which ultimately results in better predictive performance.

The second toy problem shown in Figure 4 suggests that an estimate of the conditional probability P(Y = 1 | X = x) could be used to compute instance weights and improve predictive performance even in the non-linear case, where the aforementioned problem of extreme outliers is less likely to happen. As before, the issue revolves around the points that lie either too close to or even on the wrong side of the true decision boundary. We used the standard Nadaraya-Watson estimator (6.1) to obtain an estimate of the conditional probability (shown in the background), which was then used to compute instance weights (reflected by the size of the points) using the formula (5.2) introduced below. Note that the outliers are downweighted and have less influence on the WSVM decision boundary (solid line) than on the SVM one (dashed line). That leads to better accuracy, as reported in Section 6.2.

5.2 Access to an estimate of the conditional probability

Clearly, having full access to the conditional probability is a hypothetical scenario, since in that case the classification problem is already solved. However, it is interesting to see how this type of information could be used to construct good weights. As a first step, we note that if the conditional probability were available at least at the training points, one could integrate out the labels and employ an estimator based on the conditional expectation of the loss, which is an unbiased estimator of the expected risk.
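As an illustration of this idea (the sigmoid form of the conditional probability, the fixed classifier `f`, and all constants below are invented for the sketch), integrating out the labels with the known conditional probability agrees in expectation with the ordinary empirical risk, while removing the label noise:

```python
import numpy as np

rng = np.random.default_rng(1)
hinge = lambda m: np.maximum(0.0, 1.0 - m)

n = 200
x = rng.uniform(-3, 3, size=n)
eta = 1.0 / (1.0 + np.exp(-2.0 * x))   # assumed true P(y = +1 | x)
f = lambda x: 0.8 * x                   # some fixed classifier score

# Conditional-expectation estimator: labels integrated out analytically.
risk_cond = np.mean(eta * hinge(f(x)) + (1 - eta) * hinge(-f(x)))

# Ordinary empirical risk, averaged over many label redraws given the same x.
draws = []
for _ in range(500):
    y = np.where(rng.random(n) < eta, 1.0, -1.0)
    draws.append(np.mean(hinge(y * f(x))))
emp_mean = np.mean(draws)
```

Both estimators have the same expectation over the label draw, but the conditional-expectation version has zero variance with respect to the labels, which is what makes it attractive when an estimate of the conditional probability is available.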

Whether an estimator is biased or not is an asymptotic property and is arguably of lesser interest in the small sample regime. Following this line of argument, we consider a conservative weighted estimator given by:

(5.1)
(5.2)

It is not hard to check that this weighted estimator is biased. More precisely, it is conservative in the sense that points far from the decision boundary are upweighted, while points where the conditional probability is close to 1/2 receive relatively low weight. This behavior is due to the weight transform, which is monotonically increasing and strictly convex. The monotonicity also ensures the following important property of the obtained estimator when the loss is the 0-1 loss: the weighted risk is still minimized by the Bayes classifier, so the learning problem is not changed.
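A minimal sketch of this qualitative behavior, using `exp(|2*eta - 1|)` purely as a stand-in for the actual transform in (5.2), which may differ:

```python
import numpy as np

# Hypothetical conservative weight transform: any monotonically increasing,
# strictly convex function of |2*eta - 1| has the described behavior;
# exp() is used here only as an illustration.
def conservative_weight(eta):
    return np.exp(np.abs(2.0 * eta - 1.0))

etas = np.linspace(0.0, 1.0, 101)
w = conservative_weight(etas)
```

Since such a weight depends on the point only (not on its label), reweighting the 0-1 loss rescales the cost of both labels at each point equally, so the pointwise minimizer, the Bayes classifier, is unchanged.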

If the bias of this estimator is a concern, one can let the weights decay to one as the size of the training sample increases. To this end, we consider the following generalization of the weight function in (5.2):

(5.3)

where the decay parameter is tuned along with the standard regularization parameter. Note that the standard SVM is recovered when all weights are equal to one.
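One hypothetical way to implement such a decay (the exact parametrization of (5.3) may differ) is to raise the conservative weights to a power that shrinks with the sample size:

```python
import numpy as np

# Hypothetical decay scheme: each weight is raised to a power that goes to
# zero as the sample size n grows, so the weights tend to one and the
# standard SVM is recovered in the limit.
def decayed_weight(w, n, gamma=10.0):
    return w ** (gamma / (gamma + n))

w0 = np.array([0.2, 1.0, 3.0])   # some conservative weights
```

Here `gamma` plays the role of the tuned decay parameter: a large value keeps the weights close to their conservative values for longer, a small value makes the estimator approach the unweighted SVM quickly.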

When the conditional probability is estimated from a training sample, the WSVM with weights given by (5.3) will mainly serve as a baseline for the method introduced in the following section. However, it is conceivable that an estimate of the conditional probability could be available from a different source, e.g., from annotations provided by humans. The latter setting is evaluated in Section 6.4.

5.3 Learning the weights

Given a fixed training sample, the weights in a weighted SVM parametrize the set of hypotheses that the WSVM can choose from. Hence, they can be learned within the standard framework of risk minimization, with the additional twist that the classifier depends on the weights implicitly:

(5.4)
(5.5)

Clearly, the optimization problem (5.4) cannot be solved in practice since the underlying probability distribution is unknown; hence, we replace the expected risk in (5.4) with an estimator. The latter, however, has to be different from the estimator in (5.5) to avoid overfitting. We follow a simple approach and assume that a second sample is available at training time. The problem (5.4) is thus replaced with

This idea follows the method of Chapelle et al. (2002), who suggested tuning L2-SVM parameters by minimizing certain estimates of the generalization error using a gradient descent algorithm. The penalization of the training errors used there additionally allows one to assume the hard margin case, which leads to a very specific derivation of the gradient w.r.t. the parameters. Instead, we proceed with a different approach and use a smooth version of the hinge loss given below in (5.10). Furthermore, we optimize (5.5) in the primal, as suggested by Chapelle (2007). The weight learning problem can thus be stated as follows.

(5.6)
(5.7)

where the kernel matrices and their columns are defined with respect to the training sample and the second (validation) sample.

Note that the classifier depends on the weights implicitly via the second optimization problem, and the main challenge in applying gradient descent is the computation of the derivatives of the solution w.r.t. the weights. These can be computed via implicit differentiation from the optimality conditions, as shown below.

Let the loss function be convex and twice continuously differentiable, and let the kernel matrix be (strictly) positive definite. Define the auxiliary vectors componentwise from the first and second derivatives of the loss evaluated at a solution of (5.7) for the given weights. If the second-derivative terms do not all vanish, then the solution is unique, it is continuously differentiable w.r.t. the weights, and the corresponding gradient can be computed as follows.

(5.8)

Note that this result can be directly applied to such popular loss functions as the squared hinge loss and the logistic loss; for the latter, the condition always holds unless all weights are zero. If the condition fails, it can be seen that the solution is still uniquely defined and continuously differentiable w.r.t. the weights if the bias term is considered fixed. The "gradient" in this case is given by

(5.9)
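A generic sketch of this implicit differentiation step, using ridge-regularized logistic regression as a stand-in for the kernelized problem (5.7) (a smooth loss and a linear model replace the smoothed hinge and the kernel matrices; this is not the paper's exact formula (5.8)):

```python
import numpy as np

rng = np.random.default_rng(2)

def lgrad(m):                     # l(m) = log(1 + exp(-m));  l'(m)
    return -1.0 / (1.0 + np.exp(m))

def lhess(m):                     # l''(m) = sigmoid(m) * (1 - sigmoid(m))
    s = 1.0 / (1.0 + np.exp(-m))
    return s * (1.0 - s)

def fit(X, y, w, lam=1.0, iters=50):
    """Newton's method on lam/2*||theta||^2 + sum_i w_i * l(y_i * x_i'theta)."""
    theta = np.zeros(X.shape[1])
    for _ in range(iters):
        m = y * (X @ theta)
        g = lam * theta + X.T @ (w * y * lgrad(m))
        H = lam * np.eye(X.shape[1]) + X.T @ (X * (w * lhess(m))[:, None])
        theta -= np.linalg.solve(H, g)
    return theta

def weight_gradient(X, y, w, Xv, yv, lam=1.0):
    """d(validation loss)/dw by the implicit function theorem:
    dtheta/dw_j = -H^{-1} * l'(m_j) * y_j * x_j at the optimum."""
    theta = fit(X, y, w, lam)
    m = y * (X @ theta)
    H = lam * np.eye(X.shape[1]) + X.T @ (X * (w * lhess(m))[:, None])
    dtheta_dw = -np.linalg.solve(H, (X * (y * lgrad(m))[:, None]).T)  # (d, n)
    mv = yv * (Xv @ theta)
    grad_val = Xv.T @ (yv * lgrad(mv)) / len(yv)  # validation-loss gradient
    return dtheta_dw.T @ grad_val

X = rng.normal(size=(40, 3)); y = np.sign(X[:, 0] + 0.3 * rng.normal(size=40))
Xv = rng.normal(size=(30, 3)); yv = np.sign(Xv[:, 0] + 0.3 * rng.normal(size=30))
w = np.ones(40)
g = weight_gradient(X, y, w, Xv, yv)

# Central finite-difference check of the first coordinate.
def val_loss(w):
    mv = yv * (Xv @ fit(X, y, w))
    return np.mean(np.log1p(np.exp(-mv)))

eps = 1e-5
e0 = np.zeros_like(w); e0[0] = eps
fd = (val_loss(w + e0) - val_loss(w - e0)) / (2 * eps)
```

The key point mirrors the theorem above: the twice-differentiable loss makes the inner optimality condition an equation in the solution and the weights, so the Hessian of the inner problem gives the Jacobian of the solution with respect to the weights.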

Figure 5: The 0-1 loss, the hinge loss, the approximate hinge loss, and its first two derivatives.

Ideally, to be consistent with the discussion about the relation between the SVM and the WSVM, we would have to consider the hinge loss in the weight learning problem. However, the hinge loss is not differentiable and Theorem 5.3 does not apply. Instead, we consider a differentiable approximation of the hinge loss that preserves certain desirable properties of the latter. We have chosen the loss function defined as follows (Figure 5).

(5.10)

Note that, unlike certain other approximations, this function is twice continuously differentiable. Like the hinge loss, it does not penalize points with a sufficiently large margin, and it grows linearly for points far on the wrong side of the margin.
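One possible twice continuously differentiable construction with these properties (not necessarily the exact definition in (5.10)) integrates a smoothstep between the flat and the linear regimes:

```python
import numpy as np

# A C^2 hinge approximation (one possible construction, not necessarily the
# paper's (5.10)). With t = 1 - margin and smoothing width d:
#   loss = 0           for t <= 0,
#   loss = t - d/2     for t >= d  (linear growth),
# and in between the integral of the smoothstep 3u^2 - 2u^3, so that the
# first two derivatives match at both junctions.
def smooth_hinge(margin, d=0.5):
    t = 1.0 - margin
    u = np.clip(t / d, 0.0, 1.0)
    bridge = d * (u**3 - 0.5 * u**4)
    return np.where(t <= 0, 0.0, np.where(t >= d, t - 0.5 * d, bridge))
```

The bridging polynomial has zero value and zero first two derivatives at the flat end, and matches the value, slope one, and zero curvature of the linear branch at the other end, which is exactly what twice continuous differentiability requires.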

With the approximate hinge loss defined above, the condition of Theorem 5.3 means that at least one of the data points has to fall into the strictly convex region of the loss. Clearly, this presents a tradeoff between having a good approximation of the hinge loss (a small smoothing region) and a higher chance of being able to compute "correct" gradients and thus make substantial progress in the optimization problem (a large smoothing region). We resolve the tradeoff by tuning the smoothing parameter on a validation set.

6 Experiments

In this section we present an empirical evaluation of the algorithms considered in this paper. In our experiments, we used the WSVM implementation of Chang & Lin (2011) and the SVM+ code provided by Pechyony & Vapnik (2011). The weight learning problem was solved using our implementation of the BFGS algorithm (Nocedal & Wright, 2006). The general experimental setup is similar to that of Vapnik & Vashist (2009): parameters are tuned on a validation set, which is not used for training, and performance is evaluated on a test set. Training subsets are randomly sampled from a fixed training set, and results over multiple runs are aggregated, showing the mean error rate as well as the standard deviation. Depending on the experiment, the validation set is either fixed or subsampled randomly as well. The Gaussian RBF kernel is used in all of the experiments, and the features are rescaled to a fixed range. The weights in (5.2) are computed from an estimate of the conditional probability, which is either given directly by human experts or estimated via:

(6.1)

where the smoothing kernel is Gaussian with a bandwidth parameter.
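A sketch of such a Nadaraya-Watson estimate of the conditional probability on synthetic 1D data (the sigmoid ground truth, sample sizes, and bandwidth below are arbitrary choices for the illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

def nadaraya_watson(X_train, y01, X_query, bandwidth=0.5):
    """Kernel-smoothed estimate of P(y = 1 | x) with a Gaussian kernel.

    y01 holds labels in {0, 1}; the estimate at each query point is the
    kernel-weighted average of the training labels.
    """
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * bandwidth ** 2))
    return (K @ y01) / K.sum(axis=1)

# Synthetic 1D check: the true P(y = 1 | x) is a sigmoid in x.
X = rng.uniform(-3, 3, size=(500, 1))
p_true = 1.0 / (1.0 + np.exp(-3.0 * X[:, 0]))
y01 = (rng.random(500) < p_true).astype(float)

Xq = np.array([[-2.0], [0.0], [2.0]])
p_hat = nadaraya_watson(X, y01, Xq)
```

Because the estimate is a convex combination of 0/1 labels, it automatically stays in [0, 1] and can be plugged directly into the weight formula (5.2).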

Note that in all experiments each algorithm has access to exactly the same data, and the only difference between different splits is which data is used to construct a classifier (training) and which is used to tune the hyper-parameters (validation).

6.1 WSVM replicates SVM+

Figure 6: SVM, SVM+, and WSVM error rates. Left: reproduction of the experiment of Vapnik et al. (2009); the SVM+ and the WSVM classifiers coincide up to the non-uniqueness of the bias term. Middle: instance weighting leads to a significant performance improvement when a large validation set is available (toy data). Right: a similar setting, but with training-to-validation splits of 1-to-2 and 2-to-1.

Figure 7: SVM and WSVM error rates on the UCI repository data sets with training-to-validation splits of 1-to-2 and 2-to-1. Left: Breast Cancer Wisconsin. Middle: Mammographic Mass. Right: Spambase.

We begin with the experimental verification of our theoretical findings of Section 4. We reproduced the handwritten digit recognition experiment of Vapnik et al. (2009), where the task is to discriminate between 5's and 8's taken from the MNIST database and downsized to a lower resolution. We used the features provided by the authors and obtained similar error rates for both the SVM and the SVM+, see Figure 6, left. Our results are averaged over 100 runs and include more training subset sizes.

The weights for the WSVM algorithm were computed from the dual variables and the bias term of the SVM+ solution, following Section 4. We observed that the resulting weight vectors coincide. However, we also observed that, in general, the bias terms differ, which is explained by the non-uniqueness of the bias (Theorem 3.1). If the bias from the SVM+ model is used (the first WSVM variant in the plot), then the two classifiers are identical, but if the bias is tuned within the constraints imposed by the KKT conditions (the second WSVM variant in the plot), then minor differences appear.

6.2 Toy data

We now turn to the problem of choosing weights and evaluate the two weight generation schemes introduced in Section 5. In this experiment, the data comes from a mixture of 2D Gaussians that form a non-linear shape resembling a "W", see Figure 4. Similar to the previous setting, we sample from a fixed training set of size 400, tune parameters, estimate the conditional probability, and perform weight learning on a validation set of size 4000, and test on a separate set of size 2000. The results are averaged over 50 runs, see Figure 6, middle. Note that, just like in the experiment of Vapnik et al. (2009), this is an idealistic setting where the validation set is so large that model selection is close to optimal. In practice, one would never split the available sample 1-to-40, therefore we also evaluate the more "reasonable" splits of 1-to-2 and 2-to-1 next.

Figure 6, right, shows the results of a similar experiment where the validation sample is not fixed, but rather obtained by splitting the available training data. Since the validation samples are now small, the estimation of the conditional probability fails and the corresponding WSVM performs on par with the standard SVM. The weight learning, however, still yields a performance improvement on 1-to-2 splits. Moreover, the WSVM with weight learning is able to achieve a similar error rate as the SVM trained on twice as much data. We also observe the effect of overfitting when weight learning is performed on 2-to-1 splits, and we omit it in further experiments. Note that one could have anticipated this: for the weight learning to succeed, the amount of validation data, in general, has to be at least comparable to or larger than the number of weights to be learned.

6.3 UCI data sets

In this set of experiments we evaluate weight learning on three data sets from the UCI repository (Frank & Asuncion, 2010). For every data set, we first remove any records with missing values and then split the remaining data randomly into training and test sets of roughly equal size, approximately preserving the initial class distribution. Table 1 summarizes the characteristics of the obtained data sets. Smaller subsets are then sampled from the training data and split into training and validation sets as 1-to-2 and 2-to-1. The subset sampling process is repeated 20 to 50 times depending on the amount of data. The rest of the experimental setup is the same as before.

Figure 8: Error rate comparison in the handwritten digit recognition experiment of Vapnik et al. (2009). The conditional probability was estimated from human rankings. Left: the original setting. Right: the extended setting where each digit is translated by 1 pixel in each of the 8 directions.
Data set Features Training Test
BCW 9 351 332
Mammographic 4 420 410
Spambase 57 2430 2171
Table 1: Statistics of data sets from the UCI repository.

Breast Cancer Wisconsin (BCW) (Bennett & Mangasarian, 1992): On this data set, the weight learning on the 1-to-2 split performs on par or better than the SVM on both splits, see Figure 7. Notably, the SVM performed worse on the 2-to-1 split, which we attribute to overfitting. The latter is not too surprising considering the small amount of data and the capacity of the RBF kernel, which makes the weight learning result even more remarkable.

Mammographic Mass (Elter et al., 2007): Again, the weight learning performs on par or better than the SVM on all splits for almost all subsets. On the last subset, however, the weight optimization did not yield any improvement, and the resulting performance is the same as that of the corresponding SVM.

Spambase: On this data set, the general outcome is that the weight learning brings roughly the same level of improvement as if twice as much data were used for training the standard SVM. This can be interpreted as a more efficient use of training data given the additional knowledge about importance of each data point.

6.4 Handwritten digit recognition (5’s vs 8’s)

Finally, we get back to the original handwritten digit recognition experiment of Vapnik et al. (2009) and evaluate our weight generation schemes on that data.

In this experiment, we evaluate the first weight generation scheme (5.3) under the assumption that a digit ranking is available as the additional information, i.e., in addition to the class label, we are also given a confidence score between 0 and 1. This is a reasonable assumption, e.g., for data sets where robust annotation is obtained by aggregating labels from several human experts, and it is similar to the setting considered by Wu & Srihari (2004).

We collected additional annotation in the form of rankings from three human experts. The experts were presented with a random sample of the downsized digits and were asked to label them using one of several possible labels, which we translated to scores between 0 and 1. Each of the 100 digits from the training set was ranked 16 times, and the average score was then used as an estimate of the conditional probability.

Figure 8 shows the corresponding experimental results. We observe that additional information from human experts helps on small subsets, but its influence degrades on larger subsets. This might be in part due to the difference in image representation used by SVMs and humans. In particular, humans’ recognition of digits is translation invariant, while the pixel-wise representation is not. This leads us to our final experiment on the extended version of that data set.

We extend the original training sample of 100 digits by shifting each digit by 1 pixel in each of the 8 directions, thus obtaining 9 times the initial sample size. We assume that both the human rankings and the privileged features from the experiment of Vapnik et al. (2009) are unaffected by such translations and simply replicate them. The experimental results are presented in Figure 8, right. Note that the WSVM with human rankings is now consistently on par with or better than the SVM and is somewhat comparable to the SVM+.

Remarkably, weight learning now gives a significant performance boost on the extended version of the data set, which shows that it can be successfully combined with other sources of additional information, such as the hint about translation invariance in this case. Interestingly, Lauer & Bloch (2008) discussed the possibility of combining the virtual sample method, which we used to extend the training set, with weighted learning where each virtual point would be given a confidence score. Our weight learning algorithm does exactly that, but without trying to model the measure of confidence; instead, it attempts to directly optimize an estimate of the expected loss.

7 Conclusion

We have investigated basic properties of the recently proposed SVM+ algorithm, such as the uniqueness of its solution, and have shown that it is closely related to the well-known weighted SVM. We revealed that all SVM+ solutions are constrained to have a certain dependency between the dual variables and the incurred loss on the training sample, and that the prior knowledge from the SVM+ framework can be encoded via instance weights.

That motivated us to consider sources of additional information about the training data other than privileged features. In particular, we considered the weight learning method of Section 5.3, which allows one to learn weights directly from data (using a validation set). The latter approach is not limited to SVMs and can be extended to other classifiers.

Experimental results confirmed our intuition that importance weighting is a powerful method of incorporating prior knowledge. In an idealized setting, we showed that weight learning works and yields a significant performance improvement. The choice of weights in a more practical setting is left for future work.

Appendix A The KKT conditions

In convex optimization, the Karush-Kuhn-Tucker (KKT) conditions are necessary and sufficient for a point to be primal and dual optimal with zero duality gap (Boyd & Vandenberghe, 2004).

The KKT conditions corresponding to the weighted SVM problem (2.2) are given below:

(A.1a)
(A.1b)
(A.1c)
(A.1d)
(A.1e)
(A.1f)
(A.1g)

And the KKT conditions corresponding to the SVM+ problem (2.1) are as follows:

(A.2a)
(A.2b)
(A.2c)
(A.2d)
(A.2e)
(A.2f)
(A.2g)
(A.2h)

Appendix B Technical proofs

B.1 Proof of Theorem 3.1

The solution to problem (2.1) is unique in the weight vector for any choice of the regularization parameters. If there is a support vector, then the bias term is unique as well; otherwise: