On the consistency of Multithreshold Entropy Linear Classifier

04/18/2015 · Wojciech Marian Czarnecki et al., Jagiellonian University

Multithreshold Entropy Linear Classifier (MELC) is a recent classifier idea which employs information theoretic concepts in order to create a multithreshold maximum margin model. In this paper we analyze its consistency over multithreshold linear models and show that its objective function upper bounds the number of misclassified points in a similar manner to the way hinge loss does in support vector machines. For further confirmation we also conduct numerical experiments on five datasets.


1 Introduction

Many of the existing machine learning classifiers are based on the minimization of some additive loss function which penalizes each misclassification [scholkopf2002learning]. This class of models includes the perceptron, neural networks, logistic regression, linear regression, support vector machines (both traditional and least squares) and many others. For most such approaches it is possible to prove their consistency, meaning that under the assumption that the data is sampled i.i.d. from some unknown probability distribution, the algorithm converges to the optimal model in the Bayesian sense as the sample size grows to infinity [steinwart2002influence, steinwart2005consistency]. While it is quite natural for a model to be consistent with the loss function it directly minimizes, such a surrogate loss generally only upper bounds the number of wrong answers.

In general, up to some weighting schemes, the classic measure of the classification error is the expected number of misclassified samples from some unknown distribution $\mathcal{D}$:

$E(f) = \mathbb{P}_{(x,y) \sim \mathcal{D}}\left[\operatorname{sign}(f(x)) \neq y\right],$

which directly translates to

$E(f) = \mathbb{E}_{(x,y) \sim \mathcal{D}}\left[\ell_{0/1}(y f(x))\right]$

for $\ell_{0/1}(t) = 1$ if $t \leq 0$ and $\ell_{0/1}(t) = 0$ otherwise. We call $\ell_{0/1}$ the 0/1 loss function and use this notation throughout. As a result we can define an empirical risk over the training set $\{(x_i, y_i)\}_{i=1}^{N}$ as

$\hat{E}(f) = \frac{1}{N} \sum_{i=1}^{N} \ell_{0/1}(y_i f(x_i)),$

which can be minimized over some family of classifiers $\mathcal{F}$. Unfortunately, for the 0/1 loss the resulting optimization problem is hard even for linear models. To overcome this issue many classifiers are constructed through optimization of some similar loss function which results in feasible problems. For example, support vector machines replace the 0/1 loss with the so-called hinge loss

$\ell_{hinge}(t) = \max(0, 1 - t)$

for $t = y f(x)$. It appears that such a problem in the class of linear classifiers is convex and so easy to solve. There are two important aspects of the hinge loss that make it a reasonable surrogate function: first, $\ell_{hinge}(y f(x)) = 0 \Rightarrow \ell_{0/1}(y f(x)) = 0$ (the implication is in fact an equivalence up to scaling of the linear operator, as the hinge loss returns non-zero values for correct predictions with margins in the $(0, 1)$ interval); second, $\ell_{hinge}(t) \geq \ell_{0/1}(t)$ for every $t$. In other words, it is an upper bound of the 0/1 loss, and when it attains zero then there are no misclassified points.
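These two properties are easy to check numerically; the sketch below (illustrative NumPy code, function names are ours) verifies both the pointwise bound and the zero-implies-zero implication on random margins.

```python
import numpy as np

def zero_one_loss(t):
    """0/1 loss on margins t = y*f(x): 1 for a non-positive margin, else 0."""
    return (t <= 0).astype(float)

def hinge_loss(t):
    """Hinge loss max(0, 1 - t): a convex upper bound of the 0/1 loss."""
    return np.maximum(0.0, 1.0 - t)

rng = np.random.default_rng(0)
t = rng.uniform(-3, 3, size=1000)          # random margins y*f(x)

# The hinge loss dominates the 0/1 loss pointwise...
assert np.all(hinge_loss(t) >= zero_one_loss(t))
# ...and zero hinge loss implies zero misclassified points.
sep = np.array([1.0, 2.5, 1.2])            # margins with zero hinge loss
assert hinge_loss(sep).sum() == 0 and zero_one_loss(sep).sum() == 0
```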

In this paper we analyze the Multithreshold Entropy Linear Classifier, a recently proposed [czarnecki2014multithreshold] classifier which builds a multithreshold linear model using information theoretic concepts. It is a density based approach which cannot be easily translated into the language of additive loss functions. We show that this model is consistent with the 0/1 loss over simple families of distributions, and that in general it also upper bounds the 0/1 loss in the class of multithreshold linear classifiers; when its objective attains zero, there are no misclassified points. We also draw some intuitions showing how this model is related to other linear classifiers and conclude with numerical experiments.

2 Multithreshold Entropy Linear Classifier

Multithreshold Entropy Linear Classifier (MELC [czarnecki2014multithreshold]) is aimed at finding a linear operator $v$ that maximizes the Cauchy-Schwarz Divergence [jenssen2006cauchy] of the kernel density estimations of each class projected onto $v$. It appears that due to the affine transformation invariance of this problem one can (and should, as shown in [czarnecki2014multithreshold]) restrict attention to the unit sphere, meaning that $\|v\| = 1$.

There are many density based methods; in particular, one can perform kernel density estimation of any dataset and simply classify according to which density is larger. However, such an approach cannot work in general due to the curse of dimensionality: density estimation requires an enormous number of points for reasonable results (the number of required points grows exponentially with the data dimension). As a result, existing datasets can support density estimation in at most a few dimensions, while the data itself can have thousands. This leads to the very natural concept of performing density estimation on a low-dimensional projection of the data, in particular the one-dimensional projection used by MELC.

For a given set of points $X = \{x_1, \ldots, x_N\} \subset \mathbb{R}^d$ and a unit vector $v$, its projection onto $v$ is simply $v^T X = \{v^T x_1, \ldots, v^T x_N\}$. Kernel density estimation of this projection using the Gaussian kernel and Silverman's rule [silverman] is given by

$[v^T X](x) = \frac{1}{N} \sum_{i=1}^{N} \mathcal{N}(x; v^T x_i, h^2),$

where

$h = \left(\frac{4}{3N}\right)^{1/5} \hat{\sigma}(v^T X)$

and $\hat{\sigma}(v^T X)$ denotes the sample standard deviation of the projected points.
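As a concrete sketch of this construction (illustrative code, not the authors' implementation; the Gaussian kernel is assumed as above), the projected KDE with a Silverman bandwidth can be written as:

```python
import numpy as np

def silverman_bandwidth(s):
    """Silverman's rule of thumb for a one-dimensional Gaussian KDE."""
    return (4.0 / (3.0 * len(s))) ** 0.2 * np.std(s)

def projected_kde(X, v, grid):
    """KDE of the projection v^T X, evaluated at the points of `grid`."""
    s = X @ v                              # one-dimensional projections
    h = silverman_bandwidth(s)
    z = (grid - s[:, None]) / h            # (N, len(grid)) standardized gaps
    k = np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi)
    return k.sum(axis=0) / (len(s) * h)    # mixture of N Gaussian bumps

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
v = np.array([1.0, 0.0])                   # unit-norm projection direction
grid = np.linspace(-5, 5, 401)
dens = projected_kde(X, v, grid)

# The estimate is a valid density: non-negative, unit mass (up to grid error)
mass = dens.sum() * (grid[1] - grid[0])
print(round(mass, 2))                      # ≈ 1.0
```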

Now, to define the MELC objective function, we need some definitions, namely:

  • the cross information potential, which, as shown in [czarnecki2014multithreshold], is connected to minimization of the empirical risk: $ip^{\times}(f, g) = \int f(x) g(x)\, dx$;

  • Renyi's quadratic cross entropy, as defined in [principe2010information], which is simply a negative logarithm of the cross information potential: $H_2^{\times}(f, g) = -\log \int f(x) g(x)\, dx$;

  • Renyi's quadratic entropy, which is Renyi's quadratic cross entropy between a pdf and itself: $H_2(f) = H_2^{\times}(f, f) = -\log \int f(x)^2\, dx$;

  • the Cauchy-Schwarz Divergence, optimized by the full MELC model: $D_{CS}(f, g) = 2 H_2^{\times}(f, g) - H_2(f) - H_2(g)$.
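Assuming the standard definitions above, the four quantities can be computed on a grid as follows (an illustrative sketch; all function names are ours):

```python
import numpy as np

x = np.linspace(-8.0, 8.0, 2001)
dx = x[1] - x[0]

def ip_cross(f, g):
    """Cross information potential: the integral of f*g."""
    return float(np.sum(f * g) * dx)

def renyi_cross_entropy(f, g):
    """Renyi's quadratic cross entropy: -log of the cross information potential."""
    return -np.log(ip_cross(f, g))

def renyi_entropy(f):
    """Renyi's quadratic entropy: the cross entropy of a pdf with itself."""
    return renyi_cross_entropy(f, f)

def cs_divergence(f, g):
    """Cauchy-Schwarz divergence: 2*H2x(f, g) - H2(f) - H2(g)."""
    return 2 * renyi_cross_entropy(f, g) - renyi_entropy(f) - renyi_entropy(g)

gauss = lambda m, s: np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
f, g = gauss(-1.0, 1.0), gauss(1.0, 1.0)

# D_CS is non-negative and vanishes for identical densities
assert cs_divergence(f, g) > 0
assert abs(cs_divergence(f, f)) < 1e-12
```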

In particular, non-regularized MELC is prone to overfitting, which can be summarized by the following observation.

Observation 1.

Given an arbitrary finite, consistent set of samples, non-regularized MELC learns it with zero error for a sufficiently small kernel width $h$.

Proof.

First let us notice that any finite, consistent set of samples is separable by some multithreshold linear classifier. In other words, there exists $v$ with $\|v\| = 1$ such that $v^T x_i \neq v^T x_j$ whenever $y_i \neq y_j$.

Obviously, only finitely many pairs of vectors can violate this requirement, each pair $(x_i, x_j)$ defining the family of linear projections $\{v : v^T (x_i - x_j) = 0\}$ that project both points onto the same value; thus the set of invalid projections is a finite union of hyperplanes intersected with the unit sphere.

So it is sufficient to choose $v$ outside this union, which is a non-empty set, as there are infinitely many possible angles that $v$ can form with each axis, and $x_i \neq x_j$ for all pairs with $y_i \neq y_j$ (from the dataset consistency).

In the worst case this results in a multithreshold linear classifier with up to $N - 1$ thresholds. As a consequence, there exists a linear projection for which the smallest margin between samples of this set is greater than zero.

As has been shown in [czarnecki2014multithreshold], non-regularized MELC maximizes the smallest margin among all margins in multithreshold linear classifiers as $h$ approaches $0$. At the same time, MELC fails to learn these samples perfectly if and only if at least two samples with distinct labels are projected onto the very same point, which is equivalent to the maximum of the smallest margin in the class of multithreshold linear classifiers for this sample being equal to $0$, a contradiction. ∎

In particular, this means that for small values of the kernel width $h$, without regularization, this model has infinite Vapnik-Chervonenkis dimension [vapnik2000nature], like many other density or nearest neighbour based approaches. In the following section we focus on a more practical characteristic: whether this classifier is able to learn an arbitrary continuous distribution with the smallest obtainable error in its class of models. This characteristic is called consistency and can be defined as follows.

Definition 1 (Consistency).

A model $m$ is called consistent with error measure $E$ and family of distributions $\mathcal{F}$ in the class of models $\mathcal{M}$ if, for any distribution in $\mathcal{F}$, the model $m$ trained on i.i.d. samples from it approaches the minimum error, as measured by $E$, over all models in $\mathcal{M}$ as the samples' size goes to infinity.

3 Non-regularized MELC consistency

In this section we focus on non-regularized MELC, which searches for the linear projection $v$ (with norm 1) maximizing Renyi's quadratic cross entropy of the kernel density estimations of the data projections:

$v = \arg\max_{\|v\| = 1} H_2^{\times}\left([v^T X^+], [v^T X^-]\right),$

and which makes a classification decision based on the estimated projected densities:

$cl(x) = \begin{cases} + & \text{if } [v^T X^+](v^T x) > [v^T X^-](v^T x), \\ - & \text{otherwise.} \end{cases}$

We show that such a classifier is nearly consistent with the 0/1 loss in the class of all multithreshold linear classifiers. We also draw an analogy between its approach and the one taken by the support vector machine model (as well as other models based on minimization of a regularized empirical risk). Let us start with some basic definitions and notation.
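A minimal 2-D sketch of this procedure (our own illustrative code: a Gaussian kernel is assumed, and an exhaustive grid search over angles stands in for whatever optimizer is used in practice) can look as follows:

```python
import numpy as np

def silverman(s):
    """Silverman's rule-of-thumb bandwidth for a 1-D Gaussian KDE."""
    return (4.0 / (3.0 * len(s))) ** 0.2 * np.std(s)

def cross_information_potential(a, b):
    """ip of two Gaussian KDEs; the integral of the product of two Gaussian
    mixtures has the closed form mean_ij N(a_i - b_j; 0, h_a^2 + h_b^2)."""
    var = silverman(a) ** 2 + silverman(b) ** 2
    d = a[:, None] - b[None, :]
    return float(np.mean(np.exp(-0.5 * d ** 2 / var)) / np.sqrt(2 * np.pi * var))

def melc_projection(Xp, Xn, n_angles=180):
    """Non-regularized MELC in 2-D: search over unit vectors for the projection
    minimizing ip, i.e. maximizing Renyi's quadratic cross entropy."""
    angles = np.linspace(0.0, np.pi, n_angles, endpoint=False)
    vs = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    ips = [cross_information_potential(Xp @ v, Xn @ v) for v in vs]
    return vs[int(np.argmin(ips))]

rng = np.random.default_rng(0)
Xp = rng.normal([0.0, 0.0], 1.0, size=(200, 2))   # positive class
Xn = rng.normal([5.0, 0.0], 1.0, size=(200, 2))   # negative class
v = melc_projection(Xp, Xn)
print(v)   # close to (1, 0), the direction separating the classes
```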

Definition 2 (Expected accuracy).

Given some classifier $cl$, the expected accuracy over distributions $(\mathcal{X}^+, \mathcal{X}^-)$ with priors $(p^+, p^-)$ is

$acc(cl) = p^+\, \mathbb{P}_{x \sim \mathcal{X}^+}[cl(x) = +] + p^-\, \mathbb{P}_{x \sim \mathcal{X}^-}[cl(x) = -].$

For unbalanced datasets we might be more interested in measures that make both classes equally important regardless of their sizes (priors), which leads to the averaged accuracy (also known as balanced or weighted accuracy).

Definition 3 (Expected averaged accuracy).

Given some classifier $cl$, the expected averaged accuracy (ignoring the classes' priors) over distributions $(\mathcal{X}^+, \mathcal{X}^-)$ is

$acc_B(cl) = \tfrac{1}{2}\left(\mathbb{P}_{x \sim \mathcal{X}^+}[cl(x) = +] + \mathbb{P}_{x \sim \mathcal{X}^-}[cl(x) = -]\right).$

Let us now compute the smallest obtainable error by multithreshold linear classifiers as measured by expected averaged accuracy (EAA).

Proposition 1 (Multithreshold Linear Classifier EAA Bayes Risk).

For the family of multithreshold linear classifiers, the smallest obtainable EAA error for distributions $(\mathcal{X}^+, \mathcal{X}^-)$ equals

$\min_{\|v\| = 1} \frac{1}{2} \int \min\left([v^T \mathcal{X}^+](x), [v^T \mathcal{X}^-](x)\right) dx.$

Proof.

The expression $\frac{1}{2} \int \min([v^T \mathcal{X}^+](x), [v^T \mathcal{X}^-](x))\, dx$ simply expresses the probability of making a bad classification over the whole data projection. For each point $x$, we have to classify it as a member of either $\mathcal{X}^+$ or $\mathcal{X}^-$, and obviously we make the smallest error when classifying each point according to the larger projected density, erring with probability proportional to the smaller one. As a result, the projection which realizes the minimum of the probability of an error is the one giving the greatest expected averaged accuracy. ∎

In the following sections we assume that the kernel density estimation approximating the data distribution is the actual distribution, as with the sample size growing to infinity kernel density estimation with Silverman's rule [silverman] is guaranteed to converge to the true distribution. As a consequence, each result regarding a property of the distribution is also true for a finite sample in the limiting case. We also use the notation

$err(v) = \frac{1}{2} \int \min\left([v^T \mathcal{X}^+](x), [v^T \mathcal{X}^-](x)\right) dx$

for the smallest obtainable multithreshold linear classifier misclassification error for a given projection $v$. So in particular the Bayes risk of the whole class equals $\min_{\|v\| = 1} err(v)$.

Let us begin with the simplest case, when there exists a perfect classifier able to distinguish the samples' classes (the case when the Bayesian risk is 0).

Observation 2.

Non-regularized MELC is consistent with the 0/1 loss on multithreshold linearly separable distributions.

Proof.

If two distributions are perfectly separable by a multithreshold linear separator, then there exists a linear projection $v$ such that the common support of the distributions projected onto $v$ has zero measure.

Obviously $err(v) = 0$, as we integrate a function which is non-zero only on a set of zero measure.

Similarly $ip^{\times}([v^T \mathcal{X}^+], [v^T \mathcal{X}^-]) = 0$, because the integral of the product of two non-negative functions is equal to zero if and only if the set on which both of these functions are non-zero has zero measure. As a result the solution given by non-regularized MELC, which minimizes the cross information potential, attains the Bayesian risk for this class of distributions. ∎

Let us now investigate the situation when the data of each class come from a radial normal distribution.

Observation 3.

Non-regularized MELC is consistent with the 0/1 loss on radial normal distributions.

Proof.

Let us assume that we are given Gaussians $\mathcal{N}(\mu^+, \sigma_+^2 I)$ and $\mathcal{N}(\mu^-, \sigma_-^2 I)$ with variances $\sigma_+^2$ and $\sigma_-^2$ respectively.

It is easy to see that the linear projections of these distributions form the family of one-dimensional normal distributions with variances $\sigma_+^2, \sigma_-^2$ respectively and distance between their means in the interval $[0, \|\mu^+ - \mu^-\|]$. The optimal projection is the $v$ which maximizes the distance between these means, so $v = (\mu^+ - \mu^-) / \|\mu^+ - \mu^-\|$.

On the other hand, according to Czarnecki et al. [czarnecki2014multithreshold], we have

$ip^{\times}\left([v^T \mathcal{X}^+], [v^T \mathcal{X}^-]\right) = \mathcal{N}\left(v^T (\mu^+ - \mu^-);\, 0,\, \sigma_+^2 + \sigma_-^2\right),$

so obviously $ip^{\times}$ is minimized (and $H_2^{\times}$ maximized) when $|v^T (\mu^+ - \mu^-)|$ is maximized. As a result, non-regularized MELC selects the optimal linear projection. ∎
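The closed form used in the proof is easy to check numerically: the cross information potential of the two projected Gaussians is a zero-mean normal density with variance $\sigma_+^2 + \sigma_-^2$ evaluated at the distance between the projected means, so it is minimized when that distance is maximal. A sketch with assumed means and variances:

```python
import numpy as np

def projected_ip(mu_p, mu_n, var_p, var_n, v):
    """ip of the projections of two radial Gaussians onto the unit vector v:
    a zero-mean normal density with variance var_p + var_n, evaluated at the
    distance between the projected means."""
    d = v @ (mu_p - mu_n)
    var = var_p + var_n
    return np.exp(-0.5 * d ** 2 / var) / np.sqrt(2 * np.pi * var)

mu_p, mu_n = np.array([0.0, 0.0]), np.array([3.0, 4.0])   # assumed means
angles = np.linspace(0.0, np.pi, 1801, endpoint=False)
ips = [projected_ip(mu_p, mu_n, 1.0, 2.0, np.array([np.cos(t), np.sin(t)]))
       for t in angles]
t_best = angles[int(np.argmin(ips))]
v_best = np.array([np.cos(t_best), np.sin(t_best)])
print(v_best)   # ≈ (0.6, 0.8): parallel to mu_n - mu_p, the optimal direction
```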

Unfortunately, MELC (whether regularized or not) does not appear to be consistent with the 0/1 loss in general. However, we show that the 0/1 loss is nicely bounded by its objective function, which draws an analogy between this approach and those taken by other linear models.

We start with a simple lemma connecting the square of a function's integral and the integral of the function's square on a bounded interval.

Lemma 1.

For any square integrable function $f : [0, 1] \to \mathbb{R}$ we have

$\left(\int_0^1 f(x)\, dx\right)^2 \leq \int_0^1 f(x)^2\, dx.$

Proof.

This is an obvious consequence of the Schwarz inequality

$\left(\int f g\right)^2 \leq \int f^2 \int g^2$

for $f$ being non-negative and $g$ being a constant function equal to $1$ on $[0, 1]$, so that $\int_0^1 g^2 = 1$. ∎

Now we can prove the main theorem of this paper.

Theorem 1.

The negative log likelihood of the minimal obtainable misclassification error of a given multithreshold linear classifier, for any distributions that are not multithreshold linearly separable, is at least half of Renyi's quadratic cross entropy of the data projections used by this classifier:

$-\log err(v) \geq \tfrac{1}{2} H_2^{\times}\left([v^T \mathcal{X}^+], [v^T \mathcal{X}^-]\right).$

Proof.

First, from the fact that we can scale and center the data, for any linear operator $v$ such that $\|v\| = 1$ we may assume that the projected densities $f_v = [v^T \mathcal{X}^+]$ and $g_v = [v^T \mathcal{X}^-]$ are supported on the unit interval, and consequently we can narrow down to the error over the unit interval (for KDE based on functions with infinite support, for a proper scaling, the integral of the pdf outside the unit interval goes to $0$ with the sample size growing to infinity). From Lemma 1 we get

$\left(\int_0^1 \min(f_v(x), g_v(x))\, dx\right)^2 \leq \int_0^1 \min(f_v(x), g_v(x))^2\, dx. \quad (1)$

For any $a, b \geq 0$ we have $\min(a, b)^2 \leq ab$, thus

$\int_0^1 \min(f_v(x), g_v(x))^2\, dx \leq \int_0^1 f_v(x) g_v(x)\, dx = ip^{\times}(f_v, g_v),$

which connected with (1) yields

$err(v) = \frac{1}{2} \int_0^1 \min(f_v(x), g_v(x))\, dx \leq \frac{1}{2} \sqrt{ip^{\times}(f_v, g_v)} \leq \sqrt{ip^{\times}(f_v, g_v)};$

consequently, as $\mathcal{X}^+$ and $\mathcal{X}^-$ are not multithreshold linearly separable, $err(v)$ is strictly positive, thus

$-\log err(v) \geq -\log \sqrt{ip^{\times}(f_v, g_v)} = \tfrac{1}{2} H_2^{\times}(f_v, g_v). \qquad ∎$

In other words, by maximizing Renyi's quadratic cross entropy (minimizing the cross information potential) we should also optimize the negative log likelihood of correct classification (get close to the Bayes risk of the 0/1 error). It is worth noting that we do not assume any particular kernel, so even though MELC is defined with Gaussian-mixture kernel density estimation, the theorem holds for any square integrable distributions on the unit interval.
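The chain of inequalities behind the theorem (the square of the minimal error is bounded by the cross information potential) can be verified numerically for arbitrary square integrable densities on the unit interval; below is a sketch with random piecewise-constant densities (illustrative code of ours):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = np.linspace(0.0, 1.0, n, endpoint=False) + 0.5 / n   # midpoint grid
dx = 1.0 / n

for _ in range(100):
    # two random, overlapping piecewise-constant densities on [0, 1]
    f = rng.uniform(0.1, 1.0, size=x.shape); f /= f.sum() * dx
    g = rng.uniform(0.1, 1.0, size=x.shape); g /= g.sum() * dx

    err = 0.5 * np.sum(np.minimum(f, g)) * dx   # smallest 0/1 error err(v)
    ip = np.sum(f * g) * dx                     # cross information potential

    # Theorem 1: err(v)^2 <= ip, equivalently -log err >= (1/2) * H2x
    assert err ** 2 <= ip
    assert -np.log(err) >= 0.5 * (-np.log(ip))
```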

Figure 1: Visualization of sampled points for each dataset (first column), hinge loss and Bayesian risk of linear models (second column), underlying dataset distribution (third column) and finally square root of the cross information potential and the Bayesian risk of multithreshold models (last column). The X axis corresponds to the angle of the projection vector $v$. Large dots correspond to the minima of each function; additionally, for both the hinge loss and the MELC objective there is another dot denoting the value of the true error obtained if the solution is selected using these objectives.

4 Experiments

To further confirm our claims we perform simple numerical experiments on five datasets, three of which are synthetic and two of which are real-life examples. During this evaluation we analyze all possible linear models in two-dimensional space and compare how each upper-bounding objective (hinge loss in the case of linear classifiers and non-regularized MELC for multithreshold classifiers) behaves as compared to the Bayesian risk. Figure 1 visualizes the results for: two radial Gaussian distributions (one per class) in 2d space; four radial Gaussian distributions placed alternately (two per class) in a line; four random, strongly overlapping Gaussian distributions (two per class); the fourclass dataset [ho1996building]; and a 2d PCA embedding of the images of 0s and 2s (positive class) and 3s and 8s from the MNIST dataset [lecun1998mnist].

First, it is easy to notice the convexity of the hinge loss objective function. Even for problems having multiple local optima (like the fourth dataset) the SVM objective function has just one, global optimum, which is the core advantage of such an approach. At the same time, the non-regularized MELC function has a similar number of local optima to the Bayesian risk function; however, it is much smoother, and as a result one of the unimportant local solutions in terms of 0/1 loss in the fourth example is not a solution of MELC.

In contrast, for datasets where the considered class of models is not sufficient (like the third problem for the linear model), the convex upper bound of the hinge loss leads to the selection of a point distant from the true optimum (see Table 1). MELC, on the other hand, seems to better approximate the underlying Bayesian risk function and results in solutions with comparable error (even if the solution itself is far away from the true optimum, as in the case of the fourth dataset).

dataset          Δerr (hinge)   ⟨v_h, v_{0/1}⟩   Δerr (MELC)   ⟨v_m, v_opt⟩
2 Gauss 2d       6%             1.00             3%            1.00
4 Gauss in line  0%             0.96             0%            1.00
4 Gauss mixed    34%            0.56             5%            1.00
fourclass        1%             1.00             7%            0.05
MNIST            2%             0.99             1%            1.00
Table 1: Comparison of solutions given by optimization of the hinge loss against the optimal linear classifier, and of non-regularized MELC against the optimal multithreshold linear classifier. Δerr is the relative increase in the corresponding error measure when using the particular optimization scheme; $v_h$ is the linear projection given by hinge loss optimization, $v_{0/1}$ the one given by 0/1 loss optimization, $v_m$ the one given by non-regularized MELC and $v_{opt}$ the optimal multithreshold linear projection in the Bayesian sense.

5 Conclusions

In this paper the Multithreshold Entropy Linear Classifier has been analyzed in terms of its consistency with the 0/1 loss function in the class of multithreshold linear classifiers. It has been shown that it is truly consistent for some simple distribution classes and that in general its objective function upper bounds the 0/1 loss in a similar manner as the hinge or square losses upper bound the 0/1 loss. Experiments on synthetic, low-dimensional data showed that in practice one can expect that optimization of the MELC objective function leads to a nearly optimal classifier as the sample size grows to infinity.

References

[czarnecki2014multithreshold] W. M. Czarnecki and J. Tabor. Multithreshold Entropy Linear Classifier: Theory and applications. Expert Systems with Applications, 2015.
[ho1996building] T. K. Ho and E. M. Kleinberg. Building projectable classifiers of arbitrary complexity. In Proceedings of the International Conference on Pattern Recognition, 1996.
[jenssen2006cauchy] R. Jenssen, J. C. Principe, D. Erdogmus and T. Eltoft. The Cauchy-Schwarz divergence and Parzen windowing: Connections to graph theory and Mercer kernels. Journal of the Franklin Institute, 2006.
[lecun1998mnist] Y. LeCun and C. Cortes. The MNIST database of handwritten digits, 1998.
[principe2010information] J. C. Principe. Information Theoretic Learning: Renyi's Entropy and Kernel Perspectives. Springer, 2010.
[scholkopf2002learning] B. Schölkopf and A. J. Smola. Learning with Kernels. MIT Press, 2002.
[silverman] B. W. Silverman. Density Estimation for Statistics and Data Analysis. Chapman and Hall, 1986.
[steinwart2002influence] I. Steinwart. On the influence of the kernel on the consistency of support vector machines. Journal of Machine Learning Research, 2002.
[steinwart2005consistency] I. Steinwart. Consistency of support vector machines and other regularized kernel classifiers. IEEE Transactions on Information Theory, 2005.
[vapnik2000nature] V. N. Vapnik. The Nature of Statistical Learning Theory. Springer, 2000.