Linear and Order Statistics Combiners for Pattern Classification

05/20/1999
by Kagan Tumer, et al.

Several researchers have experimentally shown that substantial improvements can be obtained on difficult pattern recognition problems by combining or integrating the outputs of multiple classifiers. This chapter provides an analytical framework for quantifying the improvements in classification results due to combining. The results apply to both linear combiners and order statistics combiners. We first show that, to a first-order approximation, the error rate obtained over and above the Bayes error rate is directly proportional to the variance of the actual decision boundaries around the Bayes optimum boundary. Combining classifiers in output space reduces this variance, and hence reduces the "added" error. If N unbiased classifiers are combined by simple averaging, the added error rate can be reduced by a factor of N, provided the individual errors in approximating the decision boundaries are uncorrelated. Expressions are then derived for linear combiners that are biased or correlated, and the effect of output correlations on ensemble performance is quantified. For non-linear combiners based on order statistics, we derive expressions that indicate how much the median, the maximum, and in general the ith order statistic can improve classifier performance. The analysis presented here facilitates the understanding of the relationships among error rates, classifier boundary distributions, and combining in output space. Experimental results on several public-domain data sets are provided to illustrate the benefits of combining and to support the analytical results.
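The factor-of-N variance reduction claimed for simple averaging of unbiased, uncorrelated classifiers can be illustrated with a small simulation. The sketch below is not from the paper; it simply models each classifier's decision-boundary estimate as the true boundary plus i.i.d. zero-mean noise (all names and parameter values are illustrative assumptions) and compares the variance of a single estimate with that of the average:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (not from the paper): each of N unbiased classifiers
# estimates a decision boundary at the true position b_star, with i.i.d.
# zero-mean Gaussian error of standard deviation sigma.
b_star = 0.5
sigma = 0.1       # std. dev. of each classifier's boundary error
N = 10            # number of classifiers combined by simple averaging
trials = 100_000  # independent repetitions of the experiment

# Individual boundary estimates: true boundary + uncorrelated noise.
estimates = b_star + sigma * rng.standard_normal((trials, N))

# The averaging combiner's boundary estimate for each trial.
combined = estimates.mean(axis=1)

var_single = estimates[:, 0].var()
var_combined = combined.var()

# With unbiased, uncorrelated errors, the variance of the averaged
# boundary (and hence, to first order, the "added" error) drops by
# a factor of N relative to a single classifier.
print(var_single / var_combined)  # close to N = 10
```

When the individual errors are correlated, the reduction is smaller than N, which is the regime the chapter's later expressions address.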


