A Dataset-Level Geometric Framework for Ensemble Classifiers

06/16/2021
by   Shengli Wu, et al.

Ensemble classifiers have been widely investigated in the artificial intelligence and machine learning communities. Majority voting and weighted majority voting are two commonly used combination schemes in ensemble learning. However, our understanding of them is incomplete at best, and some of their properties are even misunderstood. In this paper, we formally present a group of properties of these two schemes under a dataset-level geometric framework. Two key factors, the performance of every component base classifier and the dissimilarity between each pair of component classifiers, are evaluated with the same metric: the Euclidean distance. Consequently, ensembling becomes a deterministic problem, and the performance of an ensemble can be calculated directly by a formula. We prove several theorems of interest and explain their implications for ensembles. In particular, we compare and contrast the effect of the number of component classifiers on these two types of ensemble schemes. An empirical investigation is also conducted to verify the theoretical results when other metrics, such as accuracy, are used. We believe that these results are very useful for understanding the fundamental properties of the two combination schemes and the principles of ensemble classifiers in general. They are also helpful for investigating related issues in ensemble classifiers, such as predicting ensemble performance and selecting a small number of base classifiers to obtain efficient and effective ensembles.
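The geometric view described above can be illustrated with a minimal sketch. It assumes (as the abstract suggests, though the details here are illustrative rather than taken from the paper) that each classifier is represented by its vector of binary predictions over the dataset, that the ground truth is a vector in the same space, and that both a classifier's performance and the dissimilarity between two classifiers are measured by the Euclidean distance between the corresponding vectors. All function and variable names are hypothetical.

```python
import numpy as np

def euclidean(u, v):
    # Euclidean distance between two prediction vectors; used for both
    # performance (distance to truth) and pairwise dissimilarity.
    return float(np.linalg.norm(np.asarray(u) - np.asarray(v)))

def majority_vote(preds):
    # preds: (n_classifiers, n_instances) array of 0/1 predictions.
    # An instance is labeled 1 when at least half the classifiers say 1.
    preds = np.asarray(preds)
    return (2 * preds.sum(axis=0) >= preds.shape[0]).astype(int)

def weighted_majority_vote(preds, weights):
    # Each classifier's vote is scaled by its weight before thresholding.
    preds = np.asarray(preds, dtype=float)
    w = np.asarray(weights, dtype=float)[:, None]
    return (2 * (w * preds).sum(axis=0) >= w.sum()).astype(int)

# Toy dataset of six instances: truth vector and three base classifiers.
truth = np.array([1, 0, 1, 1, 0, 1])
c1 = np.array([1, 0, 1, 0, 0, 1])
c2 = np.array([1, 1, 1, 1, 0, 0])
c3 = np.array([0, 0, 1, 1, 1, 1])

# Performance of each base classifier: distance to the truth vector.
perf = [euclidean(c, truth) for c in (c1, c2, c3)]
# Dissimilarity of a pair of classifiers: distance between their vectors.
dis_12 = euclidean(c1, c2)

# The majority-voting ensemble is itself a point in the same space,
# so its performance is again just a Euclidean distance.
ens = majority_vote([c1, c2, c3])
ens_perf = euclidean(ens, truth)
```

In this toy example each base classifier makes one or two errors, yet the majority-voting ensemble recovers the truth vector exactly (distance 0), which is the kind of effect the framework makes analyzable in closed form.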
