Leveraging Linear Independence of Component Classifiers: Optimizing Size and Prediction Accuracy for Online Ensembles
Ensembles, which combine the votes of a set of classifiers to enhance classification accuracy, are crucial in the era of big data. However, although it is generally agreed that ensemble size and prediction accuracy are related, the exact nature of this relationship is still unknown. We introduce a novel perspective, rooted in the linear independence of classifiers' votes, to analyze the interplay between ensemble size and prediction accuracy. This framework reveals a theoretical link between the two and leads to a proposed ensemble size based on that relationship. Our study builds upon a geometric framework and develops a series of theorems that clarify the role of linear dependency in crafting ensembles. We present a method to determine the minimum ensemble size required to ensure a target probability of linearly independent votes among the component classifiers. Our empirical results on real and synthetic datasets demonstrate a clear trend: increasing the number of classifiers enhances accuracy, as predicted by our theoretical insights. However, we also identify a point of diminishing returns, beyond which additional classifiers yield only marginal improvements in accuracy. Surprisingly, the calculated ideal ensemble size deviates from empirical results for certain datasets, emphasizing the influence of other factors. This study opens avenues for deeper investigation into the complex dynamics governing ensemble design and offers guidance for constructing efficient and effective ensembles in practical scenarios.
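To illustrate the kind of question the abstract describes, the following is a minimal Monte Carlo sketch, not the paper's method: it assumes a simplified model in which each classifier's votes over d instances form a random ±1 vector, and estimates how often m such vote vectors are linearly independent. The function names and the vote model are illustrative assumptions.

```python
import numpy as np

def independence_probability(m, d, trials=1000, seed=0):
    """Estimate the probability that the vote vectors of m classifiers,
    each modeled as a random +/-1 vector over d instances, are linearly
    independent (i.e., the m-by-d vote matrix has full row rank)."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        votes = rng.choice([-1.0, 1.0], size=(m, d))
        if np.linalg.matrix_rank(votes) == m:
            hits += 1
    return hits / trials

def max_size_with_target(target, d, trials=1000):
    """Largest ensemble size m (up to d) whose votes remain linearly
    independent with estimated probability >= target; None if even
    a single classifier falls short of the target."""
    best = None
    for m in range(1, d + 1):
        if independence_probability(m, d, trials) >= target:
            best = m
        else:
            break
    return best
```

Note that in this toy model independence becomes less likely as m grows (and is impossible once m exceeds d), so the search reports the largest size still meeting the target; the paper's geometric construction and probability model may differ.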