
Feature Learning Viewpoint of AdaBoost and a New Algorithm

by Fei Wang, et al. (Xi'an Jiaotong University)

The AdaBoost algorithm has a notable ability to resist overfitting. Understanding this phenomenon is a fascinating fundamental theoretical problem. Many studies have sought to explain it from the statistical view and from margin theory. In this paper, we illustrate it from a feature learning viewpoint and propose the AdaBoost+SVM algorithm, which explains AdaBoost's resistance to overfitting in a direct and easily understood way. First, we use the AdaBoost algorithm to learn the base classifiers. Then, instead of directly combining the base classifiers by weighted voting, we regard their outputs as features and feed them to an SVM classifier. In this way, new coefficients and a new bias are obtained, which are used to construct the final classifier. We explain the rationality of this approach and prove a theorem stating that, as the dimension of these features increases, the performance of the SVM does not degrade, which accounts for AdaBoost's resistance to overfitting.

