A mixture model for aggregation of multiple pre-trained weak classifiers

05/31/2018
by Rudrasis Chakraborty, et al.

Deep networks have gained immense popularity in computer vision and other fields in recent years due to their remarkable performance on recognition/classification tasks, surpassing the state of the art. One of the keys to their success lies in the richness of the automatically learned features. A popular way to achieve very good accuracy is to increase the depth of the network; training such a deep network, however, is infeasible or impractical with moderate computational resources and budget. The other way to increase performance is to train multiple weak classifiers and boost their performance using a boosting algorithm or a variant thereof. However, one problem with boosting algorithms is that they require re-training the networks on the misclassified samples. Motivated by these problems, in this work we propose an aggregation technique that combines the outputs of multiple weak classifiers. We formulate the aggregation problem as fitting a mixture model to the trained classifiers' outputs. Our model does not require any re-training of the "weak" networks and is computationally very fast (taking under 30 seconds to run in our experiments). Thus, with an inexpensive training stage and without re-training any network, we experimentally demonstrate that it is possible to boost performance by 12%. Furthermore, we present experiments using hand-crafted features and improve the classification performance with the proposed aggregation technique. A major advantage of our framework is that it allows one to combine features that are very likely to have distinct dimensions, since they are extracted by different networks/algorithms. Our experimental results demonstrate a significant performance gain from the aggregation technique at a very small computational cost.
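The paper does not spell out its mixture formulation in the abstract, but the core idea of fitting a mixture model to pre-trained classifier outputs can be illustrated with a minimal sketch. The version below is an assumption, not the authors' exact method: it learns convex mixture weights over K frozen classifiers by EM on held-out validation predictions, treating each classifier as a mixture component whose responsibility is its (weighted) likelihood of the true label. The function names `fit_mixture_weights` and `aggregate` are hypothetical.

```python
import numpy as np

def fit_mixture_weights(probs, labels, n_iter=50):
    """EM for convex mixture weights over K pre-trained classifiers.

    probs:  array (K, N, C) -- each classifier's predicted class
            probabilities on N held-out validation samples.
    labels: array (N,) -- true class indices for those samples.
    Returns an array (K,) of non-negative weights summing to 1.
    """
    K, N, _ = probs.shape
    # Likelihood each classifier assigns to the true label: shape (K, N).
    lik = probs[:, np.arange(N), labels]
    w = np.full(K, 1.0 / K)  # start from a uniform mixture
    for _ in range(n_iter):
        # E-step: responsibility of classifier k for sample i.
        r = w[:, None] * lik                    # (K, N)
        r /= r.sum(axis=0, keepdims=True)
        # M-step: new weights are the mean responsibilities.
        w = r.mean(axis=1)
    return w

def aggregate(probs, w):
    """Mixture prediction: weighted average of class probabilities."""
    return np.tensordot(w, probs, axes=1)       # (N, C)
```

No network is re-trained at any point: only the K-dimensional weight vector is fitted, which is why an aggregation step like this runs in seconds even for large backbone classifiers.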


