αQBoost: An Iteratively Weighted Adiabatic Trained Classifier

10/14/2022
by Salvatore Certo, et al.

A new implementation of an adiabatically trained ensemble model is derived that shows significant improvements over classical methods. In particular, empirical results for this new algorithm show that it offers not just higher performance, but also greater stability with fewer classifiers, an attribute that is critically important in areas like explainability and inference speed. Overall, the empirical analysis shows that the algorithm can improve performance on unseen data by strengthening the stability of the statistical model, further minimizing and balancing variance and bias while decreasing the time to convergence relative to its predecessors.
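The abstract does not spell out the training objective, but αQBoost builds on QBoost, which casts ensemble construction as a QUBO (quadratic unconstrained binary optimization) problem solved by adiabatic optimization. Below is a minimal sketch of that standard QBoost-style formulation, assuming weak-classifier outputs and labels encoded in {-1, +1} and a regularization weight `lam`; the function names (`build_qubo`, `brute_force_solve`), the exhaustive solver stand-in, and the parameter values are illustrative assumptions, not the authors' αQBoost implementation.

```python
import numpy as np
from itertools import product

def build_qubo(H, y, lam=0.1):
    """Build the QBoost-style QUBO matrix Q.

    H: (n_samples, n_classifiers) matrix of weak predictions in {-1, +1}.
    y: (n_samples,) labels in {-1, +1}.

    The objective over binary inclusion weights w in {0, 1}^N is
        || (1/N) * H @ w - y ||^2 + lam * sum(w),
    which expands (dropping the constant ||y||^2) to w^T Q w with
        Q[i, j] = (1/N^2) * sum_s h_i(x_s) h_j(x_s)
        Q[i, i] += lam - (2/N) * sum_s h_i(x_s) y_s
    """
    n_samples, n_clf = H.shape
    Q = (H.T @ H) / n_clf**2                     # pairwise correlation terms
    Q[np.diag_indices(n_clf)] += lam - (2.0 / n_clf) * (H.T @ y)
    return Q

def brute_force_solve(Q):
    """Exhaustive stand-in for the adiabatic/annealing solver.

    Enumerates all 2^N selections, so it is only viable for a
    handful of weak classifiers; a real pipeline would hand Q to
    a quantum annealer or a simulated-annealing sampler instead.
    """
    n = Q.shape[0]
    best_w, best_e = None, np.inf
    for bits in product([0, 1], repeat=n):
        w = np.array(bits)
        e = w @ Q @ w
        if e < best_e:
            best_w, best_e = w, e
    return best_w, best_e

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y = rng.choice([-1, 1], size=50)
    # Toy weak classifiers: noisy copies of the labels (~70% accurate).
    H = np.column_stack([y * rng.choice([1, -1], size=50, p=[0.7, 0.3])
                         for _ in range(8)])
    Q = build_qubo(H, y, lam=0.05)
    w, e = brute_force_solve(Q)
    print("selected classifiers:", np.flatnonzero(w), "energy:", e)
```

The iterative weighting that the title suggests distinguishes αQBoost from plain QBoost would presumably wrap repeated solves of a problem like this with updated weights; the paper itself should be consulted for the exact objective.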
