When in Doubt: Improving Classification Performance with Alternating Normalization

09/28/2021
by Menglin Jia, et al.

We introduce Classification with Alternating Normalization (CAN), a non-parametric post-processing step for classification. CAN improves classification accuracy on challenging examples by re-adjusting their predicted class probability distributions using the predicted class distributions of high-confidence validation examples. CAN is easily applicable to any probabilistic classifier, with minimal computational overhead. We analyze the properties of CAN using simulated experiments, and empirically demonstrate its effectiveness across a diverse set of classification tasks.
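The abstract gives only a high-level description of the procedure, so the sketch below is a rough, assumption-laden illustration of what "alternating normalization" can look like in this setting: the uncertain example's predicted distribution is stacked with high-confidence predictions, and the resulting matrix is alternately column-normalized (toward a class prior) and row-normalized, Sinkhorn-style. The function name `can_adjust`, the `prior`, `alpha`, and `iters` parameters, and the uniform-prior default are our own; the paper's exact update (e.g., how it weights the uncertain row) may differ.

```python
import numpy as np

def can_adjust(high_conf_probs, uncertain_prob, prior=None, alpha=1.0, iters=3):
    """Hypothetical sketch of CAN-style alternating normalization.

    high_conf_probs: (n, k) predicted distributions of high-confidence
        validation examples (each row sums to 1).
    uncertain_prob:  (k,) predicted distribution of one low-confidence example.
    prior:           (k,) class prior; uniform if None (an assumption here).
    alpha:           exponent applied before normalizing (illustrative default).
    iters:           number of alternating-normalization rounds.
    """
    n, k = high_conf_probs.shape
    if prior is None:
        prior = np.full(k, 1.0 / k)

    # Stack the confident rows with the uncertain example as the last row.
    L = np.vstack([high_conf_probs, uncertain_prob[None, :]]) ** alpha

    for _ in range(iters):
        # Column step: rescale so each column's mass matches the class
        # prior's share of the total mass (n + 1 rows, each summing to 1).
        L = L / L.sum(axis=0, keepdims=True) * prior * (n + 1)
        # Row step: rescale so each row is again a probability distribution.
        L = L / L.sum(axis=1, keepdims=True)

    return L[-1]  # adjusted distribution for the uncertain example

# Example: three confident validation predictions, one uncertain query.
A = np.array([[0.90, 0.05, 0.05],
              [0.10, 0.85, 0.05],
              [0.05, 0.05, 0.90]])
b = np.array([0.40, 0.35, 0.25])
print(can_adjust(A, b))
```

Because the steps are a few matrix normalizations over a small validation matrix, the per-example cost is negligible next to the classifier's forward pass, which is consistent with the abstract's claim of minimal overhead.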

