Building robust classifiers through generation of confident out of distribution examples

12/01/2018
by Kumar Sricharan, et al.

Deep learning models are known to be overconfident in their predictions on out-of-distribution inputs. Several lines of work address this issue, including a number of approaches for building Bayesian neural networks, as well as closely related work on detecting out-of-distribution samples. Recently, classifiers robust to out-of-distribution samples have been built by adding a regularization term that maximizes the entropy of the classifier output on out-of-distribution data. To approximate out-of-distribution samples (which are not known a priori), a GAN was used to generate samples at the edges of the training distribution. In this paper, we introduce an alternative GAN-based approach for building a robust classifier: the GAN is used to explicitly generate out-of-distribution samples on which the classifier is confident (low entropy), and the classifier is trained to maximize its entropy on these samples. We showcase the effectiveness of our approach relative to the state of the art on hand-written characters as well as on a variety of natural image datasets.
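
The training recipe described in the abstract can be sketched roughly as follows. This is a hedged illustration, not the authors' code: the toy architectures, optimizer settings, and the `lam` weight on the entropy term are assumptions, and the generator's GAN term below is a standard placeholder; the paper's specific mechanism for keeping generated samples off the training distribution is not reproduced here. The sketch shows the two ingredients stated in the abstract: a generator pushed toward samples on which the classifier is confident (low entropy), and a classifier trained to maximize its entropy on those generated samples while fitting the labeled in-distribution data.

import torch
import torch.nn as nn
import torch.nn.functional as F

def mean_entropy(logits):
    # Average Shannon entropy of the softmax output over the batch.
    log_p = F.log_softmax(logits, dim=1)
    return -(log_p.exp() * log_p).sum(dim=1).mean()

# Hypothetical toy models for flattened 28x28 inputs (e.g. hand-written characters).
classifier = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
generator = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 1))

opt_c = torch.optim.Adam(classifier.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
lam = 1.0  # assumed weight on the entropy regularizer

def train_step(x_real, y_real):
    batch = x_real.size(0)
    x_fake = generator(torch.randn(batch, 64))
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # 1) Discriminator: standard real-vs-generated GAN loss.
    opt_d.zero_grad()
    d_loss = (F.binary_cross_entropy_with_logits(discriminator(x_real), ones)
              + F.binary_cross_entropy_with_logits(discriminator(x_fake.detach()), zeros))
    d_loss.backward()
    opt_d.step()

    # 2) Generator: produce samples the classifier is confident on, i.e. minimize
    #    the classifier's entropy on x_fake, alongside a placeholder GAN term.
    opt_g.zero_grad()
    g_loss = (F.binary_cross_entropy_with_logits(discriminator(x_fake), ones)
              + mean_entropy(classifier(x_fake)))
    g_loss.backward()
    opt_g.step()

    # 3) Classifier: cross-entropy on real labeled data, minus the entropy on the
    #    generated samples, pushing the classifier toward a uniform (high-entropy)
    #    output on the confident out-of-distribution examples.
    opt_c.zero_grad()
    c_loss = (F.cross_entropy(classifier(x_real), y_real)
              - lam * mean_entropy(classifier(x_fake.detach())))
    c_loss.backward()
    opt_c.step()
    return d_loss.item(), g_loss.item(), c_loss.item()

In a training loop, train_step would be called once per minibatch of labeled in-distribution data; the opposing entropy terms in steps 2 and 3 form the adversarial game between the generator and the classifier that the abstract describes.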

Related research

12/01/2018 - Improving robustness of classifiers by training against live traffic
Deep learning models are known to be overconfident in their predictions ...

04/27/2019 - Analysis of Confident-Classifiers for Out-of-distribution Detection
Discriminatively trained neural classifiers can be trusted, only when th...

11/26/2017 - Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples
The problem of detecting whether a test sample is from in-distribution (...

08/13/2021 - CODEs: Chamfer Out-of-Distribution Examples against Overconfidence Issue
Overconfident predictions on out-of-distribution (OOD) samples is a thor...

01/31/2022 - UQGAN: A Unified Model for Uncertainty Quantification of Deep Classifiers trained via Conditional GANs
We present an approach to quantifying both aleatoric and epistemic uncer...

10/08/2019 - Credible Sample Elicitation by Deep Learning, for Deep Learning
It is important to collect credible training samples (x,y) for building ...

08/06/2022 - Towards Robust Deep Learning using Entropic Losses
Current deep learning solutions are well known for not informing whether...
