CODEs: Chamfer Out-of-Distribution Examples against Overconfidence Issue

08/13/2021
by Keke Tang, et al.

Overconfident predictions on out-of-distribution (OOD) samples are a thorny issue for deep neural networks. The key to resolving the OOD overconfidence issue is to build a subset of OOD samples and then suppress predictions on them. This paper proposes Chamfer OOD examples (CODEs), whose distribution is close to that of in-distribution samples and which can therefore be used to alleviate the OOD overconfidence issue effectively by suppressing predictions on them. To obtain CODEs, we first generate seed OOD examples via slicing and splicing operations on in-distribution samples from different categories, and then feed them to the Chamfer generative adversarial network for distribution transformation, without access to any extra data. Training with suppressed predictions on CODEs is validated to largely alleviate the OOD overconfidence issue without hurting classification accuracy, and to outperform state-of-the-art methods. In addition, we demonstrate that CODEs are useful for improving OOD detection and classification.
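The abstract describes two ingredients: seed OOD examples built by splicing slices of in-distribution images from different categories, and training that suppresses predictions on OOD inputs. The paper's exact slicing scheme and loss are not given in the abstract; the sketch below is a minimal illustration assuming a halves-splicing operation and a KL-divergence-to-uniform suppression term, with NumPy standing in for a deep learning framework. The function names `splice_seed_ood` and `suppression_loss` are hypothetical.

```python
import numpy as np

def splice_seed_ood(img_a, img_b, axis=0):
    """Build a seed OOD example by splicing the first half of img_a
    (along `axis`) onto the second half of img_b. The two images are
    assumed to come from different categories."""
    h = img_a.shape[axis] // 2
    first = np.take(img_a, np.arange(h), axis=axis)
    second = np.take(img_b, np.arange(h, img_b.shape[axis]), axis=axis)
    return np.concatenate([first, second], axis=axis)

def suppression_loss(logits):
    """KL divergence from the softmax of `logits` to the uniform
    distribution over classes; zero when the prediction is maximally
    uncertain, large when it is confident."""
    z = np.asarray(logits, dtype=float)
    z = z - z.max()                      # numerical stability
    p = np.exp(z) / np.exp(z).sum()      # softmax probabilities
    k = p.size
    return float(np.sum(p * np.log(p * k + 1e-12)))
```

Minimizing `suppression_loss` on spliced (or Chamfer-GAN-transformed) examples pushes the classifier toward a uniform output on them, which is one common way to realize "suppressing predictions" on OOD data.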

Related research

- Uncertainty-Aware Deep Classifiers using Generative Models (06/07/2020)
- Out-of-distribution Detection by Cross-class Vicinity Distribution of In-distribution Data (06/19/2022)
- Building robust classifiers through generation of confident out of distribution examples (12/01/2018)
- Once a MAN: Towards Multi-Target Attack via Learning Multi-Target Adversarial Network Once (08/14/2019)
- Provably Robust Detection of Out-of-distribution Data (almost) for free (06/08/2021)
- Why Should we Combine Training and Post-Training Methods for Out-of-Distribution Detection? (12/05/2019)
- Interpretable BoW Networks for Adversarial Example Detection (01/08/2019)
