Provably Robust Detection of Out-of-distribution Data (almost) for free

06/08/2021
by Alexander Meinke, et al.

When applying machine learning in safety-critical systems, a reliable assessment of the uncertainty of a classifier is required. However, deep neural networks are known to produce highly overconfident predictions on out-of-distribution (OOD) data, and even if they are trained to be non-confident on OOD data, one can still adversarially manipulate OOD samples so that the classifier again assigns high confidence to them. In this paper we propose a novel method in which, from first principles, we combine a certifiable OOD detector with a standard classifier into an OOD-aware classifier. In this way we achieve the best of both worlds: certifiably adversarially robust OOD detection, even for OOD samples close to the in-distribution, without loss in prediction accuracy and with close to state-of-the-art OOD detection performance on non-manipulated OOD data. Moreover, due to its particular construction, our classifier provably avoids the asymptotic overconfidence problem of standard neural networks.
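The combination described in the abstract can be illustrated with a minimal sketch: blend the classifier's softmax output with a uniform distribution, weighted by a Bayesian posterior over whether the input is in-distribution. The function name, signature, and the specific in/out density inputs below are illustrative assumptions, not the paper's exact construction (which certifies the detector itself).

```python
import numpy as np

def ood_aware_predict(softmax_probs, log_p_in, log_p_out, prior_in=0.5):
    """Illustrative sketch (not the paper's exact method): mix a classifier's
    softmax output with a uniform distribution, weighted by the posterior
    probability that the input is in-distribution.

    softmax_probs: classifier output p(y | x, in-distribution)
    log_p_in / log_p_out: log-densities of x under in/out distribution models
    """
    softmax_probs = np.asarray(softmax_probs, dtype=float)
    num_classes = len(softmax_probs)
    # Log-odds of in- vs. out-distribution, combining densities and prior.
    log_odds = log_p_in - log_p_out + np.log(prior_in) - np.log(1.0 - prior_in)
    # Posterior probability that x is in-distribution (sigmoid of log-odds).
    w_in = 1.0 / (1.0 + np.exp(-log_odds))
    uniform = np.full(num_classes, 1.0 / num_classes)
    # Far from the in-distribution, w_in -> 0 and the prediction collapses
    # to uniform, so the maximal confidence is bounded by 1/num_classes.
    return w_in * softmax_probs + (1.0 - w_in) * uniform
```

For an input the density models consider in-distribution, the prediction matches the base classifier; for an input far outside it, the confidence provably shrinks toward 1/K, which is the mechanism behind avoiding asymptotic overconfidence.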


Related research

- 09/26/2019: Towards neural networks that provably know when they don't know
  It has recently been shown that ReLU networks produce arbitrarily over-c...

- 08/08/2023: Comprehensive Assessment of the Performance of Deep Learning Classifiers Reveals a Surprising Lack of Robustness
  Reliable and robust evaluation methods are a necessary first step toward...

- 06/11/2021: Topological Detection of Trojaned Neural Networks
  Deep neural networks are known to have security issues. One particular t...

- 11/29/2022: Birds of a Feather Trust Together: Knowing When to Trust a Classifier via Adaptive Neighborhood Aggregation
  How do we know when the predictions made by a classifier can be trusted?...

- 03/29/2021: Performance Analysis of Out-of-Distribution Detection on Various Trained Neural Networks
  Several areas have been improved with Deep Learning during the past year...

- 07/03/2020: Confidence-Aware Learning for Deep Neural Networks
  Despite the power of deep neural networks for a wide range of tasks, an ...

- 08/13/2021: CODEs: Chamfer Out-of-Distribution Examples against Overconfidence Issue
  Overconfident predictions on out-of-distribution (OOD) samples is a thor...
