GIT: Detecting Uncertainty, Out-Of-Distribution and Adversarial Samples using Gradients and Invariance Transformations

07/05/2023
by Julia Lust, et al.

Deep neural networks tend to make overconfident predictions and, particularly in safety-critical applications, often require additional detectors for misclassifications. Existing detection methods usually focus only on adversarial attacks or out-of-distribution samples as causes of false predictions. However, generalization errors arise for diverse reasons, often related to poorly learned relevant invariances. We therefore propose GIT, a holistic approach to detecting generalization errors that combines gradient information with invariance transformations. The invariance transformations are designed to shift misclassified samples back into the generalization area of the neural network, while the gradient information measures the contradiction between the initial prediction and the network's inherent computations on the transformed sample. Our experiments demonstrate the superior performance of GIT over the state of the art across a variety of network architectures, problem setups and perturbation types.
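The two-step idea in the abstract (transform the input with an invariance the network should respect, then measure how strongly the gradient contradicts the original prediction) can be sketched in a few lines. The following is a minimal, hypothetical illustration using a toy linear softmax classifier and a horizontal flip as the invariance transformation; the model, the transformation choice, and the gradient-norm score are assumptions for illustration, not the paper's exact method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained classifier: one linear layer + softmax
# over flattened 3x8x8 "images" with 10 classes (illustrative only).
W = rng.normal(size=(10, 3 * 8 * 8))
b = rng.normal(size=10)

def softmax(z):
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def predict(x):
    return softmax(W @ x.reshape(-1) + b)

def invariance_transform(x):
    # Hypothetical invariance transformation: horizontal flip, to which
    # an image classifier should ideally be invariant.
    return x[:, :, ::-1].copy()

def git_score(x):
    """GIT-style detection score (sketch):
    1. Predict a label on the original sample.
    2. Apply an invariance transformation.
    3. Compute the gradient of the cross-entropy between the original
       prediction and the transformed sample's output; a large gradient
       norm signals a contradiction, i.e. a likely generalization error.
    """
    y_hat = int(np.argmax(predict(x)))
    x_t = invariance_transform(x)
    p_t = predict(x_t)
    # For softmax + cross-entropy, d(loss)/d(logits) = p - one_hot(y_hat).
    one_hot = np.zeros(10)
    one_hot[y_hat] = 1.0
    grad_logits = p_t - one_hot
    # Back-propagate through the linear layer: outer(grad_logits, input).
    grad_W = np.outer(grad_logits, x_t.reshape(-1))
    return float(np.linalg.norm(grad_W))

x = rng.normal(size=(3, 8, 8))
score = git_score(x)  # higher score -> more likely misclassified
```

In practice one would calibrate a threshold (or train a small detector) on these scores over held-out data to decide when to reject a prediction.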

Related research:

- 02/24/2021, Identifying Untrustworthy Predictions in Neural Networks by Geometric Gradient Analysis: The susceptibility of deep neural networks to untrustworthy predictions,...
- 02/01/2019, Natural and Adversarial Error Detection using Invariance to Image Transformations: We propose an approach to distinguish between correct and incorrect imag...
- 07/14/2022, On the Strong Correlation Between Model Invariance and Generalization: Generalization and invariance are two essential properties of any machin...
- 09/15/2023, Unveiling Invariances via Neural Network Pruning: Invariance describes transformations that do not alter data's underlying...
- 10/08/2022, Symmetry Subgroup Defense Against Adversarial Attacks: Adversarial attacks and defenses disregard the lack of invariance of con...
- 09/19/2020, Efficient Certification of Spatial Robustness: Recent work has exposed the vulnerability of computer vision models to s...
- 03/03/2021, Shift Invariance Can Reduce Adversarial Robustness: Shift invariance is a critical property of CNNs that improves performanc...
