Extremely Simple Activation Shaping for Out-of-Distribution Detection

by Andrija Djurisic, et al.

The separation between training and deployment of machine learning models implies that not all scenarios encountered in deployment can be anticipated during training, so relying solely on advancements in training has its limits. Out-of-distribution (OOD) detection is an important area that stress-tests a model's ability to handle unseen situations: do models know when they don't know? Existing OOD detection methods either incur extra training steps, require additional data, or make nontrivial modifications to the trained network. In contrast, in this work we propose an extremely simple, post-hoc, on-the-fly activation shaping method, ASH, in which a large portion (e.g. 90%) of a sample's activation at a late layer is removed and the rest (e.g. 10%) lightly adjusted. The shaping is applied at inference time and does not require any statistics calculated from training data. Experiments show that this simple treatment sharpens the distinction between in-distribution and out-of-distribution samples, enabling state-of-the-art OOD detection on ImageNet without noticeably degrading in-distribution accuracy. We release alongside the paper two calls, for explanation and for validation, trusting the collective power of the research community to further validate and understand the discovery. Calls, video and code can be found at: https://andrijazz.github.io/ash
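The shaping described above, prune most of a late-layer activation and lightly adjust the rest, can be sketched in a few lines of NumPy. This is a simplified, hedged illustration in the spirit of the paper's ASH-S variant (zero out activations below a percentile, then rescale the survivors); the function name and exact rescaling are assumptions here, not the authors' reference implementation:

```python
import numpy as np

def ash_shape(activation: np.ndarray, percentile: float = 90.0) -> np.ndarray:
    """Sketch of ASH-style activation shaping.

    Removes the bottom `percentile`% of activation values (sets them to 0),
    then scales the surviving values by exp(s1 / s2), where s1 is the sum of
    activations before pruning and s2 the sum after, compensating for the
    removed mass.
    """
    s1 = float(activation.sum())
    threshold = np.percentile(activation, percentile)
    pruned = np.where(activation >= threshold, activation, 0.0)
    s2 = float(pruned.sum())
    if s2 <= 0.0:
        return pruned  # nothing survived; skip rescaling
    return pruned * np.exp(s1 / s2)

# Example: a toy 10-element "activation" vector.
x = np.arange(10, dtype=np.float64)
shaped = ash_shape(x, percentile=90.0)
# With percentile=90, roughly 90% of entries are zeroed out.
```

Applied at inference time (e.g. via a forward hook on a late layer), the shaped activation is passed on to the rest of the network, and an energy-style score on the resulting logits can then separate in-distribution from OOD inputs.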


LINe: Out-of-Distribution Detection by Leveraging Important Neurons


Thinkback: Task-Specific Out-of-Distribution Detection


ActMAD: Activation Matching to Align Distributions for Test-Time-Training


MAGDiff: Covariate Data Set Shift Detection via Activation Graphs of Deep Neural Networks


Feature Space Singularity for Out-of-Distribution Detection


T2FNorm: Extremely Simple Scaled Train-time Feature Normalization for OOD Detection


Constraining Representations Yields Models That Know What They Don't Know


Code Repositories


Code release for paper Extremely Simple Activation Shaping for Out-of-Distribution Detection

