Know Your Limits: Monotonicity & Softmax Make Neural Classifiers Overconfident on OOD Data

12/09/2020
by Dennis Ulmer, et al.

A crucial requirement for the reliable deployment of deep learning models in safety-critical applications is the ability to identify out-of-distribution (OOD) data points, i.e. samples that differ from the training data and on which a model might underperform. Previous work has attempted to tackle this problem using uncertainty estimation techniques. However, there is empirical evidence that a large family of these techniques does not reliably detect OOD samples in classification tasks. This paper puts forward a theoretical explanation for those experimental findings. We prove that such techniques are not able to reliably identify OOD samples in a classification setting, provided the models satisfy weak assumptions about the monotonicity of feature values and the resulting class probabilities. This result stems from the interplay between the saturating nature of activation functions like sigmoid or softmax and the most widely used uncertainty metrics.
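The core intuition can be illustrated with a small numerical sketch (not the paper's formal proof): for a classifier whose logits grow monotonically with feature magnitude, pushing an input further from the training data along a fixed direction only saturates the softmax, so confidence-based uncertainty metrics such as the maximum class probability or the predictive entropy signal *more* certainty, not less. The toy linear weights `W` and the scaling factors below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 3))        # hypothetical linear classifier: 2 features, 3 classes
x_in_dist = np.array([1.0, -0.5])  # a point assumed to lie near the training data

for scale in (1, 10, 100):
    x = scale * x_in_dist          # move the point further out along the same direction
    probs = softmax(x @ W)
    max_prob = probs.max()                              # maximum softmax probability
    entropy = -(probs * np.log(probs + 1e-12)).sum()    # predictive entropy
    print(f"scale={scale:>4}: max prob={max_prob:.3f}, entropy={entropy:.3f}")
```

As the scale grows, one logit dominates and the maximum softmax probability approaches 1 while the entropy approaches 0, even though the scaled points are increasingly far from anything seen during training.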
