A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks

07/10/2018
by Kimin Lee, et al.

Detecting test samples drawn sufficiently far away from the training distribution, whether statistically or adversarially, is a fundamental requirement for deploying a good classifier in many real-world machine learning applications. However, deep neural networks with the softmax classifier are known to produce highly overconfident posterior distributions even for such abnormal samples. In this paper, we propose a simple yet effective method for detecting any abnormal samples, which is applicable to any pre-trained softmax neural classifier. We obtain the class-conditional Gaussian distributions with respect to (low- and upper-level) features of the deep models under Gaussian discriminant analysis, which result in a confidence score based on the Mahalanobis distance. While most prior methods have been evaluated for detecting either out-of-distribution or adversarial samples, but not both, the proposed method achieves state-of-the-art performance for both cases in our experiments. Moreover, we found that our proposed method is more robust in extreme cases, e.g., when the training dataset has noisy labels or a small number of samples. Finally, we show that the proposed method enjoys broader usage by applying it to class incremental learning: whenever out-of-distribution samples are detected, our classification rule can incorporate new classes well without further training the deep models.
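The sketch below illustrates the core idea of the Mahalanobis-distance confidence score described in the abstract: fit per-class means and a tied covariance over features (as in Gaussian discriminant analysis), then score a test point by its negative squared Mahalanobis distance to the closest class mean. This is a minimal illustration, not the authors' implementation; it assumes feature vectors have already been extracted from a pre-trained network, and all function and variable names here are hypothetical.

```python
import numpy as np

def fit_gaussian_params(features, labels):
    """Estimate per-class means and a shared (tied) covariance from
    training features, as in Gaussian discriminant analysis."""
    classes = np.unique(labels)
    means = {c: features[labels == c].mean(axis=0) for c in classes}
    # Tied covariance: average of centered outer products over all samples.
    centered = np.vstack([features[labels == c] - means[c] for c in classes])
    cov = centered.T @ centered / len(features)
    precision = np.linalg.pinv(cov)  # pseudo-inverse for numerical stability
    return means, precision

def mahalanobis_confidence(x, means, precision):
    """Confidence score: negative squared Mahalanobis distance to the
    closest class-conditional Gaussian. Very negative values suggest
    out-of-distribution or adversarial inputs."""
    scores = []
    for mu in means.values():
        diff = x - mu
        scores.append(-diff @ precision @ diff)
    return max(scores)

# Illustrative usage with random stand-in features (not real network outputs).
rng = np.random.default_rng(0)
train_feats = rng.normal(size=(500, 64))
train_labels = rng.integers(0, 10, size=500)
means, precision = fit_gaussian_params(train_feats, train_labels)
test_feat = rng.normal(size=64)
print(mahalanobis_confidence(test_feat, means, precision))
```

In the paper, such scores are computed at multiple intermediate layers of the network and combined; a threshold on the resulting score then flags abnormal samples.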
