Deep k-Nearest Neighbors: Towards Confident, Interpretable and Robust Deep Learning

03/13/2018
by   Nicolas Papernot, et al.

Deep neural networks (DNNs) enable innovative applications of machine learning like image recognition, machine translation, or malware detection. However, deep learning is often criticized for its lack of robustness in adversarial settings (e.g., vulnerability to adversarial inputs) and general inability to rationalize its predictions. In this work, we exploit the structure of deep learning to enable new learning-based inference and decision strategies that achieve desirable properties such as robustness and interpretability. We take a first step in this direction and introduce the Deep k-Nearest Neighbors (DkNN). This hybrid classifier combines the k-nearest neighbors algorithm with representations of the data learned by each layer of the DNN: a test input is compared to its neighboring training points according to the distance that separates them in each layer's representation space. We show that the labels of these neighboring points afford confidence estimates for inputs outside the model's training manifold, including malicious inputs like adversarial examples, and thereby provide protection against inputs that are outside the model's understanding. This is because the nearest neighbors can be used to estimate the nonconformity of a prediction, i.e., its lack of support in the training data. The neighbors also constitute human-interpretable explanations of predictions. We evaluate the DkNN algorithm on several datasets, and show that the confidence estimates accurately identify inputs outside the model's understanding, and that the explanations provided by the nearest neighbors are intuitive and useful in understanding model failures.
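
The abstract outlines the DkNN inference procedure: run a test input through the DNN, look up its k nearest training points in each layer's representation space, and turn the degree of label disagreement among those neighbors into a conformal-prediction confidence score. The following is a minimal sketch of that procedure, assuming layer activations have already been extracted; the function and variable names (dknn, train_reps, and so on) are illustrative and not taken from the paper's released code.

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def dknn(train_reps, y_train, calib_reps, y_calib, test_reps,
             k=75, n_classes=10):
        """DkNN sketch. Each *_reps argument is a list holding one
        (n_samples, d_layer) activation matrix per DNN layer;
        y_train and y_calib are integer label arrays."""
        # One k-NN index per layer, built on the training activations.
        indexes = [NearestNeighbors(n_neighbors=k).fit(r) for r in train_reps]

        def nonconformity(reps):
            # alpha[i, j] = number of neighbor labels, summed over all
            # layers, that disagree with candidate label j for input i.
            alpha = np.zeros((reps[0].shape[0], n_classes))
            for index, layer_reps in zip(indexes, reps):
                _, nbrs = index.kneighbors(layer_reps)
                labels = y_train[nbrs]                      # (n, k)
                for j in range(n_classes):
                    alpha[:, j] += (labels != j).sum(axis=1)
            return alpha

        # Calibration: nonconformity of each held-out point's true label.
        a_cal = nonconformity(calib_reps)[np.arange(len(y_calib)), y_calib]

        # Test: empirical p-value of every candidate label, i.e. the
        # fraction of calibration points at least as nonconforming.
        a_test = nonconformity(test_reps)
        p = (a_cal[None, None, :] >= a_test[:, :, None]).mean(axis=2)

        pred = p.argmax(axis=1)                 # label with highest support
        p_sorted = np.sort(p, axis=1)
        credibility = p_sorted[:, -1]           # support for the prediction
        confidence = 1.0 - p_sorted[:, -2]      # 1 - second-highest p-value
        return pred, confidence, credibility

In this reading of the paper, a low credibility flags inputs that the training data does not support, such as adversarial examples, while the retrieved neighbors themselves serve as the human-interpretable explanation of the prediction.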


Related research

08/15/2021 · Deep Adversarially-Enhanced k-Nearest Neighbors
Recent works have theoretically and empirically shown that deep neural n...

06/13/2017 · Analyzing the Robustness of Nearest Neighbors to Adversarial Examples
Motivated by applications such as autonomous vehicles, test-time attacks...

09/15/2019 · Detecting Adversarial Samples Using Influence Functions and Nearest Neighbors
Deep neural networks (DNNs) are notorious for their vulnerability to adv...

06/20/2022 · Understanding Robust Learning through the Lens of Representation Similarities
Representation learning, i.e. the generation of representations useful f...

07/22/2021 · Unsupervised Detection of Adversarial Examples with Model Explanations
Deep Neural Networks (DNNs) have shown remarkable performance in a diver...

07/07/2022 · Back to the Basics: Revisiting Out-of-Distribution Detection Baselines
We study simple methods for out-of-distribution (OOD) image detection th...

10/18/2020 · Explaining and Improving Model Behavior with k Nearest Neighbor Representations
Interpretability techniques in NLP have mainly focused on understanding ...
