Explaining Away Attacks Against Neural Networks

03/06/2020
by Sean Saito, et al.

We investigate the problem of identifying adversarial attacks on image-based neural networks. We present experimental results showing significant discrepancies between the explanations a model generates for its predictions on clean data and on adversarial data. Building on this observation, we propose a framework that can identify whether a given input is adversarial based on the explanations produced by the model. Code for our experiments can be found here: https://github.com/seansaito/Explaining-Away-Attacks-Against-Neural-Networks.
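The following is a minimal sketch of the kind of detector the abstract describes: compute an explanation for each input, summarize it into features, and train a classifier to separate the explanations of clean inputs from those of adversarial ones. The specific choices here (plain input-gradient saliency, an FGSM attack to produce adversarial examples, and a logistic-regression detector) are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch: detect adversarial inputs by classifying explanation features.
# Saliency method, attack, and detector are assumptions, not the paper's exact method.
import torch
import torch.nn.functional as F


def saliency_map(model, x):
    """Gradient of the top-class score w.r.t. the input (a simple explanation)."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    top_score = logits.max(dim=1).values.sum()
    grad, = torch.autograd.grad(top_score, x)
    return grad.abs()


def fgsm(model, x, y, eps=0.03):
    """One-step FGSM attack, used here only to generate adversarial examples."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).clamp(0, 1).detach()


def explanation_features(model, x):
    """Summary statistics of the saliency map, used as detector features."""
    s = saliency_map(model, x).flatten(1)
    return torch.stack([s.mean(1), s.std(1), s.max(1).values], dim=1)


def build_detector(model, clean_x, clean_y):
    """Fit a linear detector separating clean from adversarial explanation features."""
    from sklearn.linear_model import LogisticRegression

    adv_x = fgsm(model, clean_x, clean_y)
    feats = torch.cat([explanation_features(model, clean_x),
                       explanation_features(model, adv_x)]).detach().numpy()
    labels = [0] * len(clean_x) + [1] * len(adv_x)  # 0 = clean, 1 = adversarial
    return LogisticRegression(max_iter=1000).fit(feats, labels)
```

At test time, the detector would score `explanation_features(model, x)` for a new input and flag it as adversarial if the predicted label is 1; richer explanation methods or feature summaries could be substituted without changing the overall structure.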
