An Information-Theoretic Explanation for the Adversarial Fragility of AI Classifiers

01/27/2019
by Hui Xie, et al.

We present a simple hypothesis about a compression property of artificial intelligence (AI) classifiers and give theoretical arguments showing that this hypothesis accounts for the observed fragility of AI classifiers to small adversarial perturbations. We also propose a new method for detecting when small input perturbations cause classifier errors, and establish theoretical guarantees on its performance. We demonstrate the method experimentally on a voice recognition system. The ideas in this paper are motivated by a simple analogy between AI classifiers and the standard Shannon model of a communication system.
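The abstract does not spell out the detection algorithm, so the following Python sketch is purely illustrative and not the authors' method: it tests one generic idea in the same spirit, namely flagging an input as suspicious when the classifier's decision flips under tiny random probes, which suggests the input sits unnaturally close to a decision boundary. The stand-in linear classifier and all parameter names (probe_radius, n_probes) are hypothetical.

```python
# Illustrative toy, NOT the paper's algorithm: flag inputs whose predicted
# class is unstable under small random probes.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in classifier: a fixed random linear model, 20-dim inputs, 3 classes.
W = rng.normal(size=(3, 20))

def classify(x: np.ndarray) -> int:
    """Return the argmax class of the linear scores W @ x."""
    return int(np.argmax(W @ x))

def is_suspicious(x: np.ndarray, probe_radius: float = 0.05,
                  n_probes: int = 50) -> bool:
    """Flag x if any tiny random perturbation flips the predicted class.

    A decision that changes under noise far smaller than natural input
    variation indicates x lies very close to a decision boundary, as an
    adversarially perturbed input typically would.
    """
    base = classify(x)
    for _ in range(n_probes):
        noise = rng.normal(size=x.shape)
        noise *= probe_radius / np.linalg.norm(noise)  # fixed tiny norm
        if classify(x + noise) != base:
            return True
    return False

# Clean input: usually far from any boundary, so probes leave it stable.
x_clean = rng.normal(size=20)

# Craft a minimal perturbation that pushes x_clean just past the nearest
# boundary (closed form for a linear classifier): move along the score
# gradient of the runner-up class until it barely overtakes the winner.
scores = W @ x_clean
top, second = np.argsort(scores)[-1], np.argsort(scores)[-2]
d = W[second] - W[top]
x_adv = x_clean + d * (scores[top] - scores[second]) / (d @ d) * 1.001

print(is_suspicious(x_clean), is_suspicious(x_adv))  # typically: False True
```

In this toy setting the adversarial input is flagged because it was constructed to lie just past a boundary, so almost any probe flips it back, while the clean input's margin dwarfs the probe radius. This mirrors the abstract's framing at a cartoon level only; the paper's actual detector and guarantees are developed from the Shannon communication-system analogy in the full text.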
