Trust but Verify: An Information-Theoretic Explanation for the Adversarial Fragility of Machine Learning Systems, and a General Defense against Adversarial Attacks

05/25/2019
by   Jirong Yi, et al.

Deep-learning-based classification algorithms have been shown to be susceptible to adversarial attacks: minor changes to a classifier's input can dramatically change its output while remaining imperceptible to humans. In this paper, we present a simple hypothesis about a feature compression property of artificial intelligence (AI) classifiers and give theoretical arguments showing that this hypothesis successfully accounts for the observed fragility of AI classifiers to small adversarial perturbations. Drawing on ideas from information and coding theory, we propose a general class of defenses for detecting classifier errors caused by abnormally small input perturbations, and we prove theoretical guarantees on the performance of this detection method. We present experimental results with (a) a voice recognition system and (b) a digit recognition system using the MNIST database to demonstrate the effectiveness of the proposed defense methods. The ideas in this paper are motivated by a simple analogy between AI classifiers and the standard Shannon model of a communication system.
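The abstract's coding-theory intuition, that adversarially perturbed inputs sit abnormally close to a decision boundary while natural inputs carry redundancy that survives small noise, can be illustrated with a toy sketch. This is not the authors' actual detection method; it assumes a simple linear classifier and a hypothetical noise-stability heuristic purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier: label = 1 if w . x > 0, else 0.
w = np.array([1.0, 1.0])

def predict(x):
    return int(np.dot(w, x) > 0)

def detect_adversarial(x, sigma=0.1, trials=50, flip_threshold=0.2):
    """Flag inputs whose predicted label is unstable under small random noise.

    Intuition (channel-coding analogy): a natural input lies far from the
    decision boundary, so its label survives small perturbations; an
    adversarially perturbed input has been pushed just across the boundary,
    so its label flips frequently under the same noise.
    """
    base = predict(x)
    flips = sum(predict(x + sigma * rng.standard_normal(2)) != base
                for _ in range(trials))
    return flips / trials > flip_threshold

clean = np.array([1.0, 1.0])            # far from the boundary
adversarial = np.array([0.01, -0.005])  # barely across the boundary

print(detect_adversarial(clean))        # False: label is stable
print(detect_adversarial(adversarial))  # True: label flips under noise
```

The noise scale `sigma`, trial count, and flip threshold are illustrative choices; a real defense of the kind the paper describes would be derived from the classifier and the perturbation model rather than hand-tuned.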


Related research

- 01/27/2019, An Information-Theoretic Explanation for the Adversarial Fragility of AI Classifiers: We present a simple hypothesis about a compression property of artificia...
- 07/28/2020, Derivation of Information-Theoretically Optimal Adversarial Attacks with Applications to Robust Machine Learning: We consider the theoretical problem of designing an optimal adversarial ...
- 01/15/2018, Sparsity-based Defense against Adversarial Attacks on Linear Classifiers: Deep neural networks represent the state of the art in machine learning ...
- 05/25/2023, Don't Retrain, Just Rewrite: Countering Adversarial Perturbations by Rewriting Text: Can language models transform inputs to protect text classifiers against...
- 03/18/2022, Adversarial Attacks on Deep Learning-based Video Compression and Classification Systems: Video compression plays a crucial role in enabling video streaming and c...
- 07/29/2020, End-to-End Adversarial White Box Attacks on Music Instrument Classification: Small adversarial perturbations of input data are able to drastically ch...
