Understanding and Quantifying Adversarial Examples Existence in Linear Classification

10/27/2019
by Xupeng Shi, et al.

State-of-the-art deep neural networks (DNNs) are vulnerable to attacks by adversarial examples: a carefully designed small perturbation to the input, imperceptible to humans, can mislead a DNN. To understand the root cause of adversarial examples, we quantify the probability that adversarial examples exist for linear classifiers. Previous mathematical definitions of adversarial examples constrain only the overall perturbation magnitude; we propose a more practically relevant definition of strong adversarial examples that additionally limits the perturbation along the signal direction. We show that linear classifiers can be made robust to attacks by strong adversarial examples in cases where no adversarially robust linear classifier exists under the previous definition. The quantitative formulas are confirmed by numerical experiments using a linear support vector machine (SVM) classifier. The results suggest that designing generally strong-adversarial-robust learning systems is feasible, but only by incorporating human knowledge of the underlying classification problem.
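For linear classifiers the geometry behind the abstract's claims is explicit: for f(x) = ⟨w, x⟩ + b, the smallest L2 perturbation that flips the predicted sign is the projection of x onto the decision boundary, with norm |f(x)| / ‖w‖. The sketch below (not code from the paper; the classifier weights are hypothetical) illustrates this closed form:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def minimal_adversarial_perturbation(w, b, x, margin=1e-6):
    """Smallest L2 perturbation pushing x across the hyperplane w.x + b = 0.

    Closed form for a linear classifier: delta = -(f(x) / ||w||^2) * w,
    scaled slightly past the boundary so the predicted sign actually flips.
    """
    fx = dot(w, x) + b
    scale = -(fx / dot(w, w)) * (1.0 + margin)
    return [scale * wi for wi in w]

# Hypothetical linear classifier and input (not from the paper).
w, b = [3.0, -4.0], 1.0
x = [2.0, 1.0]                     # f(x) = 3*2 - 4*1 + 1 = 3 > 0

delta = minimal_adversarial_perturbation(w, b, x)
x_adv = [xi + di for xi, di in zip(x, delta)]

print(dot(w, x) + b > 0)           # original input: positive class
print(dot(w, x_adv) + b < 0)       # perturbed input crosses the boundary
print(math.sqrt(dot(delta, delta)))  # norm ~= |f(x)| / ||w|| = 3/5 = 0.6
```

The perturbation norm |f(x)| / ‖w‖ is exactly the input's distance to the decision boundary, which is why the existence probability of adversarial examples for linear classifiers can be computed from the data distribution, and why the paper's "strong" definition additionally constrains the component of delta along the signal direction.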


Related research

08/23/2020 · Developing and Defeating Adversarial Examples
Breakthroughs in machine learning have resulted in state-of-the-art deep...

12/01/2016 · A Theoretical Framework for Robustness of (Deep) Classifiers against Adversarial Examples
Most machine learning classifiers, including deep neural networks, are v...

07/08/2016 · Adversarial examples in the physical world
Most existing machine learning classifiers are highly vulnerable to adve...

09/03/2021 · A Synergetic Attack against Neural Network Classifiers combining Backdoor and Adversarial Examples
In this work, we show how to jointly exploit adversarial perturbation an...

11/08/2017 · Intriguing Properties of Adversarial Examples
It is becoming increasingly clear that many machine learning classifiers...

06/24/2021 · On the (Un-)Avoidability of Adversarial Examples
The phenomenon of adversarial examples in deep learning models has cause...

01/06/2018 · Adversarial Perturbation Intensity Achieving Chosen Intra-Technique Transferability Level for Logistic Regression
Machine Learning models have been shown to be vulnerable to adversarial ...
