A Theoretical Framework for Robustness of (Deep) Classifiers against Adversarial Examples

12/01/2016
by Beilun Wang, et al.

Most machine learning classifiers, including deep neural networks, are vulnerable to adversarial examples. Such inputs are typically generated by adding small but purposeful modifications that lead to incorrect outputs while remaining imperceptible to human eyes. The goal of this paper is not to introduce a single method, but to make theoretical steps towards fully understanding adversarial examples. Using concepts from topology, our theoretical analysis brings forth the key reasons why an adversarial example can fool a classifier (f_1) and incorporates the classifier's oracle (f_2, e.g., human perception) into the analysis. By investigating the topological relationship between the two (pseudo)metric spaces corresponding to the predictor f_1 and the oracle f_2, we develop necessary and sufficient conditions that determine whether f_1 is always robust (strong-robust) against adversarial examples according to f_2. Interestingly, our theorems indicate that just one unnecessary feature can make f_1 not strong-robust, and that learning the right feature representation is the key to obtaining a classifier that is both accurate and strong-robust.
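
The "one unnecessary feature" claim can be made concrete with a small toy sketch (our own construction, not code from the paper): assume a hypothetical oracle f_2 that depends only on the first input feature, so its induced pseudometric assigns distance zero to any change in the second feature, while the learned classifier f_1 also weighs that second, unnecessary feature. A perturbation along the unnecessary feature is then invisible to the oracle yet flips f_1's prediction, so f_1 is not strong-robust with respect to f_2. The weights and thresholds below are illustrative assumptions.

```python
import numpy as np

# Toy sketch (illustrative assumptions, not from the paper): the oracle f_2 only
# "sees" feature x[0], so its pseudometric d_2 ignores x[1]; the learned
# classifier f_1 also weighs the unnecessary feature x[1].

def f2_oracle(x):
    # Oracle decision depends only on the meaningful feature x[0].
    return int(x[0] > 0.0)

def f1_classifier(x, w=np.array([1.0, 5.0])):
    # Learned predictor: a linear rule that also uses the spurious feature x[1].
    return int(np.dot(w, x) > 0.0)

def d2(x, x_prime):
    # Pseudometric induced by the oracle's feature representation (ignores x[1]).
    return abs(x[0] - x_prime[0])

x_clean = np.array([0.5, 0.0])   # both f_1 and f_2 predict class 1
x_adv = np.array([0.5, -0.2])    # perturb only the unnecessary feature

print(d2(x_clean, x_adv))                            # 0.0: the oracle sees no change
print(f2_oracle(x_clean), f2_oracle(x_adv))          # 1 1: oracle label unchanged
print(f1_classifier(x_clean), f1_classifier(x_adv))  # 1 0: f_1 flips -> not strong-robust
```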

Related research

Understanding and Quantifying Adversarial Examples Existence in Linear Classification (10/27/2019)
State-of-the-art deep neural networks (DNN) are vulnerable to attacks by adv...

Proper measure for adversarial robustness (05/06/2020)
This paper analyzes the problems of standard adversarial accuracy and st...

Playing it Safe: Adversarial Robustness with an Abstain Option (11/25/2019)
We explore adversarial robustness in the setting in which it is acceptab...

Adversarial Examples Make Strong Poisons (06/21/2021)
The adversarial machine learning literature is largely partitioned into ...

Adversarial Examples Are Not Bugs, They Are Features (05/06/2019)
Adversarial examples have attracted significant attention in machine lea...

Consistent Non-Parametric Methods for Adaptive Robustness (02/18/2021)
Learning classifiers that are robust to adversarial examples has receive...

Adversarial Examples from Cryptographic Pseudo-Random Generators (11/15/2018)
In our recent work (Bubeck, Price, Razenshteyn, arXiv:1805.10204) we arg...