On the (Un-)Avoidability of Adversarial Examples

06/24/2021
by Sadia Chowdhury, et al.

The phenomenon of adversarial examples in deep learning models has caused substantial concern over their reliability. While many deep neural networks have shown impressive performance in terms of predictive accuracy, it has been shown that in many instances an imperceptible perturbation can falsely flip the network's prediction. Most research has since focused on developing defenses against adversarial attacks or on learning under a worst-case adversarial loss. In this work, we take a step back and aim to provide a framework for determining whether a model's label change under a small perturbation is justified (and when it is not). We carefully argue that adversarial robustness should be defined as a locally adaptive measure complying with the underlying distribution. We then suggest a definition for an adaptive robust loss, derive an empirical version of it, and develop a resulting data-augmentation framework. We prove that our adaptive data augmentation maintains consistency of 1-nearest-neighbor classification under deterministic labels and provide illustrative empirical evaluations.
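The abstract only summarizes the approach, so the following is a rough, hypothetical illustration (not the authors' code) of what a locally adaptive robustness radius and the resulting augmentation could look like with NumPy and scikit-learn. The helper names adaptive_radii and augment, and the scale parameter, are assumptions for the sketch: each training point's allowed perturbation is capped at a fraction of its distance to the nearest differently labeled point, augmented copies are drawn within that radius, and a 1-nearest-neighbor classifier is fit on the augmented data.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def adaptive_radii(X, y, scale=0.5):
        """Locally adaptive radius per point: a fraction of the distance
        to the nearest point carrying a different label."""
        radii = np.empty(len(X))
        for i, (x, label) in enumerate(zip(X, y)):
            other = X[y != label]
            radii[i] = scale * np.min(np.linalg.norm(other - x, axis=1))
        return radii

    def augment(X, y, radii, n_copies=5, seed=0):
        """Add uniformly scaled random perturbations inside each point's
        adaptive radius, keeping the original label."""
        rng = np.random.default_rng(seed)
        X_aug, y_aug = [X], [y]
        for _ in range(n_copies):
            noise = rng.normal(size=X.shape)
            noise /= np.linalg.norm(noise, axis=1, keepdims=True)
            noise *= rng.uniform(0.0, 1.0, size=(len(X), 1)) * radii[:, None]
            X_aug.append(X + noise)
            y_aug.append(y)
        return np.vstack(X_aug), np.concatenate(y_aug)

    # Toy usage with deterministic labels.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    radii = adaptive_radii(X, y)
    X_aug, y_aug = augment(X, y, radii)
    clf = KNeighborsClassifier(n_neighbors=1).fit(X_aug, y_aug)
    print(clf.score(X, y))  # 1-NN still classifies the original points correctly

The point of tying the radius to the local label structure is that perturbations stay small where the two classes lie close together and may be larger where the nearest other-class point is far away, which is the intuition behind a distribution-adaptive notion of robustness.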

