Perturbation Analysis of Learning Algorithms: A Unifying Perspective on Generation of Adversarial Examples

12/15/2018
by Emilio Rafael Balda et al.

Despite the tremendous success of deep neural networks across a range of learning problems, adding an intentionally designed adversarial perturbation to their inputs has been observed to cause misclassification with high prediction confidence. In this work, we propose a general framework based on the perturbation analysis of learning algorithms that casts attack generation as a convex program and recovers many existing adversarial attacks as special cases. The framework can be used to devise novel attacks against learning algorithms for classification and regression tasks under various constraints, with closed-form solutions in many instances. In particular, we derive new attacks against classification algorithms that achieve performance comparable to notable existing attacks. The framework is then used to generate adversarial perturbations for regression tasks, including single-pixel and single-subset attacks. Applying the method to autoencoding and image-colorization tasks shows that adversarial perturbations can effectively degrade the output of regression models as well.
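The paper's exact formulation is only available in the full text, but the core idea, maximizing a linearized loss under a norm constraint with a closed-form solution, can be sketched in a few lines. Below is a minimal, hypothetical PyTorch illustration; the function names, the `eps` budget, the cross-entropy loss, and the batched-tensor convention are assumptions, not the authors' implementation. Under an l_inf budget the closed-form maximizer is the FGSM direction, under an l_0 = 1 budget it perturbs the single coordinate with the largest gradient magnitude (a single-pixel attack), and for regression one can ascend an output-distortion loss instead.

```python
import torch
import torch.nn.functional as F

def linearized_attack(model, x, y, eps, norm="linf"):
    """One-step attack from a linearized loss: maximize g^T delta
    subject to a norm budget on delta, where g = grad_x CE(model(x), y).
    Closed-form maximizers:
      l_inf budget:   delta = eps * sign(g)   (the FGSM direction)
      l_0 = 1 budget: put eps on the single coordinate with largest |g|
                      (a single-pixel attack). Assumes batched input x.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    g = torch.autograd.grad(loss, x)[0]
    if norm == "linf":
        delta = eps * g.sign()
    else:  # "l0": single-pixel attack
        flat = g.flatten(1)                        # (batch, num_pixels)
        idx = flat.abs().argmax(dim=1)             # strongest coordinate
        delta = torch.zeros_like(flat)
        rows = torch.arange(flat.size(0), device=x.device)
        delta[rows, idx] = eps * flat[rows, idx].sign()
        delta = delta.view_as(x)
    return (x + delta).detach()

def regression_attack(model, x, eps, init=1e-3):
    """Regression analogue (an assumed surrogate, not the paper's loss):
    one ascent step on the output distortion
    ||model(x + delta) - model(x)||^2, started from a small random
    delta, since the squared distortion has zero gradient at delta = 0.
    """
    y0 = model(x).detach()
    delta = (init * torch.randn_like(x)).requires_grad_(True)
    distortion = (model(x + delta) - y0).pow(2).sum()
    g = torch.autograd.grad(distortion, delta)[0]
    return (x + eps * g.sign()).detach()
```

With norm="linf" the first function reduces to the familiar fast-gradient-sign step; the appeal of the paper's formulation is that such closed-form steps fall out of a single convex program for many norms and for both classification and regression losses.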



Related research

02/23/2021 · Adversarial Examples Detection beyond Image Space
Deep neural networks have been proven vulnerable to advers...

03/09/2018 · On Generation of Adversarial Examples using Convex Programming
It has been observed that deep learning architectures tend to make erron...

05/08/2020 · Towards Robustness against Unsuspicious Adversarial Examples
Despite the remarkable success of deep neural networks, significant conc...

04/15/2019 · Influence of Control Parameters and the Size of Biomedical Image Datasets on the Success of Adversarial Attacks
In this paper, we study the dependence of the success rate of adversarial at...

04/21/2019 · Beyond Explainability: Leveraging Interpretability for Improved Adversarial Learning
In this study, we propose leveraging interpretability for tasks b...

01/04/2022 · On the Minimal Adversarial Perturbation for Deep Neural Networks with Provable Estimation Error
Although Deep Neural Networks (DNNs) have shown incredible performance i...

08/19/2022 · A Novel Plug-and-Play Approach for Adversarially Robust Generalization
In this work, we propose a robust framework that employs adversarially r...
