On Generation of Adversarial Examples using Convex Programming

03/09/2018
by Emilio Rafael Balda, et al.

It has been observed that deep learning architectures tend to make erroneous decisions, with high confidence, on specially crafted adversarial instances. In this work, we show that perturbation analysis of these architectures yields a method for generating adversarial instances by convex programming which, for classification tasks, recovers variants of existing non-adaptive adversarial methods. The proposed method can be used to design adversarial noise under various desirable constraints and for different types of networks. The core idea is that, around a given input, a neural network can be well approximated by a linear function, so the search for a worst-case perturbation reduces to a convex program. Experiments show the competitive performance of the obtained algorithms, in terms of fooling ratio, when benchmarked against well-known adversarial methods.
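To illustrate the linearization idea from the abstract, the following is a minimal, hypothetical sketch (not the authors' actual algorithm): for a model whose loss is locally approximated as linear in the input, maximizing the linearized loss over an L-infinity ball is a convex program with the closed-form solution `eps * sign(gradient)`. A toy linear classifier is used here so the approximation is exact; all names and values are illustrative assumptions.

```python
import numpy as np

def linf_adversarial_perturbation(grad, eps):
    """Closed-form solution of the convex program
    max_{||eta||_inf <= eps} <grad, eta>,
    i.e. the worst-case perturbation under the linearized loss."""
    return eps * np.sign(grad)

# Toy binary classifier: predict class 1 if w @ x > 0.
# A linear model is its own first-order approximation, so the
# linearization step of the abstract is exact in this example.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, -0.2, 0.4])   # w @ x = 0.9 > 0, classified as class 1

grad = -w                         # gradient of the negative margin w.r.t. x
eta = linf_adversarial_perturbation(grad, eps=0.5)
x_adv = x + eta                   # adversarial instance within the budget

# w @ x_adv = -0.85 < 0: the predicted class flips under a small perturbation.
```

For nonlinear networks, `grad` would be the gradient of the loss at the input (e.g. via backpropagation), and other norm constraints on `eta` lead to different convex programs with their own closed-form or numerically solvable solutions.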
