Efficiently Finding Adversarial Examples with DNN Preprocessing

11/16/2022
by Avriti Chauhan, et al.

Deep Neural Networks (DNNs) are everywhere, frequently performing complex tasks that were once unimaginable for machines to carry out. In doing so, they make many decisions which, depending on the application, can be disastrous if they go wrong. This necessitates a formal argument that the underlying neural networks satisfy certain desirable properties. Robustness is one such key property for DNNs, particularly when they are deployed in safety- or business-critical applications. Informally, a DNN is not robust if very small changes to its input can affect the output considerably (e.g., change the classification of that input). The task of finding an adversarial example is to demonstrate this lack of robustness, whenever one exists. While this can be done with constrained-optimization techniques, scalability becomes a challenge for large networks. This paper proposes using information gathered by preprocessing the DNN to heavily simplify the optimization problem. Our experiments substantiate that this approach is effective and performs significantly better than the state of the art.
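For context on the optimization view of adversarial search mentioned in the abstract, the sketch below shows a standard gradient-based (PGD-style) baseline for finding an adversarial example under an L-infinity budget. It is illustrative only and is not the preprocessing-guided method proposed in the paper; the names `model`, `x`, `y`, `eps`, `step`, and `iters` are assumed placeholders supplied by the user.

```python
# Illustrative sketch (not the paper's method): projected gradient descent
# search for an adversarial example within an L-infinity ball of radius eps.
import torch

def pgd_adversarial_example(model, x, y, eps=0.03, step=0.007, iters=40):
    """Return a perturbed copy of `x` that the model may misclassify."""
    loss_fn = torch.nn.CrossEntropyLoss()
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            # Ascend the loss, then project back into the eps-ball around x
            # and the valid input range [0, 1].
            x_adv = x_adv + step * grad.sign()
            x_adv = torch.clamp(x_adv, x - eps, x + eps).clamp(0.0, 1.0)
    return x_adv.detach()
```

A perturbed input returned by such a search counts as an adversarial example if the model's prediction on it differs from the prediction on the original input; the paper's contribution, as described above, is to use information from preprocessing the DNN to make this kind of optimization problem much cheaper to solve at scale.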

Related research

Developing and Defeating Adversarial Examples (08/23/2020)
Breakthroughs in machine learning have resulted in state-of-the-art deep...

GraN: An Efficient Gradient-Norm Based Detector for Adversarial and Misclassified Examples (04/20/2020)
Deep neural networks (DNNs) are vulnerable to adversarial examples and o...

Recent Advances in Understanding Adversarial Robustness of Deep Neural Networks (11/03/2020)
Adversarial examples are inevitable on the road of pervasive application...

Accelerating Robustness Verification of Deep Neural Networks Guided by Target Labels (07/16/2020)
Deep Neural Networks (DNNs) have become key components of many safety-cr...

Probabilistic Robustness Analysis for DNNs based on PAC Learning (01/25/2021)
This paper proposes a black box based approach for analysing deep neural...

Efficient Adversarial Input Generation via Neural Net Patching (11/30/2022)
The adversarial input generation problem has become central in establish...

Leveraging Model Interpretability and Stability to increase Model Robustness (10/01/2019)
State of the art Deep Neural Networks (DNN) can now achieve above human ...
