Efficient Adversarial Input Generation via Neural Net Patching

11/30/2022
by Tooba Khan, et al.

The adversarial input generation problem has become central to establishing the robustness and trustworthiness of deep neural nets, especially when they are used in safety-critical application domains such as autonomous vehicles and precision medicine. The problem is practically challenging for multiple reasons: scalability is a common issue owing to large network sizes, and the generated adversarial inputs often lack important qualities such as naturalness and output-impartiality. We relate this problem to the task of patching neural nets, i.e., applying small changes to some of the network's weights so that the modified net satisfies a given property. Intuitively, a patch can be used to produce an adversarial input because the effect of changing the weights can also be brought about by changing the inputs instead. This work presents a novel technique for patching neural networks and an innovative approach to using such patches to produce input perturbations that are adversarial for the original net. We show that the proposed solution is significantly more effective than prior state-of-the-art techniques.
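
The intuition that a weight change can be traded for an input change can be illustrated with a minimal sketch. This is a hypothetical toy example, not the paper's actual algorithm: we assume the patch `delta` modifies only a single square, invertible linear layer `W`, so that an input `x_adv` to the original layer can exactly reproduce the patched layer's pre-activation on `x`.

```python
import numpy as np

# Toy illustration (an assumption, not the paper's method): a patch delta to a
# linear layer changes its pre-activation from W @ x to (W + delta) @ x.
# On the ORIGINAL layer, the same pre-activation is produced by a perturbed
# input x_adv solving W @ x_adv = (W + delta) @ x.

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 3))              # original layer weights (assumed invertible)
delta = 0.05 * rng.normal(size=(3, 3))   # a small weight "patch"
x = rng.normal(size=3)                   # original input

target = (W + delta) @ x                 # pre-activation of the patched layer
x_adv = np.linalg.solve(W, target)       # input to the original layer with the same effect

# x_adv = x + W^{-1} @ delta @ x, so a small patch yields a small input perturbation.
print(np.linalg.norm(x_adv - x))
```

In a real network the map from inputs to activations is nonlinear and generally non-invertible, so recovering an input perturbation from a patch requires more care than a single linear solve; the sketch only conveys why the two views are interchangeable in principle.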

Related research

03/24/2019: A Formalization of Robustness for Deep Neural Networks
Deep neural networks have been shown to lack robustness to small input p...

09/19/2018: Efficient Formal Safety Analysis of Neural Networks
Neural networks are increasingly deployed in real-world safety-critical ...

02/26/2019: Analyzing Deep Neural Networks with Symbolic Propagation: Towards Higher Precision and Faster Verification
Deep neural networks (DNNs) have been shown lack of robustness for the v...

11/16/2022: Efficiently Finding Adversarial Examples with DNN Preprocessing
Deep Neural Networks (DNNs) are everywhere, frequently performing a fair...

05/24/2016: Measuring Neural Net Robustness with Constraints
Despite having high accuracy, neural nets have been shown to be suscepti...

03/16/2022: What Do Adversarially Trained Neural Networks Focus On: A Fourier Domain-based Study
Although many fields have witnessed the superior performance brought abo...

07/23/2021: Self-Repairing Neural Networks: Provable Safety for Deep Networks via Dynamic Repair
Neural networks are increasingly being deployed in contexts where safety...
