Q-TART: Quickly Training for Adversarial Robustness and in-Transferability

04/14/2022
by   Madan Ravi Ganesh, et al.

Raw deep neural network (DNN) performance is not enough; in real-world settings, computational load, training efficiency, and adversarial security are just as important, or even more so. We propose to simultaneously tackle performance, efficiency, and robustness using our proposed algorithm Q-TART: Quickly Train for Adversarial Robustness and in-Transferability. Q-TART follows the intuition that samples highly susceptible to noise strongly affect the decision boundaries learned by DNNs, which in turn degrades their performance and increases their adversarial susceptibility. By identifying and removing such samples, we demonstrate improved performance and adversarial robustness while using only a subset of the training data. Through our experiments, we highlight Q-TART's high performance across multiple dataset-DNN combinations, including ImageNet, and provide insights into the complementary behavior of Q-TART alongside existing adversarial training approaches to increase robustness by over 1.3% and up to 17.9%.
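The core idea described above, scoring each training sample by how much its loss changes under small input perturbations and discarding the most noise-susceptible ones before training, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the fixed linear scorer, the Gaussian-noise susceptibility score, and the 20% removal threshold are all hypothetical stand-ins for the trained DNN and selection criterion used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a fixed linear scorer stands in for a trained DNN.
X = rng.normal(size=(100, 16))          # 100 samples, 16 features
y = rng.integers(0, 2, size=100)        # binary labels
w = rng.normal(size=16)                 # fixed "model" weights

def loss(x, label):
    # Per-sample logistic loss under the stand-in model.
    logit = x @ w
    p = 1.0 / (1.0 + np.exp(-logit))
    return -(label * np.log(p + 1e-9) + (1 - label) * np.log(1 - p + 1e-9))

def noise_susceptibility(x, label, sigma=0.1, trials=20):
    # Average absolute change in loss under small Gaussian perturbations:
    # a proxy for how close the sample sits to a fragile decision boundary.
    base = loss(x, label)
    deltas = [abs(loss(x + sigma * rng.normal(size=x.shape), label) - base)
              for _ in range(trials)]
    return float(np.mean(deltas))

scores = np.array([noise_susceptibility(X[i], y[i]) for i in range(len(X))])

# Drop the 20% most noise-susceptible samples; train on the remainder.
keep = scores <= np.quantile(scores, 0.8)
X_train, y_train = X[keep], y[keep]
print(X_train.shape)
```

Training then proceeds on `(X_train, y_train)` only, which is where both the efficiency gain (fewer samples) and the robustness gain (fewer boundary-warping samples) would come from.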


Related research

01/16/2020
Universal Adversarial Attack on Attention and the Resulting Dataset DAmageNet
Adversarial attacks on deep neural networks (DNNs) have been found for s...

06/09/2021
Towards the Memorization Effect of Neural Networks in Adversarial Training
Recent studies suggest that "memorization" is one important factor for o...

02/22/2017
DeepCloak: Masking Deep Neural Network Models for Robustness Against Adversarial Samples
Recent studies have shown that deep neural networks (DNN) are vulnerable...

12/16/2019
DAmageNet: A Universal Adversarial Dataset
It is now well known that deep neural networks (DNNs) are vulnerable to ...

06/02/2022
FACM: Correct the Output of Deep Neural Network with Middle Layers Features against Adversarial Samples
In the strong adversarial attacks against deep neural network (DNN), the...

11/19/2018
Generalizable Adversarial Training via Spectral Normalization
Deep neural networks (DNNs) have set benchmarks on a wide array of super...

10/07/2020
Batch Normalization Increases Adversarial Vulnerability: Disentangling Usefulness and Robustness of Model Features
Batch normalization (BN) has been widely used in modern deep neural netw...
