On Configurable Defense against Adversarial Example Attacks

12/06/2018
by Bo Luo, et al.

Machine learning systems based on deep neural networks (DNNs) have gained mainstream adoption in many applications. Recently, however, DNNs have been shown to be vulnerable to adversarial example attacks that apply slight perturbations to their inputs. Existing defense mechanisms against such attacks try to improve the overall robustness of the system, but they do not differentiate between targeted attacks even though the corresponding impacts may vary significantly; in a traffic-sign classifier, for instance, misreading a stop sign as a speed-limit sign is far more harmful than the reverse. To tackle this problem, we propose a novel configurable defense mechanism that can flexibly tune the robustness of the system against different targeted attacks to satisfy application requirements. This is achieved by refining the DNN loss function with an attack sensitivity matrix that represents the impacts of different targeted attacks. Experimental results on the CIFAR-10 and GTSRB datasets demonstrate the efficacy of the proposed solution.
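The key idea is to replace the uniform misclassification cost implicit in standard cross-entropy with per-pair costs drawn from an attack sensitivity matrix. Below is a minimal PyTorch sketch of one way such a matrix could refine the training loss; the function name, the expected-impact penalty term, and the weight lam are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def sensitivity_weighted_loss(logits, targets, S, lam=1.0):
    """Cross-entropy plus an attack-sensitivity penalty (illustrative sketch).

    S is a (C, C) matrix where S[i, j] encodes how damaging it is for an
    input of true class i to be classified as class j (S[i, i] = 0).
    The penalty is the expected misclassification impact under the model's
    predicted distribution, so training pushes probability mass away from
    high-impact wrong classes in particular.
    """
    ce = F.cross_entropy(logits, targets)             # standard term
    probs = F.softmax(logits, dim=1)                  # (N, C)
    pair_costs = S[targets]                           # (N, C): cost row per sample
    penalty = (pair_costs * probs).sum(dim=1).mean()  # expected impact
    return ce + lam * penalty

# Hypothetical configuration: 10 classes (e.g., CIFAR-10); treat flips of
# class 0 into class 1 as five times worse than any other confusion.
C = 10
S = torch.ones(C, C) - torch.eye(C)  # uniform cost, zero on the diagonal
S[0, 1] = 5.0

logits = torch.randn(8, C, requires_grad=True)
targets = torch.randint(0, C, (8,))
loss = sensitivity_weighted_loss(logits, targets, S, lam=0.5)
loss.backward()
```

Raising individual entries of S hardens the model against exactly those targeted attacks, which is what makes the defense configurable per application.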

Related research

- QuSecNets: Quantization-based Defense Mechanism for Securing Deep Neural Network against Adversarial Attacks (11/04/2018)
- Towards Imperceptible and Robust Adversarial Example Attacks against Neural Networks (01/15/2018)
- Utilizing Adversarial Targeted Attacks to Boost Adversarial Robustness (09/04/2021)
- Aegis: Mitigating Targeted Bit-flip Attacks against Deep Neural Networks (02/27/2023)
- HaS-Nets: A Heal and Select Mechanism to Defend DNNs Against Backdoor Attacks for Data Collection Scenarios (12/14/2020)
- Segmentation Fault: A Cheap Defense Against Adversarial Machine Learning (08/31/2021)
- Don't FREAK Out: A Frequency-Inspired Approach to Detecting Backdoor Poisoned Samples in DNNs (03/23/2023)
