A Mask-Based Adversarial Defense Scheme

04/21/2022
by Weizhen Xu, et al.

Adversarial attacks hamper the functionality and accuracy of Deep Neural Networks (DNNs) by injecting subtle perturbations into their inputs. In this work, we propose a new Mask-based Adversarial Defense scheme (MAD) for DNNs to mitigate the negative effect of adversarial attacks. Specifically, our method promotes the robustness of a DNN by randomly masking a portion of potential adversarial images, so that the output of the DNN becomes more tolerant to minor input perturbations. Compared with existing adversarial defense techniques, our method requires neither an additional denoising structure nor any change to the DNN's architecture. We have tested this approach on a collection of DNN models across a variety of data sets, and the experimental results confirm that the proposed method can effectively improve the defense abilities of the DNNs against all of the tested adversarial attack methods. In certain scenarios, DNN models trained with MAD improve classification accuracy by as much as 20% over the original models when given adversarial inputs.
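The abstract does not specify the masking granularity, ratio, or schedule, so the following is only a minimal sketch of the general idea: zero out a random fraction of input pixels before the forward pass during training. The function name `random_mask` and the `mask_ratio` hyperparameter are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def random_mask(images, mask_ratio=0.1, rng=None):
    """Zero out a random fraction of pixels in each image.

    `images` is assumed to be a batch of shape (N, H, W, C) with
    values in [0, 1]. The per-pixel Bernoulli masking and the
    default ratio of 0.1 are illustrative choices; the paper may
    mask patches or use a different schedule.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, h, w, _ = images.shape
    # Independent keep/drop decision per pixel, shared across channels.
    keep = rng.random((n, h, w, 1)) >= mask_ratio
    return images * keep

# Usage during training: mask each (potentially adversarial) batch
# before the forward pass, so the network learns predictions that
# tolerate missing or perturbed input regions.
batch = np.random.rand(32, 32, 32, 3).astype(np.float32)
masked_batch = random_mask(batch, mask_ratio=0.1)
```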

