Adversarial Defense Via Local Flatness Regularization

10/27/2019
by Jia Xu, et al.

Adversarial defense is an active and important research area. Because adversarial examples arise from small input perturbations, one of the most direct and effective ways to study them is to analyze properties of the loss surface in the input space. In this paper, we define the local flatness of the loss surface at a sample as the maximum value of a chosen norm of the gradient with respect to the input over a neighborhood centered at that sample, and we discuss its relationship with adversarial vulnerability. Based on this analysis, we propose a new defense, local flatness regularization (LFR), which penalizes this quantity during training. We further justify the proposed method from other perspectives, such as the human visual mechanism, and theoretically analyze the relationship between LFR and related methods. Experiments verify our theory and demonstrate the superiority of the proposed method.
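
To make the definition concrete: for a loss L, model f, and sample (x, y), the local flatness described above is $\max_{\|x' - x\| \le \epsilon} \|\nabla_{x'} L(f(x'), y)\|$, i.e., the worst-case input-gradient norm over an epsilon-ball around x. Below is a minimal PyTorch sketch of one way to penalize such a quantity during training. It is an illustration under stated assumptions, not the paper's implementation: the inner maximum is approximated by a few sign-gradient ascent steps, the l1 norm is an arbitrary choice (the definition leaves the norm open), and eps, alpha, steps, and lam are made-up hyperparameter values.

```python
import torch
import torch.nn.functional as F

def local_flatness_penalty(model, x, y, eps=8/255, alpha=2/255, steps=2, p=1):
    """Approximate local flatness at x: the maximum p-norm of the input
    gradient of the loss over an eps-ball around x. The inner maximum is
    approximated by a few sign-gradient ascent steps on the gradient norm
    itself (double backpropagation). All hyperparameters are illustrative.
    """
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        # input gradient; create_graph=True so its norm is differentiable
        g = torch.autograd.grad(loss, x_adv, create_graph=True)[0]
        g_norm = g.flatten(1).norm(p=p, dim=1).sum()
        # ascend on the gradient norm, then project back into the eps-ball
        step = torch.autograd.grad(g_norm, x_adv)[0].sign()
        x_adv = x_adv.detach() + alpha * step
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)

    # evaluate the penalty at the approximate worst-case point; keep the
    # graph so the penalty can be backpropagated through the model weights
    x_adv.requires_grad_(True)
    loss_adv = F.cross_entropy(model(x_adv), y)
    g_adv = torch.autograd.grad(loss_adv, x_adv, create_graph=True)[0]
    return g_adv.flatten(1).norm(p=p, dim=1).mean()

def train_step(model, x, y, optimizer, lam=1.0):
    """One training step: standard classification loss plus the penalty."""
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss = loss + lam * local_flatness_penalty(model, x, y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Note the design choice in the sketch: because the penalty is itself a gradient norm, optimizing it requires differentiating through the input gradient (double backpropagation), which roughly doubles the cost of a training step compared with standard training.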

Related research

09/29/2018
Interpreting Adversarial Robustness: A View from Decision Surface in Input Space
One popular hypothesis of neural network generalization is that the flat...

03/24/2023
Improved Adversarial Training Through Adaptive Instance-wise Loss Smoothing
Deep neural networks can be easily fooled into making incorrect predicti...

09/23/2020
Adversarial robustness via stochastic regularization of neural activation sensitivity
Recent works have shown that the input domain of any machine learning cl...

06/10/2019
Improved Adversarial Robustness via Logit Regularization Methods
While great progress has been made at making neural networks effective a...

09/07/2021
Adversarial Parameter Defense by Multi-Step Risk Minimization
Previous studies demonstrate DNNs' vulnerability to adversarial examples...

03/17/2020
Heat and Blur: An Effective and Fast Defense Against Adversarial Examples
The growing incorporation of artificial neural networks (NNs) into many ...

12/08/2020
A Deep Marginal-Contrastive Defense against Adversarial Attacks on 1D Models
Deep learning algorithms have been recently targeted by attackers due to...
