Robust Mode Connectivity-Oriented Adversarial Defense: Enhancing Neural Network Robustness Against Diversified ℓ_p Attacks

03/17/2023
by Ren Wang, et al.

Adversarial robustness is a key concept in measuring the ability of neural networks to defend against adversarial attacks during the inference phase. Recent studies have shown that despite the success of improving adversarial robustness against a single type of attack using robust training techniques, models remain vulnerable to diversified ℓ_p attacks. To achieve diversified ℓ_p robustness, we propose a novel robust mode connectivity (RMC)-oriented adversarial defense that contains two population-based learning phases. The first phase, RMC, searches the model parameter space between two pre-trained models and finds a path containing points with high robustness against diversified ℓ_p attacks. In light of the effectiveness of RMC, we develop a second phase, RMC-based optimization, with RMC serving as the basic unit for further enhancement of neural network diversified ℓ_p robustness. To increase computational efficiency, we incorporate learning with a self-robust mode connectivity (SRMC) module that enables the fast proliferation of the population used for endpoints of RMC. Furthermore, we draw parallels between SRMC and the human immune system. Experimental results on various datasets and model architectures demonstrate that the proposed defense methods can achieve high diversified ℓ_p robustness against ℓ_∞, ℓ_2, ℓ_1, and hybrid attacks. Code is available at <https://github.com/wangren09/MCGR>.
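The path search described in the abstract builds on mode connectivity, which typically parameterizes a curve (e.g., a quadratic Bezier curve) in weight space between two endpoint models and evaluates models sampled along it. The following is a minimal, hypothetical sketch of that interpolation step using flat NumPy parameter vectors; it is not the authors' implementation, and the attack-based evaluation of each sampled point is only indicated in comments.

```python
import numpy as np

def bezier_path(theta_a, theta_b, theta_mid, t):
    """Quadratic Bezier curve in parameter space connecting two endpoint
    models theta_a and theta_b through a trainable midpoint theta_mid.
    t in [0, 1] selects one model on the path."""
    return ((1 - t) ** 2) * theta_a + 2 * t * (1 - t) * theta_mid + (t ** 2) * theta_b

# Toy example: represent two "models" as flat parameter vectors.
rng = np.random.default_rng(0)
theta_a = rng.normal(size=5)           # endpoint model 1 (e.g., trained against one attack type)
theta_b = rng.normal(size=5)           # endpoint model 2 (e.g., trained against another)
theta_mid = 0.5 * (theta_a + theta_b)  # midpoint control point, trainable in practice

# Sample points along the path; in an RMC-style search each sampled model
# would be evaluated against diversified l_p attacks to find robust points.
path_points = [bezier_path(theta_a, theta_b, theta_mid, t)
               for t in np.linspace(0.0, 1.0, 5)]

assert np.allclose(path_points[0], theta_a)   # t=0 recovers endpoint A
assert np.allclose(path_points[-1], theta_b)  # t=1 recovers endpoint B
```

In practice the midpoint (or a full set of curve control points) is optimized so that models along the entire path retain low robust loss, rather than being fixed to the average as in this toy setup.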


