Exploring Model Robustness with Adaptive Networks and Improved Adversarial Training

05/30/2020
by Zheng Xu et al.

Adversarial training has proven effective at hardening networks against adversarial examples. However, the robustness it confers is limited by network capacity and the number of training samples; consequently, it is common practice to train widened networks with more parameters in order to build more robust models. To boost robustness, we propose a conditional normalization module that adapts the network by conditioning on the input sample. Our adaptive networks, once adversarially trained, outperform their non-adaptive counterparts on both clean validation accuracy and robustness. Our method is objective-agnostic and consistently improves both the conventional adversarial training objective and the TRADES objective. Our adaptive networks also outperform larger, widened non-adaptive architectures that have 1.5 times more parameters. We further introduce several practical “tricks” in adversarial training that improve robustness, and we empirically verify their effectiveness.
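
As a rough illustration of conditioning normalization on the input sample, the sketch below (PyTorch) shows a normalization layer whose per-channel scale and shift are predicted from a per-sample code. The module name, the choice of globally pooled input features as the conditioning signal, and all layer sizes are assumptions made for illustration only, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalNorm2d(nn.Module):
    # Normalization whose affine parameters are predicted per sample from a
    # conditioning vector, so the layer adapts to each input (hypothetical sketch).
    def __init__(self, num_features, cond_dim):
        super().__init__()
        self.norm = nn.BatchNorm2d(num_features, affine=False)
        # Small linear head maps the per-sample code to per-channel gamma and beta.
        self.to_gamma_beta = nn.Linear(cond_dim, 2 * num_features)

    def forward(self, x, cond):
        # x: (N, C, H, W) feature map; cond: (N, cond_dim) per-sample code.
        gamma, beta = self.to_gamma_beta(cond).chunk(2, dim=1)
        gamma = gamma.unsqueeze(-1).unsqueeze(-1)
        beta = beta.unsqueeze(-1).unsqueeze(-1)
        return self.norm(x) * (1 + gamma) + beta

# Example usage: condition on globally pooled input pixels (an assumed choice of signal).
x = torch.randn(8, 3, 32, 32)
feat = nn.Conv2d(3, 64, 3, padding=1)(x)
cond = F.adaptive_avg_pool2d(x, 1).flatten(1)      # (8, 3) per-sample code
out = ConditionalNorm2d(64, cond_dim=3)(feat, cond)

Because the predicted scale and shift depend on the input, the same trained weights can behave differently on clean and perturbed samples, which is the intuition behind adapting the network per input.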
