Exploring Model Robustness with Adaptive Networks and Improved Adversarial Training

05/30/2020
by Zheng Xu, et al.

Adversarial training has proven effective at hardening networks against adversarial examples. However, the robustness gained is limited by network capacity and the number of training samples. Consequently, to build more robust models, it is common practice to train widened networks with more parameters. To boost robustness, we propose a conditional normalization module that adapts the network to each input sample. Our adaptive networks, once adversarially trained, outperform their non-adaptive counterparts in both clean validation accuracy and robustness. Our method is objective-agnostic and consistently improves both the conventional adversarial training objective and the TRADES objective. Our adaptive networks also outperform larger, widened non-adaptive architectures that have 1.5 times more parameters. We further introduce several practical “tricks” in adversarial training to improve robustness and empirically verify their efficiency.
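The abstract does not specify how the conditional normalization module is built. Below is a minimal, hypothetical sketch of what an input-conditioned normalization layer could look like in PyTorch: a parameter-free batch norm whose per-channel scale and shift are predicted from the input sample itself. The class name `AdaptiveNorm2d`, the pooling-based conditioner, and all hyperparameters are assumptions for illustration, not the paper's actual design.

```python
import torch
import torch.nn as nn


class AdaptiveNorm2d(nn.Module):
    """Normalization layer whose affine parameters are predicted from the input.

    Hypothetical sketch; the paper's actual conditioning mechanism may differ.
    """

    def __init__(self, num_channels: int, hidden: int = 16):
        super().__init__()
        # Parameter-free normalization; scale/shift come from the conditioner.
        self.norm = nn.BatchNorm2d(num_channels, affine=False)
        # Small conditioner: pool the input, then predict gamma and beta.
        self.conditioner = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(num_channels, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, 2 * num_channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gamma, beta = self.conditioner(x).chunk(2, dim=1)
        # Reshape to broadcast over the spatial dimensions.
        gamma = gamma.unsqueeze(-1).unsqueeze(-1)
        beta = beta.unsqueeze(-1).unsqueeze(-1)
        # Center gamma around 1 so the layer starts close to plain batch norm.
        return (1 + gamma) * self.norm(x) + beta


# Usage: drop-in replacement for BatchNorm2d inside a convolutional block.
x = torch.randn(8, 64, 32, 32)
layer = AdaptiveNorm2d(64)
print(layer(x).shape)  # torch.Size([8, 64, 32, 32])
```

Because the conditioner adds only a small MLP per normalization layer, such a module grows the parameter count far less than widening the whole network, which is consistent with the comparison against 1.5x-wider non-adaptive baselines.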
