Adversarial Concurrent Training: Optimizing Robustness and Accuracy Trade-off of Deep Neural Networks

08/16/2020 ∙ by Elahe Arani, et al.

Adversarial training has proven to be an effective technique for improving the adversarial robustness of models. However, there appears to be an inherent trade-off between optimizing a model for accuracy and optimizing it for robustness. To address this, we propose Adversarial Concurrent Training (ACT), which employs adversarial training in a collaborative learning framework whereby we train a robust model in conjunction with a natural model in a minimax game. ACT encourages the two models to align their feature spaces using the task-specific decision boundaries and to explore the input space more broadly. Furthermore, the natural model acts as a regularizer, enforcing priors on the features that the robust model should learn. Our analyses of the models' behavior show that ACT leads to a robust model with lower model complexity, higher information compression in the learned representations, and high-posterior-entropy solutions indicative of convergence to a flatter minimum. We demonstrate the effectiveness of the proposed approach across different datasets and network architectures. On ImageNet, ACT achieves 68.20% standard accuracy and 44.29% robustness under attack, improving upon the standard adversarial training method's 65.70% standard accuracy and 42.36% robustness.
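To make the training scheme concrete, below is a minimal PyTorch sketch of one concurrent training step in the spirit of the abstract. It is an illustrative interpretation, not the authors' exact objective (see the official repository for that): the symmetric KL-divergence alignment terms, the use of PGD for the inner maximization, and the hyperparameters alpha, eps, step, and iters are all assumptions made for this example.

```python
# Illustrative sketch of concurrent adversarial training (not the official
# ACT implementation). A robust model fits adversarial inputs and a natural
# model fits clean inputs, while assumed KL terms pull their output
# distributions together, aligning their decision boundaries.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, step=2/255, iters=10):
    """Untargeted L-infinity PGD: maximize cross-entropy within an eps-ball."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + step * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def act_step(robust, natural, opt_r, opt_n, x, y, alpha=1.0):
    """One concurrent step: each model minimizes its own classification loss
    plus an alignment term toward the other model (alpha is a placeholder
    weighting assumed for this sketch)."""
    x_adv = pgd_attack(robust, x, y)

    # Robust model: adversarial cross-entropy + alignment to the natural model.
    p_nat = F.softmax(natural(x), dim=1).detach()
    logits_r = robust(x_adv)
    loss_r = (F.cross_entropy(logits_r, y)
              + alpha * F.kl_div(F.log_softmax(logits_r, dim=1), p_nat,
                                 reduction="batchmean"))
    opt_r.zero_grad(); loss_r.backward(); opt_r.step()

    # Natural model: clean cross-entropy + alignment to the robust model.
    p_rob = F.softmax(robust(x_adv), dim=1).detach()
    logits_n = natural(x)
    loss_n = (F.cross_entropy(logits_n, y)
              + alpha * F.kl_div(F.log_softmax(logits_n, dim=1), p_rob,
                                 reduction="batchmean"))
    opt_n.zero_grad(); loss_n.backward(); opt_n.step()
```

In this reading, the detached softmax targets give the minimax structure: each model treats the other's current predictions as a fixed prior, so the natural model regularizes the features the robust model learns, as the abstract describes.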

Code Repositories

ACT

The official PyTorch code for the BMVC'20 paper "Adversarial Concurrent Training: Optimizing Robustness and Accuracy Trade-off of Deep Neural Networks".

