DeepDefense: Training Deep Neural Networks with Improved Robustness

02/23/2018
by   Ziang Yan, et al.

Despite their efficacy on a variety of computer vision tasks, deep neural networks (DNNs) are vulnerable to adversarial attacks, limiting their application in security-critical systems. Recent works have shown the possibility of generating imperceptibly perturbed image inputs (a.k.a. adversarial examples) that fool well-trained DNN models into making arbitrary predictions. To address this problem, we propose a training recipe named DeepDefense. Our core idea is to integrate an adversarial perturbation-based regularizer into the classification objective, such that the obtained models learn to resist potential attacks, directly and precisely. The whole optimization problem is solved just like training a recursive network. Experimental results demonstrate that our method outperforms the state of the art by large margins on various datasets (including MNIST, CIFAR-10 and ImageNet) and different DNN architectures. Code and models for reproducing our results will be made publicly available.
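To make the idea of a perturbation-based regularizer concrete, below is a minimal PyTorch-style sketch. It is not the paper's exact formulation: the one-step linearized (DeepFool-style) margin estimate, the exponential penalty, and the hyperparameters `lam` and `c` are all illustrative assumptions. What it shares with the abstract's description is the key property that the perturbation estimate stays differentiable with respect to the network weights, so the robustness term can be backpropagated through together with the classification loss, in the spirit of training a recursive network.

```python
import torch
import torch.nn.functional as F

def linearized_margin(model, x, y):
    """One-step, DeepFool-style estimate of the smallest perturbation norm
    needed to flip the prediction: |f_y(x) - f_k(x)| / ||grad_x (f_y - f_k)||,
    where k is the runner-up class. create_graph=True keeps the estimate
    differentiable w.r.t. the model weights (illustrative form, not the
    paper's exact regularizer)."""
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    true_logit = logits.gather(1, y.unsqueeze(1)).squeeze(1)
    runner_up = logits.masked_fill(
        F.one_hot(y, logits.size(1)).bool(), float("-inf")).max(dim=1).values
    diff = true_logit - runner_up                     # positive if correctly classified
    grad, = torch.autograd.grad(diff.sum(), x, create_graph=True)
    grad_norm = grad.flatten(1).norm(dim=1) + 1e-12
    return diff.abs() / grad_norm                     # per-sample perturbation-norm estimate

def deepdefense_like_loss(model, x, y, lam=15.0, c=25.0):
    """Cross-entropy plus a hypothetical robustness regularizer that penalizes
    samples whose estimated perturbation norm (relative to ||x||) is small;
    lam and c are assumed hyperparameters."""
    ce = F.cross_entropy(model(x), y)
    margin = linearized_margin(model, x, y)
    rel = margin / (x.flatten(1).norm(dim=1) + 1e-12)
    reg = torch.exp(-c * rel).mean()                  # large when the margin is tiny
    return ce + lam * reg
```

In a training loop, `deepdefense_like_loss(model, x, y).backward()` would then update the weights to jointly reduce classification error and enlarge the estimated perturbation needed to fool the model, which is the qualitative behavior the abstract describes.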
