Towards Robust DNNs: A Taylor Expansion-Based Method for Generating Powerful Adversarial Examples

01/23/2020
by   Ya-guan Qian, et al.

Although deep neural networks (DNNs) have achieved successful applications in many fields, they are vulnerable to adversarial examples. Adversarial training is one of the most effective methods for improving the robustness of DNNs, and it is generally formulated as a minimax problem that minimizes the loss function while maximizing the perturbation. Powerful adversarial examples can therefore effectively approximate the inner perturbation maximization needed to solve the minimax problem. In this paper, a novel method is proposed to generate more powerful adversarial examples for robust adversarial training. The main idea is to approximate the output of the DNN in a neighborhood of the input using a Taylor expansion, and then optimize this approximation with the Lagrange multiplier method to generate adversarial examples. Experimental results show that the method effectively improves the robustness of DNNs trained with these powerful adversarial examples.
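The abstract does not spell out the paper's exact Taylor-expansion or Lagrange-multiplier formulation, so the sketch below is only an illustrative PyTorch implementation of the general idea: approximate the loss around the input with a first-order Taylor expansion and maximize it under an L2 perturbation budget, for which the Lagrange multiplier method gives a closed-form scaled-gradient solution. The function name `taylor_perturbation` and the budget `epsilon` are hypothetical choices, not names from the paper.

```python
import torch
import torch.nn.functional as F

def taylor_perturbation(model, x, y, epsilon=8 / 255):
    """Illustrative sketch (not the paper's exact method).

    First-order Taylor approximation of the loss around x:
        L(x + delta) ~= L(x) + g^T delta,  g = dL/dx.
    Maximizing g^T delta subject to ||delta||_2 <= epsilon via a
    Lagrange multiplier gives delta* = epsilon * g / ||g||_2.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    # Gradient of the loss with respect to the input.
    grad, = torch.autograd.grad(loss, x)
    # Per-example L2 normalization of the gradient.
    g = grad.view(grad.size(0), -1)
    g_norm = g.norm(dim=1, keepdim=True).clamp_min(1e-12)
    delta = epsilon * (g / g_norm).view_as(x)
    # Adversarial example used as the inner maximizer during training.
    return (x + delta).detach()
```

In an adversarial training loop, such a routine would replace the clean batch with the perturbed batch before the usual loss minimization step, following the standard minimax formulation described above.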

