
Hessian-Free Second-Order Adversarial Examples for Adversarial Learning

by   Yaguan Qian, et al.

Recent studies show that deep neural networks (DNNs) are extremely vulnerable to elaborately designed adversarial examples. Adversarial learning with such examples has proved to be one of the most effective ways to defend against these attacks. At present, most adversarial example generation methods are based on first-order gradients, which can hardly further improve model robustness, especially against second-order adversarial attacks. Compared with first-order gradients, second-order gradients provide a more accurate approximation of the loss landscape around natural examples. Inspired by this, our work crafts second-order adversarial examples and uses them to train DNNs. However, second-order optimization involves the time-consuming computation of the Hessian inverse. We propose an approximation method that transforms the problem into an optimization in the Krylov subspace, which remarkably reduces the computational complexity and speeds up training. Extensive experiments on the MNIST and CIFAR-10 datasets show that adversarial learning with our second-order adversarial examples outperforms first-order methods, improving model robustness against a wide range of attacks.
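The core computational trick the abstract alludes to can be illustrated with a minimal sketch: instead of forming and inverting the Hessian explicitly, one solves Hd = g with conjugate gradient, which only needs Hessian-vector products and searches for the solution in the Krylov subspace span{g, Hg, H²g, …}. The toy quadratic loss, the finite-difference Hessian-vector product, and all function names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Toy stand-in for the model loss: L(x) = 0.5 x^T A x - b^T x, so grad L = A x - b.
# In the paper's setting this would be the network loss w.r.t. the input example.
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
A = A @ A.T + 5.0 * np.eye(5)  # symmetric positive definite "Hessian"
b = rng.normal(size=5)

def loss_grad(x):
    return A @ x - b

def hvp(x, v, eps=1e-6):
    # Hessian-vector product without ever materializing H:
    # H v ≈ (grad L(x + eps v) - grad L(x)) / eps
    return (loss_grad(x + eps * v) - loss_grad(x)) / eps

def conjugate_gradient(x, g, iters=20, tol=1e-10):
    # Solve H d = g by CG; after k steps, d lies in the k-dimensional
    # Krylov subspace span{g, Hg, ..., H^{k-1} g}.
    d = np.zeros_like(g)
    r = g - hvp(x, d)
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Hp = hvp(x, p)
        alpha = rs / (p @ Hp)
        d += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return d

x = rng.normal(size=5)
g = loss_grad(x)
d = conjugate_gradient(x, g)  # approximates the Newton direction H^{-1} g
```

A handful of CG iterations typically suffices for a good Newton-direction estimate, which is what makes the Hessian-free approach tractable for crafting second-order adversarial examples.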



