Second-Order Adversarial Attack and Certifiable Robustness

09/10/2018
by Bai Li, et al.

We propose a powerful second-order attack method that outperforms existing attack methods at reducing the accuracy of state-of-the-art defense models based on adversarial training. The effectiveness of our attack motivates an investigation into the provable robustness of a defense model. To this end, we introduce a framework for obtaining a certifiable lower bound on prediction accuracy against adversarial examples. Experiments demonstrate the effectiveness of our attack method; at the same time, our defense models achieve higher accuracy than previous work under our proposed attack.
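The abstract does not spell out the attack itself. As a purely illustrative sketch (not the authors' algorithm), the snippet below shows why second-order information helps: on a toy quadratic loss with known Hessian, a perturbation that also exploits the top curvature direction raises the loss at least as much as a plain gradient step of the same norm. The loss, budget `eps`, and the direction scan are all assumptions for illustration.

```python
import numpy as np

# Toy loss with known curvature: L(x) = 0.5 x^T A x,
# strongly curved along axis 0, nearly flat along axis 1.
A = np.diag([10.0, 0.1])

def loss(x):
    return 0.5 * x @ A @ x

def grad(x):
    return A @ x

x0 = np.array([0.1, 1.0])
eps = 0.5  # perturbation budget, ||delta|| <= eps

# First-order attack: step of norm eps along the gradient.
g = grad(x0)
g_hat = g / np.linalg.norm(g)
delta1 = eps * g_hat

# Second-order attack: use the Hessian's top eigendirection
# (here H = A everywhere) and scan mixtures of gradient and
# curvature directions, keeping the one that raises the loss most.
w, V = np.linalg.eigh(A)          # eigenvalues ascending
v_top = V[:, -1]                  # direction of largest curvature
s = np.sign(v_top @ g)            # align eigenvector with ascent
if s == 0:
    s = 1.0
best = delta1
for t in np.linspace(0.0, 1.0, 101):
    d = (1 - t) * g_hat + t * s * v_top
    d = eps * d / np.linalg.norm(d)
    if loss(x0 + d) > loss(x0 + best):
        best = d
delta2 = best

# The second-order perturbation is never worse: the candidate set
# includes the first-order step (t = 0).
assert loss(x0 + delta2) >= loss(x0 + delta1)
```

Because the scan includes the pure gradient step as one candidate, the second-order choice can only match or beat it; on this toy loss the curvature-aware direction wins, mirroring the abstract's claim that curvature makes the attack stronger.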


Related research

- 07/11/2023, "ATWM: Defense against adversarial malware based on adversarial training"
  Deep learning technology has made great achievements in the field of ima...
- 07/26/2018, "Evaluating and Understanding the Robustness of Adversarial Logit Pairing"
  We evaluate the robustness of Adversarial Logit Pairing, a recently prop...
- 05/04/2022, "Rethinking Classifier and Adversarial Attack"
  Various defense models have been proposed to resist adversarial attack a...
- 08/07/2020, "Visual Attack and Defense on Text"
  Modifying characters of a piece of text to their visual similar ones oft...
- 06/01/2020, "Rethinking Empirical Evaluation of Adversarial Robustness Using First-Order Attack Methods"
  We identify three common cases that lead to overestimation of adversaria...
- 02/09/2021, "Provable Defense Against Delusive Poisoning"
  Delusive poisoning is a special kind of attack to obstruct learning, whe...
- 06/27/2021, "ASK: Adversarial Soft k-Nearest Neighbor Attack and Defense"
  K-Nearest Neighbor (kNN)-based deep learning methods have been applied t...
