An Efficient and Margin-Approaching Zero-Confidence Adversarial Attack

10/01/2019
by Yang Zhang, et al.

There are two major paradigms of white-box adversarial attacks that impose input perturbations. The first paradigm, called the fix-perturbation attack, crafts adversarial samples within a given perturbation level. The second paradigm, called the zero-confidence attack, finds the smallest perturbation needed to cause misclassification, also known as the margin of an input feature. While the former paradigm is well-resolved, the latter is not. Existing zero-confidence attacks either introduce significant approximation errors or are too time-consuming. We therefore propose MarginAttack, a zero-confidence attack framework that computes the margin with improved accuracy and efficiency. Our experiments show that MarginAttack computes a smaller margin than the state-of-the-art zero-confidence attacks, and matches the state-of-the-art fix-perturbation attacks. In addition, it runs significantly faster than the Carlini-Wagner attack, currently the most accurate zero-confidence attack algorithm.
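To make the two paradigms concrete, here is a minimal sketch, not the MarginAttack algorithm itself: a fix-perturbation attack (PGD-style ascent inside a given L-infinity ball) wrapped in a binary search over the perturbation budget to estimate the margin, which is one simple way to realize a zero-confidence attack. The toy model, budgets, and step counts are assumptions for illustration only.

```python
# Illustrative sketch of the two attack paradigms (NOT MarginAttack itself).
# The toy linear model and all hyperparameters below are assumptions.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(4, 3)        # stand-in for a trained classifier
x = torch.randn(1, 4)                # one input feature vector
y = model(x).argmax(dim=1)           # its currently predicted label

def pgd_fix_perturbation(x, y, eps, steps=20, lr=0.05):
    """Fix-perturbation paradigm: maximize loss within a given L-inf ball of radius eps."""
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + lr * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project back into the ball
    return x_adv.detach()

def zero_confidence_margin(x, y, eps_hi=2.0, iters=15):
    """Zero-confidence paradigm: binary-search the smallest eps that flips the label."""
    lo, hi = 0.0, eps_hi
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if model(pgd_fix_perturbation(x, y, mid)).argmax(dim=1) != y:
            hi = mid   # misclassified: the margin is at most mid
        else:
            lo = mid   # still correct: the margin exceeds mid
    return hi          # upper bound on the margin of x

print("margin estimate:", zero_confidence_margin(x, y))
```

The repeated inner attack is what makes naive zero-confidence approaches slow, which is the inefficiency the paper's MarginAttack framework is designed to avoid.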


