An ADMM-Based Universal Framework for Adversarial Attacks on Deep Neural Networks

04/09/2018
by Pu Zhao, et al.

Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks: adversarial examples, obtained by adding carefully crafted distortions to original legal inputs, can mislead a DNN into classifying them as any target label. In a successful adversarial attack, the targeted misclassification should be achieved with minimal added distortion. In the literature, the added distortion is usually measured by the L0, L1, L2, or L-infinity norm, giving rise to L0, L1, L2, and L-infinity attacks, respectively. However, the field lacks a versatile framework covering all of these attack types. This work, for the first time, unifies the generation of adversarial examples by leveraging ADMM (Alternating Direction Method of Multipliers), an operator-splitting optimization approach, so that L0, L1, L2, and L-infinity attacks can all be implemented within one general framework with only minor modifications. Compared with the state-of-the-art attack in each category, our ADMM-based attacks are the strongest to date, achieving both a 100% success rate and the minimal distortion.
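To make the operator-splitting idea concrete, the sketch below shows a minimal ADMM-style L2 attack in PyTorch: the distortion variable is updated with a closed-form proximal step for the L2 norm, while an auxiliary variable is updated by gradient steps on a C&W-style attack loss, and a dual variable ties the two together. This is an illustration of the general splitting described in the abstract, not the paper's exact algorithm; the model interface, penalty `rho`, loss weight `c`, and iteration counts are assumed placeholders.

```python
# Minimal sketch of an ADMM-style L2 targeted attack (illustrative only).
import torch
import torch.nn.functional as F

def attack_loss(model, x_adv, target, kappa=0.0):
    """C&W-style targeted loss: push the target logit above all other logits."""
    logits = model(x_adv)
    target_logit = logits.gather(1, target.view(-1, 1)).squeeze(1)
    mask = F.one_hot(target, logits.size(1)).bool()
    other_max = logits.masked_fill(mask, float("-inf")).max(dim=1).values
    return torch.clamp(other_max - target_logit + kappa, min=0.0).sum()

def prox_l2(v, lam):
    """Proximal operator of lam * ||.||_2 (block soft-thresholding)."""
    norm = v.norm()
    return torch.clamp(1.0 - lam / (norm + 1e-12), min=0.0) * v

def admm_l2_attack(model, x, target, rho=1.0, c=1.0,
                   outer_iters=50, inner_iters=20, lr=0.01):
    delta = torch.zeros_like(x)   # distortion variable (norm subproblem)
    z = torch.zeros_like(x)       # auxiliary variable (attack-loss subproblem)
    u = torch.zeros_like(x)       # scaled dual variable
    for _ in range(outer_iters):
        # delta-update: argmin ||delta||_2 + (rho/2)||delta - (z - u)||^2, closed form.
        delta = prox_l2(z - u, 1.0 / rho)
        # z-update: a few gradient steps on c*loss(x+z) + (rho/2)||delta - z + u||^2.
        z = z.detach().requires_grad_(True)
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(inner_iters):
            opt.zero_grad()
            loss = c * attack_loss(model, torch.clamp(x + z, 0, 1), target) \
                   + 0.5 * rho * (delta - z + u).pow(2).sum()
            loss.backward()
            opt.step()
        z = z.detach()
        # Dual update drives delta and z toward agreement at convergence.
        u = u + delta - z
    return torch.clamp(x + z, 0, 1)
```

Swapping the proximal step (soft-thresholding for L1, projection-based updates for L0 or L-infinity) is what would let the same loop cover the other norm constraints.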

Related research

Structured Adversarial Attack: Towards General Implementation and Better Interpretability (08/05/2018)
When generating adversarial examples to attack deep neural networks (DNN...

Defensive Dropout for Hardening Deep Neural Networks under Adversarial Attacks (09/13/2018)
Deep neural networks (DNNs) are known vulnerable to adversarial attacks....

On the Design of Black-box Adversarial Examples by Leveraging Gradient-free Optimization and Operator Splitting Method (07/26/2019)
Robust machine learning is currently one of the most prominent topics wh...

CAAD 2018: Generating Transferable Adversarial Examples (09/29/2018)
Deep neural networks (DNNs) are vulnerable to adversarial examples, pert...

A Data Augmentation-based Defense Method Against Adversarial Attacks in Neural Networks (07/30/2020)
Deep Neural Networks (DNNs) in Computer Vision (CV) are well-known to be...

ECGadv: Generating Adversarial Electrocardiogram to Misguide Arrhythmia Classification System (01/12/2019)
Deep neural networks (DNNs)-powered Electrocardiogram (ECG) diagnosis sy...

BO-DBA: Query-Efficient Decision-Based Adversarial Attacks via Bayesian Optimization (06/04/2021)
Decision-based attacks (DBA), wherein attackers perturb inputs to spoof ...