Towards Sharper First-Order Adversary with Quantized Gradients

02/01/2020
by Zhuanghua Liu, et al.

Despite the huge success of deep neural networks (DNNs) across a wide spectrum of machine learning and data mining tasks, recent research shows that this powerful tool is susceptible to maliciously crafted adversarial examples. To date, adversarial training has been the most successful defense against adversarial attacks: to increase robustness, a DNN is trained on a mixture of benign examples and adversarial examples generated by first-order methods. However, state-of-the-art first-order attacks perturb the input along the sign of the gradient, which retains the sign of each gradient component but discards the relative magnitudes between components. In this work, we replace sign gradients with quantized gradients. Gradient quantization preserves not only the sign information but also the relative magnitudes between components. Experiments show that white-box first-order attacks with quantized gradients outperform their sign-gradient variants on multiple datasets. Notably, our BLOB_QG attack reduces the accuracy of the secret model from the MNIST Challenge to 88.32%, outperforming all other methods on the leaderboard of white-box attacks.
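The abstract does not spell out the quantization scheme, but the core contrast with sign gradients can be sketched. Below is a minimal NumPy illustration, assuming a simple uniform quantizer over per-example normalized gradient magnitudes; the function names and the `levels` parameter are hypothetical stand-ins, not the paper's actual method.

```python
import numpy as np

def sign_step(grad, eps):
    """FGSM-style step: keeps only the sign of each gradient component."""
    return eps * np.sign(grad)

def quantized_step(grad, eps, levels=4):
    """Hypothetical quantized-gradient step: keeps the sign AND a coarse
    relative magnitude. Magnitudes are normalized by the largest component,
    then mapped onto one of `levels` uniform bins in (0, 1]."""
    mag = np.abs(grad)
    max_mag = mag.max()
    if max_mag == 0:
        return np.zeros_like(grad)
    # Quantized scale factor in {1/levels, 2/levels, ..., 1} per component.
    q = np.ceil(mag / max_mag * levels) / levels
    return eps * np.sign(grad) * q

# The sign step perturbs every component by the same amount, while the
# quantized step perturbs large-gradient components more.
grad = np.array([0.9, -0.05, 0.4])
print(sign_step(grad, eps=0.3))       # [ 0.3  -0.3    0.3  ]
print(quantized_step(grad, eps=0.3))  # [ 0.3  -0.075  0.15 ]
```

As the example shows, the quantized step still respects the perturbation budget on the dominant component but scales the others down in proportion to a coarse record of their magnitudes, which is the property the abstract credits for sharper attacks.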


