LAFEAT: Piercing Through Adversarial Defenses with Latent Features

04/19/2021
by Yunrui Yu, et al.

Deep convolutional neural networks are susceptible to adversarial attacks: they can easily be deceived into producing an incorrect output by adding a tiny perturbation to the input. This presents a great challenge in making CNNs robust against such attacks, and an influx of new defense techniques has been proposed to this end. In this paper, we show that latent features in certain "robust" models are surprisingly susceptible to adversarial attacks. Building on this observation, we introduce a unified ℓ_∞-norm white-box attack algorithm, LAFEAT, which harnesses latent features in its gradient-descent steps. We show that it is not only computationally much more efficient at finding successful attacks, but also a stronger adversary than the current state-of-the-art across a wide range of defense mechanisms. This suggests that model robustness could be contingent on the effective use of the defender's hidden components, and that robustness should no longer be viewed from a holistic perspective.
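To make the idea concrete, the following is a minimal sketch of an ℓ_∞-norm PGD-style attack whose loss also targets a latent layer, in the spirit of the abstract's description. The toy two-layer model, the auxiliary linear head `W_aux` on the hidden features, and all hyper-parameters (ε, step size, loss weight) are illustrative assumptions for this sketch, not the authors' actual LAFEAT implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network: x -> h = tanh(W1 x) -> logits = W2 h.
# W_aux is a hypothetical auxiliary head mapping latent features h to logits,
# standing in for the paper's idea of attacking latent features directly.
W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=(3, 8))
W_aux = rng.normal(size=(3, 8))

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def loss_and_grad(x, y):
    """Cross-entropy on the final logits plus a weighted cross-entropy on
    the latent-feature logits; returns (loss, d(loss)/dx)."""
    h = np.tanh(W1 @ x)
    p = softmax(W2 @ h)
    p_aux = softmax(W_aux @ h)
    loss = -np.log(p[y]) - 0.5 * np.log(p_aux[y])  # 0.5: assumed weight
    # Backprop: d(-log p[y]) / d(logits) = p - onehot(y).
    g_logits = p.copy(); g_logits[y] -= 1.0
    g_aux = p_aux.copy(); g_aux[y] -= 1.0
    g_h = W2.T @ g_logits + 0.5 * (W_aux.T @ g_aux)
    g_x = W1.T @ (g_h * (1.0 - h ** 2))  # tanh' = 1 - tanh^2
    return loss, g_x

def pgd_linf(x0, y, eps=0.3, alpha=0.05, steps=20):
    """Ascend the combined loss while staying in the l_inf ball of radius eps."""
    x = x0.copy()
    for _ in range(steps):
        _, g = loss_and_grad(x, y)
        x = x + alpha * np.sign(g)          # signed gradient-ascent step
        x = np.clip(x, x0 - eps, x0 + eps)  # project back into the eps-ball
    return x

x0 = rng.normal(size=4)
y = 0
x_adv = pgd_linf(x0, y)
```

The only change from plain PGD here is the extra latent-feature term in the loss, which is where the sketch loosely mirrors the paper's use of hidden-layer gradients.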
