A New Defense Against Adversarial Images: Turning a Weakness into a Strength

10/16/2019
by Tao Yu, et al.

Natural images are virtually surrounded by low-density misclassified regions that can be efficiently discovered by gradient-guided search — enabling the generation of adversarial images. While many techniques for detecting these attacks have been proposed, they are easily bypassed when the adversary has full knowledge of the detection mechanism and adapts the attack strategy accordingly. In this paper, we adopt a novel perspective and regard the omnipresence of adversarial perturbations as a strength rather than a weakness. We postulate that if an image has been tampered with, these adversarial directions either become harder to find with gradient methods or have substantially higher density than for natural images. We develop a practical test for this signature characteristic to successfully detect adversarial attacks, achieving unprecedented accuracy under the white-box setting where the adversary is given full knowledge of our detection mechanism.
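The core of the test is easy to sketch: starting from an input image, run a short gradient-guided search for a nearby misclassified point and record how readily one is found; an anomalous statistic flags tampering. The sketch below illustrates this with a toy linear classifier standing in for a deep network — the model, step size, and step budget are illustrative assumptions, not the authors' actual architecture or thresholds.

```python
import numpy as np

# Toy 2-class linear classifier: logits = W @ x. This stands in for a DNN;
# W, the step size, and the step budget are illustrative assumptions.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 16))

def predict(x):
    return int(np.argmax(W @ x))

def steps_to_flip(x, step=0.05, max_steps=200):
    """Gradient-guided search: walk toward the decision boundary and count
    the steps until the predicted class flips. A small count means a nearby
    low-density misclassified region exists (the paper's signature of a
    natural image); hitting the budget suggests adversarial directions have
    become hard to find, i.e. the input may have been tampered with."""
    y0 = predict(x)
    other = 1 - y0
    # For a linear model the gradient of (logit_other - logit_y0) w.r.t. x
    # is the constant direction W[other] - W[y0].
    g = W[other] - W[y0]
    g = g / (np.linalg.norm(g) + 1e-12)
    z = x.copy()
    for k in range(1, max_steps + 1):
        z = z + step * g
        if predict(z) != y0:
            return k
    return max_steps  # budget exhausted: suspiciously hard to attack

x_natural = rng.normal(size=16)
k = steps_to_flip(x_natural)
# A detector would threshold k (or a density statistic built from many such
# searches) to decide whether the input looks natural or manipulated.
```

For a real network the same loop would take signed-gradient steps on the loss (as in PGD) rather than a fixed analytic direction; the detection statistic is the same either way.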

Related research

- Adversarial Threats to DeepFake Detection: A Practical Perspective (11/19/2020)
  Facially manipulated images and videos or DeepFakes can be used maliciou...
- Preemptive Image Robustification for Protecting Users against Man-in-the-Middle Adversarial Attacks (12/10/2021)
  Deep neural networks have become the driving force of modern image recog...
- Adversarially Robust Classification by Conditional Generative Model Inversion (01/12/2022)
  Most adversarial attack defense methods rely on obfuscating gradients. T...
- Adversarial Manipulation of Deep Representations (11/16/2015)
  We show that the representation of an image in a deep neural network (DN...
- Gradient Similarity: An Explainable Approach to Detect Adversarial Attacks against Deep Learning (06/27/2018)
  Deep neural networks are susceptible to small-but-specific adversarial p...
- A general metric for identifying adversarial images (07/26/2018)
  It is well known that a determined adversary can fool a neural network b...
- Diffusion-Based Adversarial Sample Generation for Improved Stealthiness and Controllability (05/25/2023)
  Neural networks are known to be susceptible to adversarial samples: smal...
