GraCIAS: Grassmannian of Corrupted Images for Adversarial Security

05/06/2020
by   Ankita Shukla, et al.

Input transformation based defense strategies fall short in defending against strong adversarial attacks. Some successful defenses either increase the randomness within the applied transformations or make the defense computationally intensive, which makes the attack substantially more challenging. However, this limits their applicability as a pre-processing step, much like computationally heavy approaches that rely on retraining and network modifications to achieve robustness to perturbations. In this work, we propose a defense strategy that applies random image corruptions to the input image alone, constructs a self-correlation-based subspace from the corrupted copies, and then projects the input onto this subspace to suppress the adversarial perturbation. Owing to its simplicity, the proposed defense is computationally efficient compared to the state-of-the-art, yet it can withstand large perturbations. Further, we develop proximity relationships between the projection operator of a clean image and that of its adversarially perturbed version, via bounds relating the geodesic distance on the Grassmannian to matrix Frobenius norms. We empirically show that our strategy is complementary to other weak defenses such as JPEG compression and can be seamlessly integrated with them to create a stronger defense. We present extensive experiments on the ImageNet dataset across four models, namely InceptionV3, ResNet50, VGG16 and MobileNet, with the perturbation magnitude set to ϵ = 16. Unlike state-of-the-art approaches, even without any retraining, the proposed strategy achieves an absolute improvement of approximately 4.5% in defense accuracy.
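The corrupt-then-project pipeline described above lends itself to a compact illustration. Below is a minimal NumPy sketch of a GraCIAS-style pre-processing step, not the authors' implementation: the Gaussian-noise corruption, the function name gracias_defense, and the hyperparameters k, rank and sigma are illustrative assumptions standing in for the paper's actual corruption family and subspace construction.

```python
import numpy as np

def gracias_defense(x, k=8, rank=4, sigma=0.1, seed=None):
    """Sketch of a corrupt-then-project input defense (assumed parameters).

    x:     input image as a float array in [0, 1], shape (H, W, C)
    k:     number of random corruptions (assumed hyperparameter)
    rank:  dimension of the retained subspace (assumed hyperparameter)
    sigma: Gaussian noise level, a stand-in for the paper's corruptions
    """
    rng = np.random.default_rng(seed)
    flat = x.reshape(-1)

    # 1. Build k randomly corrupted copies of the input image.
    corrupted = np.stack([
        np.clip(flat + rng.normal(0.0, sigma, flat.shape), 0.0, 1.0)
        for _ in range(k)
    ], axis=1)                      # shape: (H*W*C, k)

    # 2. Orthonormal basis spanning the corrupted copies, truncated to
    #    `rank` directions (a self-correlation-based subspace).
    u, _, _ = np.linalg.svd(corrupted, full_matrices=False)
    basis = u[:, :rank]             # shape: (H*W*C, rank)

    # 3. Project the input onto the subspace; components of the
    #    adversarial perturbation outside it are suppressed.
    projected = basis @ (basis.T @ flat)
    return np.clip(projected, 0.0, 1.0).reshape(x.shape)
```

In use, the defended image simply replaces the raw input fed to the classifier, which is what makes this family of methods attractive as a drop-in pre-processing defense requiring no retraining.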


