VisionGuard: Runtime Detection of Adversarial Inputs to Perception Systems

02/23/2020
by Yiannis Kantaros et al.

Deep neural network (DNN) models have proven vulnerable to adversarial attacks. In this paper, we propose VisionGuard, a novel attack- and dataset-agnostic, computationally lightweight defense mechanism against adversarial inputs to DNN-based perception systems. VisionGuard relies on the observation that adversarial images are sensitive to lossy compression transformations. Specifically, to determine whether an image is adversarial, VisionGuard checks whether the output of the target classifier on the given input image changes significantly when the classifier is fed a lossily compressed version of that image. Moreover, we show that VisionGuard is computationally light at both runtime and design time, which makes it suitable for real-time applications that may also involve large-scale image domains. To highlight this, we demonstrate the efficiency of VisionGuard on ImageNet, a large-scale domain that is computationally challenging for the majority of related defenses. Finally, extensive comparative experiments on the MNIST, CIFAR10, and ImageNet datasets show that VisionGuard outperforms existing defenses in both scalability and detection performance.
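The runtime check is simple to sketch. The following is a minimal Python illustration of the idea, not the paper's exact implementation: JPEG is assumed as the lossy compression transformation, KL divergence between softmax outputs as the measure of change, and the threshold tau is a hypothetical parameter calibrated offline on clean images.

```python
import io

import numpy as np
from PIL import Image


def jpeg_compress(image: np.ndarray, quality: int = 75) -> np.ndarray:
    """Apply lossy JPEG compression to an (H, W, 3) uint8 image and decode it back."""
    buf = io.BytesIO()
    Image.fromarray(image.astype(np.uint8)).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return np.asarray(Image.open(buf))


def kl_divergence(p: np.ndarray, q: np.ndarray, eps: float = 1e-12) -> float:
    """KL divergence between two discrete probability distributions."""
    p, q = np.clip(p, eps, 1.0), np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))


def is_adversarial(classifier, image: np.ndarray, tau: float) -> bool:
    """Flag `image` as adversarial if the classifier's softmax output changes
    significantly after a lossy compression transformation.

    `classifier` is assumed to be any callable returning a softmax score vector;
    `tau` is a hypothetical threshold calibrated offline on clean images.
    """
    scores_raw = classifier(image)                  # scores on the raw input
    scores_jpeg = classifier(jpeg_compress(image))  # scores on the compressed input
    return kl_divergence(scores_raw, scores_jpeg) > tau
```

Note that at runtime this check costs only one extra forward pass of the classifier plus a cheap compression step, which is what makes the approach lightweight and suitable for real-time use.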

