Not All Adversarial Examples Require a Complex Defense: Identifying Over-optimized Adversarial Examples with IQR-based Logit Thresholding

07/30/2019
by Utku Ozbulak, et al.

Detecting adversarial examples currently stands as one of the biggest challenges in the field of deep learning. Adversarial attacks, which produce adversarial examples, increase the prediction likelihood of a target class for a particular data point. During this process, the adversarial example can be optimized further, even after it is already misclassified with 100% confidence, thus making the adversarial example even more difficult to detect. For this kind of adversarial example, which we refer to as an over-optimized adversarial example, we discovered that the logits of the model provide solid clues on whether the data point at hand is adversarial or genuine. In this context, we first discuss the masking effect of the softmax function on the prediction that is made and explain why the logits of the model are more useful for detecting over-optimized adversarial examples. To identify this type of adversarial example in practice, we propose a non-parametric and computationally efficient method that relies on the interquartile range, with the method becoming more effective as the image resolution increases. We support our observations throughout the paper with detailed experiments on different datasets (MNIST, CIFAR-10, and ImageNet) and several model architectures.
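
To make the idea concrete, below is a minimal sketch of the kind of check the abstract describes, assuming a Tukey-style fence over a single example's logit vector. The function name `is_over_optimized`, the fence multiplier `k`, and all numeric values are illustrative assumptions, not taken from the paper; the authors' exact thresholding rule may differ.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

def is_over_optimized(logits, k=10.0):
    """Flag a prediction whose maximum logit lies beyond the upper
    Tukey fence Q3 + k * IQR of its own logit vector.

    `k` is a hypothetical multiplier chosen for illustration; in
    practice it would be calibrated on genuine data.
    """
    q1, q3 = np.percentile(logits, [25, 75])
    return logits.max() > q3 + k * (q3 - q1)

# Ten-class logit vectors for a confidently classified genuine input
# and an over-optimized adversarial one (values are illustrative).
genuine     = np.array([8.0, 1.3, -0.5, 0.8, 2.1, 0.2, -1.1, 1.7, 0.4, -0.3])
adversarial = np.array([200.0, 1.3, -0.5, 0.8, 2.1, 0.2, -1.1, 1.7, 0.4, -0.3])

# Softmax masks the difference: both predictions look (near-)certain.
print(softmax(genuine).max())      # ~0.99
print(softmax(adversarial).max())  # ~1.0

# The logits themselves still expose the over-optimization.
print(is_over_optimized(genuine))      # False
print(is_over_optimized(adversarial))  # True
```

With these illustrative values, both inputs receive a near-certain softmax score (roughly 0.99 versus 1.0), while the raw logits separate cleanly, mirroring the masking argument made in the abstract.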
