DAAIN: Detection of Anomalous and Adversarial Input using Normalizing Flows

05/30/2021
by   Samuel von Baußnern, et al.

Despite much recent work, detecting out-of-distribution (OOD) inputs and adversarial attacks (AA) for computer vision models remains a challenge. In this work, we introduce a novel technique, DAAIN, to detect OOD inputs and AA for image segmentation in a unified setting. Our approach monitors the inner workings of a neural network and learns a density estimator of the activation distribution. We equip the density estimator with a classification head to discriminate between regular and anomalous inputs. To deal with the high-dimensional activation space of typical segmentation networks, we subsample it to obtain homogeneous spatial and layer-wise coverage. The subsampling pattern is chosen once per monitored model and kept fixed for all inputs. Since the attacker has access to neither the detection model nor the sampling key, attacking the segmentation network becomes harder, as the attack cannot be backpropagated through the detector. We demonstrate the effectiveness of our approach using an ESPNet trained on the Cityscapes dataset as the segmentation model, an affine Normalizing Flow as the density estimator, and blue noise to ensure homogeneous sampling. Our model can be trained on a single GPU, making it compute-efficient and deployable without requiring specialized accelerators.
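The pipeline described above can be sketched in a few lines: a sampling key, fixed once per monitored model, selects a subset of activations, and an affine normalizing flow scores their log-density. This is a minimal illustration, not the authors' implementation: the dimensions, parameter values, plain uniform sampling (standing in for blue noise), and the threshold-based decision (standing in for the learned classification head) are all assumptions made here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed "sampling key": chosen once per monitored model and reused for
# every input. The paper uses a blue-noise pattern for homogeneous
# coverage; uniform sampling is a stand-in here (hypothetical sizes).
ACT_DIM = 4096       # flattened activation size (illustrative)
KEY_SIZE = 256       # number of monitored activations (illustrative)
sampling_key = rng.choice(ACT_DIM, size=KEY_SIZE, replace=False)

# One affine normalizing-flow layer: z = (x - shift) * exp(-log_scale).
# Its exact log-density follows from the change-of-variables formula.
log_scale = rng.normal(scale=0.1, size=KEY_SIZE)
shift = rng.normal(scale=0.1, size=KEY_SIZE)

def flow_log_density(activations):
    """Log-density of the subsampled activations under the affine flow."""
    x = activations[sampling_key]
    z = (x - shift) * np.exp(-log_scale)
    # Standard-normal base density plus the log |det Jacobian| term.
    log_base = -0.5 * (z ** 2 + np.log(2 * np.pi)).sum()
    log_det = -log_scale.sum()
    return log_base + log_det

def is_anomalous(activations, threshold=-500.0):
    """Flag inputs whose activations are unlikely under the flow.
    (The paper instead trains a classification head on the flow;
    this fixed threshold is purely illustrative.)"""
    return flow_log_density(activations) < threshold

# Usage: score one (random) activation vector.
acts = rng.normal(size=ACT_DIM)
print(is_anomalous(acts))
```

Because the sampling key and flow parameters are never exposed to the attacker, gradients of this score with respect to the input are unavailable to them, which is the property the abstract highlights.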


Related research

07/29/2020: Detecting Anomalous Inputs to DNN Classifiers By Joint Statistical Testing at the Layers
Detecting anomalous inputs, such as adversarial and out-of-distribution ...

02/13/2020: Identifying Audio Adversarial Examples via Anomalous Pattern Detection
Audio processing models based on deep neural networks are susceptible to...

10/19/2018: Subset Scanning Over Neural Network Activations
This work views neural networks as data generating systems and applies a...

05/31/2018: Adversarial Attacks on Face Detectors using Neural Net based Constrained Optimization
Adversarial attacks involve adding small, often imperceptible, perturba...

02/07/2020: RAID: Randomized Adversarial-Input Detection for Neural Networks
In recent years, neural networks have become the default choice for imag...

02/09/2022: Adversarial Detection without Model Information
Most prior state-of-the-art adversarial detection works assume that the ...

10/07/2020: Not All Datasets Are Born Equal: On Heterogeneous Data and Adversarial Examples
Recent work on adversarial learning has focused mainly on neural network...
