Adversarial Profiles: Detecting Out-Distribution Adversarial Samples in Pre-trained CNNs

11/18/2020
by Arezoo Rajabi, et al.

Despite the high accuracy of Convolutional Neural Networks (CNNs), they are vulnerable to adversarial and out-distribution examples. Many proposed methods aim to detect these fooling examples or to make CNNs robust against them. However, most such methods need access to a wide range of fooling examples, either to retrain the network or to tune detection parameters. Here, we propose a method to detect adversarial and out-distribution examples against a pre-trained CNN, without retraining the CNN and without access to a wide variety of fooling examples. To this end, we create adversarial profiles for each class using only one adversarial attack generation technique. We then wrap a detector around the pre-trained CNN that applies the created adversarial profile to each input and uses the output to decide whether or not the input is legitimate. Our initial evaluation of this approach on the MNIST dataset shows that adversarial-profile-based detection is effective in detecting at least 92% of out-distribution examples and 59% of adversarial examples.
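The abstract does not give implementation details, but the detection wrapper it describes can be sketched. The following is a minimal illustration, assuming each class's adversarial profile is a precomputed perturbation tensor and that a legitimate input, once perturbed with its predicted class's profile, flips to a known target class; the names (is_legitimate, profiles, expected_targets) are hypothetical and not taken from the paper.

import torch

def is_legitimate(model, x, profiles, expected_targets):
    """Return True if x behaves like a legitimate input.

    profiles[c]: perturbation tensor (same shape as x) crafted with a
        single attack technique so that legitimate class-c inputs flip
        to a known target class.
    expected_targets[c]: the class a perturbed class-c input should
        move to if it is legitimate.
    """
    model.eval()
    with torch.no_grad():
        # Classify the unperturbed input with the pre-trained CNN.
        pred = model(x.unsqueeze(0)).argmax(dim=1).item()
        # Apply the adversarial profile of the predicted class and
        # re-classify the perturbed input.
        x_pert = torch.clamp(x + profiles[pred], 0.0, 1.0)
        pred_pert = model(x_pert.unsqueeze(0)).argmax(dim=1).item()
    # Legitimate inputs follow the expected label transition;
    # adversarial and out-distribution inputs tend not to.
    return pred_pert == expected_targets[pred]

The appeal of this wrapper design is that the pre-trained CNN is used as-is: only the per-class perturbations need to be generated, once, with a single attack technique, rather than retraining the network on a broad set of fooling examples.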

Related research

12/08/2018
Detecting Adversarial Examples in Convolutional Neural Networks
The great success of convolutional neural networks has caused a massive ...

02/28/2020
Utilizing Network Properties to Detect Erroneous Inputs
Neural networks are vulnerable to a wide range of erroneous inputs such ...

12/28/2019
Detecting Out-of-Distribution Examples with In-distribution Examples and Gram Matrices
When presented with Out-of-Distribution (OOD) examples, deep neural netw...

05/18/2022
Deep learning on rail profiles matching
Matching the rail cross-section profiles measured on site with the desig...

08/21/2018
Controlling Over-generalization and its Effect on Adversarial Examples Generation and Detection
Convolutional Neural Networks (CNNs) allowed improving the state-of-the-...

09/17/2020
Vax-a-Net: Training-time Defence Against Adversarial Patch Attacks
We present Vax-a-Net; a technique for immunizing convolutional neural ne...

03/27/2021
LiBRe: A Practical Bayesian Approach to Adversarial Detection
Despite their appealing flexibility, deep neural networks (DNNs) are vul...
