Towards Dependable Deep Convolutional Neural Networks (CNNs) with Out-distribution Learning

04/24/2018
by Arezoo Rajabi, et al.

Detection and rejection of adversarial examples in security-sensitive and safety-critical systems using deep CNNs is essential. In this paper, we propose an approach to augment CNNs with out-distribution learning in order to reduce the misclassification rate by rejecting adversarial examples. We empirically show that our augmented CNNs can either reject or correctly classify most adversarial examples generated using well-known methods (>95% and >75%, respectively, on the datasets evaluated), without requiring training on any specific type of adversarial examples and without significantly sacrificing the accuracy of the models on clean samples (<4%).
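
The abstract summarizes the idea but this page gives no implementation details. Below is a minimal, hypothetical sketch of one common way such an augmentation can be realized: the classifier gains one extra rejection class, that class is trained on out-of-distribution natural images, and any test input whose top prediction lands on it is rejected rather than classified. The architecture, the helper names (AugmentedCNN, training_step, predict_or_reject), and the use of PyTorch are assumptions for illustration, not the authors' released code.

```python
# Hypothetical sketch (not the authors' code): a CNN augmented with an extra
# "reject" class, trained jointly on in-distribution data and out-distribution
# natural images, so unfamiliar or adversarial inputs can be rejected at test time.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AugmentedCNN(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # One extra output unit acts as the rejection ("dustbin") class.
        self.classifier = nn.Linear(64, num_classes + 1)

    def forward(self, x):
        z = self.features(x).flatten(1)
        return self.classifier(z)


def training_step(model, opt, in_x, in_y, out_x, num_classes):
    """One step of out-distribution learning: in-distribution samples keep their
    labels; out-distribution samples are labeled with the extra reject class."""
    reject_label = torch.full((out_x.size(0),), num_classes,
                              dtype=torch.long, device=out_x.device)
    x = torch.cat([in_x, out_x])
    y = torch.cat([in_y, reject_label])
    loss = F.cross_entropy(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()


def predict_or_reject(model, x, num_classes):
    """At test time, inputs whose argmax is the extra class are rejected (-1)."""
    pred = model(x).argmax(dim=1)
    return torch.where(pred == num_classes, torch.full_like(pred, -1), pred)
```

In this sketch the rejection option competes directly with the in-distribution classes in the softmax, so no separate detector or post-hoc threshold tuning is needed; that is one design choice among several for implementing a reject option, not necessarily the one used in the paper.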


Related research

12/19/2017 · Adversarial Examples: Attacks and Defenses for Deep Learning
With rapid progress and great successes in a wide spectrum of applicatio...

03/20/2017 · On the Limitation of Convolutional Neural Networks in Recognizing Negative Images
Convolutional Neural Networks (CNNs) have achieved state-of-the-art perf...

05/28/2019 · Brain-inspired reverse adversarial examples
A human does not have to see all elephants to recognize an animal as an ...

11/30/2017 · Measuring the tendency of CNNs to Learn Surface Statistical Regularities
Deep CNNs are known to exhibit the following peculiarity: on the one han...

07/12/2022 · Exploring Adversarial Examples and Adversarial Robustness of Convolutional Neural Networks by Mutual Information
A counter-intuitive property of convolutional neural networks (CNNs) is ...

12/25/2018 · Adversarial Feature Genome: a Data Driven Adversarial Examples Recognition Method
Convolutional neural networks (CNNs) are easily spoofed by adversarial e...

05/06/2018 · A Counter-Forensic Method for CNN-Based Camera Model Identification
An increasing number of digital images are being shared and accessed thr...
