
A Method for Restoring the Training Set Distribution in an Image Classifier

02/05/2018
by Alexey Chaplygin, et al.

Convolutional Neural Networks are a well-known staple of modern image classification. However, it can be difficult to assess the quality and robustness of such models. Deep models are known to perform well on a given training and evaluation set, but can easily be fooled by data generated specifically for that purpose. It has been shown that one can produce an artificial example that does not represent the desired class, yet activates the network in the desired way. This paper describes a new way of reconstructing a sample from the training set distribution of an image classifier without deep knowledge of the underlying distribution. This gives access to the elements of images that most influence the decision of a convolutional network and makes it possible to extract meaningful information about the training distribution.

