What do Deep Networks Like to See?

03/22/2018
by Sebastian Palacio, et al.

We propose a novel way to measure and understand convolutional neural networks by quantifying the amount of input signal they let in. To do this, an autoencoder (AE) is fine-tuned on gradients from a pre-trained classifier whose parameters are kept fixed. Comparing the reconstructed samples from AEs fine-tuned against a set of image classifiers (AlexNet, VGG16, ResNet-50, and Inception v3) reveals substantial differences: the AE learns which aspects of the input space to preserve and which to ignore, based on the information encoded in the backpropagated gradients. Measuring the change in accuracy when the signal of one classifier is used by a second one, a total order emerges. This order depends directly on each classifier's input signal but does not correlate with classification accuracy or network size. Further evidence of this phenomenon is provided by measuring the normalized mutual information between original images and auto-encoded reconstructions from the different fine-tuned AEs. These findings break new ground in neural network understanding, opening a new way to reason about, debug, and interpret model results. We present four concrete examples from the literature where existing observations can now be explained in terms of the input signal a model uses.
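The core setup described above — an autoencoder trained through the classification loss of a frozen, pre-trained classifier, so that the only learning signal reaching the AE is the classifier's backpropagated gradient — can be sketched as follows. This is a minimal illustration in PyTorch, not the authors' implementation: the tiny classifier, AE architecture, input size, and optimizer settings are all placeholder assumptions chosen for brevity.

```python
import torch
import torch.nn as nn

# Illustrative stand-ins for the real networks: a tiny linear "classifier"
# and a tiny convolutional autoencoder (encoder + decoder).
classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 10))
for p in classifier.parameters():
    p.requires_grad_(False)  # classifier parameters stay fixed

autoencoder = nn.Sequential(
    nn.Conv2d(3, 4, kernel_size=3, padding=1),  # encoder
    nn.Conv2d(4, 3, kernel_size=3, padding=1),  # decoder
)

opt = torch.optim.SGD(autoencoder.parameters(), lr=0.1)
x = torch.randn(2, 3, 8, 8)   # dummy image batch
y = torch.tensor([0, 1])      # dummy labels

# One fine-tuning step: the classification loss backpropagates through the
# frozen classifier into the AE, which adapts its reconstruction to keep
# only the input signal the classifier actually uses.
loss = nn.functional.cross_entropy(classifier(autoencoder(x)), y)
opt.zero_grad()
loss.backward()
opt.step()
```

Because the classifier's parameters never update, the AE's reconstructions can be compared across different frozen classifiers to probe which parts of the input each one relies on.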
