Depth and Representation in Vision Models

11/11/2022
by Benjamin L. Badger, et al.

Deep learning models develop successive representations of their input in sequential layers, the last of which maps the final representation to the output. Here we investigate the informational content of these representations by observing the ability of convolutional image classification models to autoencode the model's input using embeddings from various layers. We find that the deeper the layer, the less accurate that layer's representation of the input is before training. Inaccurate representation results from non-uniqueness, in which various distinct inputs yield approximately the same embedding. Non-unique representation is a consequence of both exact and approximate non-invertibility of transformations present in the forward pass. Learning to classify natural images increases representation clarity in early but not late layers, which instead form abstract images. Rather than simply selecting for input features necessary for classification, deep-layer representations are found to transform the input to match representations of the training data, so that arbitrary inputs are mapped onto manifolds learned during training. This work supports the theory that the tasks of image recognition and input generation are inseparable even for models trained exclusively to classify.
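The layer-wise autoencoding procedure described above can be illustrated in miniature. The sketch below is a hypothetical, simplified stand-in for the paper's method (the paper uses trained convolutional models; here a single random layer suffices): an input is reconstructed by gradient descent on the distance between its embedding and a target embedding. Because the layer maps 64 dimensions to 32, it is exactly non-invertible, and the optimization recovers the embedding almost perfectly while the recovered input remains far from the original, demonstrating non-unique representation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single "layer": a wide-to-narrow linear map plus tanh.
# Mapping 64 input dimensions to 32 embedding dimensions discards a
# 32-dimensional null space, so the layer is exactly non-invertible:
# many distinct inputs share the same embedding.
W = 0.1 * rng.normal(size=(32, 64))

def embed(x):
    return np.tanh(W @ x)

target = rng.normal(size=64)   # stand-in for a "natural" input
y = embed(target)              # the embedding we try to autoencode from

# Reconstruct an input by gradient descent on || embed(x) - y ||^2,
# in the spirit of the paper's layer-wise autoencoding procedure.
x = rng.normal(size=64)        # random starting input
for _ in range(5000):
    p = W @ x
    r = np.tanh(p) - y                              # embedding residual
    x -= 0.1 * (W.T @ (r * (1.0 - np.tanh(p) ** 2)))  # chain rule through tanh

emb_err = np.linalg.norm(embed(x) - y)
inp_err = np.linalg.norm(x - target)
print(f"embedding mismatch: {emb_err:.2e}")  # near zero
print(f"input mismatch:     {inp_err:.2e}")  # far from zero
```

The reconstructed input matches the target's embedding but not the target itself: the null-space component of the starting input is never corrected by the gradient, which is exactly the non-uniqueness the abstract attributes to non-invertible transformations in the forward pass.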


