Neural Networks as Paths through the Space of Representations

06/22/2022
by Richard D. Lange, et al.

Deep neural networks implement a sequence of layer-by-layer operations, each of which is relatively easy to understand, but the resulting overall computation is generally difficult to understand. We develop a simple idea for interpreting the layer-by-layer construction of useful representations: the role of each layer is to reformat information to reduce the "distance" to the target outputs. We formalize this intuitive notion of "distance" by leveraging recent work on metric representational similarity, and show how it leads to a rich space of geometric concepts. Within this framework, the layer-wise computation implemented by a deep neural network can be viewed as a path in a high-dimensional representation space. We develop tools to characterize the geometry of these paths in terms of distances, angles, and geodesics. We then ask three sets of questions of residual networks trained on CIFAR-10: (1) how straight are these paths, and how much does each layer contribute towards the target? (2) how do these properties emerge over training? and (3) how similar are the paths taken by wider versus deeper networks? We conclude by sketching additional ways that this kind of representational geometry can be used to understand and interpret network training, or to prescriptively improve network architectures to suit a task.
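To make the path picture concrete, the sketch below computes pairwise distances between layer representations and compares the length of the layer-by-layer path against the direct "chord" from input to output representation. It uses the angular (arc-cosine) version of linear CKA as the distance; this is one possible choice of representational metric, assumed here for illustration rather than taken from the paper itself, and the layer activations are random stand-ins.

```python
import numpy as np

def centered_gram(X):
    # X: (n_samples, n_features) activations for one layer.
    # Center features across samples, then form the Gram matrix.
    X = X - X.mean(axis=0, keepdims=True)
    return X @ X.T

def angular_cka_distance(X, Y):
    # Arc-cosine of linear CKA between two representations of the
    # same inputs: 0 when the representations are identical up to
    # the similarity's invariances, larger as they diverge.
    Gx, Gy = centered_gram(X), centered_gram(Y)
    hsic = np.sum(Gx * Gy)
    norm = np.linalg.norm(Gx) * np.linalg.norm(Gy)
    cka = np.clip(hsic / norm, -1.0, 1.0)
    return np.arccos(cka)

# Toy "network path": one random representation per layer,
# all over the same 50 inputs.
rng = np.random.default_rng(0)
layers = [rng.standard_normal((50, 32)) for _ in range(4)]

# Path length: sum of consecutive layer-to-layer distances.
path_len = sum(angular_cka_distance(a, b)
               for a, b in zip(layers, layers[1:]))
# Direct distance from the first to the last representation.
direct = angular_cka_distance(layers[0], layers[-1])

# A "straight" network would make path_len close to direct;
# the triangle inequality guarantees path_len >= direct.
print(path_len >= direct)  # True
```

Ratios like `path_len / direct` give a simple straightness measure: values near 1 mean the layers move toward the output representation along a nearly geodesic path, while large values indicate a winding path.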


Related research

06/05/2021
Solving hybrid machine learning tasks by traversing weight space geodesics
Machine learning problems have an intrinsic geometric structure as centr...

09/12/2017
Reversible Architectures for Arbitrarily Deep Residual Neural Networks
Recently, deep residual networks have been successfully applied in many ...

07/14/2020
Layer-Parallel Training with GPU Concurrency of Deep Residual Neural Networks via Nonlinear Multigrid
A Multigrid Full Approximation Storage algorithm for solving Deep Residu...

07/23/2020
The Representation Theory of Neural Networks
In this work, we show that neural networks can be represented via the ma...

10/01/2022
PathFinder: Discovering Decision Pathways in Deep Neural Networks
Explainability is becoming an increasingly important topic for deep neur...

02/10/2022
Coded ResNeXt: a network for designing disentangled information paths
To avoid treating neural networks as highly complex black boxes, the dee...

11/21/2022
Representational dissimilarity metric spaces for stochastic neural networks
Quantifying similarity between neural representations – e.g. hidden laye...
