Insights on representational similarity in neural networks with canonical correlation

06/14/2018
by Ari S. Morcos, et al.

Comparing different neural network representations and determining how representations evolve over time remain challenging open questions in our understanding of the function of neural networks. Comparing representations in neural networks is fundamentally difficult as the structure of representations varies greatly, even across groups of networks trained on identical tasks, and over the course of training. Here, we develop projection weighted CCA (Canonical Correlation Analysis) as a tool for understanding neural networks, building on SVCCA, a recently proposed method. We first improve the core method, showing how to differentiate between signal and noise, and then apply this technique to compare across a group of CNNs, demonstrating that networks which generalize converge to more similar representations than networks which memorize, that wider networks converge to more similar solutions than narrower networks, and that trained networks with identical topology but different learning rates converge to distinct clusters with diverse representations. We also investigate the representational dynamics of RNNs, across both training and sequential timesteps, finding that RNNs converge in a bottom-up pattern over the course of training and that the hidden state is highly variable over the course of a sequence, even when accounting for linear transforms. Together, these results provide new insights into the function of CNNs and RNNs, and demonstrate the utility of using CCA to understand representations.
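To make the idea concrete, below is a minimal NumPy sketch of a projection weighted CCA similarity between two layers' activations, in the spirit of the method described in the abstract. It is not the authors' released implementation; the function name, signature, and the QR/SVD route to the canonical correlations are illustrative choices, and it assumes more datapoints than neurons in each layer.

```python
import numpy as np


def pwcca_similarity(acts1, acts2, eps=1e-10):
    """Projection weighted CCA similarity between two sets of activations.

    acts1: array of shape (d1, n) -- d1 neurons, n datapoints (first layer/network)
    acts2: array of shape (d2, n) -- second layer/network on the same n datapoints
    Assumes n > d1 and n > d2 so both activation matrices have full row rank.
    Returns a scalar in [0, 1]; higher means more similar representations.
    """
    # Center each neuron's activations across datapoints
    x = acts1 - acts1.mean(axis=1, keepdims=True)
    y = acts2 - acts2.mean(axis=1, keepdims=True)

    # Orthonormal bases for the subspaces spanned by each layer's neurons
    qx, _ = np.linalg.qr(x.T)   # (n, d1)
    qy, _ = np.linalg.qr(y.T)   # (n, d2)

    # Canonical correlations are the singular values of Qx^T Qy
    u, rhos, _ = np.linalg.svd(qx.T @ qy, full_matrices=False)
    rhos = np.clip(rhos, 0.0, 1.0)      # (k,) with k = min(d1, d2)

    # Canonical variates of the first view, expressed over the n datapoints
    cca_dirs = qx @ u                   # (n, k)

    # Weight each CCA direction by how strongly the original neurons project
    # onto it, so directions carrying signal count more than noise directions
    weights = np.abs(cca_dirs.T @ x.T).sum(axis=1)   # (k,)
    weights = weights / (weights.sum() + eps)

    return float(np.sum(weights * rhos))


# Example usage with random stand-in activations (64 and 128 neurons, 2000 inputs)
layer_a = np.random.randn(64, 2000)
layer_b = np.random.randn(128, 2000)
print(pwcca_similarity(layer_a, layer_b))
```

The projection weighting is what distinguishes this from a plain mean of canonical correlations: canonical directions that account for little of the original activations receive correspondingly little weight, which is how the method separates signal from noise.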

Related research

05/01/2019 · Similarity of Neural Network Representations Revisited
Recent work has sought to understand the behavior of neural networks by ...

02/27/2023 · Analyzing Populations of Neural Networks via Dynamical Model Embedding
A core challenge in the interpretation of deep neural networks is identi...

07/24/2023 · On Privileged and Convergent Bases in Neural Network Representations
In this study, we investigate whether the representations learned by neu...

06/19/2017 · SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability
We propose a new technique, Singular Vector Canonical Correlation Analys...

06/22/2020 · Neural networks adapting to datasets: learning network size and topology
We introduce a flexible setup allowing for a neural network to learn bot...

07/19/2019 · Universality and individuality in neural dynamics across large populations of recurrent networks
Task-based modeling with recurrent neural networks (RNNs) has emerged as...

06/06/2022 · Why do CNNs Learn Consistent Representations in their First Layer Independent of Labels and Architecture?
It has previously been observed that the filters learned in the first la...
