Towards Understanding Learning Representations: To What Extent Do Different Neural Networks Learn the Same Representation

10/28/2018
by Liwei Wang, et al.

It is widely believed that learning good representations is one of the main reasons for the success of deep neural networks. Although highly intuitive, this belief lacks a theory and a systematic approach for quantitatively characterizing what representations deep neural networks learn. In this work, we take a tiny step towards such a theory and a better understanding of representations. Specifically, we study a simpler problem: how similar are the representations learned by two networks with identical architectures but trained from different initializations? We develop a rigorous theory based on the neuron activation subspace match model. The theory gives a complete characterization of the structure of neuron activation subspace matches, whose core concepts are the maximum match and the simple match, describing the overall and the finest similarity, respectively, between sets of neurons in the two networks. We also propose efficient algorithms for finding the maximum match and the simple matches. Finally, we conduct extensive experiments using these algorithms. Surprisingly, the results suggest that representations learned by the same convolutional layers of networks trained from different initializations are not as similar as commonly expected, at least in terms of subspace match.
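To make the subspace match model concrete, below is a minimal sketch of an approximate subspace match test, as we read the idea from the abstract: two sets of neurons match when each neuron's activation vector, collected over a common batch of inputs, lies (up to a relative residual) in the span of the other set's activation vectors. The function names, the eps tolerance, and the symmetric residual criterion are our illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def projection_residual(v, B):
    """Relative distance from vector v to the column span of matrix B."""
    # Least-squares projection of v onto span(B): min_x ||B x - v||.
    coef, *_ = np.linalg.lstsq(B, v, rcond=None)
    return np.linalg.norm(B @ coef - v) / np.linalg.norm(v)

def is_eps_match(X, Y, eps=0.05):
    """Approximate neuron activation subspace match (illustrative).

    X: (d, m) activations of m neurons from network A over d inputs.
    Y: (d, n) activations of n neurons from network B over the same inputs.
    (X, Y) match if every column of X nearly lies in span(Y) and vice versa.
    """
    fwd = all(projection_residual(X[:, j], Y) <= eps for j in range(X.shape[1]))
    bwd = all(projection_residual(Y[:, j], X) <= eps for j in range(Y.shape[1]))
    return fwd and bwd

# Toy usage: two neuron sets that span the same 3-dimensional subspace.
rng = np.random.default_rng(0)
basis = rng.standard_normal((1000, 3))      # shared activation subspace
X = basis @ rng.standard_normal((3, 4))     # 4 neurons in "network A"
Y = basis @ rng.standard_normal((3, 3))     # 3 neurons in "network B"
print(is_eps_match(X, Y))                   # True, up to numerical noise
```

Under this reading, the maximum match would be the largest pair of neuron sets passing such a test, and a simple match one that cannot be decomposed into smaller matches; finding these efficiently is what the paper's algorithms address.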
