Reliability of CKA as a Similarity Measure in Deep Learning

10/28/2022
by   MohammadReza Davari, et al.

Comparing learned representations across neural networks is a challenging but important problem, which has been approached in different ways. The Centered Kernel Alignment (CKA) similarity metric, particularly its linear variant, has recently become a popular approach and has been widely used to compare representations of a network's different layers, of architecturally similar networks trained differently, or of models with different architectures trained on the same data. A wide variety of conclusions about the similarity and dissimilarity of these representations have been drawn using CKA. In this work we present an analysis that formally characterizes the sensitivity of CKA to a large class of simple transformations, which can naturally occur in the context of modern machine learning. This provides a concrete explanation of CKA's sensitivity to outliers, which has been observed in past works, and to transformations that preserve the linear separability of the data, an important generalization attribute. We empirically investigate several weaknesses of the CKA similarity metric, demonstrating situations in which it gives unexpected or counter-intuitive results. Finally, we study approaches for modifying representations to maintain functional behaviour while changing the CKA value. Our results illustrate that, in many cases, the CKA value can be easily manipulated without substantial changes to the functional behaviour of the models, and call for caution when leveraging activation alignment metrics.
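To make the object of study concrete, below is a minimal sketch of the linear CKA metric discussed in the abstract, following the standard formulation (squared Frobenius norm of the cross-covariance, normalized by the self-covariance norms). The data here is synthetic and purely illustrative: the outlier example demonstrates the kind of sensitivity the abstract refers to, where translating a single sample far from the rest sharply changes the CKA value between two otherwise identical representations.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two representation matrices of shape
    (n_examples, n_features). Features need not have equal width."""
    # Center each feature dimension
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # CKA(X, Y) = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    numerator = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    denominator = (np.linalg.norm(X.T @ X, ord="fro")
                   * np.linalg.norm(Y.T @ Y, ord="fro"))
    return numerator / denominator

# Illustrative data (not from the paper): 100 examples, 32 features
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 32))

# Identical representations align perfectly
print(linear_cka(X, X))  # -> 1.0

# Translate a single example far away: the representation is nearly
# unchanged for 99/100 examples, yet CKA drops substantially
Y_outlier = X.copy()
Y_outlier[0] += 100.0
print(linear_cka(X, Y_outlier))
```

Because both the numerator and denominator are dominated by the largest-norm directions of the centered data, one extreme example can dominate the statistic, which is consistent with the outlier sensitivity the paper analyzes formally.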

Related research

08/03/2021 · Grounding Representation Similarity with Statistical Testing
To understand neural network behavior, recent works quantitatively compa...

05/01/2019 · Similarity of Neural Network Representations Revisited
Recent work has sought to understand the behavior of neural networks by ...

11/14/2022 · Do Neural Networks Trained with Topological Features Learn Different Internal Representations?
There is a growing body of work that leverages features extracted via to...

03/20/2023 · Model Stitching: Looking For Functional Similarity Between Representations
Model stitching (Lenc & Vedaldi, 2015) is a compelling methodology to c...

10/28/2018 · Towards Understanding Learning Representations: To What Extent Do Different Neural Networks Learn the Same Representation
It is widely believed that learning good representations is one of the m...

06/14/2021 · Revisiting Model Stitching to Compare Neural Representations
We revisit and extend model stitching (Lenc & Vedaldi, 2015) as a metho...
