Reproduce CKA: Similarity of Neural Network Representations Revisited
Recent work has sought to understand the behavior of neural networks by comparing representations between layers and between different trained models. We examine methods for comparing neural network representations based on canonical correlation analysis (CCA). We show that CCA belongs to a family of statistics for measuring multivariate similarity, but that neither CCA nor any other statistic that is invariant to invertible linear transformation can measure meaningful similarities between representations of higher dimension than the number of data points. We introduce a similarity index that measures the relationship between representational similarity matrices and does not suffer from this limitation. This similarity index is equivalent to centered kernel alignment (CKA) and is also closely connected to CCA. Unlike CCA, CKA can reliably identify correspondences between representations in networks trained from different initializations.
Across a wide range of machine learning tasks, deep neural networks enable learning powerful feature representations automatically from data. Despite impressive empirical advances of deep neural networks in solving various tasks, the problem of understanding and characterizing the neural network representations learned from data remains relatively under-explored. Previous work (e.g. Advani & Saxe (2017); Amari et al. (2018); Saxe et al. (2014)) has made progress in understanding the theoretical dynamics of the neural network training process. These studies are insightful, but fundamentally limited, because they ignore the complex interaction between the training dynamics and structured data. A window into the network's representation can provide more information about the interaction between machine learning algorithms and data than the value of the loss function alone.
This paper investigates the problem of measuring similarities between deep neural network representations. An effective method for measuring representational similarity could help answer many interesting questions, including: (1) Do deep neural networks with the same architecture trained from different random initializations learn similar representations? (2) Can we establish correspondences between layers of different network architectures? (3) How similar are the representations learned using the same network architecture from different datasets?
We build upon previous studies investigating similarity between the representations of neural networks (Laakso & Cottrell, 2000; Li et al., 2015; Raghu et al., 2017; Morcos et al., 2018; Wang et al., 2018). We are also inspired by the extensive neuroscience literature that uses representational similarity analysis (Kriegeskorte et al., 2008a; Edelman, 1998) to compare representations across brain areas (Haxby et al., 2001; Freiwald & Tsao, 2010), individuals (Connolly et al., 2012), species (Kriegeskorte et al., 2008b), and behaviors (Elsayed et al., 2016), as well as between brains and neural networks (Yamins et al., 2014; Khaligh-Razavi & Kriegeskorte, 2014; Sussillo et al., 2015).
Our key contributions are summarized as follows:
We discuss the invariance properties of similarity indexes and their implications for measuring similarity of neural network representations.
We show that CKA is able to determine the correspondence between the hidden layers of neural networks trained from different random initializations and with different widths, scenarios where previously proposed similarity indexes fail.
We verify that wider networks learn more similar representations, and show that the similarity of early layers saturates at fewer channels than later layers. We demonstrate that early layers, but not later layers, learn similar representations on different datasets.
Let $X \in \mathbb{R}^{n \times p_1}$ denote a matrix of activations of $p_1$ neurons for $n$ examples, and $Y \in \mathbb{R}^{n \times p_2}$ denote a matrix of activations of $p_2$ neurons for the same $n$ examples. We assume that these matrices have been preprocessed to center the columns. Without loss of generality, we assume that $p_1 \le p_2$. We are concerned with the design and analysis of a scalar similarity index $s(X, Y)$ that can be used to compare representations within and across neural networks, in order to help visualize and understand the effect of different factors of variation in deep learning.
This section discusses the invariance properties of similarity indexes and their implications for measuring similarity of neural network representations. We argue that both intuitive notions of similarity and the dynamics of neural network training call for a similarity index that is invariant to orthogonal transformation and isotropic scaling, but not invertible linear transformation.
A similarity index $s$ is invariant to invertible linear transformation if $s(X, Y) = s(XA, YB)$ for any full rank $A$ and $B$. If activations $X$ are followed by a fully-connected layer $f(X) = \sigma(XW + \boldsymbol{\beta})$, then transforming the activations by a full rank matrix $A$ as $X' = XA$ and transforming the weights by the inverse as $W' = A^{-1}W$ preserves the output of $f$. This transformation does not appear to change how the network operates, so intuitively, one might prefer a similarity index that is invariant to invertible linear transformation, as argued by Raghu et al. (2017).
However, a limitation of invariance to invertible linear transformation is that any invariant similarity index gives the same result for any representation of width greater than or equal to the dataset size, i.e. $p_1 \ge n$. We provide a simple proof in Appendix A.
Let $X$ and $Y$ be $n \times p$ matrices. Suppose $s$ is invariant to invertible linear transformation in the first argument, i.e. $s(X, Z) = s(XA, Z)$ for arbitrary $Z$ and any $A$ with $\mathrm{rank}(A) = p$. If $\mathrm{rank}(X) = \mathrm{rank}(Y) = n$, then $s(X, Z) = s(Y, Z)$.
There is thus a practical problem with invariance to invertible linear transformation: Some neural networks, especially convolutional networks, have more neurons in some layers than there are examples in the training dataset (Springenberg et al., 2015; Lee et al., 2018; Zagoruyko & Komodakis, 2016). It is somewhat unnatural that a similarity index would require more examples than were used for training.
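The theorem above is easy to check numerically. The sketch below (our illustration, with hypothetical helper names; not the authors' code) computes the mean squared canonical correlation from orthonormal bases, an index invariant to invertible linear transformation, and shows that it saturates at 1 for completely unrelated random representations once width reaches the number of examples:

```python
import numpy as np

def r2_cca(x, y):
    """Mean squared canonical correlation via thin QR of centered matrices."""
    qx, _ = np.linalg.qr(x - x.mean(0))
    qy, _ = np.linalg.qr(y - y.mean(0))
    c = qy.T @ qx  # singular values of c are the canonical correlations
    return np.linalg.norm(c, "fro") ** 2 / min(qx.shape[1], qy.shape[1])

rng = np.random.default_rng(0)
n = 50
# Two unrelated random "representations" with width >= number of examples
x = rng.standard_normal((n, 100))
y = rng.standard_normal((n, 100))
print(r2_cca(x, y))  # 1.0 (up to floating point): the index is uninformative
```

With width below $n$, the same index returns values well below 1 for unrelated data, which is the behavior one would hope for.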
A deeper issue is that neural network training is not invariant to arbitrary invertible linear transformation of inputs or activations. Even in the linear case, gradient descent converges first along the eigenvectors corresponding to the largest eigenvalues of the input covariance matrix (LeCun et al., 1991), and in cases of overparameterization or early stopping, the solution reached depends on the scale of the input. Similar results hold for gradient descent training of neural networks in the infinite width limit (Jacot et al., 2018). The sensitivity of neural network training to linear transformation is further demonstrated by the popularity of batch normalization (Ioffe & Szegedy, 2015).
Invariance to invertible linear transformation implies that the scale of directions in activation space is irrelevant. Empirically, however, scale information is both consistent across networks and useful across tasks. Neural networks trained from different random initializations develop representations with similar large principal components, as shown in Figure 1. Consequently, Euclidean distances between examples, which depend primarily upon large principal components, are similar across networks. These distances are meaningful, as demonstrated by the success of perceptual loss and style transfer (Gatys et al., 2016; Johnson et al., 2016; Dumoulin et al., 2017). A similarity index that is invariant to invertible linear transformation ignores this aspect of the representation, and assigns the same score to networks that match only in large principal components or networks that match only in small principal components.
Rather than requiring invariance to any invertible linear transformation, one could require a weaker condition: invariance to orthogonal transformation, i.e. $s(X, Y) = s(XU, YV)$ for full-rank orthonormal matrices $U$ and $V$ such that $U^\mathsf{T}U = I$ and $V^\mathsf{T}V = I$.
Indexes invariant to orthogonal transformation do not share the limitations of indexes invariant to invertible linear transformation. When $p_1 > n$, indexes invariant to orthogonal transformation remain well-defined. Moreover, orthogonal transformations preserve scalar products and Euclidean distances between examples.
Invariance to orthogonal transformation seems desirable for neural networks trained by gradient descent. Invariance to orthogonal transformation implies invariance to permutation, which is needed to accommodate symmetries of neural networks (Chen et al., 1993; Orhan & Pitkow, 2018). In the linear case, orthogonal transformation of the input does not affect the dynamics of gradient descent training (LeCun et al., 1991), and for neural networks initialized with rotationally symmetric weight distributions, e.g. i.i.d. Gaussian weight initialization, training with fixed orthogonal transformations of activations yields the same distribution of training trajectories as untransformed activations, whereas an arbitrary linear transformation would not.
Given a similarity index $s$ that is invariant to orthogonal transformation, one can construct a similarity index $s'$ that is invariant to any invertible linear transformation by first orthonormalizing the columns of $X$ and $Y$, and then applying $s$. Given thin QR decompositions $X = Q_XR_X$ and $Y = Q_YR_Y$, one can construct a similarity index $s'(X, Y) = s(Q_X, Q_Y)$, where $s'$ is invariant to invertible linear transformation because orthonormal bases with the same span are related to each other by orthonormal transformation (see Appendix B).
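This construction is easy to verify numerically. The sketch below (our illustration; centering is omitted for brevity) composes thin QR orthonormalization with linear CKA, an index invariant to orthogonal transformation, and checks that the composite index is unchanged by an invertible linear transformation of either input:

```python
import numpy as np

def linear_cka(x, y):
    """Linear CKA of two activation matrices (centering omitted here)."""
    return np.linalg.norm(y.T @ x, "fro") ** 2 / (
        np.linalg.norm(x.T @ x, "fro") * np.linalg.norm(y.T @ y, "fro"))

def invariant_index(x, y):
    """Orthonormalize columns first, then apply an orthogonal-invariant index."""
    qx, _ = np.linalg.qr(x)
    qy, _ = np.linalg.qr(y)
    return linear_cka(qx, qy)

rng = np.random.default_rng(8)
x = rng.standard_normal((40, 6))
y = rng.standard_normal((40, 6))
a = rng.standard_normal((6, 6))  # invertible almost surely
print(np.isclose(invariant_index(x, y), invariant_index(x @ a, y)))  # True
```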
We expect similarity indexes to be invariant to isotropic scaling, i.e. $s(X, Y) = s(\alpha X, \beta Y)$ for any $\alpha, \beta \in \mathbb{R}^{+}$. That said, a similarity index that is invariant to both orthogonal transformation and non-isotropic scaling, i.e. rescaling of individual features, is invariant to any invertible linear transformation. This follows from the existence of the singular value decomposition of the transformation matrix. Generally, we are interested in similarity indexes that are invariant to isotropic but not necessarily non-isotropic scaling.
Our key insight is that instead of comparing multivariate features of an example in the two representations (e.g. via regression), one can first measure the similarity between every pair of examples in each representation separately, and then compare the similarity structures. In neuroscience, such matrices representing the similarities between examples are called representational similarity matrices (Kriegeskorte et al., 2008a). We show below that, if we use an inner product to measure similarity, the similarity between representational similarity matrices reduces to another intuitive notion of pairwise feature similarity.
A simple formula relates dot products between examples to dot products between features:
$$\langle \mathrm{vec}(XX^\mathsf{T}), \mathrm{vec}(YY^\mathsf{T}) \rangle = \mathrm{tr}(XX^\mathsf{T}YY^\mathsf{T}) = \|Y^\mathsf{T}X\|_\mathrm{F}^2. \tag{1}$$
The elements of $XX^\mathsf{T}$ and $YY^\mathsf{T}$ are dot products between the representations of the $i^\text{th}$ and $j^\text{th}$ examples, and indicate the similarity between these examples according to the respective networks. The left-hand side of (1) thus measures the similarity between the inter-example similarity structures. The right-hand side yields the same result by measuring the similarity between features from $X$ and $Y$, by summing the squared dot products between every pair.
Equation 1 implies that, for centered $X$ and $Y$:
$$\frac{1}{(n-1)^2}\,\mathrm{tr}(XX^\mathsf{T}YY^\mathsf{T}) = \|\mathrm{cov}(X^\mathsf{T}, Y^\mathsf{T})\|_\mathrm{F}^2. \tag{2}$$
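As a quick numerical check of the identity relating inter-example and inter-feature similarity (our illustration, not part of the paper's experiments):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal((20, 5))
y = rng.standard_normal((20, 8))
x -= x.mean(0)  # center columns, as assumed in the text
y -= y.mean(0)

# tr(XX^T YY^T) compares similarity structures over examples;
# ||Y^T X||_F^2 sums squared dot products between features.
lhs = np.trace(x @ x.T @ y @ y.T)
rhs = np.linalg.norm(y.T @ x, "fro") ** 2
print(np.isclose(lhs, rhs))  # True
```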
The Hilbert-Schmidt Independence Criterion (HSIC; Gretton et al., 2005) generalizes Equations 1 and 2 to inner products from reproducing kernel Hilbert spaces, where the squared Frobenius norm of the cross-covariance matrix becomes the squared Hilbert-Schmidt norm of the cross-covariance operator. Let $K_{ij} = k(\mathbf{x}_i, \mathbf{x}_j)$ and $L_{ij} = l(\mathbf{y}_i, \mathbf{y}_j)$, where $k$ and $l$ are two kernels. The empirical estimator of HSIC is:
$$\mathrm{HSIC}(K, L) = \frac{1}{(n-1)^2}\,\mathrm{tr}(KHLH) \tag{3}$$
where $H$ is the centering matrix $H_n = I_n - \frac{1}{n}\mathbf{1}\mathbf{1}^\mathsf{T}$. For linear kernels $k(\mathbf{x}, \mathbf{y}) = l(\mathbf{x}, \mathbf{y}) = \mathbf{x}^\mathsf{T}\mathbf{y}$, HSIC yields (2).
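The estimator can be sketched in a few lines (our code; `hsic` is a hypothetical helper name). With linear kernels, it recovers the squared Frobenius norm of the cross-covariance:

```python
import numpy as np

def hsic(k, l):
    """Empirical HSIC estimator: tr(KHLH) / (n-1)^2."""
    n = k.shape[0]
    h = np.eye(n) - np.ones((n, n)) / n  # centering matrix H_n
    return np.trace(k @ h @ l @ h) / (n - 1) ** 2

rng = np.random.default_rng(2)
x = rng.standard_normal((30, 4))
y = rng.standard_normal((30, 6))
k, l = x @ x.T, y @ y.T  # linear-kernel Gram matrices
xc, yc = x - x.mean(0), y - y.mean(0)
cross_cov = xc.T @ yc / (x.shape[0] - 1)
print(np.isclose(hsic(k, l), np.linalg.norm(cross_cov, "fro") ** 2))  # True
```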
Gretton et al. (2005) originally proposed HSIC as a test statistic for determining whether two sets of variables are independent. They prove that the empirical estimator converges to the population value at a rate of $1/\sqrt{n}$, and Song et al. (2007) provide an unbiased estimator. When $k$ and $l$ are universal kernels, HSIC = 0 implies independence, but HSIC is not an estimator of mutual information. HSIC is equivalent to maximum mean discrepancy between the joint distribution and the product of the marginal distributions, and HSIC with a specific kernel family is equivalent to distance covariance (Sejdinovic et al., 2013).
HSIC is not invariant to isotropic scaling, but it can be made invariant through normalization. This normalized index is known as centered kernel alignment (Cortes et al., 2012; Cristianini et al., 2002):
$$\mathrm{CKA}(K, L) = \frac{\mathrm{HSIC}(K, L)}{\sqrt{\mathrm{HSIC}(K, K)\,\mathrm{HSIC}(L, L)}}. \tag{4}$$
Below, we report results of CKA with a linear kernel $k(\mathbf{x}, \mathbf{y}) = \mathbf{x}^\mathsf{T}\mathbf{y}$ and the RBF kernel $k(\mathbf{x}_i, \mathbf{x}_j) = \exp(-\|\mathbf{x}_i - \mathbf{x}_j\|_2^2 / (2\sigma^2))$. For the RBF kernel, there are several possible strategies for selecting the bandwidth $\sigma$, which controls the extent to which similarity of small distances is emphasized over large distances. We set $\sigma$ as a fraction of the median distance between examples. In practice, we find that RBF and linear kernels give similar results across most experiments, so we use linear CKA unless otherwise specified. Our framework extends to any valid kernel, including kernels equivalent to neural networks (Lee et al., 2018; Jacot et al., 2018; Garriga-Alonso et al., 2019; Novak et al., 2019).
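A minimal implementation of linear and RBF CKA might look as follows. This is our sketch, not the authors' reference code; the function names and the default bandwidth fraction are our own choices, with σ set to a stated fraction of the median Euclidean distance as described above:

```python
import numpy as np

def center_gram(g):
    """Apply the centering matrix on both sides: HGH."""
    n = g.shape[0]
    h = np.eye(n) - np.ones((n, n)) / n
    return h @ g @ h

def cka(gram_x, gram_y):
    """CKA of two Gram matrices; the (n-1)^2 HSIC factors cancel."""
    gx, gy = center_gram(gram_x), center_gram(gram_y)
    # (gx * gy).sum() equals tr(gx @ gy) because both matrices are symmetric
    return (gx * gy).sum() / np.sqrt((gx * gx).sum() * (gy * gy).sum())

def linear_gram(x):
    return x @ x.T

def rbf_gram(x, frac=0.5):
    """RBF Gram matrix with sigma = frac * median pairwise distance."""
    sq = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    sigma = frac * np.sqrt(np.median(sq))
    return np.exp(-sq / (2 * sigma ** 2))

rng = np.random.default_rng(3)
x = rng.standard_normal((40, 10))
y = x + 0.1 * rng.standard_normal(x.shape)  # nearly identical representation
lin = cka(linear_gram(x), linear_gram(y))
rbf = cka(rbf_gram(x), rbf_gram(y))
print(f"linear CKA: {lin:.3f}, RBF CKA: {rbf:.3f}")  # both near 1
```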
|Index||Invertible linear||Orthogonal||Isotropic scaling|
|Linear Reg. ($R^2_\mathrm{LR}$)||$Y$ only||✓||✓|
|SVCCA ($R^2_\mathrm{SVCCA}$)||If same subspace kept||✓||✓|
|SVCCA ($\bar{\rho}_\mathrm{SVCCA}$)||If same subspace kept||✓||✓|
$\mathbf{U}_X$ and $\mathbf{U}_Y$ are the left-singular vectors of $X$ and $Y$, sorted in descending order according to the corresponding singular values. $\|\cdot\|_*$ denotes the nuclear norm. $T_X$ and $T_Y$ are truncated identity matrices that select left-singular vectors such that the cumulative variance explained reaches some threshold. For RBF CKA, $K$ and $L$ are kernel matrices constructed by evaluating the RBF kernel between the examples as in Section 3, and $H$ is the centering matrix $H_n = I_n - \frac{1}{n}\mathbf{1}\mathbf{1}^\mathsf{T}$. See Appendix C for more detail about each technique.
In this section, we briefly review linear regression, canonical correlation, and other related methods in the context of measuring similarity between neural network representations. We let $Q_X$ and $Q_Y$ represent any orthonormal bases for the columns of $X$ and $Y$, i.e. $Q_X = X(X^\mathsf{T}X)^{-1/2}$ and $Q_Y = Y(Y^\mathsf{T}Y)^{-1/2}$, or orthogonal transformations thereof. Table 1 summarizes the formulae and invariance properties of the indexes used in experiments. For a comprehensive general review of linear indexes for measuring multivariate similarity, see Ramsay et al. (1984).
A simple way to relate neural network representations is via linear regression. One can fit every feature in $X$ as a linear combination of features from $Y$. A suitable summary statistic is the total fraction of variance explained by the fit:
$$R^2_\mathrm{LR} = 1 - \frac{\min_B \|X - YB\|_\mathrm{F}^2}{\|X\|_\mathrm{F}^2} = \frac{\|Q_Y^\mathsf{T}X\|_\mathrm{F}^2}{\|X\|_\mathrm{F}^2}. \tag{5}$$
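A small sketch of this index (ours, not reference code), using the closed form via a thin QR decomposition of the design matrix:

```python
import numpy as np

def r2_linear_reg(x, y):
    """Fraction of variance in x explained by a linear fit from y's features."""
    x = x - x.mean(0)
    y = y - y.mean(0)
    qy, _ = np.linalg.qr(y)  # orthonormal basis for the columns of y
    return np.linalg.norm(qy.T @ x, "fro") ** 2 / np.linalg.norm(x, "fro") ** 2

rng = np.random.default_rng(9)
y = rng.standard_normal((60, 8))
x = y @ rng.standard_normal((8, 4))  # features of x lie in the span of y's
print(round(r2_linear_reg(x, y), 6))  # 1.0: fully explained
```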
We are unaware of any application of linear regression to measuring similarity of neural network representations, although Romero et al. (2015) used a least squares loss between activations of two networks to encourage thin and deep “student” networks to learn functions similar to wide and shallow “teacher” networks.
Canonical correlation finds bases for two matrices such that, when the original matrices are projected onto these bases, the correlation is maximized. For $1 \le i \le p_1$, the $i^\text{th}$ canonical correlation coefficient $\rho_i$ is given by:
$$\rho_i = \max_{\mathbf{w}_X^i, \mathbf{w}_Y^i} \mathrm{corr}(X\mathbf{w}_X^i, Y\mathbf{w}_Y^i) \quad \text{subject to} \quad \forall_{j < i}\; X\mathbf{w}_X^i \perp X\mathbf{w}_X^j,\; Y\mathbf{w}_Y^i \perp Y\mathbf{w}_Y^j. \tag{6}$$
The vectors $\mathbf{w}_X^i \in \mathbb{R}^{p_1}$ and $\mathbf{w}_Y^i \in \mathbb{R}^{p_2}$ that maximize $\rho_i$ are the canonical weights, which transform the original data into canonical variables $X\mathbf{w}_X^i$ and $Y\mathbf{w}_Y^i$. The constraints in (6) enforce orthogonality of the canonical variables.
For the purpose of this work, we consider two summary statistics of the goodness of fit of CCA:
$$R^2_\mathrm{CCA} = \frac{\sum_{i=1}^{p_1} \rho_i^2}{p_1} = \frac{\|Q_Y^\mathsf{T}Q_X\|_\mathrm{F}^2}{p_1} \tag{7}$$
$$\bar{\rho}_\mathrm{CCA} = \frac{\sum_{i=1}^{p_1} \rho_i}{p_1} = \frac{\|Q_Y^\mathsf{T}Q_X\|_*}{p_1} \tag{8}$$
where $\|\cdot\|_*$ denotes the nuclear norm. The mean squared CCA correlation $R^2_\mathrm{CCA}$ is also known as Yanai's GCD measure (Ramsay et al., 1984), and several statistical packages report the sum of the squared canonical correlations under the name Pillai's trace (SAS Institute, 2015; StataCorp, 2015). The mean CCA correlation $\bar{\rho}_\mathrm{CCA}$ was previously used to measure similarity between neural network representations in Raghu et al. (2017).
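Both statistics can be computed from the singular values of the product of orthonormal bases. The sketch below (ours) also shows CCA's invariance to invertible linear transformation: transforming one representation by an arbitrary invertible matrix leaves both statistics at 1.

```python
import numpy as np

def cca_statistics(x, y):
    """Return (mean squared canonical correlation, mean canonical correlation)."""
    qx, _ = np.linalg.qr(x - x.mean(0))
    qy, _ = np.linalg.qr(y - y.mean(0))
    rho = np.linalg.svd(qy.T @ qx, compute_uv=False)  # canonical correlations
    p1 = min(qx.shape[1], qy.shape[1])
    return (rho ** 2).sum() / p1, rho.sum() / p1

rng = np.random.default_rng(4)
x = rng.standard_normal((100, 5))
y = x @ rng.standard_normal((5, 5))  # invertible linear transform (a.s.)
r2, rho_bar = cca_statistics(x, y)
print(f"R^2_CCA = {r2:.6f}, mean CCA correlation = {rho_bar:.6f}")  # both 1
```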
CCA is sensitive to perturbation when the condition number of $X$ or $Y$ is large (Golub & Zha, 1995). To improve robustness, singular vector CCA (SVCCA) performs CCA on truncated singular value decompositions of $X$ and $Y$ (Raghu et al., 2017; Mroueh et al., 2015; Kuss & Graepel, 2003). As formulated in Raghu et al. (2017), SVCCA keeps enough principal components of the input matrices to explain a fixed proportion of the variance, and drops remaining components. Thus, it is invariant to invertible linear transformation only if the retained subspace does not change.
Morcos et al. (2018) propose a different strategy to reduce the sensitivity of CCA to perturbation, which they term "projection-weighted canonical correlation" (PWCCA):
$$\bar{\rho}_\mathrm{PW} = \frac{\sum_{i=1}^{p_1} \alpha_i \rho_i}{\sum_{i=1}^{p_1} \alpha_i}, \qquad \alpha_i = \sum_j |\langle \mathbf{h}_i, \mathbf{x}_j \rangle| \tag{9}$$
where $\mathbf{x}_j$ is the $j^\text{th}$ column of $X$, and $\mathbf{h}_i = X\mathbf{w}_X^i$ is the vector of canonical variables formed by projecting $X$ to the canonical coordinate frame. As we show in Appendix C.3, PWCCA is closely related to linear regression, since:
$$R^2_\mathrm{LR} = \frac{\sum_{i=1}^{p_1} \alpha'_i \rho_i^2}{\sum_{i=1}^{p_1} \alpha'_i}, \qquad \alpha'_i = \sum_j \langle \mathbf{h}_i, \mathbf{x}_j \rangle^2. \tag{10}$$
Other work has studied alignment between individual neurons, rather than alignment between subspaces. Li et al. (2015) examined the correlations between neurons in different neural networks, attempted to find a bipartite match or semi-match that maximizes the sum of the correlations between neurons, and then measured the average correlation. Wang et al. (2018) proposed searching for subsets of neurons $\tilde{X} \subseteq X$ and $\tilde{Y} \subseteq Y$ such that, to within some tolerance, every neuron in $\tilde{X}$ can be represented by a linear combination of neurons from $\tilde{Y}$ and vice versa. They found that the maximum matching subsets are very small for intermediate layers.
Among non-linear measures, one candidate is mutual information, which is invariant not only to invertible linear transformation, but to any invertible transformation. Li et al. (2015) previously used mutual information to measure neuronal alignment. In the context of comparing representations, we believe mutual information is not useful. Given any pair of representations produced by deterministic functions of the same input, mutual information between either representation and the input must be at least as large as the mutual information between the representations. Moreover, in fully invertible neural networks (Dinh et al., 2017; Jacobsen et al., 2018), the mutual information between any two layers is equal to the entropy of the input.
Linear CKA is closely related to CCA and linear regression. If $X$ and $Y$ are centered, then $Q_X$ and $Q_Y$ are also centered, so:
$$R^2_\mathrm{CCA} = \mathrm{CKA}(Q_XQ_X^\mathsf{T}, Q_YQ_Y^\mathsf{T})\,\frac{\sqrt{p_1 p_2}}{p_1}. \tag{11}$$
When performing the linear regression fit of $X$ with design matrix $Y$, $R^2_\mathrm{LR} = \|Q_Y^\mathsf{T}X\|_\mathrm{F}^2 / \|X\|_\mathrm{F}^2$, so:
$$R^2_\mathrm{LR} = \mathrm{CKA}(Q_YQ_Y^\mathsf{T}, XX^\mathsf{T})\,\frac{\sqrt{p_2}\,\|X^\mathsf{T}X\|_\mathrm{F}}{\|X\|_\mathrm{F}^2}. \tag{12}$$
When might we prefer linear CKA over CCA? One way to show the difference is to rewrite $X$ and $Y$ in terms of their singular value decompositions $X = U_X\Sigma_XV_X^\mathsf{T}$, $Y = U_Y\Sigma_YV_Y^\mathsf{T}$. Let the $i^\text{th}$ eigenvector of $XX^\mathsf{T}$ (left-singular vector of $X$) be indexed as $\mathbf{u}_X^i$. Then $R^2_\mathrm{CCA}$ is:
$$R^2_\mathrm{CCA} = \frac{1}{p_1}\sum_{i=1}^{p_1}\sum_{j=1}^{p_2} \langle \mathbf{u}_X^i, \mathbf{u}_Y^j \rangle^2. \tag{13}$$
Let the $i^\text{th}$ eigenvalue of $XX^\mathsf{T}$ (squared singular value of $X$) be indexed as $\lambda_X^i$. Linear CKA can be written as:
$$\mathrm{CKA}(XX^\mathsf{T}, YY^\mathsf{T}) = \frac{\sum_{i=1}^{p_1}\sum_{j=1}^{p_2} \lambda_X^i \lambda_Y^j \langle \mathbf{u}_X^i, \mathbf{u}_Y^j \rangle^2}{\sqrt{\sum_{i=1}^{p_1} (\lambda_X^i)^2}\sqrt{\sum_{j=1}^{p_2} (\lambda_Y^j)^2}}. \tag{14}$$
Linear CKA thus resembles CCA weighted by the eigenvalues of the corresponding eigenvectors, i.e. the amount of variance in $X$ or $Y$ that each eigenvector explains. SVCCA (Raghu et al., 2017) and projection-weighted CCA (Morcos et al., 2018) were also motivated by the idea that eigenvectors that correspond to small eigenvalues are less important, but linear CKA incorporates this weighting symmetrically and can be computed without a matrix decomposition.
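As a numerical check (ours, not part of the paper's experiments) that the eigenvalue-weighted spectral form agrees with the direct Gram-matrix form of linear CKA:

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.standard_normal((30, 6)); x -= x.mean(0)
y = rng.standard_normal((30, 9)); y -= y.mean(0)

# Direct form: ||Y^T X||_F^2 / (||X^T X||_F ||Y^T Y||_F) for centered X, Y
direct = np.linalg.norm(y.T @ x, "fro") ** 2 / (
    np.linalg.norm(x.T @ x, "fro") * np.linalg.norm(y.T @ y, "fro"))

# Spectral form: eigenvalue-weighted squared alignments of singular vectors
ux, sx, _ = np.linalg.svd(x, full_matrices=False)
uy, sy, _ = np.linalg.svd(y, full_matrices=False)
lam_x, lam_y = sx ** 2, sy ** 2  # eigenvalues of XX^T and YY^T
align = (ux.T @ uy) ** 2  # squared dot products <u_x^i, u_y^j>^2
spectral = (lam_x[:, None] * lam_y[None, :] * align).sum() / (
    np.sqrt((lam_x ** 2).sum()) * np.sqrt((lam_y ** 2).sum()))
print(np.isclose(direct, spectral))  # True
```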
Comparison of (13) and (14) immediately suggests the possibility of alternative weightings of scalar products between eigenvectors. Indeed, as we show in Appendix D.1, the similarity index induced by "canonical ridge" regularized CCA (Vinod, 1976), when appropriately normalized, interpolates between $R^2_\mathrm{CCA}$, linear regression, and linear CKA.
|Index||Accuracy|
|CKA (RBF, $\sigma = 0.2 \times$ median)||80.6|
|CKA (RBF, $\sigma = 0.4 \times$ median)||99.1|
|CKA (RBF, $\sigma = 0.8 \times$ median)||99.3|
Table 2: Accuracy of identifying corresponding layers based on maximum similarity for 10 architecturally identical 10-layer CNNs trained from different initializations, with logits layers excluded. For SVCCA, we used a truncation threshold of 0.99 as recommended in Raghu et al. (2017). For asymmetric indexes (PWCCA and linear regression) we symmetrized the similarity as $s' = (s(X, Y) + s(Y, X))/2$. CKA RBF kernel parameters reflect the fraction of the median Euclidean distance between examples used as $\sigma$. Results not significantly different from the best result are bold-faced (jackknife z-test).
We propose a simple sanity check for similarity indexes: Given a pair of architecturally identical networks trained from different random initializations, for each layer in the first network, the most similar layer in the second network should be the architecturally corresponding layer. We train 10 networks and, for each layer of each network, we compute the accuracy with which we can find the corresponding layer in each of the other networks by maximum similarity. We then average the resulting accuracies. We compare CKA with CCA, SVCCA, PWCCA, and linear regression.
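A toy version of this sanity check (our sketch with synthetic stand-in activations, not the paper's CNN experiments) can be written as:

```python
import numpy as np

def linear_cka(x, y):
    """Linear CKA of two centered activation matrices."""
    x = x - x.mean(0)
    y = y - y.mean(0)
    return np.linalg.norm(y.T @ x, "fro") ** 2 / (
        np.linalg.norm(x.T @ x, "fro") * np.linalg.norm(y.T @ y, "fro"))

rng = np.random.default_rng(6)
# Stand-in for per-layer structure shared by two "trained networks":
# each layer has a common signal plus network-specific noise.
shared = [rng.standard_normal((64, 32)) for _ in range(5)]
net_a = [s + 0.3 * rng.standard_normal(s.shape) for s in shared]
net_b = [s + 0.3 * rng.standard_normal(s.shape) for s in shared]

hits = 0
for i, xa in enumerate(net_a):
    sims = [linear_cka(xa, xb) for xb in net_b]
    hits += int(np.argmax(sims) == i)  # does max similarity find layer i?
print(f"layer identification accuracy: {hits / len(net_a):.0%}")
```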
We first investigate a simple VGG-like convolutional network based on All-CNN-C (Springenberg et al., 2015) (see Appendix E for architecture details). Figure 2 and Table 2 show that CKA passes our sanity check, but other methods perform substantially worse. For SVCCA, we experimented with a range of truncation thresholds, but no threshold revealed the layer structure (Appendix F.2); our results are consistent with those in Appendix E of Raghu et al. (2017).
We also investigate Transformer networks, where all layers are of equal width. In Appendix F.1, we show similarity between the 12 sublayers of the encoders of Transformer models (Vaswani et al., 2017) trained from different random initializations. All similarity indexes achieve non-trivial accuracy and thus pass the sanity check, although RBF CKA and $\bar{\rho}_\mathrm{CCA}$ performed slightly better than other methods. However, we found that there are differences in feature scale between representations of feed-forward network and self-attention sublayers that CCA does not capture because it is invariant to non-isotropic scaling.
CKA can reveal pathology in neural network representations. In Figure 3, we show CKA between layers of individual CNNs with different depths, where layers are repeated 2, 4, or 8 times. Doubling depth improved accuracy, but greater multipliers hurt accuracy. At 8x depth, CKA indicates that representations of more than half of the network are very similar to the last layer. We validated that these later layers do not refine the representation by training an $\ell_2$-regularized logistic regression classifier on each layer of the network. Classification accuracy in shallower architectures progressively improves with depth, but for the 8x deeper network, accuracy plateaus less than halfway through the network. When applied to ResNets (He et al., 2016), CKA reveals no pathology (Figure 4). We instead observe a grid pattern that originates from the architecture: Post-residual activations are similar to other post-residual activations, but activations within blocks are not.
CKA is equally effective at revealing relationships between layers of different architectures. Figure 5 shows the relationship between different layers of networks with and without residual connections. CKA indicates that, as networks are made deeper, the new layers are effectively inserted in between the old layers. Other similarity indexes fail to reveal meaningful relationships between different architectures, as we show in Appendix F.5.
In Figure 6, we show CKA between networks with different layer widths. Like Morcos et al. (2018), we find that increasing layer width leads to more similar representations between networks. As width increases, CKA approaches 1; CKA of earlier layers saturates faster than later layers. Networks are generally more similar to other networks of the same width than they are to the widest network we trained.
CKA can also be used to compare networks trained on different datasets. In Figure 7, we show that models trained on CIFAR-10 and CIFAR-100 develop similar representations in their early layers. These representations require training; similarity with untrained networks is much lower. We further explore similarity between layers of untrained networks in Appendix F.3.
Equation 14 suggests a way to further elucidate what CKA is measuring, based on the action of one representational similarity matrix (RSM) $YY^\mathsf{T}$ applied to the eigenvectors $\mathbf{u}_X^i$ of the other RSM $XX^\mathsf{T}$. By definition, $XX^\mathsf{T}\mathbf{u}_X^i$ points in the same direction as $\mathbf{u}_X^i$, and its norm is the corresponding eigenvalue $\lambda_X^i$. The degree of scaling and rotation by $YY^\mathsf{T}$ thus indicates how similar the action of $YY^\mathsf{T}$ is to that of $XX^\mathsf{T}$, for each eigenvector of $XX^\mathsf{T}$. For visualization purposes, this approach is somewhat less useful than the CKA summary statistic, since it does not collapse the similarity into a single number, but it provides a more complete picture of what CKA measures. Figure 8 shows that, for large eigenvectors, $XX^\mathsf{T}$ and $YY^\mathsf{T}$ have similar actions, but the rank of the subspace where this holds is substantially lower than the dimensionality of the activations. In the penultimate (global average pooling) layer, the dimensionality of the shared subspace is approximately 10, which is the number of classes in the CIFAR-10 dataset.
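The analysis can be sketched as follows (our illustration with synthetic data): apply one RSM to the top eigenvectors of the other and measure how much each is scaled versus rotated. The cosine is near 1 when the two RSMs act similarly on that eigenvector.

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.standard_normal((50, 8)); x -= x.mean(0)
y = x + 0.1 * rng.standard_normal(x.shape); y -= y.mean(0)

ux, sx, _ = np.linalg.svd(x, full_matrices=False)  # u_x^i, eigvecs of XX^T
rsm_y = y @ y.T  # the other representational similarity matrix
for i in range(3):  # largest three eigenvectors of XX^T
    v = rsm_y @ ux[:, i]
    # By definition, XX^T u_x^i = lambda_x^i u_x^i; here we check how far
    # YY^T u_x^i deviates from pure scaling of u_x^i.
    cos = v @ ux[:, i] / np.linalg.norm(v)
    print(f"eigenvector {i}: scale {np.linalg.norm(v):.1f}, cos {cos:.3f}")
```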
Measuring similarity between the representations learned by neural networks is an ill-defined problem, since it is not entirely clear what aspects of the representation a similarity index should focus on. Previous work has suggested that there is little similarity between intermediate layers of neural networks trained from different random initializations (Raghu et al., 2017; Wang et al., 2018). We propose CKA as a method for comparing representations of neural networks, and show that it consistently identifies correspondences between layers, not only in the same network trained from different initializations, but across entirely different architectures, whereas other methods do not. We also provide a unified framework for understanding the space of similarity indexes, as well as an empirical framework for evaluation.
We show that CKA captures intuitive notions of similarity, i.e. that neural networks trained from different initializations should be similar to each other. However, it remains an open question whether there exist kernels beyond the linear and RBF kernels that would be better for analyzing neural network representations. Moreover, there are other potential choices of weighting in Equation 14 that may be more appropriate in certain settings. We leave these questions as future work. Nevertheless, CKA seems to be much better than previous methods at finding correspondences between the learned representations in hidden layers of neural networks.
We thank Gamaleldin Elsayed, Jaehoon Lee, Paul-Henri Mignot, Maithra Raghu, Samuel L. Smith, and Alex Williams for comments on the manuscript, Rishabh Agarwal for ideas, and Aliza Elkin for support.
Gatys, L. A., Ecker, A. S., and Bethge, M. Image style transfer using convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2414–2423, 2016.
Johnson, J., Alahi, A., and Fei-Fei, L. Perceptual losses for real-time style transfer and super-resolution. In European Conference on Computer Vision, pp. 694–711. Springer, 2016.
Laakso, A. and Cottrell, G. Content and cluster analysis: Assessing representational similarity in neural systems. Philosophical Psychology, 13(1):47–76, 2000.
Li, Y., Yosinski, J., Clune, J., Lipson, H., and Hopcroft, J. Convergent learning: Do different neural networks learn the same representations? In Proceedings of the 1st International Workshop on Feature Extraction: Modern Questions and Challenges at NIPS 2015, volume 44 of Proceedings of Machine Learning Research, pp. 196–212, Montreal, Canada, 11 Dec 2015. PMLR.
Song, L., Smola, A., Gretton, A., Borgwardt, K. M., and Bedo, J. Supervised feature selection via dependence estimation. In Proceedings of the 24th International Conference on Machine Learning, pp. 823–830. ACM, 2007.
Let $X$ and $Y$ be $n \times p$ matrices. Suppose $s$ is invariant to invertible linear transformation in the first argument, i.e. $s(X, Z) = s(XA, Z)$ for arbitrary $Z$ and any $A$ with $\mathrm{rank}(A) = p$. If $\mathrm{rank}(X) = \mathrm{rank}(Y) = n$, then $s(X, Z) = s(Y, Z)$.
Proof. Consider the matrix
$$A = X^{+}Y + N_XN_Y^\mathsf{T},$$
where $N_X$ is a basis for the null space of the rows of $X$ and $N_Y$ is a basis for the null space of the rows of $Y$. Then let $XA = XX^{+}Y = Y$.
Because $X^{+}Y$ and $N_XN_Y^\mathsf{T}$ have rank $n$ and rank $p - n$ by construction, and their column spaces intersect only at the origin, $A$ also has rank $p$. Thus, $s(X, Z) = s(XA, Z) = s(Y, Z)$. ∎
Here we show that any similarity index that is invariant to orthogonal transformation can be made invariant to invertible linear transformation by orthogonalizing the columns of the input.
Let $X$ be an $n \times p$ matrix of full column rank and let $A$ be an invertible $p \times p$ matrix. Let $X = QR$ and $XA = Q'R'$, where $Q$ and $Q'$ have orthonormal columns and $R$ and $R'$ are invertible. If $s$ is invariant to orthogonal transformation, then $s(Q, Z) = s(Q', Z)$.
Proof. Let $B = RAR'^{-1}$. Then $Q' = QB$, and $B$ is an orthogonal transformation:
$$B^\mathsf{T}B = R'^{-\mathsf{T}}A^\mathsf{T}R^\mathsf{T}Q^\mathsf{T}QRAR'^{-1} = R'^{-\mathsf{T}}R'^\mathsf{T}R'R'^{-1} = I.$$
Thus $s(Q, Z) = s(QB, Z) = s(Q', Z)$. ∎
Consider the linear regression fit of the columns of an $n \times p$ matrix $C$ with an $n \times q$ matrix $A$:
$$\hat{B} = \arg\min_B \|C - AB\|_\mathrm{F}^2 = (A^\mathsf{T}A)^{-1}A^\mathsf{T}C.$$
Let $A = QR$, the thin QR decomposition of $A$. Then the fitted values are given by:
$$\hat{C} = A\hat{B} = QQ^\mathsf{T}C.$$
The residuals are orthogonal to the fitted values, i.e. $(C - \hat{C})^\mathsf{T}\hat{C} = 0$, so $\|C\|_\mathrm{F}^2 = \|C - \hat{C}\|_\mathrm{F}^2 + \|\hat{C}\|_\mathrm{F}^2$.
Assuming that $C$ was centered by subtracting its column means prior to the linear regression fit, the total fraction of variance explained by the fit is:
$$R^2 = 1 - \frac{\|C - QQ^\mathsf{T}C\|_\mathrm{F}^2}{\|C\|_\mathrm{F}^2} = \frac{\|Q^\mathsf{T}C\|_\mathrm{F}^2}{\|C\|_\mathrm{F}^2}. \tag{16}$$
Although we have assumed that $Q$ is obtained from QR decomposition, any orthonormal basis with the same span will suffice, because orthogonal transformations do not change the Frobenius norm.
Let $X$ be an $n \times p_1$ matrix and $Y$ be an $n \times p_2$ matrix, and let $p_1 \le p_2$. Given the thin QR decompositions of $X$ and $Y$, $X = Q_XR_X$, $Y = Q_YR_Y$ such that $Q_X^\mathsf{T}Q_X = I$, $Q_Y^\mathsf{T}Q_Y = I$, the canonical correlations $\rho_i$ are the singular values of $Q_Y^\mathsf{T}Q_X$ (Björck & Golub, 1973; Press, 2011) and thus the square roots of the eigenvalues of $Q_X^\mathsf{T}Q_YQ_Y^\mathsf{T}Q_X$. The squared canonical correlations $\rho_i^2$ are the eigenvalues of $Q_X^\mathsf{T}Q_YQ_Y^\mathsf{T}Q_X$. Their sum is $\sum_i \rho_i^2 = \mathrm{tr}(Q_X^\mathsf{T}Q_YQ_Y^\mathsf{T}Q_X) = \|Q_Y^\mathsf{T}Q_X\|_\mathrm{F}^2$.
Now consider the linear regression fit of the columns of $X$ with $Y$. Assume that $X$ has zero mean. Substituting $Q_Y$ for $Q$ and $X$ for $C$ in Equation 16, and noting that $Q_Y^\mathsf{T}X = Q_Y^\mathsf{T}Q_XR_X$:
$$R^2_\mathrm{LR} = \frac{\|Q_Y^\mathsf{T}X\|_\mathrm{F}^2}{\|X\|_\mathrm{F}^2} = \frac{\|Q_Y^\mathsf{T}Q_XR_X\|_\mathrm{F}^2}{\|X\|_\mathrm{F}^2}.$$
Morcos et al. (2018) proposed to compute projection-weighted canonical correlation as:
$$\bar{\rho}_\mathrm{PW} = \frac{\sum_i \alpha_i \rho_i}{\sum_i \alpha_i}, \qquad \alpha_i = \sum_j |\langle \mathbf{h}_i, \mathbf{x}_j \rangle|,$$
where the $\mathbf{x}_j$ are the columns of $X$, and the $\mathbf{h}_i$ are the canonical variables formed by projecting $X$ to the canonical coordinate frame. Below, we show that if we modify $\bar{\rho}_\mathrm{PW}$ by squaring the dot products $\langle \mathbf{h}_i, \mathbf{x}_j \rangle$ and the correlations $\rho_i$, we recover linear regression. Specifically:
$$R^2_\mathrm{LR} = \frac{\sum_i \alpha'_i \rho_i^2}{\sum_i \alpha'_i}, \qquad \alpha'_i = \sum_j \langle \mathbf{h}_i, \mathbf{x}_j \rangle^2.$$
Our derivation begins by forming the singular value decomposition $Q_X^\mathsf{T}Q_Y = U\Sigma V^\mathsf{T}$. $\Sigma$ is a diagonal matrix of the canonical correlations $\rho_i$, and $H = Q_XU$ is the matrix of canonical variables. Then $\alpha'_i$ is:
$$\alpha'_i = \sum_j \langle \mathbf{h}_i, \mathbf{x}_j \rangle^2 = \|\mathbf{h}_i^\mathsf{T}X\|^2 = \|\mathbf{u}_i^\mathsf{T}Q_X^\mathsf{T}X\|^2 = \|\mathbf{u}_i^\mathsf{T}R_X\|^2.$$
Noting that $H^\mathsf{T}H = I$ and $\sum_i \alpha'_i = \|R_X\|_\mathrm{F}^2 = \|X\|_\mathrm{F}^2$:
$$\sum_i \alpha'_i \rho_i^2 = \|\Sigma U^\mathsf{T}R_X\|_\mathrm{F}^2 = \|Q_Y^\mathsf{T}Q_XR_X\|_\mathrm{F}^2 = \|Q_Y^\mathsf{T}X\|_\mathrm{F}^2.$$
Substituting $Q_Y$ for $Q$ and $X$ for $C$ in Equation 16:
$$R^2_\mathrm{LR} = \frac{\|Q_Y^\mathsf{T}X\|_\mathrm{F}^2}{\|X\|_\mathrm{F}^2} = \frac{\sum_i \alpha'_i \rho_i^2}{\sum_i \alpha'_i}.$$
Beyond CCA, we could also consider the "canonical ridge" regularized CCA objective (Vinod, 1976):
$$\rho_i = \max_{\mathbf{w}_X^i, \mathbf{w}_Y^i} \frac{\langle X\mathbf{w}_X^i, Y\mathbf{w}_Y^i \rangle}{\sqrt{\left(\mathbf{w}_X^{i\,\mathsf{T}}(X^\mathsf{T}X + \kappa_X I)\mathbf{w}_X^i\right)\left(\mathbf{w}_Y^{i\,\mathsf{T}}(Y^\mathsf{T}Y + \kappa_Y I)\mathbf{w}_Y^i\right)}}.$$
Given the singular value decompositions $X = U_X\Sigma_XV_X^\mathsf{T}$ and $Y = U_Y\Sigma_YV_Y^\mathsf{T}$, one can form "partially orthogonalized" bases $\tilde{Q}_X = U_X\Sigma_X(\Sigma_X^2 + \kappa_X I)^{-1/2}$ and $\tilde{Q}_Y = U_Y\Sigma_Y(\Sigma_Y^2 + \kappa_Y I)^{-1/2}$. Given the singular value decomposition of their product $\tilde{Q}_X^\mathsf{T}\tilde{Q}_Y = \tilde{U}\tilde{\Sigma}\tilde{V}^\mathsf{T}$, the canonical weights are given by $W_X = V_X(\Sigma_X^2 + \kappa_X I)^{-1/2}\tilde{U}$ and $W_Y = V_Y(\Sigma_Y^2 + \kappa_Y I)^{-1/2}\tilde{V}$, as previously shown by Mroueh et al. (2015). As in the unregularized case (Equation 13), there is a convenient expression for the sum of the squared singular values in terms of the eigenvalues and eigenvectors of $XX^\mathsf{T}$ and $YY^\mathsf{T}$. Let the $i^\text{th}$ left-singular vector of $X$ (eigenvector of $XX^\mathsf{T}$) be indexed as $\mathbf{u}_X^i$ and let the $i^\text{th}$ eigenvalue of $XX^\mathsf{T}$ (squared singular value of $X$) be indexed as $\lambda_X^i$, and similarly let the left-singular vectors of $Y$ be indexed as $\mathbf{u}_Y^j$ and the eigenvalues as $\lambda_Y^j$. Then:
$$\sum_i \tilde{\sigma}_i^2 = \|\tilde{Q}_Y^\mathsf{T}\tilde{Q}_X\|_\mathrm{F}^2 = \sum_{i=1}^{p_1}\sum_{j=1}^{p_2} \frac{\lambda_X^i \lambda_Y^j}{(\lambda_X^i + \kappa_X)(\lambda_Y^j + \kappa_Y)} \langle \mathbf{u}_X^i, \mathbf{u}_Y^j \rangle^2.$$
Unlike in the unregularized case, the singular values $\tilde{\sigma}_i$ do not measure the correlation between the canonical variables. Instead, they become arbitrarily small as $\kappa_X$ or $\kappa_Y$ increase. Thus, we need to normalize the statistic to remove the dependency on the regularization parameters.
Applying von Neumann's trace inequality yields a bound:
$$\sum_i \tilde{\sigma}_i^2 = \mathrm{tr}(\tilde{Q}_X\tilde{Q}_X^\mathsf{T}\tilde{Q}_Y\tilde{Q}_Y^\mathsf{T}) \le \sum_i \frac{\lambda_X^i \lambda_Y^i}{(\lambda_X^i + \kappa_X)(\lambda_Y^i + \kappa_Y)}.$$