Shared Representational Geometry Across Neural Networks

11/28/2018 · by Qihong Lu, et al. · Princeton University · Google

Different neural networks trained on the same dataset often learn similar input-output mappings with very different weights. Is there some correspondence between these neural network solutions? For linear networks, it has been shown that different instances of the same network architecture encode the same representational similarity matrix, and their neural activity patterns are connected by orthogonal transformations. However, it is unclear if this holds for non-linear networks. Using a shared response model, we show that different neural networks encode the same input examples as different orthogonal transformations of an underlying shared representation. We test this claim using both standard convolutional neural networks and residual networks on CIFAR10 and CIFAR100.

1 Introduction

Different people may share many cognitive functions (e.g. object recognition), but in general, the underlying neural implementation of these shared cognitive functions will be different across individuals. Similarly, when many instantiations of the same neural network architecture are trained on the same dataset, these networks tend to approximate the same mathematical function with very different weight configurations Dauphin2014-sq; Li2015-ur; Meng2018-vv. Concretely, given the same input, two trained networks tend to produce the same output, but their hidden activity patterns will be different. In what sense are these networks similar? Broadly speaking, any mathematical function has many equivalent parameterizations. Understanding the connections among these parameterizations might help us understand the intrinsic properties of that function. What, then, is the connection among these neural networks trained on the same data?

Prior research has shown that there are underlying similarities across the activity patterns from different networks trained on the same dataset Li2015-ur; Morcos2018-nf; Raghu2017-ng. One hypothesis is that the activity patterns of these networks span highly similar feature spaces Li2015-ur. Empirically, it has also been shown that different networks can be “aligned” by applying canonical correlation analysis to the singular components of their activity patterns Morcos2018-nf; Raghu2017-ng. Interestingly, in the case of linear networks, prior theoretical research has shown that different instances of the same network architecture will learn the same representational similarity relation across the inputs saxe2014-ux; Saxe2018-bl, and that their activity patterns are connected by orthogonal transformations (assuming hierarchically structured training data, small-norm weight initialization, and a small learning rate) saxe2014-ux; Saxe2018-bl. Though many conclusions derived from linear networks generalize to non-linear networks Advani2017-fo; saxe2014-ux; Saxe2018-bl, it is unclear whether this result holds in the non-linear setting.

In this paper, we test whether different neural networks trained on the same dataset learn to represent the training data as different orthogonal transformations of some underlying shared representation. To do so, we leverage ideas developed for analyzing group-level neuroimaging data. Recently, techniques have been developed for functionally aligning different subjects to a shared representational space directly based on brain responses Chen2015-mi; Haxby2011-uf. Here, we propose to construct the shared representational space across neural networks with the shared response model (SRM) Chen2015-mi, a method for functionally aligning neuroimaging data across subjects Anderson2016-xh; Guntupalli2016-so; Haxby2011-uf; Vodrahalli2018-kw. SRM maps different subjects’ data to a shared space through matrices with orthonormal columns. In our work, we use SRM to show that, in some cases, orthogonal matrices are sufficient for constructing a shared representational space across activity patterns from different networks; that is, different networks learn different rigid-body transformations of the same underlying representation. This result is consistent with the theoretical predictions for deep linear networks saxe2014-ux; Saxe2018-bl, as well as with prior empirical work Li2015-ur; Morcos2018-nf; Raghu2017-ng.

Figure 1: A) SRM aligns activity patterns from different networks to a shared space. Shown is a low-dimensional visualization of the hidden activity patterns of two networks; each point is the average activity pattern of a class in CIFAR10. Before SRM, the hidden activity patterns evoked by the same stimulus in the two networks appear distinct, but these patterns can be accurately aligned by orthogonal transformations. B) In the shared space, the inter-network RSM (iRSM) is similar to the within-network RSM (wRSM). Shown are examples of the shared-space iRSM, the native-space wRSM, and the native-space iRSM. In the shared space, the iRSM is highly similar to the wRSM averaged across (ten) networks, suggesting that the alignment is accurate; the native-space iRSM is dissimilar from the wRSM due to misalignment.

2 Methods

Here we introduce the shared response model (SRM) and the representational similarity matrix (RSM). We use SRM to construct a shared representational space in which hidden activity patterns from different networks can be meaningfully compared, and we use the RSM to quantitatively evaluate the learned transformations.

Shared Response Model (SRM). SRM is formulated as in equation (1). Given $m$ neural networks, let $X_i \in \mathbb{R}^{n_i \times d}$, $i = 1, \dots, m$, be the set of activity patterns for the $l$-th layer of network $i$, where $n_i$ is the number of units and $d$ is the number of examples. SRM seeks $S \in \mathbb{R}^{k \times d}$, a basis set for the shared space, and $W_i \in \mathbb{R}^{n_i \times k}$, the transformation matrices between the network-specific native space (the span of $X_i$) and the shared space (Fig 1A shows a schematic illustration of this process). The $W_i$ are constrained to be matrices with orthonormal columns, and $k$ is a hyperparameter that controls the dimensionality of the shared space. When $k = n_i$, $W_i$ is orthogonal, which represents a rigid-body transformation. Finally, SRM solves

$$
\min_{\{W_i\},\, S} \; \sum_{i=1}^{m} \left\| X_i - W_i S \right\|_F^2
\quad \text{s.t.} \quad W_i^\top W_i = I_k
\tag{1}
$$
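Equation (1) can be minimized by alternating between the two blocks of variables: with the $W_i$ fixed, the optimal shared response is $S = \frac{1}{m}\sum_i W_i^\top X_i$; with $S$ fixed, each $W_i$ solves an orthogonal Procrustes problem and is obtained from the SVD of $X_i S^\top$. Below is a minimal NumPy sketch of this deterministic variant (our own illustration, not the authors' implementation; the initialization and iteration count are arbitrary):

```python
import numpy as np

def fit_srm(X_list, k, n_iter=50, seed=0):
    """Sketch of deterministic SRM: find W_i (orthonormal columns) and S
    minimizing sum_i ||X_i - W_i S||_F^2, with X_i of shape (n_i, d).
    Assumes k <= n_i for every network."""
    rng = np.random.default_rng(seed)
    # Random orthonormal initialization of each W_i.
    W_list = [np.linalg.qr(rng.standard_normal((X.shape[0], k)))[0]
              for X in X_list]
    for _ in range(n_iter):
        # S-step: with orthonormal W_i, the optimum is the mean back-projection.
        S = np.mean([W.T @ X for W, X in zip(W_list, X_list)], axis=0)
        # W-step: orthogonal Procrustes solution for each network.
        W_list = []
        for X in X_list:
            U, _, Vt = np.linalg.svd(X @ S.T, full_matrices=False)
            W_list.append(U @ Vt)
    return W_list, S
```

Mature SRM implementations (probabilistic and deterministic) are available in packages such as BrainIAK; the sketch above is only meant to make the objective concrete.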

Representational Similarity Matrix (RSM). To assess the information encoded by hidden activity patterns, we use the RSM Kriegeskorte2008-md; Kriegeskorte2013-xl, a method for comparing neural representations across different systems (e.g. monkey vs. human). Let $X \in \mathbb{R}^{n \times d}$ be the matrix of activity patterns for a neural network layer, where each column of $X$ is an activity pattern evoked by an input. The within-network RSM of $X$ is the correlation matrix of its columns; without loss of generality, we assume $X$ to be column-wise normalized, so $\mathrm{RSM}(X) = X^\top X$. The RSM is a $d \times d$ matrix that reflects all pairwise similarities of the hidden activity patterns evoked by different inputs. We define the inter-network RSM of two networks as $X_1^\top X_2$. Figure 1B shows the RSMs from ten standard ConvNets trained on CIFAR10 for demonstration.
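As a concrete illustration (a sketch with our own function names; "column-wise normalized" is taken here to mean each pattern is z-scored, so that inner products become correlations), the within- and inter-network RSMs can be computed as follows:

```python
import numpy as np

def normalize_columns(X):
    """Z-score each column (one activity pattern per column) so that
    X.T @ X becomes a correlation matrix."""
    Xc = X - X.mean(axis=0, keepdims=True)
    return Xc / np.linalg.norm(Xc, axis=0, keepdims=True)

def wrsm(X):
    """Within-network RSM: pairwise similarity of a network's own patterns."""
    Xn = normalize_columns(X)
    return Xn.T @ Xn                                          # shape (d, d)

def irsm(X1, X2):
    """Inter-network RSM: similarity of patterns across two networks,
    matched on the same d inputs."""
    return normalize_columns(X1).T @ normalize_columns(X2)    # shape (d, d)
```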

The averaged within-network RSM represents what is shared across networks. If two networks have identical activity patterns ($X_1 = X_2$), their inter-network RSM will be identical to the averaged within-network RSM. However, if they are “misaligned” (e.g. off by an orthogonal transformation), their inter-network RSM will differ from the averaged within-network RSM. For example, consider two sets of patterns $X$ and $Y = QX$, where $Q$ is orthogonal: the within-network RSMs are identical ($Y^\top Y = X^\top Q^\top Q X = X^\top X$), but the inter-network RSM $X^\top Y = X^\top Q X$ is in general different from $X^\top X$. With this observation, we use the correlation between the inter-network RSM and the within-network RSM to assess the quality of SRM alignment.
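One way to compute this alignment score is as the correlation between the off-diagonal entries of the iRSM and of the averaged wRSM. The hypothetical sketch below (reusing wrsm and irsm from the previous snippet) also demonstrates how an orthogonal misalignment hides the shared geometry in native space:

```python
import numpy as np

# Assumes wrsm() and irsm() from the sketch above.

def alignment_score(X1, X2):
    """Correlation between the inter-network RSM and the averaged
    within-network RSM, over the off-diagonal (upper-triangular) entries."""
    iu = np.triu_indices(X1.shape[1], k=1)
    mean_wrsm = 0.5 * (wrsm(X1) + wrsm(X2))
    return np.corrcoef(irsm(X1, X2)[iu], mean_wrsm[iu])[0, 1]

# Toy demonstration: an orthogonal "misalignment" leaves each wRSM intact
# but scrambles the iRSM computed in the native spaces.
rng = np.random.default_rng(0)
X = rng.standard_normal((128, 10))                    # 128 units, 10 inputs
Q = np.linalg.qr(rng.standard_normal((128, 128)))[0]  # random orthogonal map
print(alignment_score(X, X))                          # 1.0: perfectly aligned
print(alignment_score(X, Q @ X))                      # typically much lower
```

After fitting SRM, the same score can be computed on the projected patterns $W_i^\top X_i$ to quantify how well the learned orthogonal maps recover the shared space.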

3 Results

The connection between SRM and representational similarity. We start by establishing a theoretical connection between SRM and the RSM: if two sets of activity patterns $X_1$, $X_2$ have identical RSMs, then $X_1$, $X_2$ can be represented as different orthogonal transformations of the same underlying shared representation. Namely, there exist $W_1$, $W_2$, and $S$ such that $X_1 = W_1 S$ and $X_2 = W_2 S$, with $W_1^\top W_1 = I$ and $W_2^\top W_2 = I$. We prove this in the case of two networks; the generalization to $m$ networks is straightforward.

Proposition 1.

For two sets of activity patterns $X_1$ and $X_2$, $\mathrm{RSM}(X_1) = \mathrm{RSM}(X_2)$ if and only if $X_1$ and $X_2$ can be represented as different orthogonal transformations of the same shared representation $S$.

Proof: For the forward direction, assume $X_1^\top X_1 = X_2^\top X_2$. Let $X_1 = U_1 \Sigma_1 V_1^\top$ and $X_2 = U_2 \Sigma_2 V_2^\top$ be compact SVDs. The assumption can be rewritten in terms of the SVDs: $V_1 \Sigma_1^2 V_1^\top = V_2 \Sigma_2^2 V_2^\top$. Under a generic setting, the eigenvalues will be distinct with probability one, so the eigendecompositions of the corresponding covariance matrices are unique (up to sign flips of the eigenvectors, which can be absorbed into $U_1$ and $U_2$). Therefore, we have that $V_1 = V_2$ and $\Sigma_1 = \Sigma_2$. Let $V := V_1 = V_2$ and $\Sigma := \Sigma_1 = \Sigma_2$. Now, we can rewrite $X_1$ and $X_2$ as $X_1 = U_1 \Sigma V^\top$ and $X_2 = U_2 \Sigma V^\top$. Finally, let $S = \Sigma V^\top$, $W_1 = U_1$, and $W_2 = U_2$. By construction, this is an SRM solution that perfectly aligns $X_1$ and $X_2$.
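The construction in the forward direction is easy to verify numerically. The sketch below (our own; the RSMs are taken as raw Gram matrices $X^\top X$, i.e. column normalization is omitted for brevity) builds $S$, $W_1$, $W_2$ from the compact SVDs of two pattern matrices that have identical RSMs by construction:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n1, n2 = 10, 64, 80

# Build X1, X2 with identical RSMs: both are orthonormal "embeddings"
# of the same underlying d x d pattern matrix A.
A = rng.standard_normal((d, d))
Q1 = np.linalg.qr(rng.standard_normal((n1, d)))[0]   # orthonormal columns
Q2 = np.linalg.qr(rng.standard_normal((n2, d)))[0]
X1, X2 = Q1 @ A, Q2 @ A
assert np.allclose(X1.T @ X1, X2.T @ X2)             # identical Gram matrices

# Construction from the proof: the compact SVDs share V and Sigma.
U1, s1, V1t = np.linalg.svd(X1, full_matrices=False)
U2, s2, V2t = np.linalg.svd(X2, full_matrices=False)
# Resolve the sign ambiguity of the singular vectors before comparing,
# absorbing the flips into U2 so that U2 @ diag(s2) @ V2t is unchanged.
signs = np.sign(np.sum(V1t * V2t, axis=1))
U2, V2t = U2 * signs, V2t * signs[:, None]
assert np.allclose(s1, s2) and np.allclose(V1t, V2t)

S = np.diag(s1) @ V1t                                 # shared representation
W1, W2 = U1, U2                                       # orthonormal maps
assert np.allclose(X1, W1 @ S) and np.allclose(X2, W2 @ S)
```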

For the converse, assume there is an SRM solution that achieves a perfect alignment of $X_1$ and $X_2$. Namely, $X_1 = W_1 S$ and $X_2 = W_2 S$, with $W_1^\top W_1 = I$ and $W_2^\top W_2 = I$, for some shared $S$. Then,

$$
\mathrm{RSM}(X_1) = X_1^\top X_1 = S^\top W_1^\top W_1 S = S^\top S = S^\top W_2^\top W_2 S = X_2^\top X_2 = \mathrm{RSM}(X_2).
\tag{2}
$$
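A correspondingly short numerical check of equation (2) (again treating the RSM as a raw Gram matrix):

```python
import numpy as np

rng = np.random.default_rng(2)
k, d, n1, n2 = 16, 10, 64, 80
S = rng.standard_normal((k, d))                      # shared representation
W1 = np.linalg.qr(rng.standard_normal((n1, k)))[0]   # orthonormal columns
W2 = np.linalg.qr(rng.standard_normal((n2, k)))[0]
X1, X2 = W1 @ S, W2 @ S
# Equation (2): S^T W_i^T W_i S = S^T S for both networks.
assert np.allclose(X1.T @ X1, X2.T @ X2)
```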