Comparing representational geometries using the unbiased distance correlation

07/06/2020
by Jörn Diedrichsen, et al.

Representational similarity analysis (RSA) tests models of brain computation by investigating how neural activity patterns change in response to different experimental conditions. Instead of predicting activity patterns directly, the models predict the geometry of the representation, i.e. to what extent experimental conditions are associated with similar or dissimilar activity patterns. RSA therefore first quantifies the representational geometry by calculating a dissimilarity measure for all pairs of conditions, and then compares the estimated representational dissimilarities to those predicted by the model. Here we address two central challenges of RSA: First, dissimilarity measures such as the Euclidean, Mahalanobis, and correlation distance are biased by measurement noise, which can lead to incorrect inferences. Unbiased dissimilarity estimates can be obtained by cross-validation, at the price of increased variance. Second, the pairwise dissimilarity estimates are not statistically independent. Ignoring this dependency makes model comparison with RSA statistically suboptimal. We present an analytical expression for the mean and (co-)variance of both biased and unbiased estimators of the Euclidean and Mahalanobis distance, allowing us to exactly quantify the bias-variance trade-off. We then use the analytical expression for the covariance of the dissimilarity estimates to derive a simple method to correct for this covariance. Combining unbiased distance estimates with this correction leads to a novel criterion for comparing representational geometries, the unbiased distance correlation, which, as we show, allows for near-optimal model comparison.
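A minimal sketch in Python/NumPy of the basic idea described above: a cross-validated (unbiased) squared Euclidean distance between two conditions, and a plain Pearson correlation between the estimated and model-predicted dissimilarity vectors. The function names (crossval_sq_euclidean, rdm_correlation) are illustrative only; this sketch omits the Mahalanobis noise-prewhitening and the covariance correction the paper derives, and is not the authors' implementation.

```python
import numpy as np
from itertools import combinations

def crossval_sq_euclidean(patterns_a, patterns_b):
    """Cross-validated (unbiased) squared Euclidean distance estimate.

    patterns_a, patterns_b : arrays of shape (n_partitions, n_channels)
        Independent pattern estimates (e.g. one per imaging run) for the
        two conditions being compared.
    """
    diffs = patterns_a - patterns_b          # per-partition difference patterns
    n_part, n_chan = diffs.shape
    # Average the inner product of difference patterns over all pairs of
    # *distinct* partitions: because measurement noise is independent across
    # partitions, its contribution averages out and the estimate is unbiased
    # (it can therefore be negative for truly identical patterns).
    prods = [diffs[i] @ diffs[j] for i, j in combinations(range(n_part), 2)]
    return np.mean(prods) / n_chan           # normalize by channel count

def rdm_correlation(estimated_rdm, model_rdm):
    """Pearson correlation between estimated and predicted dissimilarity vectors
    (the upper triangles of the two representational dissimilarity matrices)."""
    return np.corrcoef(estimated_rdm, model_rdm)[0, 1]

# Toy example: two conditions, 8 runs, 50 channels, additive noise.
rng = np.random.default_rng(0)
true_a, true_b = rng.standard_normal((2, 50))
runs_a = true_a + rng.standard_normal((8, 50))
runs_b = true_b + rng.standard_normal((8, 50))
d_unbiased = crossval_sq_euclidean(runs_a, runs_b)  # approx ||true_a - true_b||^2 / 50
```

A naive (non-cross-validated) distance computed from the run-averaged patterns would instead be inflated by the noise variance, which is the bias the paper addresses.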

Related research

05/05/2020  Bias-Variance Tradeoffs in Joint Spectral Embeddings
Latent position models and their corresponding estimation procedures off...

08/25/2020  Unbiased estimator for the variance of the leave-one-out cross-validation estimator for a Bayesian normal model with fixed variance
When evaluating and comparing models using leave-one-out cross-validatio...

05/26/2021  The statistical advantage of automatic NLG metrics at the system level
Estimating the expected output quality of generation systems is central ...

05/01/2023  An unbiased non-parametric correlation estimator in the presence of ties
An inner-product Hilbert space formulation of the Kemeny distance is def...

05/11/2022  De-biasing "bias" measurement
When a model's performance differs across socially or culturally relevan...

04/02/2018  Calibration of Sobol indices estimates in case of noisy output
This paper presents a simple noise correction method for Sobol' indices ...

11/13/2017  Checking validity of monotone domain mean estimators
Estimates of population characteristics such as domain means are often e...
