A Test for Shared Patterns in Cross-modal Brain Activation Analysis

10/08/2019
by Elena Kalinina, et al.

Determining the extent to which different cognitive modalities (understood here as the set of cognitive processes underlying the elaboration of a stimulus by the brain) rely on overlapping neural representations is a fundamental issue in cognitive neuroscience. In the last decade, the identification of shared activity patterns has mostly been framed as a supervised learning problem. For instance, a classifier is trained to discriminate categories (e.g., faces vs. houses) in modality I (e.g., perception) and tested on the same categories in modality II (e.g., imagery). This type of analysis is often referred to as cross-modal decoding. In this paper we take a different approach and instead formulate the problem of assessing shared patterns across modalities within the framework of statistical hypothesis testing. We propose both an appropriate test statistic and a permutation-based scheme to compute the significance of this test while making only minimal distributional assumptions. We call this test the cross-modal permutation test (CMPT). We also provide empirical evidence on synthetic datasets that our approach has greater statistical power than cross-modal decoding while maintaining a low Type I error rate (the rate of rejecting a true null hypothesis). We compare both approaches on an fMRI dataset with three different cognitive modalities (perception, imagery, visual search). Finally, we show how CMPT can be combined with Searchlight analysis to explore the spatial distribution of shared activity patterns.
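The abstract does not specify the paper's test statistic, so the following is only a rough sketch of the general recipe it describes: compute a cross-modal similarity statistic on the observed data, then build a null distribution by permuting category labels. The function name `cmpt_sketch` and the stand-in statistic (mean correlation between class-averaged patterns of the two modalities) are illustrative assumptions, not the authors' actual method.

```python
import numpy as np


def cmpt_sketch(X_a, X_b, labels, n_perm=1000, rng=None):
    """Label-permutation test for shared patterns across two modalities.

    X_a, X_b : (n_samples, n_voxels) activation patterns in modality A / B,
               with trials aligned so that ``labels`` applies to both.
    labels   : (n_samples,) category label of each trial.

    NOTE: the statistic below (mean Pearson correlation between the
    class-mean patterns of the two modalities) is a placeholder chosen
    for illustration; the paper defines its own statistic.
    """
    rng = np.random.default_rng(rng)
    classes = np.unique(labels)

    def statistic(lab_b):
        # Correlate the per-class mean pattern of A with that of B.
        corrs = []
        for c in classes:
            mean_a = X_a[labels == c].mean(axis=0)
            mean_b = X_b[lab_b == c].mean(axis=0)
            corrs.append(np.corrcoef(mean_a, mean_b)[0, 1])
        return float(np.mean(corrs))

    observed = statistic(labels)
    # Null distribution: shuffle the labels of modality B only, breaking
    # any genuine cross-modal correspondence while preserving marginals.
    null = np.array([statistic(rng.permutation(labels))
                     for _ in range(n_perm)])
    # One-sided p-value with the usual +1 correction for permutation tests.
    p_value = (1 + np.sum(null >= observed)) / (n_perm + 1)
    return observed, p_value
```

On synthetic data with a genuinely shared pattern (same class templates plus independent noise in both modalities), this sketch returns a high observed statistic and a small p-value, mirroring the kind of synthetic-data evaluation the abstract mentions.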


