Distributional robustness as a guiding principle for causality in cognitive neuroscience

14 February 2020 · Sebastian Weichwald et al.

While probabilistic models describe the dependence structure between observed variables, causal models go one step further: they predict how cognitive functions are affected by external interventions that perturb neuronal activity. Inferring causal relationships from data is an ambitious task that is particularly challenging in cognitive neuroscience. Here, we discuss two difficulties in more detail: the scarcity of interventional data and the challenge of finding the right variables. We argue for distributional robustness as a guiding principle to tackle these problems. Modelling a target variable using the correct set of causal variables yields a model that generalises across environments or subjects (if these environments leave the causal mechanisms intact). Conversely, if a candidate model does not generalise, then either it includes non-causes of the target variable or the underlying variables are wrongly defined. In this sense, generalisability may serve as a guiding principle when defining relevant variables and can be used to partially compensate for the lack of interventional data.
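The idea that causal predictors generalise while non-causal ones may not can be illustrated with a small simulation. The sketch below is not from the paper; it is a minimal, hypothetical example in which a target Y is caused by X1 and itself causes X2. An intervention on the non-cause X2 in a second environment leaves the mechanism generating Y intact, so a model built on the cause keeps its predictive accuracy, while a model built on the effect breaks down.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, intervene_on_x2=False):
    # X1 causes Y; Y causes X2. The mechanism Y = 2*X1 + noise is invariant.
    x1 = rng.normal(0.0, 1.0, n)
    y = 2.0 * x1 + rng.normal(0.0, 1.0, n)
    if intervene_on_x2:
        # Intervention replaces X2 by independent noise; Y's mechanism is untouched.
        x2 = rng.normal(0.0, 1.0, n)
    else:
        x2 = y + rng.normal(0.0, 0.1, n)  # X2 is an effect (non-cause) of Y
    return x1, x2, y

def fit_and_mse(x_train, y_train, x_test, y_test):
    # Least-squares fit in one environment, mean squared error in the other.
    X = np.column_stack([x_train, np.ones_like(x_train)])
    coef, *_ = np.linalg.lstsq(X, y_train, rcond=None)
    Xt = np.column_stack([x_test, np.ones_like(x_test)])
    return float(np.mean((Xt @ coef - y_test) ** 2))

# Environment A: observational; environment B: X2 is intervened upon.
x1_a, x2_a, y_a = simulate(5000)
x1_b, x2_b, y_b = simulate(5000, intervene_on_x2=True)

mse_cause = fit_and_mse(x1_a, y_a, x1_b, y_b)   # model using the cause X1
mse_effect = fit_and_mse(x2_a, y_a, x2_b, y_b)  # model using the non-cause X2

print(f"cause-based model MSE:  {mse_cause:.2f}")
print(f"effect-based model MSE: {mse_effect:.2f}")
```

In line with the argument above, the cause-based model's test error stays near the irreducible noise level across environments, whereas the effect-based model fails once its input is perturbed; the failure to generalise flags X2 as a non-cause.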





