Explanatory models in neuroscience: Part 1 – taking mechanistic abstraction seriously

04/03/2021
by Rosa Cao, et al.

Despite the recent success of neural network models in mimicking animal performance on visual perceptual tasks, critics worry that these models fail to illuminate brain function. We take it that a central approach to explanation in systems neuroscience is mechanistic modeling, where understanding the system is taken to require fleshing out the parts, organization, and activities of the system, and how these give rise to behaviors of interest. However, it remains somewhat controversial what it means for a model to describe a mechanism, and whether neural network models qualify as explanatory. We argue that certain kinds of neural network models are actually good examples of mechanistic models, when the right notion of mechanistic mapping is deployed. Building on existing work on model-to-mechanism mapping (3M), we describe criteria delineating such a notion, which we call 3M++. These criteria require us, first, to identify a level of description that is abstract yet detailed enough to be "runnable", and then to construct model-to-brain mappings using the same principles as those employed for brain-to-brain mapping across individuals. Perhaps surprisingly, the abstractions required are those already in use in experimental neuroscience, and are of the kind deployed in the construction of more familiar computational models, just as the principles of inter-brain mapping are very much in the spirit of those already employed in the collection and analysis of data across animals. In a companion paper, we address the relationship between optimization and intelligibility in the context of functional evolutionary explanations. Taken together, mechanistic interpretations of computational models and the dependencies between form and function illuminated by optimization processes can help us to understand why brain systems are built the way they are.
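The second 3M++ criterion, mapping model activity onto brain activity with the same tools used to compare brains across individuals, is commonly operationalized in systems neuroscience with methods such as representational similarity analysis (RSA). The sketch below illustrates that idea on synthetic data; RSA is one standard choice rather than anything the paper prescribes, and every variable name and dimension here is a placeholder.

    # A minimal RSA sketch (not from the paper): compare a model layer to a
    # recorded neural population via their representational geometries.
    # All data below are synthetic placeholders.
    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)

    n_stimuli = 50                                    # stimuli shown to both systems
    model_acts = rng.normal(size=(n_stimuli, 256))    # hypothetical layer activations
    neural_resp = rng.normal(size=(n_stimuli, 80))    # hypothetical recorded responses

    def rdm(responses):
        # Representational dissimilarity matrix: 1 - Pearson r between the
        # response patterns evoked by each pair of stimuli (rows = stimuli).
        return 1.0 - np.corrcoef(responses)

    def rsa_score(a, b):
        # Spearman correlation between the upper triangles of two RDMs.
        iu = np.triu_indices(a.shape[0], k=1)
        rho, _ = spearmanr(a[iu], b[iu])
        return rho

    print(f"model-to-brain RSA score: {rsa_score(rdm(model_acts), rdm(neural_resp)):.3f}")

Because the same score can be computed between two animals' recordings, the procedure treats model-to-brain and brain-to-brain comparisons symmetrically, which is exactly the parallel the abstract draws.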


Related research

Explanatory models in neuroscience: Part 2 – constraint-based intelligibility (04/03/2021)
Artificial neural networks for neuroscientists: A primer (06/01/2020)
The curious case of developmental BERTology: On sparsity, transfer learning, generalization and the brain (07/07/2020)
Deep Learning for Cognitive Neuroscience (03/04/2019)
Barron Spaces and the Compositional Function Spaces for Neural Network Models (06/18/2019)
Relating transformers to models and neural representations of the hippocampal formation (12/07/2021)
A Review on Neural Network Models of Schizophrenia and Autism Spectrum Disorder (06/24/2019)
