From deep learning to mechanistic understanding in neuroscience: the structure of retinal prediction

12/12/2019
by Hidenori Tanaka, et al.

Recently, deep feedforward neural networks have achieved considerable success in modeling biological sensory processing, in terms of reproducing the input-output map of sensory neurons. However, such models raise profound questions about the very nature of explanation in neuroscience. Are we simply replacing one complex system (a biological circuit) with another (a deep network), without understanding either? Moreover, beyond neural representations, are the deep network's computational mechanisms for generating neural responses the same as those in the brain? Without a systematic approach to extracting and understanding computational mechanisms from deep neural network models, it can be difficult both to assess the degree of utility of deep learning approaches in neuroscience, and to extract experimentally testable hypotheses from deep networks. We develop such a systematic approach by combining dimensionality reduction and modern attribution methods for determining the relative importance of interneurons for specific visual computations. We apply this approach to deep network models of the retina, revealing a conceptual understanding of how the retina acts as a predictive feature extractor that signals deviations from expectations for diverse spatiotemporal stimuli. For each stimulus, our extracted computational mechanisms are consistent with prior scientific literature, and in one case yield a new mechanistic hypothesis. Thus overall, this work not only yields insights into the computational mechanisms underlying the striking predictive capabilities of the retina, but also places the framework of deep networks as neuroscientific models on firmer theoretical foundations, by providing a new roadmap to go beyond comparing neural representations to extracting and understanding computational mechanisms.
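The abstract describes combining attribution methods (to rank the importance of hidden interneurons for a computation) with dimensionality reduction (to expose low-dimensional structure in those attributions). A minimal sketch of that pipeline, on a hypothetical toy two-layer model rather than the paper's actual retinal network: for a linear readout, attributing the output to each hidden unit as activation times readout weight coincides with integrated gradients from a zero baseline, and PCA over the resulting attribution vectors summarizes the mechanism across stimuli.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy feedforward model: stimulus -> hidden "interneurons" -> output neuron.
W1 = rng.normal(size=(8, 20))   # hidden weights: 8 interneurons, 20-dim stimulus
b1 = rng.normal(size=8)
w2 = rng.normal(size=8)         # linear readout onto the model output neuron

def forward(x):
    """Return hidden activations and scalar output."""
    h = np.maximum(W1 @ x + b1, 0.0)   # ReLU interneuron activations
    return h, float(w2 @ h)

def attribution(x):
    """Importance of each hidden unit: activation times readout weight.
    With a linear readout this equals integrated gradients of the output
    with respect to hidden activations, from a zero baseline, so the
    attributions sum exactly to the output (completeness)."""
    h, _ = forward(x)
    return h * w2

# Attribution vectors for a batch of stimuli, then PCA (via SVD) to find
# the low-dimensional structure of the computation across stimuli.
X = rng.normal(size=(100, 20))
A = np.stack([attribution(x) for x in X])
A_centered = A - A.mean(axis=0)
U, s, Vt = np.linalg.svd(A_centered, full_matrices=False)
var_explained = s**2 / s.sum() ** 0 / np.sum(s**2)  # fraction of variance per component
print(var_explained[:3])
```

This is an illustration of the general technique only; the model architecture, attribution rule, and stimuli here are assumptions, not the paper's.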

