A Unified Theory of Early Visual Representations from Retina to Cortex through Anatomically Constrained Deep CNNs

01/03/2019
by Jack Lindsey, et al.

The visual system is hierarchically organized to process visual information in successive stages. Neural representations vary drastically across the first stages of visual processing: at the output of the retina, ganglion cell receptive fields (RFs) exhibit a clear antagonistic center-surround structure, whereas in the primary visual cortex, typical RFs are sharply tuned to a precise orientation. There is currently no unified theory explaining these differences in representations across layers. Here, using a deep convolutional neural network trained on image recognition as a model of the visual system, we show that such differences in representation can emerge as a direct consequence of different neural resource constraints on the retinal and cortical networks, and we find a single model from which both RF geometries spontaneously emerge at the appropriate stages of visual processing. The key constraint is a reduced number of neurons at the retinal output, consistent with the anatomy of the optic nerve as a stringent bottleneck. We further find that, for simple cortical networks, visual representations at the retinal output emerge as nonlinear and lossy feature detectors, whereas they emerge as linear and faithful encoders of the visual scene for more complex cortices. This result predicts that the retinas of small vertebrates should perform sophisticated nonlinear computations, extracting features directly relevant to behavior, whereas retinas of large animals such as primates should mostly encode the visual scene linearly and respond to a much broader range of stimuli. These predictions could reconcile the two seemingly incompatible views of the retina as either performing feature extraction or efficient coding of natural scenes, by suggesting that all vertebrates lie on a spectrum between these two objectives, depending on the degree of neural resources allocated to their visual system.
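The architecture described above is a standard image-classification CNN split into a "retinal" sub-network and a "cortical" sub-network, with two anatomical knobs: a severe channel bottleneck at the retinal output (the optic nerve) and a variable number of cortical layers. The PyTorch sketch below illustrates this family of models only; the class name RetinaCortexNet, the kernel sizes, channel counts, input resolution, and parameter names (bottleneck_channels, cortex_layers) are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class RetinaCortexNet(nn.Module):
    """Sketch of an anatomically constrained CNN: a convolutional 'retina'
    ending in a narrow channel bottleneck (the optic nerve), followed by a
    'cortex' of variable depth and a classification head."""

    def __init__(self, bottleneck_channels=1, cortex_layers=2, n_classes=10):
        super().__init__()
        # Retinal network: two conv layers, the second squeezed down to very
        # few channels to emulate the limited capacity of the optic nerve.
        self.retina = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv2d(32, bottleneck_channels, kernel_size=9, padding=4), nn.ReLU(),
        )
        # Cortical network: depth is a free parameter; deeper "cortices" are
        # the regime in which the retinal code becomes more linear.
        cortex = []
        in_ch = bottleneck_channels
        for _ in range(cortex_layers):
            cortex += [nn.Conv2d(in_ch, 32, kernel_size=9, padding=4), nn.ReLU()]
            in_ch = 32
        self.cortex = nn.Sequential(*cortex)
        # Classification head trained on an image-recognition objective.
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(in_ch, n_classes))

    def forward(self, x):
        return self.head(self.cortex(self.retina(x)))

# Example: a tight bottleneck (1 channel) and a shallow cortex.
model = RetinaCortexNet(bottleneck_channels=1, cortex_layers=1)
out = model(torch.randn(8, 3, 32, 32))   # shape: (8, n_classes)
```

Varying bottleneck_channels corresponds to the optic-nerve constraint discussed in the abstract, and varying cortex_layers probes the simple-versus-complex cortex comparison; after training on image recognition, the retinal filters of such models can be inspected for center-surround versus oriented structure.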

Related research

- Using deep learning to reveal the neural code for images in primary visual cortex (06/19/2017)
  Primary visual cortex (V1) is the first stage of cortical image processi...
- Efficient Visual Coding: From Retina To V2 (12/20/2013)
  The human visual system has a hierarchical structure consisting of layer...
- A rotation-equivariant convolutional neural network model of primary visual cortex (09/27/2018)
  Classical models describe primary visual cortex (V1) as a filter bank of...
- Revealing Fine Structures of the Retinal Receptive Field by Deep Learning Networks (11/06/2018)
  Deep convolutional neural networks (CNNs) have demonstrated impressive p...
- A Single Model Explains both Visual and Auditory Precortical Coding (02/26/2016)
  Precortical neural systems encode information collected by the senses, b...
- Deep Learning Models of the Retinal Response to Natural Scenes (02/06/2017)
  A central challenge in neuroscience is to understand neural computations...
- A Model for Image Segmentation in Retina (05/06/2020)
  While traditional feed-forward filter models can reproduce the rate resp...
