Statistics of Visual Responses to Object Stimuli from Primate AIT Neurons to DNN Neurons

12/12/2016
by Qiulei Dong, et al.

Cadieu et al. (Cadieu, 2014) reported that deep neural networks (DNNs) could rival the representation of the primate inferotemporal (IT) cortex for object recognition. Lehky et al. (Lehky, 2011) provided a statistical analysis of neural responses to object stimuli in the primate anterior inferotemporal (AIT) cortex, and found that the intrinsic dimensionality of object representations in AIT cortex is around 100 (Lehky, 2014). Considering the outstanding performance of DNNs in object recognition, it is worthwhile to investigate whether DNN neurons have response statistics similar to those of AIT neurons. Following Lehky et al.'s work, we analyze the response statistics of DNN neurons to image stimuli and the intrinsic dimensionality of their object representations. Our findings show that, in terms of kurtosis and Pareto tail index, the response statistics of DNN neurons for single-neuron selectivity and population sparseness are fundamentally different from those of IT neurons, except in some special cases. Moreover, when the numbers of neurons and stimuli are increased, the conclusions could change substantially. In addition, as one ascends the convolutional layers of a DNN, the single-neuron selectivity and population sparseness of its neurons increase, indicating that the last convolutional layer learns features for object representation, while the subsequent fully-connected layers learn categorization features. It is also found that a sufficiently large number of stimuli and neurons is necessary for obtaining a stable dimensionality estimate. To our knowledge, this is the first work to analyze the response statistics of DNN neurons in comparison with those of AIT neurons. Our results not only provide some insights into the discrepancy between DNN neurons and IT neurons in object representation, but also shed some light on the possible behavior of IT neurons when the numbers of recorded neurons and stimuli go beyond the levels in (Lehky, 2011, 2014).
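The abstract refers to kurtosis and a Pareto tail index as measures of single-neuron selectivity (computed per neuron, across stimuli) and population sparseness (computed per stimulus, across neurons). The snippet below is a minimal sketch of how such statistics can be computed from a matrix of DNN activations; it is not the authors' actual pipeline, and the array shape, the Hill-type tail estimator, the tail fraction, and the toy gamma-distributed data are all assumptions made for illustration.

```python
# Minimal sketch (not the authors' exact procedure): excess kurtosis across
# stimuli for each neuron (single-neuron selectivity) and across neurons for
# each stimulus (population sparseness), plus a Hill-type Pareto tail-index
# estimate. Shapes, tail fraction, and toy data are illustrative assumptions.
import numpy as np
from scipy.stats import kurtosis


def selectivity_and_sparseness(responses):
    """responses: array of shape (n_neurons, n_stimuli) of activations."""
    # Kurtosis over stimuli for each neuron -> single-neuron selectivity.
    single_neuron = kurtosis(responses, axis=1, fisher=True, bias=False)
    # Kurtosis over neurons for each stimulus -> population sparseness.
    population = kurtosis(responses, axis=0, fisher=True, bias=False)
    return single_neuron, population


def hill_tail_index(x, tail_frac=0.1):
    """Hill estimator of the Pareto tail index from the largest tail_frac of x."""
    x = np.sort(np.asarray(x, dtype=float))
    x = x[x > 0]                      # logs require positive values
    k = max(int(tail_frac * x.size), 2)
    top = x[-k:]                      # k largest values
    ref = x[-(k + 1)]                 # (k+1)-th largest value as reference
    return 1.0 / np.mean(np.log(top / ref))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy stand-in for DNN activations: 1000 "neurons" x 806 "stimuli".
    responses = rng.gamma(shape=2.0, scale=1.0, size=(1000, 806))
    sel, spa = selectivity_and_sparseness(responses)
    print("mean single-neuron kurtosis:", sel.mean())
    print("mean population kurtosis:", spa.mean())
    print("tail index of one neuron:", hill_tail_index(responses[0]))
```

In practice, the `responses` matrix would hold the activations of one DNN layer to the stimulus set, so the same functions can be run layer by layer to track how selectivity and sparseness change through the network.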


