When and where do feed-forward neural networks learn localist representations?

06/11/2018
by Ella M. Gale, et al.

According to parallel distributed processing (PDP) theory in psychology, neural networks (NNs) learn distributed rather than interpretable localist representations. This view has been held so strongly that few researchers have analysed single units to determine whether the assumption is correct. However, recent results from psychology, neuroscience and computer science have shown the occasional emergence of local codes in artificial and biological neural networks. In this paper, we undertake the first systematic survey of when local codes emerge in a feed-forward neural network, using generated input and output data with known properties. We find that the number of local codes that emerge from a NN follows a well-defined distribution across the number of hidden-layer neurons, with a peak determined by the size of the input data, the number of examples presented and the sparsity of the input data. Using a 1-hot output code drastically decreases the number of local codes on the hidden layer. The number of emergent local codes increases with the percentage of dropout applied to the hidden layer, suggesting that localist encoding may offer resilience in noisy networks. These results show that localist coding can emerge from feed-forward PDP networks and point to some of the conditions that may lead to interpretable localist representations in the cortex. The findings highlight that local codes should not be dismissed out of hand.
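The abstract does not specify the paper's exact selectivity criterion, but a common way to test whether a hidden unit has formed a local code is to check, per class, whether every activation the unit produces for that class exceeds every activation it produces for all other classes. The sketch below illustrates this assumed criterion on a matrix of hidden-layer activations; the function name and threshold rule are illustrative, not taken from the paper.

```python
import numpy as np

def find_local_codes(activations, labels):
    """Identify hidden units acting as local codes.

    A unit is treated as a local code for class c when its minimum
    activation over class-c inputs exceeds its maximum activation over
    all other inputs (an assumed selectivity criterion, used here for
    illustration only).

    activations : (n_examples, n_units) array of hidden-layer outputs
    labels      : (n_examples,) array of class labels
    Returns a dict mapping unit index -> class it codes for.
    """
    local = {}
    classes = np.unique(labels)
    for unit in range(activations.shape[1]):
        acts = activations[:, unit]
        for c in classes:
            in_class = acts[labels == c]    # activations for class c
            out_class = acts[labels != c]   # activations for everything else
            if in_class.min() > out_class.max():
                local[unit] = int(c)        # perfectly selective: local code
                break
    return local

# Synthetic example: unit 0 fires only for class 1, unit 1 is mixed.
labels = np.array([0, 0, 1, 1])
activations = np.array([
    [0.1, 0.5],
    [0.2, 0.9],
    [0.9, 0.4],
    [0.8, 0.6],
])
print(find_local_codes(activations, labels))  # {0: 1}
```

In a survey like the one described, this check would be run over the hidden layer of each trained network, and the count of selective units tallied against the hidden-layer width, input sparsity and dropout rate.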


