A Metric for Evaluating Neural Input Representation in Supervised Learning Networks

03/03/2020
by   Richard R Carrillo, et al.

Supervised learning has long been attributed to several feed-forward neural circuits within the brain, with particular attention paid to the cerebellar granular layer. The focus of this study is to evaluate the input activity representation of these feed-forward neural networks. The activity of cerebellar granule cells is conveyed by parallel fibers and translated into Purkinje cell activity, the sole output of the cerebellar cortex. The learning process at this parallel-fiber-to-Purkinje-cell connection makes each Purkinje cell sensitive to a set of specific cerebellar states, determined by the granule-cell activity during a certain time window. A Purkinje cell becomes sensitive to each neural input state and, consequently, the network operates as a function able to generate a desired output for each provided input by means of supervised learning. However, not every set of Purkinje-cell responses can be assigned to an arbitrary set of input states, owing to the network's own limitations (inherent to its neurobiological substrate); that is, not all input-output mappings can be learned. A limiting factor is the representation of the input states through granule-cell activity. The quality of this representation determines the capacity of the network to learn a varied set of outputs. In this study we present an algorithm for quantitatively evaluating the level of compatibility/interference among a set of given cerebellar states according to their representation (granule-cell activation patterns), without the need to actually conduct simulations and network training. The algorithm input consists of a real-number matrix that codifies the activity level of every considered granule cell in each state. The capability of this representation to generate a varied set of outputs is evaluated geometrically, thus resulting in a real number that assesses the goodness of the representation.
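The abstract describes the algorithm only at a high level: a real-number matrix of per-state granule-cell activity is evaluated geometrically and reduced to a single goodness score. The sketch below is a minimal Python illustration of one such geometric score, under the assumption (illustrative only, not the paper's actual metric) that interference among states can be measured by the volume spanned by their normalized activation vectors; the function name `representation_goodness` and the Gram-determinant criterion are hypothetical.

```python
# Illustrative sketch (not the paper's exact metric): score how much a set of
# cerebellar states, encoded as granule-cell activation patterns, interfere
# with each other. State vectors that are nearly collinear cannot be mapped to
# independent Purkinje-cell outputs by a linear readout, so the score reflects
# the geometry (volume) spanned by the normalized state vectors.
import numpy as np

def representation_goodness(states: np.ndarray) -> float:
    """states: (n_states, n_granule_cells) real matrix of activity levels.

    Returns a value in [0, 1]; 1 means the normalized state vectors are
    mutually orthogonal (no interference), 0 means at least one state is a
    linear combination of the others, so some input-output mappings cannot
    be realized by a linear readout.
    """
    # Normalize each state vector so only the activation *pattern* matters.
    norms = np.linalg.norm(states, axis=1, keepdims=True)
    if np.any(norms == 0):
        return 0.0  # A silent state cannot drive any distinct output.
    unit = states / norms

    # Gram matrix of pairwise inner products; its determinant equals the
    # squared volume of the parallelotope spanned by the unit state vectors.
    gram = unit @ unit.T
    volume_sq = np.linalg.det(gram)
    return float(np.clip(volume_sq, 0.0, 1.0))

# Example: two nearly identical states interfere strongly, even though a third
# distinct state is present.
well_separated = np.array([[1.0, 0.1, 0.0],
                           [0.0, 1.0, 0.1],
                           [0.1, 0.0, 1.0]])
overlapping = np.array([[1.0, 0.9, 0.0],
                        [0.9, 1.0, 0.0],
                        [0.0, 0.1, 1.0]])
print(representation_goodness(well_separated))  # close to 1
print(representation_goodness(overlapping))     # close to 0
```

In this toy formulation, a score near 1 indicates nearly orthogonal state representations that a linear Purkinje-cell readout could map to an arbitrary set of outputs, while a score near 0 indicates linearly dependent states for which some input-output mappings become unlearnable.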


