Some Theoretical Properties of a Network of Discretely Firing Neurons

05/03/2015
by Stephen Luttrell, et al.

The problem of optimising a network of discretely firing neurons is addressed. An objective function is introduced which measures the average number of bits that the network needs to encode its state. Minimising this objective function is shown to lead to a number of results, such as topographic mappings, a piecewise-linear dependence of each neuron's firing probability on the input, and factorial encoder networks.
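As a rough sketch of the kind of quantity involved (the notation here is illustrative and not taken from the paper itself): let x denote the network input and y the discrete firing pattern it produces, with Pr(y|x) an assumed stochastic encoder and Q(y) an assumed code model used to assign code lengths to firing patterns. The average number of bits needed to encode the network state can then be written as

    L = -\,\mathbb{E}_{x \sim \Pr(x)}\!\left[\, \sum_{y} \Pr(y \mid x)\, \log_{2} Q(y) \,\right]

Minimising a quantity of this general form over both the encoder Pr(y|x) and the code model Q(y) favours firing patterns that are cheap to describe, which is consistent with the abstract's description of an objective measuring the average number of bits needed to encode the network state.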

Related research

07/06/2006: Modelling the Probability Density of Markov Sources
This paper introduces an objective function that seeks to minimise the a...

10/15/2004: Self-Organised Factorial Encoding of a Toroidal Manifold
It is shown analytically how a neural network can be used optimally to e...

10/13/2016: Why Deep Neural Networks for Function Approximation?
Recently there has been much interest in understanding why deep neural n...

01/25/2021: Approximating Probability Distributions by ReLU Networks
How many neurons are needed to approximate a target probability distribu...

06/06/2018: The effect of the choice of neural network depth and breadth on the size of its hypothesis space
We show that the number of unique function mappings in a neural network ...

02/15/2021: And/or trade-off in artificial neurons: impact on adversarial robustness
Since its discovery in 2013, the phenomenon of adversarial examples has ...

03/31/2021: Neural Response Interpretation through the Lens of Critical Pathways
Is critical input information encoded in specific sparse pathways within...
