1 Stochastic Vector Quantiser
1.1 Reference
Luttrell S P, 1997, Mathematics of Neural Networks: Models, Algorithms and Applications, Kluwer, Ellacott S W, Mason J C and Anderson I J (eds.), A theory of self-organising neural networks, 240-244.
1.2 Objective Function
1.2.1 Mean Euclidean Distortion
(1) 

Encode then decode: x → y → x'.

x = input vector; y = code; x' = reconstructed vector.

x'(y) = code vector for y = 1, 2, …, M.

Pr(x) = input PDF; Pr(y|x) = stochastic encoder; Pr(x|y) = stochastic decoder.

||x - x'||² = Euclidean reconstruction error.
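Since equation (1) is not reproduced here, a minimal numerical sketch may help. It assumes the standard stochastic-VQ form of the mean Euclidean distortion, D = Σ_x Pr(x) Σ_y Pr(y|x) ||x - x'(y)||², evaluated on an invented discrete toy example (all numbers below are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical toy setup: 4 scalar inputs, 2 codes.
x = np.array([0.0, 1.0, 2.0, 3.0])           # input values
p_x = np.full(4, 0.25)                        # input PDF Pr(x)
p_y_given_x = np.array([[0.9, 0.1],           # stochastic encoder Pr(y|x)
                        [0.8, 0.2],
                        [0.2, 0.8],
                        [0.1, 0.9]])
xp = np.array([0.5, 2.5])                     # reconstruction vectors x'(y)

# Mean Euclidean distortion: sum_x Pr(x) sum_y Pr(y|x) ||x - x'(y)||^2
D = sum(p_x[i] * sum(p_y_given_x[i, y] * (x[i] - xp[y]) ** 2
                     for y in range(2))
        for i in range(4))
print(D)
```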
1.2.2 Simplify
(2) 

Do the integration.

x'(y) = reconstruction vector.

x'(y) is the solution of ∂D/∂x'(y) = 0, so it can be deduced by optimisation.
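For a squared-error distortion the optimising reconstruction vector is the conditional mean of the input given the code. A small check on an invented discrete example (the setup is hypothetical, not taken from the paper):

```python
import numpy as np

# Toy joint distribution Pr(x, y) = Pr(x) Pr(y|x), 4 inputs, 2 codes.
x = np.array([0.0, 1.0, 2.0, 3.0])
p_x = np.full(4, 0.25)
p_y_x = np.array([[0.9, 0.1], [0.8, 0.2], [0.2, 0.8], [0.1, 0.9]])

joint = p_x[:, None] * p_y_x                  # Pr(x, y)
# Optimal x'(y) for squared error: conditional mean of x given y.
xp_opt = (joint * x[:, None]).sum(0) / joint.sum(0)

def D(xp):
    # Mean Euclidean distortion for reconstruction vectors xp.
    return (joint * (x[:, None] - xp[None, :]) ** 2).sum()

# Numerical stationarity check: perturbing x'(y) never lowers D.
print(xp_opt, D(xp_opt) <= D(xp_opt + 0.01), D(xp_opt) <= D(xp_opt - 0.01))
```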
1.2.3 Constrain
(3) 

Pr(y|x) = Pr(y1|x) ⋯ Pr(yn|x) implies the components of y are conditionally independent given x.

x'(y) = (1/n) Σ x'(yi) implies the reconstruction is a superposition of n contributions, one for each sampled yi.

The stochastic encoder samples n times from the same Pr(y|x).
1.2.4 Upper Bound
(4) 

D1 is a stochastic vector quantiser term, with the vector code y replaced by a scalar code yi.

D2 is a nonlinear encoder term (note that Pr(y|x) depends nonlinearly on x) with a superposition term Σ_y Pr(y|x) x'(y).

Large n: the stochastic encoder measures x accurately and D2 dominates.

Small n: the stochastic encoder samples Pr(y|x) poorly and D1 dominates.
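The 1/n behaviour described above can be seen in a small identity for averaged i.i.d. reconstructions: if y1, …, yn are drawn independently from Pr(y|x) and the reconstruction is the average of the x'(yi), the expected squared error splits exactly into a (1/n)-weighted single-sample term and a (1 - 1/n)-weighted superposition term. Identifying these with D1 and D2 is my reading of the text (the constants in the paper may differ):

```python
import numpy as np

# Hypothetical toy: one input x, 3 codes with encoder probs Pr(y|x).
x = np.array([1.0, 0.0])
p = np.array([0.5, 0.3, 0.2])                        # Pr(y|x)
xp = np.array([[1.0, 0.5], [0.0, 0.0], [2.0, 1.0]])  # x'(y)

m = p @ xp                                    # mean reconstruction sum_y Pr(y|x) x'(y)
d1_single = p @ np.sum((x - xp) ** 2, axis=1) # single-sample distortion E||x - x'(y)||^2
d2_term = np.sum((x - m) ** 2)                # superposition (bias) term ||x - m||^2

for n in (1, 2, 10, 100):
    # Exact distortion of the n-sample averaged reconstruction:
    # E||x - (1/n) sum_i x'(y_i)||^2 = (1/n)*single-sample + (1 - 1/n)*superposition
    d = d1_single / n + (1 - 1 / n) * d2_term
    print(n, d)
```

As n grows the first term vanishes and only the superposition term survives, matching the statement that D2 dominates at large n.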
2 Analytic Optimisation
2.1 References
Luttrell S P, 1999, Combining Artificial Neural Nets: Ensemble and Modular Multi-Net Systems, 235-263, Springer-Verlag, Sharkey A J C (ed.), Self-organised modular neural networks for encoding data.
Luttrell S P, 1999, An Adaptive Network For Encoding Data Using Piecewise Linear Functions, Proceedings of the 9th International Conference on Artificial Neural Networks (ICANN'99), Edinburgh, 7-10 September 1999, 198-203.
2.2 Stationarity Conditions
2.2.1 Stationarity w.r.t. x'(y)
(5) 

Stationarity condition is ∂D/∂x'(y) = 0.
2.2.2 Stationarity w.r.t. Pr(y|x)
(6) 

Stationarity condition is ∂D/∂Pr(y|x) = 0, subject to Σ_y Pr(y|x) = 1 and Pr(y|x) ≥ 0.

3 types of solution: (trivial), (ensures ), and .
2.3 Circle
2.3.1 Input vector uniformly distributed on a circle
(7) 
2.3.2 Stochastic encoder PDFs symmetrically arranged around the circle
(8) 
2.3.3 Reconstruction vectors symmetrically arranged around the circle
(9) 
2.3.4 Stochastic encoder PDFs overlap no more than 2 at a time
(10) 
2.3.5 Stochastic encoder PDFs overlap no more than 3 at a time
(11) 
2.4 2-Torus
2.4.1 Input vector uniformly distributed on a 2-torus
(12) 
2.4.2 Joint encoding
(13) 

The code y depends jointly on both angular coordinates of the input.

Requires one code cell per joint combination of coordinate values to encode x.

For a given resolution the size of the codebook increases exponentially with input dimension.
2.4.3 Factorial encoding
(14) 

The two coordinate codes use non-intersecting subsets of the allowed values of y.

Each code y depends either on one coordinate or on the other, but not on both at the same time.

Requires only one subset of codes per coordinate to encode x.

For a given resolution the size of the codebook increases linearly with input dimension.
2.4.4 Stability diagram

Fixed n, increasing M: joint encoding is eventually favoured because the size of the codebook is eventually large enough.

Fixed M, increasing n: factorial encoding is eventually favoured because the number of samples is eventually large enough.

Factorial encoding is encouraged by using a small codebook and sampling a large number of times.
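The counting argument behind the exponential/linear contrast can be made concrete with a schematic sketch (M cells of resolution per dimension, d input dimensions; the function names are mine):

```python
# Codebook size needed for resolution M per dimension, input dimension d.
def joint_size(M, d):
    return M ** d          # one code per joint cell: exponential in d

def factorial_size(M, d):
    return M * d           # one subset of M codes per dimension: linear in d

for d in (1, 2, 4, 8):
    print(d, joint_size(10, d), factorial_size(10, d))
```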
3 Numerical Optimisation
3.1 References
Luttrell S P, 1997, to appear in Proceedings of the Conference on Information Theory and the Brain, Newquay, 20-21 September 1996, The emergence of dominance stripes and orientation maps in a network of firing neurons.
Luttrell S P, 1997, Mathematics of Neural Networks: Models, Algorithms and Applications, Kluwer, Ellacott S W, Mason J C and Anderson I J (eds.), A theory of self-organising neural networks, 240-244.
Luttrell S P, 1999, submitted to a special issue of IEEE Trans. Information Theory on Information-Theoretic Imaging, Stochastic vector quantisers.
3.2 Gradient Descent
3.2.1 Posterior probability with infinite range neighbourhood

A normalising denominator is needed to ensure a valid Pr(y|x).

This does not restrict Pr(y|x) in any way.
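One plausible concrete reading (the notation and the response model are assumptions here, anticipating the sigmoid model of section 3.2.7): form each neuron's nonnegative response and divide by the sum over all neurons, so the result is a valid posterior probability.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def posterior_infinite(x, W, b):
    """Pr(y|x) formed by normalising sigmoidal responses over ALL neurons.

    The global denominator is what ensures a valid probability
    (nonnegative, summing to 1 over y).
    """
    q = sigmoid(W @ x + b)        # raw response of each neuron
    return q / q.sum()

rng = np.random.default_rng(0)
W = rng.normal(size=(5, 3))       # 5 neurons, 3-dimensional input (invented sizes)
b = np.zeros(5)
p = posterior_infinite(rng.normal(size=3), W, b)
print(p.sum())
```

The finite-range version of section 3.2.2 would instead normalise each response over a neighbourhood of neurons, restricting Pr(y|x) but keeping the lateral interactions local.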
3.2.2 Posterior probability with finite range neighbourhood
(15) 
(16) 

N(y) is the set of neurons that lie in a predefined “neighbourhood” of neuron y.

N⁻¹(y) is the “inverse neighbourhood” of y, defined as N⁻¹(y) = {y′ : y ∈ N(y′)}.

Neighbourhood is used to introduce “lateral inhibition” between the firing neurons.

This restricts Pr(y|x), but allows limited range lateral interactions to be used.
3.2.3 Probability leakage
(17) 

L(y) is the “leakage neighbourhood” of y.

L⁻¹(y) is the “inverse leakage neighbourhood” of y, defined as L⁻¹(y) = {y′ : y ∈ L(y′)}.

Leakage allows the network output to be “damaged” in a controlled way.

When the network is optimised it automatically becomes robust with respect to such damage.

Leakage leads to topographic ordering according to the defined neighbourhood.

This restricts Pr(y|x), but allows topographic ordering to be obtained, and is faster to train.
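A minimal sketch of leakage as controlled damage (the circular 1-D array and the kernel values are invented for illustration): each neuron's posterior probability is spread over its leakage neighbourhood, which preserves total probability while blurring the code across nearby neurons.

```python
import numpy as np

def leak(posterior, kernel):
    """Leaked posterior: each neuron's probability is spread over its
    leakage neighbourhood, here by 1-D circular convolution with a
    normalised kernel (a hypothetical concrete choice)."""
    n = len(posterior)
    out = np.zeros(n)
    k = len(kernel) // 2
    for y, prob in enumerate(posterior):
        for j, w in enumerate(kernel):
            out[(y + j - k) % n] += prob * w
    return out

posterior = np.array([0.0, 0.0, 1.0, 0.0, 0.0])   # one neuron fires
kernel = np.array([0.25, 0.5, 0.25])              # leakage neighbourhood of size 3
print(leak(posterior, kernel))
```

Because nearby neurons now share probability mass, optimising the network under this damage pushes neighbouring neurons towards similar responses, which is the topographic-ordering mechanism described above.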
3.2.4 Shorthand notation
(18) 

This shorthand notation simplifies the appearance of the gradients of D1 and D2.

For instance, .
3.2.5 Derivatives w.r.t.
(19) 

The extra factor in arises because there is a hidden inside the .
3.2.6 Functional derivatives w.r.t.
(20)  

Differentiate w.r.t. because .
3.2.7 Neural response model
(21) 

This is a standard “sigmoid” function.

This restricts Pr(y|x), but it is easy to implement, and leads to results similar to the ideal analytic results.
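A quick check of the sigmoid response model (the weight and bias names are assumed, not taken from the paper): its derivative q(1 - q) is the factor that feeds the gradient expressions in the next subsection.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def response(x, w, b):
    # Response of one neuron to input x under the sigmoid model.
    return sigmoid(np.dot(w, x) + b)

x = np.array([0.5, -0.25])
w = np.array([1.0, 2.0])
q = response(x, w, 0.0)

# Finite-difference check of dq/db against the analytic form q*(1 - q).
eps = 1e-6
num = (response(x, w, eps) - response(x, w, -eps)) / (2 * eps)
print(abs(num - q * (1 - q)) < 1e-8)
```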
3.2.8 Derivatives w.r.t. w and b
(22)  
3.3 Circle
3.3.1 Training history

and were used.

The reference vectors x'(y) are initialised close to the origin.

The training history leads to stationary x'(y) just outside the unit circle.
3.3.2 Posterior probabilities

There is some overlap between the Pr(y|x).
3.4 2Torus
3.4.1 Posterior probabilities: joint encoding

and were used, which lies inside the joint encoding region of the stability diagram.

Each of the posterior probabilities is large mainly in a localised region of the torus.

There is some overlap between the Pr(y|x).
3.4.2 Posterior probabilities: factorial encoding

and were used, which lies inside the factorial encoding region of the stability diagram.

Each of the posterior probabilities is large mainly in a collar-shaped region of the torus; half of them circle one way round the torus, and half the other way.

There is some overlap between the Pr(y|x) that circle the same way round the torus.

There is a localised region of overlap between a pair of Pr(y|x) that circle the opposite way round the torus.

These localised overlap regions are the mechanism by which factorial encoding has a small reconstruction distortion.
3.5 Multiple Independent Targets
3.5.1 Training data

The targets were unit height Gaussian bumps with .

The additive noise consisted of uniformly distributed variables.
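A hypothetical generator in this spirit (the bump width, bump centres, and noise amplitude are invented, since the actual values are not given above):

```python
import numpy as np

def make_input(n_pixels, centres, sigma, noise_amp, rng):
    """Unit-height Gaussian bumps at the given centres, plus uniform
    additive noise (all parameter values are assumed for illustration)."""
    i = np.arange(n_pixels)
    x = np.zeros(n_pixels)
    for c in centres:
        x += np.exp(-((i - c) ** 2) / (2 * sigma ** 2))
    return x + rng.uniform(-noise_amp, noise_amp, n_pixels)

rng = np.random.default_rng(0)
x = make_input(32, centres=[8, 20], sigma=2.0, noise_amp=0.1, rng=rng)
print(x.shape)
```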
3.5.2 Factorial encoding

and were used.

Each of the reference vectors becomes large in a localised region.

Each input vector causes a subset of the neurons to fire corresponding to locations of the targets.

This is a factorial encoder because each neuron responds to only a subspace of the input.
3.6 Pair of Correlated Targets
3.6.1 Training data

The targets were unit height Gaussian bumps with .
3.6.2 Training history: joint encoding

and were used.

Each of the reference vectors becomes large in a pair of localised regions.

Each neuron responds to a small range of positions and separations of the pair of targets.

The neurons respond jointly to the position and separation of the targets.
3.6.3 Training history: factorial encoding

and were used; this is a 2-stage encoder.

The second encoder uses as input the posterior probability output by the first encoder.

The objective function is the sum of the separate encoder objective functions (with equal weighting given to each).

The presence of the second encoder affects the optimisation of the first encoder via “self-supervision”.

Each of the reference vectors becomes large in a single localised region.
3.6.4 Training history: invariant encoding

and were used; this is a 2-stage encoder.

During training the ratio of the weighting assigned to the first and second encoders is increased from 1:5 to 1:40.

Each of the reference vectors becomes large in a single broad region.

Each neuron responds only to the position (and not the separation) of the pair of targets.

The response of the neurons is invariant w.r.t. the separation of the targets.
3.7 Separating Different Waveforms
3.7.1 Training data

This data is the superposition of a pair of waveforms plus noise.

In each training vector the relative phase of the two waveforms is randomly selected.
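A hypothetical stand-in for this training data (the two waveform shapes and the noise level are invented; the paper does not specify them here): superpose two fixed periodic waveforms at independently random phases, plus a little uniform noise.

```python
import numpy as np

def make_input(n, rng):
    """Superposition of a sine and a square wave, each at an
    independently random phase, plus uniform noise (assumed choices)."""
    t = np.arange(n)
    phase1, phase2 = rng.integers(0, n, size=2)
    w1 = np.sin(2 * np.pi * (t + phase1) / n)
    w2 = np.sign(np.sin(2 * np.pi * (t + phase2) / n))
    return w1 + w2 + rng.uniform(-0.05, 0.05, n)

rng = np.random.default_rng(0)
x = make_input(64, rng)
print(x.shape)
```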
3.7.2 Training history: factorial encoding

and were used.

Each of the reference vectors becomes one or other of the two waveforms, and has a definite phase.

Each neuron responds to only one of the waveforms, and then only when its phase is in a localised range.
3.8 Maternal + Foetal ECG
3.8.1 Training data

This data is an 8-channel ECG recording taken from a pregnant woman.

The large spikes are the woman’s heart beat.

The noise masks the foetus’ heartbeat.

This data was whitened before training the neural network.
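The whitening step can be sketched as follows (a standard eigendecomposition whitening; the paper does not say which whitening variant was used): remove the mean and transform so that the channel covariance becomes the identity.

```python
import numpy as np

def whiten(X):
    """Whiten X (samples x channels): zero mean, identity covariance,
    via the eigendecomposition of the sample covariance."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / len(Xc)
    vals, vecs = np.linalg.eigh(cov)
    W = vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T
    return Xc @ W

rng = np.random.default_rng(0)
# Surrogate correlated 8-channel data standing in for the ECG recording.
X = rng.normal(size=(1000, 8)) @ rng.normal(size=(8, 8))
Z = whiten(X)
print(np.allclose(Z.T @ Z / len(Z), np.eye(8), atol=1e-6))
```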
3.8.2 Factorial Encoding

and were used.

The results shown are computed for all neurons for each 8-dimensional input vector x.

After limited training some, but not all, of the neurons have converged.

The broadly separated spikes indicate a neuron that responds to the mother’s heartbeat.

The closely separated spikes indicate a neuron that responds to the foetus’ heartbeat.
3.9 Visual Cortex Network (VICON)
3.9.1 Training Data

This is a Brodatz texture image, whose spatial correlation length is 5-10 pixels.
3.9.2 Orientation map

and were used.

Input window size = , neighbourhood size = , leakage neighbourhood size = were used.

Leakage probability was sampled from a 2-dimensional Gaussian PDF, with the same width in each direction.

Each of the reference vectors typically looks like a small patch of image.

Leakage induces topographic ordering across the array of neurons.

This makes the array of reference vectors look like an “orientation map”.
3.9.3 Sparse coding

The trained network is used to encode and decode a typical input image.

Left image = input.

Middle image = posterior probability. This shows “sparse coding” with a small number of “activity bubbles”.

Right image = reconstruction. Apart from edge effects, this is a low resolution version of the input.
3.9.4 Dominance stripes

Interdigitate a pair of training images, so that one occupies the black squares, and the other the white squares, of a “chess board”.

Preprocess this interdigitated image to locally normalise it using a finite range neighbourhood.

and were used.

Input window size = , neighbourhood size = , leakage neighbourhood size = were used.

Leakage probability was sampled from a 2-dimensional Gaussian PDF, with the same width in each direction.

The dominance stripe map records for each neuron which of the 2 interdigitated images causes it to respond more strongly.

The dominance stripes tend to run perpendicularly into the boundaries, because the neighbourhood window is truncated at the edge of the array.
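The interdigitation step described above can be sketched as a chessboard merge of two equal-size images (which image is "black" and which is "white" is an arbitrary convention here):

```python
import numpy as np

def interdigitate(a, b):
    """Combine two equal-shape images on a chessboard pattern:
    image `a` fills one colour of square, `b` the other."""
    h, w = a.shape
    mask = (np.add.outer(np.arange(h), np.arange(w)) % 2).astype(bool)
    return np.where(mask, a, b)

a = np.zeros((4, 4))
b = np.ones((4, 4))
print(interdigitate(a, b))
```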
4 References
Luttrell S P, 1997, to appear in Proceedings of the Conference on Information Theory and the Brain, Newquay, 20-21 September 1996, The emergence of dominance stripes and orientation maps in a network of firing neurons.
Luttrell S P, 1997, Mathematics of Neural Networks: Models, Algorithms and Applications, Kluwer, Ellacott S W, Mason J C and Anderson I J (eds.), A theory of self-organising neural networks, 240-244.
Luttrell S P, 1999, Combining Artificial Neural Nets: Ensemble and Modular Multi-Net Systems, 235-263, Springer-Verlag, Sharkey A J C (ed.), Self-organised modular neural networks for encoding data.
Luttrell S P, 1999, An Adaptive Network For Encoding Data Using Piecewise Linear Functions, Proceedings of the 9th International Conference on Artificial Neural Networks (ICANN'99), Edinburgh, 7-10 September 1999, 198-203.
Luttrell S P, 1999, submitted to a special issue of IEEE Trans. Information Theory on Information-Theoretic Imaging, Stochastic vector quantisers.