# Self-Organising Stochastic Encoders

The processing of mega-dimensional data, such as images, scales linearly with image size only if fixed-size processing windows are used. It would be very useful to be able to automate the process of sizing and interconnecting the processing windows. A stochastic encoder that is an extension of the standard Linde-Buzo-Gray vector quantiser, called a stochastic vector quantiser (SVQ), includes this required behaviour amongst its emergent properties, because it automatically splits the input space into statistically independent subspaces, which it then separately encodes. Various optimal SVQs have been obtained, both analytically and numerically. Analytic solutions which demonstrate how the input space is split into independent subspaces may be obtained when an SVQ is used to encode data that lives on a 2-torus (e.g. the superposition of a pair of uncorrelated sinusoids). Many numerical solutions have also been obtained, using both SVQs and chains of linked SVQs: (1) images of multiple independent targets (encoders for single targets emerge), (2) images of multiple correlated targets (various types of encoder for single and multiple targets emerge), (3) superpositions of various waveforms (encoders for the separate waveforms emerge; this is a type of independent component analysis (ICA)), (4) maternal and foetal ECGs (another example of ICA), (5) images of textures (orientation maps and dominance stripes emerge). Overall, SVQs exhibit a rich variety of self-organising behaviour, which effectively discovers the internal structure of the training data. This should have an immediate impact on "intelligent" computation, because it reduces the need for expert human intervention in the design of data processing algorithms.


## 1 Stochastic Vector Quantiser

### 1.1 Reference

Luttrell S P, 1997, Mathematics of Neural Networks: Models, Algorithms and Applications, Kluwer, Ellacott S W, Mason J C and Anderson I J (eds.), A theory of self-organising neural networks, 240-244.

### 1.2 Objective Function

#### 1.2.1 Mean Euclidean Distortion

 D ≡ ∫dx Pr(x) ∑y Pr(y|x) ∫dx′ Pr(x′|y) ∥x−x′∥² (1)

• Encode then decode: x ⟶ y ⟶ x′.

• x = input vector; y = code; x′ = reconstructed vector.

• Code vector y = (y1, y2, ⋯, yn), with each yi ∈ {1, 2, ⋯, M}.

• Pr(x) = input PDF; Pr(y|x) = stochastic encoder; Pr(x′|y) = stochastic decoder.

• ∥x−x′∥² = Euclidean reconstruction error.

#### 1.2.2 Simplify

 D = 2∫dx Pr(x) ∑y Pr(y|x) ∥x−x′(y)∥²,  x′(y) ≡ ∫dx Pr(x|y) x = ∫dx Pr(x) Pr(y|x) x / ∫dx Pr(x) Pr(y|x) (2)

• Do the x′ integration, with the decoder Pr(x′|y) given by Bayes' theorem.

• x′(y) = reconstruction vector.

• x′(y) is the solution of ∂D/∂x′(y) = 0, so it can be deduced by optimisation.
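The centroid condition above is easy to check numerically. The sketch below (all names are illustrative; a random row-normalised table stands in for a trained encoder, and the integral over Pr(x) is replaced by an average over a finite sample) computes x′(y) and the resulting distortion D:

```python
import numpy as np

# Toy data: N input vectors in d dimensions, M code indices.
rng = np.random.default_rng(0)
N, d, M = 200, 2, 4
X = rng.normal(size=(N, d))

# Hypothetical encoder table R[i, y] = Pr(y | x_i): random row-normalised
# probabilities stand in for a trained stochastic encoder.
R = rng.random((N, M))
R /= R.sum(axis=1, keepdims=True)

# Optimal reconstruction vectors: x'(y) is the posterior-weighted centroid,
# x'(y) = sum_i Pr(y|x_i) x_i / sum_i Pr(y|x_i).
centroids = (R.T @ X) / R.sum(axis=0)[:, None]

# Mean Euclidean distortion D = 2 * <sum_y Pr(y|x) ||x - x'(y)||^2>.
sq_err = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
D = 2.0 * np.mean((R * sq_err).sum(axis=1))
```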

#### 1.2.3 Constrain

 Pr(y|x) = Pr(y1|x) Pr(y2|x) ⋯ Pr(yn|x),  x′(y) = (1/n) ∑_{i=1}^{n} x′(yi) (3)

• Pr(y|x) = ∏i Pr(yi|x) implies the components of y are conditionally independent given x.

• x′(y) = (1/n) ∑i x′(yi) implies the reconstruction is a superposition of contributions x′(yi) for i = 1, 2, ⋯, n.

• The stochastic encoder samples n times from the same Pr(y|x).
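This constrained encoder can be sketched as follows; the codebook and the single Pr(y|x) used here are arbitrary placeholders rather than a trained SVQ:

```python
import numpy as np

rng = np.random.default_rng(1)
M, d, n = 4, 2, 10

codebook = rng.normal(size=(M, d))   # reconstruction vectors x'(y)
p = rng.random(M)
p /= p.sum()                         # Pr(y|x) for one fixed input x

# Sample the same Pr(y|x) n times: the components of the code vector y
# are conditionally independent given x ...
y_samples = rng.choice(M, size=n, p=p)

# ... and reconstruct as the superposition (1/n) * sum_i x'(y_i).
x_recon = codebook[y_samples].mean(axis=0)

# As n grows this converges on the deterministic superposition
# sum_y Pr(y|x) x'(y).
x_limit = p @ codebook
```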

#### 1.2.4 Upper Bound

 D ≤ D1 + D2
 D1 ≡ (2/n) ∫dx Pr(x) ∑_{y=1}^{M} Pr(y|x) ∥x−x′(y)∥²
 D2 ≡ (2(n−1)/n) ∫dx Pr(x) ∥x − ∑_{y=1}^{M} Pr(y|x) x′(y)∥² (4)

• D1 is the distortion of a stochastic vector quantiser with the vector code y replaced by a scalar code y ∈ {1, 2, ⋯, M}.

• D2 is the distortion of a non-linear encoder (note that Pr(y|x) depends non-linearly on x) with a superposition term ∑_{y=1}^{M} Pr(y|x) x′(y).

• n ≫ 1: the stochastic encoder measures x accurately and D2 dominates.

• n = 1: the stochastic encoder samples poorly and D1 dominates.
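Both limiting regimes of the bound can be observed directly on a toy sample; again the encoder table here is an arbitrary placeholder, not a trained model:

```python
import numpy as np

rng = np.random.default_rng(2)
N, d, M = 100, 2, 4
X = rng.normal(size=(N, d))
R = rng.random((N, M))
R /= R.sum(axis=1, keepdims=True)                 # Pr(y|x)
C = (R.T @ X) / R.sum(axis=0)[:, None]            # x'(y): weighted centroids

sq = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)   # ||x - x'(y)||^2
sup = R @ C                                               # sum_y Pr(y|x) x'(y)

def bound_terms(n):
    # D1: scalar-code SVQ term; D2: non-linear superposition term (eq. 4).
    D1 = (2.0 / n) * np.mean((R * sq).sum(axis=1))
    D2 = (2.0 * (n - 1) / n) * np.mean(((X - sup) ** 2).sum(axis=1))
    return D1, D2

# n = 1: D2 vanishes and D1 dominates; large n: D1 -> 0 and D2 dominates.
d1_small, d2_small = bound_terms(1)
d1_large, d2_large = bound_terms(1000)
```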

## 2 Analytic Optimisation

### 2.1 References

Luttrell S P, 1999, Combining Artificial Neural Nets: Ensemble and Modular Multi-Net Systems, 235-263, Springer-Verlag, Sharkey A J C (ed.), Self-organised modular neural networks for encoding data.

Luttrell S P, 1999, An Adaptive Network For Encoding Data Using Piecewise Linear Functions, Proceedings of the 9th International Conference on Artificial Neural Networks (ICANN99), Edinburgh, 7-10 September 1999, 198-203.

### 2.2 Stationarity Conditions

#### 2.2.1 Stationarity w.r.t. x′(y)

 n∫dx Pr(x|y) x = x′(y) + (n−1) ∫dx Pr(x|y) ∑_{y′=1}^{M} Pr(y′|x) x′(y′) (5)

• The stationarity condition is ∂(D1+D2)/∂x′(y) = 0.

#### 2.2.2 Stationarity w.r.t. Pr(y′|x)

 Pr(x) Pr(y|x) ∑_{y′=1}^{M} (Pr(y′|x) − δ_{y,y′}) x′(y′) · ((1/2) x′(y′) − n x + (n−1) ∑_{y″=1}^{M} Pr(y″|x) x′(y″)) = 0 (6)

• The stationarity condition is ∂(D1+D2)/∂Pr(y|x) = 0, subject to ∑_{y=1}^{M} Pr(y|x) = 1.

• 3 types of solution: Pr(x) Pr(y|x) = 0 (trivial), Pr(y′|x) = δ_{y,y′} (ensures the factor Pr(y′|x) − δ_{y,y′} vanishes), and vanishing of the bracketed term.

### 2.3 Circle

#### 2.3.1 Input vector uniformly distributed on a circle

 x = (cosθ, sinθ),  ∫dx Pr(x) (⋯) = (1/2π) ∫₀^{2π} dθ (⋯) (7)

#### 2.3.2 Stochastic encoder PDFs symmetrically arranged around the circle

 Pr(y|θ) = p(θ − 2πy/M) (8)

#### 2.3.3 Reconstruction vectors symmetrically arranged around the circle

 x′(y) = r (cos(2πy/M), sin(2πy/M)) (9)

#### 2.3.4 Stochastic encoder PDFs overlap no more than 2 at a time

 p(θ) = { 1 for 0 ≤ |θ| ≤ π/M − s;  f(θ) for π/M − s ≤ |θ| ≤ π/M + s;  0 for |θ| ≥ π/M + s } (10)

#### 2.3.5 Stochastic encoder PDFs overlap no more than 3 at a time

 p(θ) = { f1(θ) for 0 ≤ |θ| ≤ s − π/M;  f2(θ) for s − π/M ≤ |θ| ≤ 3π/M − s;  f3(θ) for 3π/M − s ≤ |θ| ≤ π/M + s;  0 for |θ| ≥ π/M + s } (11)

### 2.4 2-Torus

#### 2.4.1 Input vector uniformly distributed on a 2-torus

 x = (x1, x2),  x1 = (cosθ1, sinθ1),  x2 = (cosθ2, sinθ2),  ∫dx Pr(x) (⋯) = (1/4π²) ∫₀^{2π} dθ1 ∫₀^{2π} dθ2 (⋯) (12)

#### 2.4.2 Joint encoding

 Pr(y|x)=Pr(y|x1,x2) (13)
• Pr(y|x1, x2) depends jointly on x1 and x2.

• Requires a separate code index for each combination of x1 and x2 values to encode x.

• For a given resolution the size of the codebook increases exponentially with input dimension.

#### 2.4.3 Factorial encoding

 Pr(y|x) = { Pr(y|x1) for y ∈ Y1;  Pr(y|x2) for y ∈ Y2 } (14)

• Y1 and Y2 are non-intersecting subsets of the allowed values of y.

• Pr(y|x) depends either on x1 or on x2, but not on both at the same time.

• Requires only a separate code index for each x1 value and for each x2 value to encode x.

• For a given resolution the size of the codebook increases linearly with input dimension.
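The two scaling claims amount to simple arithmetic. A sketch, assuming K independent subspaces each needing m code indices at the chosen resolution (m, K and the function names are illustrative, not from the source):

```python
def joint_codebook_size(m: int, K: int) -> int:
    # Joint encoding: one code index per combination of subspace states,
    # so the codebook grows exponentially with the number of subspaces.
    return m ** K

def factorial_codebook_size(m: int, K: int) -> int:
    # Factorial encoding: disjoint subsets Y_1, ..., Y_K of m indices each,
    # so the codebook grows linearly with the number of subspaces.
    return m * K

# e.g. m = 10 codes per subspace, K = 2 subspaces (the 2-torus):
sizes = (joint_codebook_size(10, 2), factorial_codebook_size(10, 2))
```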

#### 2.4.4 Stability diagram

• Fixed n, increasing M: joint encoding is eventually favoured because the size of the codebook is eventually large enough.

• Fixed M, increasing n: factorial encoding is eventually favoured because the number of samples is eventually large enough.

• Factorial encoding is encouraged by using a small codebook and sampling a large number of times.

## 3 Numerical Optimisation

### 3.1 References

Luttrell S P, 1997, to appear in Proceedings of the Conference on Information Theory and the Brain, Newquay, 20-21 September 1996, The emergence of dominance stripes and orientation maps in a network of firing neurons.

Luttrell S P, 1997, Mathematics of Neural Networks: Models, Algorithms and Applications, Kluwer, Ellacott S W, Mason J C and Anderson I J (eds.), A theory of self-organising neural networks, 240-244.

Luttrell S P, 1999, submitted to a special issue of IEEE Trans. Information Theory on Information-Theoretic Imaging, Stochastic vector quantisers.

### 3.2 Gradient Descent

#### 3.2.1 Posterior probability with infinite range neighbourhood

 Pr(y|x) = Q(x|y) / ∑_{y′=1}^{M} Q(x|y′)

• Q(x|y) ≥ 0 is needed to ensure a valid Pr(y|x).

• This does not restrict Pr(y|x) in any way.
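A minimal sketch of this normalisation, assuming only that the firing rates Q(x|y) are non-negative (the function name is illustrative):

```python
import numpy as np

def posterior(q):
    """Pr(y|x) = Q(x|y) / sum_y' Q(x|y') for non-negative firing rates."""
    q = np.asarray(q, dtype=float)
    if np.any(q < 0.0) or q.sum() == 0.0:
        raise ValueError("Q(x|y) >= 0, with at least one Q > 0, is required")
    return q / q.sum()

# Four hypothetical firing rates normalised into a posterior over y.
p = posterior([0.2, 0.5, 0.3, 1.0])
```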

#### 3.2.2 Posterior probability with finite range neighbourhood

 Pr(y|x; y′) ≡ Q(x|y) δ_{y∈N(y′)} / ∑_{y″∈N(y′)} Q(x|y″) (15)
 Pr(y|x) ≡ (1/M) ∑_{y′=1}^{M} Pr(y|x; y′) = (1/M) ∑_{y′∈N⁻¹(y)} Q(x|y) / ∑_{y″∈N(y′)} Q(x|y″) (16)

• N(y′) is the set of neurons that lie in a predefined "neighbourhood" of y′.

• N⁻¹(y) is the "inverse neighbourhood" of y, defined as N⁻¹(y) ≡ {y′ : y ∈ N(y′)}.

• The neighbourhood is used to introduce "lateral inhibition" between the firing neurons.

• This restricts Pr(y|x), but allows limited range lateral interactions to be used.

#### 3.2.3 Probability leakage

 Pr(y|x) ⟶ ∑_{y′∈L⁻¹(y)} Pr(y|y′) Pr(y′|x) (17)

• Pr(y|y′) is the amount of probability that leaks from location y′ to location y.

• L(y′) is the "leakage neighbourhood" of y′.

• L⁻¹(y) is the "inverse leakage neighbourhood" of y, defined as L⁻¹(y) ≡ {y′ : y ∈ L(y′)}.

• Leakage allows the network output to be "damaged" in a controlled way.

• When the network is optimised it automatically becomes robust with respect to such damage.

• Leakage leads to topographic ordering according to the defined neighbourhood.

• This restricts Pr(y|x), but allows topographic ordering to be obtained, and is faster to train.
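The leakage transform is a stochastic matrix applied to the posterior vector, so a valid distribution stays valid. A sketch with a hypothetical ring-shaped leakage neighbourhood (the neighbourhood weights are illustrative assumptions):

```python
import numpy as np

M = 8
rng = np.random.default_rng(3)
p = rng.random(M)
p /= p.sum()                    # Pr(y|x) before leakage

# Hypothetical leakage matrix leak[y, y2] = Pr(y|y2): probability leaks
# from y2 to its nearest neighbours on a ring of M neurons, so every
# column is itself a probability distribution.
leak = np.zeros((M, M))
for y2 in range(M):
    for dy, w in ((-1, 0.25), (0, 0.5), (1, 0.25)):
        leak[(y2 + dy) % M, y2] += w

# Leaked posterior: Pr(y|x) -> sum_{y2 in L^-1(y)} Pr(y|y2) Pr(y2|x).
p_leaked = leak @ p
```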

#### 3.2.4 Shorthand notation

 p_y ≡ Pr(y|x),  L_{y,y′} ≡ Pr(y|y′),  e_y ≡ ∥x − x′(y)∥²,  d_y ≡ x − x′(y),  d̄ ≡ x − ∑_{y=1}^{M} Pr(y|x) x′(y) (18)

• This shorthand notation simplifies the appearance of the gradients of D1 and D2.

• For instance, the leaked posterior probability of equation 17 becomes (Lp)_y.

#### 3.2.5 Derivatives w.r.t. x′(y)

 ∂D1/∂x′(y) = −(4/nM) ∫dx Pr(x) (Lᵀp)_y d_y
 ∂D2/∂x′(y) = −(4(n−1)/nM²) ∫dx Pr(x) (Lᵀp)_y d̄ (19)

• The extra factor of 1/M in ∂D2/∂x′(y) arises because there is a hidden factor of 1/M inside the d̄.

#### 3.2.6 Functional derivatives w.r.t. logQ(x|y)

 δD1 = (2/nM) ∫dx Pr(x) ∑_{y=1}^{M} δlog Q(x|y) (p_y (Le)_y − (PᵀPLe)_y)
 δD2 = (4(n−1)/nM²) ∫dx Pr(x) ∑_{y=1}^{M} δlog Q(x|y) (p_y (Ld)_y − (PᵀPLd)_y) · d̄ (20)

• Differentiate w.r.t. log Q(x|y), because the constraint Q(x|y) ≥ 0 is then satisfied automatically.

#### 3.2.7 Neural response model

 Q(x|y) = 1 / (1 + exp(−w(y)·x − b(y))) (21)

• This is a standard "sigmoid" function.

• This restricts Pr(y|x), but it is easy to implement, and leads to results similar to the ideal analytic results.
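A sketch of this response model (the weight vector and bias values are arbitrary placeholders):

```python
import numpy as np

def firing_rate(x, w, b):
    """Q(x|y) = 1 / (1 + exp(-w(y).x - b(y))): a standard sigmoid."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

# One hypothetical neuron with weight vector w(y) and bias b(y).
x = np.array([0.5, -0.2])
q = firing_rate(x, w=np.array([1.0, 2.0]), b=0.1)
# q always lies in (0, 1), so it is a valid non-negative firing rate that
# can be normalised into Pr(y|x) as in section 3.2.1.
```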

#### 3.2.8 Derivatives w.r.t. w(y) and b(y)

 ∂D1/∂(b(y), w(y)) = (2/nM) ∫dx Pr(x) (p_y (Le)_y − (PᵀPLe)_y) (1 − Q(x|y)) (1, x)
 ∂D2/∂(b(y), w(y)) = (4(n−1)/nM²) ∫dx Pr(x) (p_y (Ld)_y − (PᵀPLd)_y) · d̄ (1 − Q(x|y)) (1, x) (22)

### 3.3 Circle

#### 3.3.1 Training history

• Fixed values of n and M were used.

• The reference vectors x′(y) (for y = 1, 2, ⋯, M) are initialised close to the origin.

• The training history leads to stationary x′(y) lying just outside the unit circle.

#### 3.3.2 Posterior probabilities

• Each of the posterior probabilities Pr(y|x) (for y = 1, 2, ⋯, M) is large mainly in a 2π/M radian arc of the circle.

• There is some overlap between the Pr(y|x).

### 3.4 2-Torus

#### 3.4.1 Posterior probabilities: joint encoding

• Values of n and M lying inside the joint encoding region of the stability diagram were used.

• Each of the posterior probabilities Pr(y|x) is large mainly in a localised region of the torus.

• There is some overlap between the Pr(y|x).

#### 3.4.2 Posterior probabilities: factorial encoding

• Values of n and M lying inside the factorial encoding region of the stability diagram were used.

• Each of the posterior probabilities Pr(y|x) is large mainly in a collar-shaped region of the torus: half circle one way round the torus, and half the other way.

• There is some overlap between the Pr(y|x) that circle the same way round the torus.

• There is a localised region of overlap between each pair of Pr(y|x) that circle opposite ways round the torus.

• These localised overlap regions are the mechanism by which factorial encoding achieves a small reconstruction distortion.

### 3.5 Multiple Independent Targets

#### 3.5.1 Training data

• The targets were unit height Gaussian bumps of fixed width.

• The additive noise consisted of independent uniformly distributed variables.

#### 3.5.2 Factorial encoding

• Fixed values of n and M were used.

• Each of the reference vectors becomes large in a localised region.

• Each input vector causes a subset of the neurons to fire, corresponding to the locations of the targets.

• This is a factorial encoder because each neuron responds to only a subspace of the input.

### 3.6 Pair of Correlated Targets

#### 3.6.1 Training data

• The targets were unit height Gaussian bumps of fixed width.

#### 3.6.2 Training history: joint encoding

• Fixed values of n and M were used.

• Each of the reference vectors becomes large in a pair of localised regions.

• Each neuron responds to a small range of positions and separations of the pair of targets.

• The neurons respond jointly to the position and separation of the targets.

#### 3.6.3 Training history: factorial encoding

• Fixed values of n and M were used for each stage; this is a 2-stage encoder.

• The second encoder uses as input the posterior probability output by the first encoder.

• The objective function is the sum of the separate encoder objective functions (with equal weighting given to each).

• The presence of the second encoder affects the optimisation of the first encoder via “self-supervision”.

• Each of the reference vectors becomes large in a single localised region.

#### 3.6.4 Training history: invariant encoding

• Fixed values of n and M were used for each stage; this is a 2-stage encoder.

• During training the ratio of the weighting assigned to the first and second encoders is increased from 1:5 to 1:40.

• Each of the reference vectors becomes large in a single broad region.

• Each neuron responds only to the position (and not the separation) of the pair of targets.

• The response of the neurons is invariant w.r.t. the separation of the targets.

### 3.7 Separating Different Waveforms

#### 3.7.1 Training data

• This data is the superposition of a pair of waveforms plus noise.

• In each training vector the relative phase of the two waveforms is randomly selected.

#### 3.7.2 Training history: factorial encoding

• Fixed values of n and M were used.

• Each of the reference vectors becomes one or other of the two waveforms, and has a definite phase.

• Each neuron responds to only one of the waveforms, and then only when its phase is in a localised range.

### 3.8 Maternal + Foetal ECG

#### 3.8.1 Training data

• This data is an 8-channel ECG recording taken from a pregnant woman.

• The large spikes are the woman’s heart beat.

• The noise masks the foetus’ heartbeat.

• This data was whitened before training the neural network.

#### 3.8.2 Factorial Encoding

• Fixed values of n and M were used.

• The results shown are computed for all neurons (y = 1, 2, ⋯, M) for each 8-dimensional input vector x.

• After limited training some, but not all, of the neurons have converged.

• The broadly separated spikes indicate a neuron that responds to the mother’s heartbeat.

• The closely separated spikes indicate a neuron that responds to the foetus’ heartbeat.

### 3.9 Visual Cortex Network (VICON)

#### 3.9.1 Training Data

• This is a Brodatz texture image, whose spatial correlation length is 5-10 pixels.

#### 3.9.2 Orientation map

• Fixed values of n and M were used.

• Fixed sizes were used for the input window, the neighbourhood, and the leakage neighbourhood.

• The leakage probability was sampled from a 2-dimensional Gaussian PDF, with a fixed standard deviation in each direction.

• Each of the reference vectors typically looks like a small patch of image.

• Leakage induces topographic ordering across the array of neurons.

• This makes the array of reference vectors look like an “orientation map”.

#### 3.9.3 Sparse coding

• The trained network is used to encode and decode a typical input image.

• Left image = input.

• Middle image = posterior probability. This shows “sparse coding” with a small number of “activity bubbles”.

• Right image = reconstruction. Apart from edge effects, this is a low resolution version of the input.

#### 3.9.4 Dominance stripes

• Interdigitate a pair of training images, so that one occupies the black squares, and the other the white squares, of a "chess board".

• Preprocess this interdigitated image to locally normalise it using a finite range neighbourhood.

• Fixed values of n and M were used.

• Fixed sizes were used for the input window, the neighbourhood, and the leakage neighbourhood.

• The leakage probability was sampled from a 2-dimensional Gaussian PDF, with a fixed standard deviation in each direction.

• The dominance stripe map records for each neuron which of the 2 interdigitated images causes it to respond more strongly.

• The dominance stripes tend to run perpendicularly into the boundaries, because the neighbourhood window is truncated at the edge of the array.

## 4 References

Luttrell S P, 1997, to appear in Proceedings of the Conference on Information Theory and the Brain, Newquay, 20-21 September 1996, The emergence of dominance stripes and orientation maps in a network of firing neurons.

Luttrell S P, 1997, Mathematics of Neural Networks: Models, Algorithms and Applications, Kluwer, Ellacott S W, Mason J C and Anderson I J (eds.), A theory of self-organising neural networks, 240-244.

Luttrell S P, 1999, Combining Artificial Neural Nets: Ensemble and Modular Multi-Net Systems, 235-263, Springer-Verlag, Sharkey A J C (ed.), Self-organised modular neural networks for encoding data.

Luttrell S P, 1999, An Adaptive Network For Encoding Data Using Piecewise Linear Functions, Proceedings of the 9th International Conference on Artificial Neural Networks (ICANN99), Edinburgh, 7-10 September 1999, 198-203.

Luttrell S P, 1999, submitted to a special issue of IEEE Trans. Information Theory on Information-Theoretic Imaging, Stochastic vector quantisers.