Self Organizing Maps

What are Self-Organizing Maps?

A self-organizing map (SOM) is a neural network trained with unsupervised competitive learning among its nodes. Instead of relying on backpropagation or another error-driven weight-adjustment approach to fire and update neurons, an extra feedback path connects all nodes, regardless of synapse connection strength. Each neuron then competes with the others in its layer to be activated. This is a winner-take-all approach: only one “winning” node is activated at a time. The idea is that, regardless of the type of input data, the network can self-organize and output classifications as a discrete, coordinate-defined map (lattice) for further processing by other layers.


How do Self-Organizing Maps Work?

This competition between nodes is possible because the neural network includes extra negative feedback paths (lateral inhibition connections) between them. The ultimate goal is to convert an incoming signal pattern of arbitrary dimensionality into a one- or two-dimensional topological map that is easier to analyze statistically.
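As a minimal sketch of this setup, a SOM can be represented as a 2-D lattice of nodes, each holding a weight vector of the same dimensionality as the input. The lattice size (10×10) and input dimensionality (3) below are illustrative assumptions, not fixed by the technique:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

lattice_shape = (10, 10)  # the 2-D topological map of nodes
input_dim = 3             # dimensionality of the incoming signal pattern

# One randomly initialized weight vector per lattice node.
weights = rng.random(lattice_shape + (input_dim,))

print(weights.shape)  # (10, 10, 3)
```

Training then consists of repeatedly presenting input vectors and letting the nodes compete, as described below.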

While there are several mechanisms for deciding which neuron wins the competition, generally a discriminant function is used. Most often, this function is the squared Euclidean distance between the input vector and each node’s weight vector. The node whose weight vector comes nearest to the input vector wins and is activated.
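The winner selection described above can be sketched in a few lines of NumPy. The function name `find_winner` is illustrative; the discriminant is the squared Euclidean distance from the text:

```python
import numpy as np

def find_winner(weights, x):
    """Return the (row, col) of the lattice node whose weight vector
    is nearest to input x under squared Euclidean distance."""
    sq_dist = np.sum((weights - x) ** 2, axis=-1)  # distance per node
    return np.unravel_index(np.argmin(sq_dist), sq_dist.shape)

# Tiny example: the node at (1, 0) has the closest weight vector.
weights = np.zeros((2, 2, 2))
weights[1, 0] = [1.0, 1.0]
winner = find_winner(weights, np.array([0.9, 1.1]))
```

Because only the nearest node “fires,” this implements the winner-take-all competition directly, without any lateral connections being simulated explicitly.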

With this approach, regardless of the type of data coming in, the nodes self-organize to respond to stimuli (input patterns). The locations of the tuned (winning) nodes form a coordinate system for the input features on the lattice map. Because it maps high-dimensional data onto a low-dimensional grid while preserving topology, the entire process is sometimes described as a non-linear generalization of principal component analysis (PCA).
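The pieces above can be combined into a complete (if simplified) training loop. This sketch assumes a Gaussian neighborhood function and linearly decaying learning rate and radius, which are common choices but not the only ones; all names and hyperparameters here are illustrative:

```python
import numpy as np

def train_som(data, rows=10, cols=10, n_steps=1000,
              lr0=0.5, sigma0=3.0, seed=0):
    """Train a SOM on `data` (shape: n_samples x input_dim)."""
    rng = np.random.default_rng(seed)
    weights = rng.random((rows, cols, data.shape[1]))
    # Lattice coordinates of every node, for neighborhood distances.
    coords = np.stack(
        np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"),
        axis=-1)
    for t in range(n_steps):
        x = data[rng.integers(len(data))]
        # Winner: node with the smallest squared Euclidean distance.
        sq = np.sum((weights - x) ** 2, axis=-1)
        win = np.unravel_index(np.argmin(sq), sq.shape)
        # Decay learning rate and neighborhood radius over time.
        frac = t / n_steps
        lr = lr0 * (1 - frac)
        sigma = sigma0 * (1 - frac) + 0.5
        # Gaussian neighborhood on the lattice, centered at the winner,
        # so nearby nodes are pulled toward x along with the winner.
        d2 = np.sum((coords - np.array(win)) ** 2, axis=-1)
        h = np.exp(-d2 / (2 * sigma ** 2))[..., None]
        weights += lr * h * (x - weights)
    return weights

# Usage: fit a 10x10 map to 50 random 3-D points.
data = np.random.default_rng(1).random((50, 3))
som = train_som(data)
```

After training, nearby lattice nodes respond to similar inputs, which is the topology-preserving property that invites the comparison to non-linear PCA.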