
Radial Basis Function Networks

What is a Radial Basis Function Network?

A Radial Basis Function Network, or RBFN for short, is a form of neural network built around the Radial Basis Function and specialized for tasks involving non-linear classification. RBFNs differ from traditional multilayer perceptron networks in that they do not simply multiply the input vector by coefficients and sum the results. Instead, each Radial Basis Function neuron compares the input vector to a prototype stored from the training data and produces a measure of similarity. Each similarity value is then multiplied by a weight, and the weighted values are summed in the output layer. Any new input can therefore be classified by measuring the Euclidean distance between the input and the stored training prototypes.
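As a concrete illustration, the distance comparison at the heart of an RBF neuron can be sketched in a few lines of Python; the vectors here are made-up values for the example, not anything from the text:

```python
import numpy as np

def euclidean_distance(x, prototype):
    """Euclidean distance between an input vector and a stored prototype."""
    return np.linalg.norm(x - prototype)

# Hypothetical input vector and prototype vector.
x = np.array([1.0, 2.0])
c = np.array([1.5, 2.5])

print(euclidean_distance(x, c))  # -> ~0.707; shrinks as x approaches c
print(euclidean_distance(c, c))  # -> 0.0 for an exact match
```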


How does a Radial Basis Function Network work?

A Radial Basis Function Network has three main components: the input vector being classified, the Radial Basis Function neurons, and the output nodes. As referenced above, the RBF neurons are arranged in a layer, and each neuron stores a vector from the training data called a "prototype". When an input vector is processed by the RBFN, each neuron compares its prototype vector to the input and outputs a value between 0 and 1 denoting similarity. A value of 1 indicates that the input vector exactly matches the prototype; as the distance between the input and the prototype grows, the value falls exponentially toward 0. This value is also known as the neuron's "activation", and its exponential rate of decay gives the neuron's response the shape of a bell curve.
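The bell-shaped activation described above is commonly implemented as a Gaussian, phi(x) = exp(-beta * ||x - c||^2), which equals 1 when the input matches the prototype and decays toward 0 with distance. The width parameter beta below is an assumed hyperparameter chosen for illustration:

```python
import numpy as np

def rbf_activation(x, prototype, beta=1.0):
    """Gaussian RBF activation: 1 at the prototype, decaying toward 0 with distance.

    beta controls how quickly the bell curve falls off (assumed value here).
    """
    return np.exp(-beta * np.sum((x - prototype) ** 2))

c = np.array([0.0, 0.0])
print(rbf_activation(c, c))                      # exact match -> 1.0
print(rbf_activation(np.array([2.0, 0.0]), c))   # farther away -> near 0
```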

[Figure: an example of a data set with two categories; the black cross points represent training vectors from each cluster category.]

Once each neuron has produced a similarity value, the output nodes take a weighted sum of those values. By weighting the sums, the output nodes polarize the responses so that the classification becomes more distinct. For example, two output nodes representing two categories will have different weights: one node may assign a positive weight to a given RBF neuron, and the other a negative weight. With this architecture, an input vector is first scored for similarity by an RBF layer and then distinctly categorized by the set of output nodes.
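Putting the pieces together, a minimal two-category forward pass might look like the following sketch; the prototypes, output weights, and Gaussian width are all illustrative assumptions rather than values from the text:

```python
import numpy as np

def rbf_activation(x, prototype, beta=1.0):
    """Gaussian similarity between an input and a prototype (beta assumed)."""
    return np.exp(-beta * np.sum((x - prototype) ** 2))

# Hypothetical prototypes, one drawn from each cluster category.
prototypes = [np.array([0.0, 0.0]), np.array([5.0, 5.0])]

# Each output node weights the RBF responses differently: positive weight
# for its own category's neuron, negative for the other's.
weights = np.array([[ 1.0, -1.0],   # output node for category 0
                    [-1.0,  1.0]])  # output node for category 1

def classify(x):
    # RBF layer: similarity of the input to every prototype.
    phi = np.array([rbf_activation(x, c) for c in prototypes])
    # Output layer: weighted sum of similarities at each output node.
    scores = weights @ phi
    return int(np.argmax(scores))   # index of the winning category

print(classify(np.array([0.2, -0.1])))  # near the first prototype -> 0
print(classify(np.array([4.9, 5.2])))   # near the second prototype -> 1
```

The sign pattern in the weight matrix is what "polarizes" the two output nodes: each node is pushed up by its own category's neuron and pulled down by the other's.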