

What is a Synapse in Machine Learning?

A synapse is the connection between nodes, or neurons, in an artificial neural network (ANN). As in a biological brain, each connection has a strength, or amplitude, called the synaptic weight, which controls how strongly one node influences the other. Multiple synapses can connect to the same neuron, with each synapse having a different level of influence (trigger) on whether that neuron is "fired" and activates the next neuron.
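The idea above can be sketched in a few lines: each synaptic weight scales its incoming signal, and the neuron's response depends on the weighted sum. The values and the threshold below are illustrative, not from the source.

```python
# Sketch: three synapses feeding one neuron; each weight sets how strongly
# its input influences the neuron. All numbers here are made up for illustration.
inputs = [0.5, 0.8, 0.2]    # signals from three upstream neurons
weights = [0.9, -0.3, 0.4]  # synaptic weights, one per connection

# Weighted sum: the combined "trigger" arriving at the neuron
total = sum(w * x for w, x in zip(weights, inputs))

# A simple threshold mimics biological firing: the neuron is active
# only if the combined input is strong enough
fired = total > 0.0
print(total, fired)
```

A negative weight, like the -0.3 above, acts as an inhibitory connection: it pushes the neuron away from firing rather than toward it.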

How are Synapses used in Neural Networks?

In ANNs, each neuron is defined by its inputs, its activation function, and its output. In machine learning terminology, a neuron is often referred to as a node. Unlike biological neurons, however, these artificial neurons do not fire; they output a value from a continuous function. The choice of activation function controls how closely this output behaves like a biological neuron's. Mathematically, the synapses feeding a node are represented as a weight vector. The weights control how the output of the previous layer is fed through the activation function of each node. Using the convenient scalability of linear algebra, all the weight vectors of the synapses for an entire layer of nodes can be represented as a single weight matrix. The resulting output of all these nodes is used as the input for the next layer of nodes, and so on throughout the neural network "brain" until the final output layer is reached.
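The weight-matrix view described above can be sketched with numpy: one matrix multiplication computes the weighted sums for every node in a layer at once. The layer sizes, random weights, and the sigmoid activation are assumptions for illustration.

```python
import numpy as np

# Sketch of one layer's forward pass, assuming a 3-input, 2-neuron layer.
# Each row of W is the synaptic weight vector for one node in the layer.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 3))    # weight matrix: 2 nodes x 3 inputs
x = np.array([0.5, 0.8, 0.2])  # outputs of the previous layer

z = W @ x                      # all weighted sums at once via linear algebra

# A continuous activation function (here the logistic sigmoid, an assumed
# choice) stands in for biological firing
a = 1.0 / (1.0 + np.exp(-z))
print(a)                       # becomes the input to the next layer
```

Because the whole layer is one matrix-vector product, stacking layers is just repeating this step with each layer's own weight matrix.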

The synapse is defined by these weights and expressed algebraically (for a linear synapse) as:

z = w₁x₁ + w₂x₂ + … + wₙxₙ = w · x

where x is the vector of inputs from the previous layer and w is the weight vector of the synapses feeding the node.

Typically, a bias term is added and the resulting values are passed through a chosen activation function before being used as inputs to the subsequent layer.
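Putting the pieces together, a full layer computes activation(Wx + b), and layers chain by feeding each output into the next. The sketch below assumes sigmoid activations, zero-initialized biases, and arbitrary layer sizes; none of these choices come from the source.

```python
import numpy as np

def dense(x, W, b):
    """One layer: weighted sum, plus bias, through a sigmoid activation
    (the sigmoid is an assumed example of an activation function)."""
    z = W @ x + b
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
x = np.array([1.0, 0.0, 0.5])  # network input

# Two stacked layers, 3 -> 4 -> 2; shapes are illustrative
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)

hidden = dense(x, W1, b1)       # first layer's output...
output = dense(hidden, W2, b2)  # ...is the second layer's input
print(output)
```

Training a network then amounts to adjusting the entries of W1, W2, b1, and b2, which is exactly the sense in which learning lives in the synaptic weights.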