A New Backpropagation Algorithm without Gradient Descent

01/25/2018 · Varun Ranganathan, et al. · PES University

The backpropagation algorithm, originally introduced in the 1970s, is the workhorse of learning in neural networks. It makes use of the well-known machine learning algorithm gradient descent, a first-order iterative optimization algorithm for finding the minimum of a function. To find a local minimum of a function using gradient descent, one takes steps proportional to the negative of the gradient (or of an approximate gradient) of the function at the current point. In this paper, we develop an alternative to backpropagation that does not use gradient descent; instead, we devise a new algorithm to find the error in the weights and biases of an artificial neuron using the Moore-Penrose Pseudo Inverse. Numerical studies and experiments performed on various datasets are used to verify the working of this alternative algorithm.


1 Introduction

An Artificial Neural Network (ANN), inspired by biological neural networks, is based on a collection of connected units or nodes called artificial neurons. These systems are learning algorithms that try to mimic how the brain works. ANNs are considered universal function approximators, that is, they can approximate the function underlying the data sent through them. The ANN is based on the multilayer perceptron [3] model, a class of feedforward artificial neural networks consisting of at least three layers. Learning occurs in the perceptron by changing connection weights after each piece of data is processed, based on the amount of error in the output compared to the expected result. This is an example of supervised learning, and is carried out through backpropagation, a generalization of the least mean squares algorithm in the linear perceptron. The multilayer perceptron model coupled with the backpropagation algorithm gave rise to the Artificial Neural Network, which can be used effectively and efficiently as a learning algorithm.



Backpropagation [1] is a method used in artificial neural networks to calculate the error contribution of each neuron after a batch of data is processed. It is commonly used by the gradient descent optimization algorithm to adjust the weights of neurons by calculating the gradient of the loss function. This technique is also sometimes called backward propagation of errors, because the error is calculated at the output and distributed back through the network layers. It requires a known, desired output for each input value, and is therefore considered a supervised learning method.



Gradient Descent [4] is an iterative approach that takes small steps to reach the local minimum of a function. It is used to update the weights and biases of each neuron in a neural network. Gradient descent is based on the observation that if the multivariable function $F(\mathbf{x})$ is defined and differentiable in a neighborhood of a point $\mathbf{a}$, then $F(\mathbf{x})$ decreases fastest if one goes from $\mathbf{a}$ in the direction of the negative gradient of $F$ at $\mathbf{a}$, that is, $-\nabla F(\mathbf{a})$. It follows that, if $\mathbf{a}_{n+1} = \mathbf{a}_n - \gamma \nabla F(\mathbf{a}_n)$ for $\gamma$ small enough, then $F(\mathbf{a}_{n+1}) \le F(\mathbf{a}_n)$.

In other words, the term $\gamma \nabla F(\mathbf{a})$ is subtracted from $\mathbf{a}$ because we want to move against the gradient, toward the minimum. With this observation in mind, one starts with a guess $\mathbf{x}_0$ for a local minimum of $F$, and considers the sequence $\mathbf{x}_0, \mathbf{x}_1, \mathbf{x}_2, \ldots$ such that $\mathbf{x}_{n+1} = \mathbf{x}_n - \gamma_n \nabla F(\mathbf{x}_n)$ for $n \ge 0$. We have $F(\mathbf{x}_0) \ge F(\mathbf{x}_1) \ge F(\mathbf{x}_2) \ge \cdots$, so hopefully the sequence $(\mathbf{x}_n)$ converges to the desired local minimum.
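As a concrete illustration (our own minimal sketch, not taken from the paper), gradient descent on the simple function $F(\mathbf{x}) = \|\mathbf{x}\|^2$ looks as follows:

```python
import numpy as np

# Minimal gradient-descent sketch on the illustrative function F(x) = ||x||^2,
# whose gradient is 2x. The function choice and step size are our assumptions.
def grad_F(x):
    return 2.0 * x

x = np.array([3.0, -4.0])          # initial guess x_0
gamma = 0.1                        # step size
for _ in range(100):
    x = x - gamma * grad_F(x)      # x_{n+1} = x_n - gamma * grad F(x_n)

print(x)                           # converges toward the minimizer [0, 0]
```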

Even though this method works well in general, it has a few limitations. Firstly, due to its iterative nature, the algorithm takes a long time to converge to the local minimum of the function. Secondly, gradient descent is relatively slow close to the minimum: its asymptotic rate of convergence is inferior to many other methods. Thirdly, gradient methods are ill-defined for non-differentiable functions.

Throughout this paper we will refer to the Moore-Penrose Pseudo Inverse [8]. In mathematics, and in particular linear algebra, a pseudoinverse $A^+$ of a matrix $A$ is a generalization of the inverse matrix. The most widely known type of matrix pseudoinverse is the Moore–Penrose inverse, which was independently described by E. H. Moore in 1920, Arne Bjerhammar in 1951, and Roger Penrose in 1955. A common use of the pseudoinverse is to compute a 'best fit' (least squares) solution to a system of linear equations that lacks a unique solution. Another use is to find the minimum (Euclidean) norm solution to a system of linear equations with multiple solutions. The pseudoinverse facilitates the statement and proof of results in linear algebra. The pseudoinverse is defined and unique for all matrices whose entries are real or complex numbers, and it can be computed using the singular value decomposition.
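For instance (a minimal illustration, not from the paper), NumPy's `np.linalg.pinv` computes the pseudoinverse via the SVD, and the least-squares solution of an overdetermined system can be obtained directly from it:

```python
import numpy as np

# Overdetermined system A x = y with no exact solution.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
y = np.array([1.0, 2.0, 2.0])

x_ls = np.linalg.pinv(A) @ y   # 'best fit' (least squares) solution
# Equivalent to np.linalg.lstsq(A, y, rcond=None)[0]
```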



In this paper, we formulate another method of finding the errors in weights and biases of the neurons in a neural network. But first, we would like to present a few assumptions made in the model of the neural network, to make our method feasible.

2 Modifications to the neuron structure

We have made one change to the structure of an artificial neuron. We assume that there is a weight and a bias associated with each input, that is, each element of the input vector is multiplied by a weight and a bias is added to it. This is a slight alteration from the traditional artificial neuron, where a single bias is added to the weighted sum of the neuron's inputs. This change does not alter the goal or the end result of a neural network. The proof of this statement is shown below:



For an input vector of size $n$, the output of the modified neuron is

$$y = \sum_{i=1}^{n} \left( w_i x_i + b_i \right) \quad (1)$$

$$\;\;= \sum_{i=1}^{n} w_i x_i + \sum_{i=1}^{n} b_i \quad (2)$$

where

$$b = \sum_{i=1}^{n} b_i \quad (3)$$

plays the role of the single bias of the traditional neuron. Therefore, having a separate bias for each input element will make no difference to the end result.
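As a quick numerical check of this identity (our own illustration, not code from the paper), the per-input-bias form and the single-bias form produce the same output when $b = \sum_i b_i$:

```python
import numpy as np

rng = np.random.default_rng(0)
x, w, b_vec = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)

per_input_bias = np.sum(w * x + b_vec)     # Eq. (1): a bias added to every product
single_bias = np.sum(w * x) + b_vec.sum()  # Eq. (3): b is the sum of per-input biases
assert np.isclose(per_input_bias, single_bias)
```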

Figure 1: Neuron

3 The New Backpropagation Algorithm

3.1 Calculating new weights and biases for a neuron

Taking one neuron at a time, a single input enters the neuron, is multiplied by a weight, and a bias is added to the product. This value is then sent through an activation function, and the output of the activation function is taken as the output of the neuron.



Let $x$ be the input into the neuron,
let $w$ be the original weight applied to that input,
and let $b$ be the original bias applied to that input.

Let $y$ be the output given initially when the input passes through the neuron.

Let $y'$ be the output that we require.
Based on the required output, we will require a different weight and bias value, say $w'$ and $b'$ respectively.

The original output is calculated as

$$y = wx + b \quad (4)$$

But we require $y'$ as the output. Therefore,

$$y' = w'x + b' \quad (5)$$

Let

$$w' = w + \Delta w \quad (6)$$

$$b' = b + \Delta b \quad (7)$$

where $\Delta w$ is the error in the weight and $\Delta b$ is the error in the bias.

Substituting (6) and (7) into (5),

$$y' = (w + \Delta w)x + (b + \Delta b) \quad (8)$$

$$y' = wx + \Delta w\,x + b + \Delta b \quad (9)$$

$$y' = (wx + b) + \Delta w\,x + \Delta b \quad (10)$$

$$y' = y + \Delta w\,x + \Delta b \quad (11)$$

$$y' - y = \Delta w\,x + \Delta b \quad (12)$$

Therefore,

$$y' - y = \begin{bmatrix} x & 1 \end{bmatrix} \begin{bmatrix} \Delta w \\ \Delta b \end{bmatrix} \quad (13)$$

Now, solving for the errors,

$$\begin{bmatrix} x & 1 \end{bmatrix} \begin{bmatrix} \Delta w \\ \Delta b \end{bmatrix} = y' - y \quad (14)$$

$$\begin{bmatrix} \Delta w \\ \Delta b \end{bmatrix} = \begin{bmatrix} x & 1 \end{bmatrix}^{-1} (y' - y) \quad (15)$$

But $\begin{bmatrix} x & 1 \end{bmatrix}$ is not a square matrix. Therefore, we will have to find the Moore-Penrose Pseudo-Inverse of the matrix $\begin{bmatrix} x & 1 \end{bmatrix}$:

$$\begin{bmatrix} \Delta w \\ \Delta b \end{bmatrix} = \begin{bmatrix} x & 1 \end{bmatrix}^{+} (y' - y) \quad (16)$$

After obtaining $\Delta w$ and $\Delta b$, change the original weight and bias to the new weight and bias in accordance with

$$w \leftarrow w + \eta\,\Delta w \quad (17)$$

$$b \leftarrow b + \eta\,\Delta b \quad (18)$$

where $\eta$ is the learning rate.
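A minimal NumPy sketch of this per-neuron update, under our own rendering of Eqs. (4)-(18) (the function and variable names are ours, not the authors'):

```python
import numpy as np

def update_neuron(w, b, x, y_required, lr=0.01):
    """One update for a scalar-input neuron: solve [x 1][dw db]^T = y' - y
    with the Moore-Penrose pseudoinverse and apply a learning-rate-scaled
    correction to the weight and bias."""
    y = w * x + b                                            # Eq. (4): current output
    A = np.array([[x, 1.0]])                                 # the 1x2 matrix [x 1]
    delta = np.linalg.pinv(A) @ np.array([y_required - y])   # Eq. (16)
    dw, db = delta[0], delta[1]
    return w + lr * dw, b + lr * db                          # Eqs. (17)-(18)
```

Repeatedly applying this update for a given pair $(x, y')$ moves the neuron's output toward $y'$ at a rate controlled by the learning rate.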

3.2 Tackling multiple inputs

The above-mentioned method of changing the weights and biases of a neuron can be extended to a vector input of length $n$.

Figure 2: Neuron for vector length of ‘n’

Let the input vector belong to $\mathbb{R}^n$.

In this case, each element of the input vector is multiplied by its respective weight in the neuron, and a bias is added to each of the products. Therefore, there are $n$ input elements, $n$ corresponding weights and biases, and $n$ outputs from the weight-bias blocks. These outputs are added up to give one single output, which is passed on to the activation function.

During the backpropagation stage, the desired output is distributed equally amongst all the weight-bias pairs, such that, for a block of weight and bias $(w_i, b_i)$, the required output for that block will be $\frac{1}{n}$ of the required output.

That is, for all weight-bias blocks $(w_i, b_i)$,

$$y'_i = \frac{y'}{n} \quad (19)$$

The weights and biases are initialized to random values in the beginning, that is, the absolute weightage given to each element of the input vector is random, while the relative weightage given to each element is treated as the same. Each weight-bias block is therefore asked to produce the same share of the output, so that the cumulative output gives the required answer. Therefore, this method of dividing the required output among the blocks works.
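Extending the earlier single-input sketch to a vector input, each weight-bias block is updated toward its equal share $y'/n$ of the required output (again our own rendering of Eq. (19), not the authors' code):

```python
import numpy as np

def update_vector_neuron(w, b, x, y_required, lr=0.01):
    """Sketch of Section 3.2 under our naming: element x[i] has its own
    weight w[i] and bias b[i]; the required output y' is split equally,
    so each block is driven toward y'/n."""
    n = x.shape[0]
    block_outputs = w * x + b                   # per-block outputs
    block_targets = np.full(n, y_required / n)  # Eq. (19): y'_i = y'/n
    w_new, b_new = w.copy(), b.copy()
    for i in range(n):
        A = np.array([[x[i], 1.0]])
        delta = np.linalg.pinv(A) @ np.array([block_targets[i] - block_outputs[i]])
        w_new[i] += lr * delta[0]               # Eq. (17), per block
        b_new[i] += lr * delta[1]               # Eq. (18), per block
    return w_new, b_new
```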

3.3 Activation Function for non-linearity

To achieve non-linearity, the general approach is to pass the summed output of all weight-bias pairs through a non-linear activation function [6]. During the backpropagation phase, to correct the weights and biases of the neuron, we cannot simply pass back the required output vector as-is. If we did, the weights and biases would be changed as though there were no activation function, and when the same vector is forward-propagated again, the neuron outputs would pass through the activation function and give a wrong result. Therefore, we must first pass the required output vector through the inverse of the activation function. We also have to choose an activation function whose domain and range are the same, so as to avoid mathematical errors and loss of data. The vector obtained after applying the inverse activation function is the actual vector sent back through the layers of the network during the backpropagation phase.
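For example, with the Softplus activation used in the experiments below, a hedged sketch of this inverse-activation step could look as follows (a required output must lie in the range of Softplus, i.e. be positive; the helper names are ours):

```python
import numpy as np

def softplus(x):
    # Softplus activation: ln(1 + e^x)
    return np.log1p(np.exp(x))

def inverse_softplus(y):
    # Inverse of Softplus: ln(e^y - 1), defined only for y > 0.
    return np.log(np.expm1(y))

# During backpropagation, the required output is first mapped back through the
# inverse activation before the pseudoinverse update is applied.
y_required = 1.5
target_for_update = inverse_softplus(y_required)
assert np.isclose(softplus(target_for_update), y_required)
```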

3.4 Network Architecture

Figure 3: Neural Network Representation

Figure 3 shows the representation of a neural network. Each neuron outputs one value. The output of every neuron in one layer is sent as the input to every neuron in the next layer. Therefore, each layer can be associated with a buffer list, so that the output from each neuron in that layer can be stored and passed on to the next layer as input. This would help in the implementation of a neural network by simplifying the forward propagation task.
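A possible implementation of this buffered forward pass (a sketch under our own representation, where each layer stores per-input weights `W` and biases `B` of shape `(neurons, inputs)`; none of these names come from the paper):

```python
import numpy as np

def softplus(x):
    return np.log1p(np.exp(x))

def forward(layers, x, act=softplus):
    """Buffered forward propagation: `layers` is a list of (W, B) pairs with a
    per-input weight and bias for every neuron, as in Section 2."""
    buffer = np.asarray(x, dtype=float)
    for W, B in layers:
        # Each neuron sums its weight-bias blocks, then applies the activation.
        buffer = act(np.sum(W * buffer + B, axis=1))
    return buffer
```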

Figure 4: Neural Network Implementation

The input forward propagates through the network and at the last (output) layer it gives out an output vector. Now, for this last layer, the required output is known. Therefore the weights and biases of the neurons of the last layer can be easily changed.

We do not know the required output vectors for the previous layers; we can only make a calculated guess. By asking ourselves, "What should the input to the last layer (that is, the output vector of the previous layer) have been so that the output would be correct?", we conclude that the required output for the previous layer is the input vector that would have produced no error at the output of the last layer. This can be illustrated by the following equations.
If the present input $x$ to the last layer (the output of the previous layer) gives

$$y = wx + b \quad (20)$$

then what vector $x'$ will satisfy the equation

$$y' = wx' + b \quad (21)$$

That vector $x'$ is taken as the required output of the previous layer.

This approach can be extended to all the previous layers.

Another issue is that each neuron in a layer will propose its own 'required' input so that its own output is correct. This happens, for example, in a multiclass classification problem, where the required output vector is one-hot encoded (the element of the vector at the position of the required class is 1, and the other elements are 0). Therefore, we take the average of all these vectors, which gives equal weightage to the feedback from each neuron, and pass this averaged vector to the previous layer as its required output.
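Under our reading of this section, a hedged sketch of computing the required output of the previous layer might look as follows (the use of the pseudoinverse to solve Eq. (21) and all helper names are our assumptions, not the authors' code):

```python
import numpy as np

def required_input_for_neuron(w, b, y_required):
    """Solve Eq. (21) for the input x' that would make this neuron's output
    correct: y' = w . x' + sum(b)  =>  x' = pinv(w) (y' - sum(b))."""
    w_row = w.reshape(1, -1)                      # weights as a 1 x n row vector
    rhs = np.array([y_required - b.sum()])
    return (np.linalg.pinv(w_row) @ rhs).ravel()  # minimum-norm solution

def required_previous_layer_output(layer_weights, layer_biases, layer_targets):
    """Average the 'required' input vectors proposed by every neuron in a layer."""
    proposals = [required_input_for_neuron(w, b, t)
                 for w, b, t in zip(layer_weights, layer_biases, layer_targets)]
    return np.mean(np.stack(proposals), axis=0)
```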



This concludes the complete working of the neural network with our devised backpropagation algorithm.

4 Differences with Extreme Learning Machines

Extreme learning machines [7] are feedforward neural networks for classification, regression, clustering, sparse approximation, compression and feature learning, with a single layer or multiple layers of hidden nodes, where the parameters of the hidden nodes (not just the weights connecting inputs to hidden nodes) need not be tuned. These hidden nodes can be randomly assigned and never updated (i.e. they are random projections with nonlinear transforms), or can be inherited from their ancestors without being changed. In most cases, the output weights of the hidden nodes are learned in a single step, which essentially amounts to learning a linear model.

Even though both methods use the Moore-Penrose Pseudo Inverse, there are a few significant differences between the ELM and the backpropagation method proposed in this paper. The ELM is a feedforward network that aims to replace the traditional artificial neural network, whereas this paper provides an alternative for the backpropagation algorithm used in traditional artificial neural networks. The ELM algorithm provides only a forward propagation technique to change the weights and biases of the neurons in the last hidden layer, whereas we have provided a backpropagation method to change the weights and biases of all neurons in every layer.

5 Results

5.1 Telling-Two-Spirals-Apart Problem

Alexis P. Wieland proposed a useful benchmark task for neural networks: distinguishing between two intertwined spirals. Although this task is easy to visualize, it is hard for a network to learn due to its extreme non-linearity. Here we use it to test whether a network trained with our backpropagation method can learn this highly non-linear decision boundary.

In our experiment, we use the spiral dataset, which contains 193 data points of each class. We modelled the network with a 16-32-64-32-2 configuration, with the 'Softplus' activation function on all neurons, and trained it for 1000 epochs with a learning rate of 0.0002.


Figure 5: Training data points for the Two-Spirals problem
Figure 6: Testing data points for the Two-Spirals problem

From the above two figures, we can see that although the network does not distinguish between the two spirals very well, we are able to get an accuracy of about 63%. This is because the Softplus activation function is not the recommended activation function for this particular problem. The recommended activation function is Tanh but, because the domain of the inverse of the Tanh function is $(-1, 1)$ and not $(-\infty, \infty)$, it cannot be used in our backpropagation method without causing some loss of data.

Looking at figure 6, we can observe the non-linearity in the classification of the two sets of spirals, which indicates that this backpropagation method is able to learn a non-linear decision boundary.

5.2 Separating-Concentric-Circles Problem

Another type of natural pattern is concentric rings. As a test, we use the sklearn.datasets.make_circles function to create 2 concentric circles with 100 data points each, assigned to two classes respectively. We used an artificial neural network with a 16-64-32-2 configuration, again using the 'Softplus' activation function on all neurons, and trained the model for 1000 epochs with a learning rate of 0.00001.
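For reference, the dataset can be generated with scikit-learn's `make_circles`; the exact `noise` and `factor` values below are our assumptions, since the paper does not state them:

```python
from sklearn.datasets import make_circles

# 200 points in total, split between two concentric circles (two classes).
X, y = make_circles(n_samples=200, noise=0.05, factor=0.5, random_state=0)
```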

Figure 7: Training data points for the Concentric-Circles problem
Figure 8: Testing data points for the Concentric-Circles problem

Observing figure 8, we can see that there is a slight non-linearity in the classification of the two classes, and we observe an accuracy of 61%. This low accuracy can again be attributed to the fact that the Softplus activation function is not well suited to this type of data.

5.3 XOR Problem

Continuing our tests of this alternative algorithm, we create a dataset of 1000 data points, where each data sample contains 2 numbers and 1 class label. If the 2 numbers are both positive or both negative, the class is 0; otherwise, the class is 1. In effect, the XOR function is applied to the signs of the two numbers.
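A minimal sketch of this sign-XOR dataset construction (our own; the sampling range of [-1, 1] is an assumption, since the paper does not state it):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(1000, 2))
y = ((X[:, 0] > 0) ^ (X[:, 1] > 0)).astype(int)   # class 1 iff the signs differ
```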

Our model had a 4-8-16-32-1 configuration, with the 'Softplus' activation function applied by all neurons. The learning rate was set to 0.0001 and the network was trained for 100 epochs.

Figure 9: Training data points for the XOR problem
Figure 10: Testing data points for the XOR problem

A validation accuracy of 81% was achieved.

5.4 Wisconsin Breast Cancer Dataset

To further test our model, we used a real-world dataset. This dataset contains 699 samples, where each sample has 10 attributes as features and 1 class attribute. It is taken from the UCI Machine Learning Repository, where samples arrive periodically as Dr. Wolberg reports his clinical cases.

The model had a 16-2 configuration, with the 'Softplus' activation function applied by all neurons. We trained the model for 1000 epochs with a learning rate of 0.0001. The validation accuracy reached up to 90.4% at the 78th epoch. Even though the validation and training errors are erratic at the start, they settle to an almost constant value after a number of epochs.

Figure 11: Validation Accuracy while training for Wisconsin Breast Cancer Dataset
Figure 12: Training Error while training for Wisconsin Breast Cancer Dataset
Figure 13: Validation Error while training for Wisconsin Breast Cancer Dataset

From the above experiments, we can conclude that the Softplus activation function is better suited to the Wisconsin Breast Cancer Dataset and that our proposed backpropagation algorithm works.

6 Discussions and Conclusion

From the above facts and results, we can observe a few properties of this method. The proposed backpropagation method works well with activation functions whose range matches their domain, so that the inverse activation is defined for every required output. Since no gradients are computed, the method also removes the requirement that the activation function be differentiable. Therefore, ReLU-like activation functions such as LeakyReLU, Softplus, and the S-shaped rectified linear activation unit (SReLU) are a good match for this method.



Further optimizations must be made to this method so that it can be used efficiently. The requirement for a different kind of activation function could also accelerate the discovery of new activation functions that fit various models.

We believe that, because this backpropagation method suits ReLU-like [2] activation functions, it can be extended to fields such as biomedical engineering, where the data collected are often asymmetric, with the number of data points in different classes being imbalanced. If suitable replacements for activation functions such as Sigmoid and Tanh are devised in the future, this method could see wider use.

References