Based on the given data set, a second data set is generated by relabeling the desired outputs of the training instances. Using the two data sets, the GRNN, RBFNN, SVM, and FFNN algorithms are implemented, and the statistics for each algorithm are recorded.
I-A General Regression Neural Network
I-B Radial Basis Function Neural Networks
The RBFNN algorithm involves two methods to find the optimal value. One variation of the algorithm uses Kohonen Unsupervised Learning and Back-Propagation.
I-B1 Learning Vector Quantizer - I
The clustering algorithm constructs clusters of similar input vectors (patterns), where similarity is usually measured in terms of Euclidean distance. The training process of Learning Vector Quantizer-I (LVQ-I) is based on competition. During training, the cluster unit whose weight vector is the "closest" to the current input pattern is declared the winner. The corresponding weight vector, and those of the neighboring units, are then adjusted to better resemble the input pattern. LVQ-I does not strictly require a neighborhood function; without one, only the weights of the winning output unit are updated.
I-B2 Learning Vector Quantizer - II
The Kohonen Learning Vector Quantizer - II (LVQ-II) uses information from a supervisor to implement a reward-and-punish scheme. LVQ-II assumes that the classifications of all input patterns are known. If the winning cluster unit correctly classifies the pattern, then the weights to that unit are rewarded by moving them to better match the input pattern. However, if the winning unit misclassifies the input pattern, the weights are penalized by moving them away from the input vector. As in LVQ-I, a conscience factor can be incorporated to penalize frequent winners.
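The reward-and-punish update can be sketched as follows. This is a minimal illustration; the function and parameter names are assumptions, not taken from the paper.

```python
import numpy as np

def lvq2_step(weights, labels, x, y, lr=0.05):
    """One LVQ-II style update: reward the winning prototype if it
    classifies the pattern correctly, otherwise punish it.
    weights: one prototype vector per row; labels: the class assigned
    to each prototype (names are illustrative)."""
    # Winner = prototype closest to the input pattern (Euclidean distance).
    winner = np.argmin(np.linalg.norm(weights - x, axis=1))
    if labels[winner] == y:
        # Reward: move the winner toward the input pattern.
        weights[winner] += lr * (x - weights[winner])
    else:
        # Punish: move the winner away from the input pattern.
        weights[winner] -= lr * (x - weights[winner])
    return winner
```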
A RBFNN is a FFNN in which the hidden units do not implement an activation function but instead represent radial basis functions. A RBFNN approximates a desired function by a superposition of nonorthogonal, radially symmetric functions. RBFNNs can improve accuracy and decrease training time complexity. The architecture of a RBFNN is similar to that of a FFNN, with the differences that the hidden units implement radial basis functions, the weights from the input units to a hidden unit represent the center of that unit's radial basis function, and some radial basis functions are characterized by a width. For such basis functions, the weight from the bias unit in the input layer to each hidden unit represents the width of the basis function. Note that this bias unit receives a constant input signal. The output units of a RBFNN implement linear activation functions, so the output is simply a linear combination of basis functions. As with FFNNs, RBFNNs are universal approximators.
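The forward pass described above can be sketched as follows, assuming Gaussian basis functions; the names and shapes are illustrative, not from the paper.

```python
import numpy as np

def rbfnn_forward(x, centers, widths, out_weights):
    """RBFNN forward pass: Gaussian hidden units followed by a linear
    output layer.
    centers:     (J, d) basis-function centers
    widths:      (J,)   basis-function widths (sigma)
    out_weights: (J,)   hidden-to-output weights for a single output."""
    # Gaussian radial basis activations for each hidden unit.
    dists = np.linalg.norm(centers - x, axis=1)
    phi = np.exp(-dists ** 2 / (2.0 * widths ** 2))
    # The output is simply a linear combination of the basis functions.
    return out_weights @ phi
```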
A limitation of K-Nearest Neighbors is that a large database of training examples must be kept in order to make predictions. The LVQ-I algorithm allows learning with a much smaller subset of patterns that best represent the training data.
I-C Support Vector Machines
The SVM algorithm uses the derived data set that results from analyzing the given data set. This algorithm also has two parts: linear SVMs, where a linear kernel is used, and radial basis SVMs, where a Gaussian kernel is used. Both SVM algorithms are implemented using scikit-learn, a Python package for machine learning. In this case, part of the data set is used as the training set, and the remainder is used as the test set.
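A minimal sketch of the two SVM variants in scikit-learn follows; the synthetic data stands in for the derived data set, and the split ratio is illustrative since the paper's percentages are not given.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for the derived data set.
X, y = make_classification(n_samples=200, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Linear SVM: linear kernel.
linear_svm = SVC(kernel="linear").fit(X_train, y_train)
# Radial basis SVM: Gaussian (RBF) kernel.
rbf_svm = SVC(kernel="rbf").fit(X_train, y_train)

print(linear_svm.score(X_test, y_test), rbf_svm.score(X_test, y_test))
```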
I-D Feedforward Neural Network
The FFNN algorithm uses scikit-learn to solve the problem using different numbers of hidden layers. The value is determined using 1, 2, and 4 hidden layers. Within a hidden layer, the activation function must be uniform; however, each hidden layer can use a different activation function, for example, a Gaussian activation function for neurons in the first layer, linear activation functions for neurons in the second layer, and so on. The activation function used in this paper is sigmoidal.
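These experiments can be sketched with scikit-learn's MLPClassifier, using the logistic (sigmoidal) activation and 1, 2, and 4 hidden layers; the layer sizes and data here are illustrative, not the paper's.

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the given data set.
X, y = make_classification(n_samples=200, random_state=0)

# Train one FFNN per hidden-layer configuration.
for layers in [(10,), (10, 10), (10, 10, 10, 10)]:
    clf = MLPClassifier(hidden_layer_sizes=layers,
                        activation="logistic",   # sigmoidal activation
                        max_iter=2000, random_state=0).fit(X, y)
    print(len(layers), "hidden layer(s): accuracy", clf.score(X, y))
```

Note that MLPClassifier applies a single activation function to all hidden layers, which matches the uniform sigmoidal setting used here; per-layer activation functions would require a different library.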
The given data set is used to discover the value for the GRNN, RBFNN, and FFNN algorithms. The derived data set is used to discover the value for the SVMs. Further analysis compares the performance of these algorithms and determines the best performing one.
II-A Radial Basis Function Neural Networks
First, the network weights, the learning rate, and the neighborhood radius are initialized. The Kohonen LVQ-I algorithm initializes the weights with random values, with samples from a uniform distribution, or by taking the first input patterns as the initial weight vectors. Possible stopping conditions are: a maximum number of epochs is reached, the weight adjustments are sufficiently small, or a small enough quantization error has been reached. While the stopping conditions are not met, for each pattern the Euclidean distance to each weight vector is calculated, and the output unit for which the distance is smallest is found. Then, all the weights in the neighborhood of the winner are updated. After every pattern has been presented and the weights updated, the learning rate is updated, and the neighborhood radius is reduced at specified learning iterations.
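The steps above can be sketched as a minimal LVQ-I training loop. No neighborhood function is used here, so only the winner is updated; the names, decay schedule, and hyperparameters are assumptions.

```python
import numpy as np

def lvq1_train(X, n_clusters=3, lr=0.1, epochs=50):
    """Minimal LVQ-I sketch: initialize weights from the first input
    patterns, then repeatedly move the winning (closest) weight vector
    toward each pattern while the learning rate decays."""
    # Initialization: take the first input patterns as weight vectors.
    weights = X[:n_clusters].astype(float)
    for epoch in range(epochs):              # stopping condition: max epochs
        for x in X:
            # Winner: smallest Euclidean distance to the input pattern.
            winner = np.argmin(np.linalg.norm(weights - x, axis=1))
            # Move the winner's weight vector toward the pattern.
            weights[winner] += lr * (x - weights[winner])
        lr *= 0.95                           # decay the learning rate
    return weights
```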
Each hidden unit implements a radial basis function, or kernel function, which is a strictly positive, radially symmetric function. A RBF has a unique maximum at its center, and the function usually drops off rapidly to zero away from the center. The output of a hidden unit indicates the closeness of the input vector to the center of the basis function. Some RBFs are characterized by a width, which specifies the size of the receptive field of the RBF in the input space for that hidden unit.
The Gaussian kernel is an example of a radial basis function kernel; it is shown in Equation (1), where mu_j is the center and sigma_j the width of hidden unit j:

Phi_j(x) = exp( -||x - mu_j||^2 / (2 sigma_j^2) )    (1)
Algorithms for training RBFNNs can vary. Methods used to train RBFNNs differ in the number of parameters that are learned. The fixed centers algorithm adapts only the weights between the hidden and output layers. Adaptive centers training algorithms adapt weights, centers, and deviations. Gradient descent can be used to adjust weights, centers, and widths. Centers can be initialized in an unsupervised training step prior to training the weights between hidden units (radial basis) and output units.
Weight initialization interacts strongly with gradient-based optimization methods. Gradient-based optimization methods, such as gradient descent, are very sensitive to the initial weight vectors. If the initial position is near a local minimum, then convergence will be quick. However, if the initial weight vector is on a flat area of the error surface, then convergence is slow. Also, large initial weight values may prematurely saturate units, due to extreme output values with associated zero derivatives. In the case of population-based optimization algorithms, such as Particle Swarm Optimization (PSO) and Genetic Algorithms (GAs), the initial weights should be uniformly distributed over the entire search space to ensure that all parts of the search space are covered.
Momentum addresses a drawback of stochastic learning. Stochastic learning, where weights are adjusted after each pattern presentation, has the disadvantage of fluctuating changes in the sign of the error derivatives: the network spends a lot of time going back and forth, unlearning what the previous steps have learned. Batch learning is one solution, since weight changes are accumulated and applied only after all patterns in the training set have been presented. Another solution is to keep stochastic learning and add a momentum term. The idea of the momentum term is to average the weight changes, thereby ensuring that the search path is in the average downhill direction. The momentum term is simply the previous weight change weighted by a scalar momentum value.
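The momentum update can be sketched as follows; the names and the scalar value alpha are illustrative.

```python
def momentum_step(w, grad, prev_delta, lr=0.1, alpha=0.9):
    """One stochastic gradient step with a momentum term: the previous
    weight change, weighted by the scalar alpha, is added so that
    successive updates average out into the downhill direction."""
    delta = -lr * grad + alpha * prev_delta
    return w + delta, delta
```

On a simple quadratic error surface, repeatedly applying this step drives the weight toward the minimum while smoothing out the sign fluctuations of the per-pattern gradients.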
III-A Radial Basis Function Neural Networks
Gradient descent training of RBFNNs begins with the selection of the number of centers. For each center, the location (mean) and width values are chosen. Then, each weight is initialized. Next, a loop that runs until a stopping condition is reached computes the output and the weight adjustment step size, and adjusts the weights. The center step size is also computed, and the centers are adjusted; likewise, the width step size is computed, and the widths are adjusted.
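These steps can be sketched for a one-dimensional input as follows. This is a sketch under assumed hyperparameters, not the paper's implementation.

```python
import numpy as np

def train_rbfnn(X, y, n_centers=5, lr=0.01, epochs=300):
    """Gradient-descent training of a small 1-D RBFNN: choose centers
    and widths, initialize weights, then repeatedly compute the output
    error and adjust weights, centers, and widths by gradient steps."""
    rng = np.random.default_rng(0)
    centers = X[rng.choice(len(X), n_centers, replace=False)].astype(float)
    widths = np.full(n_centers, np.std(X) + 1e-6)
    weights = rng.normal(0.0, 0.1, n_centers)
    for _ in range(epochs):
        for x, t in zip(X, y):
            phi = np.exp(-(x - centers) ** 2 / (2 * widths ** 2))
            err = (weights @ phi) - t            # output error
            # Gradient steps for weights, centers, and widths.
            weights -= lr * err * phi
            centers -= lr * err * weights * phi * (x - centers) / widths ** 2
            widths -= lr * err * weights * phi * (x - centers) ** 2 / widths ** 3
            widths = np.clip(widths, 0.1, None)  # keep widths positive
    return centers, widths, weights

def rbf_predict(x, centers, widths, weights):
    phi = np.exp(-(x - centers) ** 2 / (2 * widths ** 2))
    return weights @ phi
```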
Before the LVQ-I training phase, the RBFNN is initialized. The centers are initialized by setting them to the average of all inputs in the training set. The widths are initialized to the standard deviation of all input values of the training set. The hidden-to-output weights are initialized to small random values. At the end of each LVQ-I iteration, the basis function widths are recalculated: for each hidden unit, the average Euclidean distance between its center and the input patterns for which that hidden unit is the winner is computed, and the width is set to this average.
The training set provides data for the neural network to be trained on, and the bias value represents the threshold values of neurons in the next layer. To simplify the learning equations, the input vector is augmented to include an additional input unit, the bias unit. The weight of the bias unit serves as the value of the threshold. The strength of the output signal is influenced by the threshold value, or bias.
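The augmentation can be sketched as follows; the constant bias signal of -1 is a common convention, not a value taken from the paper.

```python
import numpy as np

def augment(X, bias_signal=-1.0):
    """Append a bias unit to each input vector so the threshold can be
    learned as an ordinary weight rather than a separate parameter."""
    return np.hstack([X, np.full((X.shape[0], 1), bias_signal)])
```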
The results of the statistical analysis comparing all the algorithms are shown in Table I.
|Without Kohonen Unsupervised Learning and Back-Propagation|
|With Kohonen Unsupervised Learning and Back-Propagation|
|Radial Basis SVM|
IV-A Radial Basis Function Neural Networks
The results of the statistical analysis are shown in Table II.

|                                                           | Accuracy | Precision | Recall | F1     |
|Without Kohonen Unsupervised Learning and Back-Propagation | 0.2367   | 0.3454    | 0.2335 | 0.2342 |
|With Kohonen Unsupervised Learning and Back-Propagation    | 0.2735   | 0.4245    | 0.3253 | 0.3726 |
A problem with LVQ networks is that one cluster unit may dominate as the winning cluster unit, thus putting most patterns in one cluster. To prevent one output unit from dominating, a "conscience" factor that penalizes an output for winning too many times may be incorporated in the function that determines the winning output unit.
The accuracy of a RBFNN is influenced by the number of basis functions used, the locations of the basis functions, and the widths of their receptive fields. A larger width means that more of the input space is represented by that basis function. The larger the number of basis functions used, the better the approximation of the target function; however, the cost is an increase in computational complexity. The location of each basis function is defined by its center vector. Basis functions should be evenly distributed to cover the entire input space.
Training of a RBFNN should consider methods to find the best values for the parameters that affect the accuracy of the RBFNN. For the RBFNN without Kohonen Unsupervised Learning and Back-Propagation, the accuracy is 0.2367, while the precision is 0.3454. The recall is 0.2335, while the F1 value is 0.2342. For the RBFNN with Kohonen Unsupervised Learning and Back-Propagation, the accuracy is 0.2735, while the precision is 0.4245. The recall is 0.3253, while the F1 value is 0.3726.
The aim of this project is to develop code to discover the optimal value that maximizes the F1 score and the optimal value that maximizes the accuracy, and to find out whether they are the same. Four algorithms that can be used to solve this problem are GRNNs, RBFNNs, SVMs, and FFNNs. Based on the given data set, a second data set is generated by relabeling the desired outputs of the training instances. Using the two data sets, the GRNN, RBFNN, SVM, and FFNN algorithms are implemented, and the statistics for each algorithm are analyzed and compared.
The RBFNN with Kohonen Unsupervised Learning and Back-Propagation performs better than the RBFNN without it: the statistical comparison shows that the former yields a more accurate solution.
VI Breakdown of the Work
Vinika Gupta - SVM, Analysis, and LaTeX Report (Abstract/Introduction/Methodology).
Alison Jenkins - RBFNNs, GRNN, Analysis, and LaTeX Report (Experiment, Results/Editing/Correct Formatting).
Mary Lenoir - FFNN, Analysis, and LaTeX Report (Final Edit, Format, References).