PairNets: Novel Fast Shallow Artificial Neural Networks on Partitioned Subspaces

01/24/2020
by Luna M. Zhang, et al.

Traditionally, an artificial neural network (ANN) is trained slowly by a gradient descent algorithm such as the backpropagation algorithm, since a large number of hyperparameters of the ANN need to be fine-tuned over many training epochs. To greatly speed up training, we created a novel shallow 4-layer ANN called the "Pairwise Neural Network" ("PairNet") with high-speed hyperparameter optimization. In addition, the value range of each input is partitioned into multiple intervals, so an n-dimensional space is partitioned into M n-dimensional subspaces, and M local PairNets are built in the M partitioned local n-dimensional subspaces. A local PairNet is trained very quickly with only one epoch, since its hyperparameters are directly optimized in one step by simply solving a system of linear equations with the multivariate least squares fitting method. Simulation results for three regression problems indicated that the PairNet achieved much higher speeds than traditional ANNs, lower average testing mean squared errors (MSEs) for all three cases, and lower average training MSEs for two cases. A significant future work is to develop better and faster optimization algorithms, based on intelligent methods and parallel computing, to optimize both partitioned subspaces and hyperparameters in order to build fast and effective PairNets for applications in big data mining and real-time machine learning.

1 Introduction

Traditionally, an artificial neural network (ANN) is trained very slowly by a gradient descent algorithm such as the backpropagation algorithm [1-3], since a large number of hyperparameters of the ANN need to be fine-tuned over a large number of training epochs. In particular, a deep neural network [4-9], such as a convolutional neural network (CNN), typically takes a long time to train well. Other intelligent training algorithms use advanced optimization methods such as genetic algorithms [10-17], particle swarm optimization [18], and annealing algorithms [19] to try to find optimal hyperparameters of an ANN. However, these commonly used training algorithms still take a very long training time. An important research goal is therefore to develop a new ANN with high computation speed and high performance, such as low validation errors, for machine learning applications, especially those involving big data mining and real-time computation.

Neural network structure optimization algorithms also take a lot of time to find optimal or near-optimal numbers of layers and numbers of neurons per layer for big data mining problems; deep neural networks need even longer. Thus, it is useful to develop fast shallow neural networks with relatively small numbers of neurons per layer. We created a novel shallow 4-layer ANN with high-speed hyperparameter optimization. Training data are partitioned into local n-dimensional subspaces, and local shallow 4-layer ANNs are trained using the partitioned data sets in those local n-dimensional subspaces. This divide-and-conquer approach can optimize the local ANNs with simpler nonlinear functions more easily using smaller data sets in the local subspaces. Based on positive preliminary simulation results, we will continue to develop more advanced optimization algorithms to optimize both partitioned subspaces and hyperparameters to build fast and effective ANNs.

2 Pairwise Neural Network (PairNet)

We propose a novel shallow ANN called the “Pairwise Neural Network” (PairNet) that consists of only four layers of neurons and maps n inputs on the first layer to one output on the fourth layer.

Layer 1: Layer 1 has n neuron pairs that map the n inputs to 2n outputs. Each pair has two neurons for the same input: one neuron has an increasing activation function that generates a positive normalized value, and the other neuron has a decreasing activation function that generates a negative normalized value for that input.

Layer 2: Layer 2 consists of 2^n neurons, where each neuron has an activation function that fuses n inputs into one output as a complementary decision fusion. Each of the n inputs to a Layer-2 neuron is the output of one of the two neurons of each neuron pair on Layer 1, so each Layer-2 neuron corresponds to one of the 2^n possible combinations of pair outputs. The activation function of a Layer-2 neuron is a weighted combination of its n inputs, where the weights are hyperparameters to be optimized. For a special case, the weights are all equal.

Layer 3: Layer 3 also consists of 2^n neurons; it transforms the 2^n outputs of the second layer into 2^n individual output decisions. The activation function of each Layer-3 neuron contains hyperparameters that are optimized directly by the fast training algorithm described in Section 3.

Layer 4: Layer 4 fuses the 2^n individual output decisions of Layer 3 into one final nonlinear output.
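
To make the four-layer structure concrete, the following Python sketch shows one possible forward pass for a single local PairNet. The specific activation forms are illustrative assumptions only: Layer 1 uses linear increasing/decreasing normalization over each input's interval [a_i, b_i], Layer 2 uses the equal-weight special case mentioned above, Layer 3 applies an affine transform with hyperparameters alpha and beta, and Layer 4 computes a weighted average with weights v; none of these names or forms are taken from the paper's exact definitions.

```python
import itertools
import numpy as np

def pairnet_forward(x, a, b, alpha, beta, v):
    """Illustrative PairNet forward pass; activation forms are assumptions, not the paper's exact ones.

    x:            (n,) input vector
    a, b:         (n,) lower/upper bounds of each input's interval in the local subspace
    alpha, beta:  (2**n,) assumed affine hyperparameters of the Layer-3 neurons
    v:            (2**n,) assumed fusion weights of the Layer-4 neuron
    """
    x, a, b = (np.asarray(t, dtype=float) for t in (x, a, b))
    n = len(x)
    # Layer 1: n neuron pairs -> 2n outputs (assumed linear increasing/decreasing normalization).
    inc = (x - a) / (b - a)            # positive normalized values
    dec = (b - x) / (b - a)            # complementary normalized values
    pair_outputs = np.stack([inc, dec], axis=1)            # shape (n, 2)
    # Layer 2: 2**n neurons, each fusing one output from every pair (equal-weight special case).
    combos = list(itertools.product([0, 1], repeat=n))     # one 0/1 choice per pair, per neuron
    z = np.array([pair_outputs[np.arange(n), np.array(c)].mean() for c in combos])
    # Layer 3: 2**n individual output decisions (assumed affine transform of the Layer-2 outputs).
    d = np.asarray(alpha) * z + np.asarray(beta)
    # Layer 4: weighted average of the individual decisions -> one final output.
    w = np.asarray(v, dtype=float)
    return float(np.dot(w / w.sum(), d))
```

For n = 3 inputs, Layers 2 and 3 in this sketch each have 2^3 = 8 neurons, which illustrates why the width grows quickly as n increases (see Section 6).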

3 Fast Training Algorithm with Hyperparameter Optimization on Partitioned Subspaces

A data set has N data points, where each data point consists of n inputs and one output. The value range of each input is partitioned into a chosen number of intervals, so the whole n-dimensional input space is partitioned into M n-dimensional subspaces, where M is the product of the numbers of intervals of the n inputs. The N data points are distributed among the M n-dimensional subspaces. For each n-dimensional subspace, a PairNet maps the n inputs to one output; thus, a local PairNet is built using all of the data points that fall in that local n-dimensional subspace. This divide-and-conquer approach trains each local PairNet on specific local data features to improve model performance.
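
As a rough illustration of this partitioning step (the equal-width intervals and the helper name are assumptions, not the paper's exact method), each input's range can be cut into a chosen number of intervals and every data point mapped to the index of the subspace it falls in:

```python
import numpy as np

def assign_subspaces(X, n_intervals):
    """Map each row of X (shape (N, n)) to the index of its n-dimensional subspace.

    n_intervals lists the number of intervals per input, e.g. [2, 3, 4] gives 24 subspaces.
    Equal-width intervals over each input's observed range are an illustrative assumption.
    """
    X = np.asarray(X, dtype=float)
    lo, hi = X.min(axis=0), X.max(axis=0)
    ids = np.zeros(len(X), dtype=int)
    for i, k in enumerate(n_intervals):
        edges = np.linspace(lo[i], hi[i], k + 1)
        # Digitizing against the k - 1 interior edges yields a bin index in 0 .. k - 1.
        bin_i = np.digitize(X[:, i], edges[1:-1])
        ids = ids * k + bin_i        # mixed-radix encoding of the per-input bins
    return ids                       # values in 0 .. prod(n_intervals) - 1
```

Each unique index then gets its own local PairNet, fitted only on the data points assigned to it.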

For a regression problem, Layer 4 of a PairNet calculates the final output decision by computing a weighted average of the 2^n individual output decisions of Layer 3, so the final output is a nonlinear function of the n inputs.

The objective optimization function for a local PairNet is the sum of squared errors between its final outputs and the target outputs over the K data points in its subspace:

E = \sum_{k=1}^{K} (\hat{y}_k - y_k)^2,     (1)

where \hat{y}_k is the Layer-4 output for the k-th data point and y_k is its target output. Because the final output is linear in the p hyperparameters, setting the partial derivatives of E with respect to each hyperparameter to zero yields a system of p linear equations in the p hyperparameters:

\partial E / \partial \theta_j = 0, for j = 1, 2, ..., p.     (2)

The system of linear equations (2) can be quickly solved by the multivariate least squares fitting method to find the optimal hyperparameters. Each subspace must contain at least as many data points as hyperparameters (K >= p). A new PairNet model selection algorithm is given in Algorithm 1.
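
Before turning to Algorithm 1, the one-shot fit itself is essentially an ordinary least-squares solve. The minimal Python sketch below assumes a design matrix Phi whose columns correspond to the hyperparameters; its exact form follows from the Layer 1-3 activation functions of Section 2 and is not reproduced here.

```python
import numpy as np

def fit_local_pairnet(Phi, y):
    """One-shot hyperparameter fit for a local PairNet (illustrative).

    Phi: (K, p) design matrix, one row per data point in the subspace and one column per
         hyperparameter (assumes the final output is linear in the hyperparameters).
    y:   (K,) target outputs; requires K >= p for a well-determined fit.
    """
    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return theta
```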

1: Input: m, the number of candidate PairNet models
2: Output: the best PairNet model
3: Randomly generate n-dimensional subspaces.
4: Calculate hyperparameters using equation (2) for the subspaces to generate the local PairNets.
5: Evaluate the performance of the local PairNets.
6: Set the best model to be this PairNet, which consists of the local PairNets.
7: for i = 2 to m do
8:     Randomly generate n-dimensional subspaces.
9:     Calculate hyperparameters using equation (2) for the subspaces to generate the local PairNets.
10:     Evaluate the performance of the local PairNets.
11:     If the newly generated PairNet (consisting of its local PairNets) performs better than the current best PairNet, set the best model to be the newly generated PairNet.
12: end for
13: return the best PairNet model
Algorithm 1 PairNet Model Selection Algorithm with Fast Hyperparameter Optimization
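
Algorithm 1 amounts to a random search over candidate partitions. The sketch below illustrates that loop using the illustrative `assign_subspaces` and `fit_local_pairnet` helpers from the earlier sketches; the 2..6 interval range, the linear-plus-bias stand-in design matrix, and the use of training MSE as the performance measure are all assumptions, and X, y are numpy arrays.

```python
import numpy as np

def select_pairnet(X, y, n_candidates, rng=None):
    """Illustrative random-search sketch of Algorithm 1 over candidate partitions."""
    rng = rng or np.random.default_rng(0)
    best_mse, best_model = np.inf, None
    for _ in range(n_candidates):
        # Randomly choose how many intervals to use per input (the 2..6 range is an assumption).
        n_intervals = rng.integers(2, 7, size=X.shape[1]).tolist()
        ids = assign_subspaces(X, n_intervals)              # illustrative helper from Section 3
        local_nets, preds = {}, np.zeros(len(y))
        for sid in np.unique(ids):
            mask = ids == sid
            # Stand-in design matrix (linear-plus-bias features); a real PairNet would build
            # its features from the Layer 1-3 activation functions of Section 2.
            Phi = np.column_stack([X[mask], np.ones(int(mask.sum()))])
            theta = fit_local_pairnet(Phi, y[mask])         # one-shot least-squares fit
            local_nets[sid] = theta
            preds[mask] = Phi @ theta
        mse = float(np.mean((preds - y) ** 2))              # performance measured here as training MSE
        if mse < best_mse:
            best_mse, best_model = mse, (n_intervals, local_nets)
    return best_model, best_mse
```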

4 Simulation Results

To compare a traditional ANN with the PairNet, three simulations using three different functions were performed. The first 3-input-1-output benchmark function [20-23] is given below:

(3)

The second 3-input-1-output function is given below:

(4)

The third 3-input-1-output function is given below:

(5)

Three training data sets are generated by the three functions shown in equations (3), (4), and (5), with the three inputs sampled at regular steps over their ranges. Three testing data sets are generated by the same three functions using a different sampling of the same input ranges.
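
A data set of this kind can be generated along the following lines; the placeholder function f3, the range, and the step are illustrative stand-ins, since the exact benchmark formulas and sampling values are not reproduced here.

```python
import numpy as np

def grid_dataset(f, start, stop, step):
    """Evaluate a 3-input function f on a regular grid (placeholder range and step values)."""
    axis = np.arange(start, stop + 1e-9, step)
    x1, x2, x3 = np.meshgrid(axis, axis, axis, indexing="ij")
    X = np.column_stack([x1.ravel(), x2.ravel(), x3.ravel()])
    return X, f(X[:, 0], X[:, 1], X[:, 2])

# Hypothetical usage: training and testing sets come from different samplings of the same ranges.
# X_tr, y_tr = grid_dataset(f3, 1.0, 6.0, 0.5)
# X_te, y_te = grid_dataset(f3, 1.25, 5.75, 0.5)
```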

For the simulations, the best 20-layer ANN was selected from five random 20-layer ANNs using ReLU activations and 500 training epochs, and the best PairNet was selected from five random PairNets, each built on randomly partitioned 3-dimensional subspaces with random numbers of intervals per input. The results in Table 1 indicate that the PairNets outperformed the traditional ANNs in terms of training speed and testing mean squared errors (MSEs).

Method | Function | Training time (sec) | Training MSE | Testing MSE
PairNet | (3) | 3.06 | 0.191 | 0.225
ANN | (3) | 199.3 | 0.022 | 0.249
PairNet | (4) | 3.01 | 0.00075 | 0.00227
ANN | (4) | 192.6 | 0.04214 | 0.02510
PairNet | (5) | 2.84 | 10.930 | 7.7513
ANN | (5) | 277.6 | 86.861 | 66.798
Table 1: Performance comparison between the best ANNs and the best PairNets

Simulation results for the three functions in equations (3), (4), and (5), shown in Table 2, indicate that the more partitioned subspaces a PairNet has, the better it tends to perform in terms of training MSE and testing MSE in most cases. However, a PairNet with more subspaces is not always better than one with fewer subspaces. An important future work is to develop a new high-speed optimization algorithm that finds both the best partitioned subspaces and the optimal hyperparameters for building the best PairNet.

Partitions (N1-N2-N3) | Subspaces | Train MSE (3) | Test MSE (3) | Train MSE (4) | Test MSE (4) | Train MSE (5) | Test MSE (5)
2-2-2 | 8 | 1.926 | 0.940 | 0.1713 | 0.1325 | 258.0 | 148.4
2-3-4 | 24 | 0.857 | 0.673 | 0.1091 | 0.0903 | 224.3 | 132.4
3-3-3 | 27 | 0.939 | 0.606 | 0.0348 | 0.0302 | 78.30 | 46.39
3-4-5 | 60 | 0.444 | 0.624 | 0.0253 | 0.0242 | 82.60 | 44.47
4-4-4 | 64 | 0.534 | 0.702 | 0.0111 | 0.0160 | 37.60 | 37.60
4-5-6 | 120 | 0.245 | 0.426 | 0.0065 | 0.0122 | 25.94 | 35.82
5-5-5 | 125 | 0.291 | 0.563 | 0.0041 | 0.0085 | 14.39 | 23.27
6-6-6 | 216 | 0.168 | 0.245 | 0.0018 | 0.0030 | 7.966 | 7.300
Table 2: Performance analysis of the PairNets on different partitioned subspaces (training and testing MSEs for the functions in equations (3), (4), and (5); N1, N2, N3 are the numbers of intervals for the three inputs)

5 Conclusions

The new shallow 4-layer PairNet can be trained very quickly with only one epoch, since its hyperparameters are directly optimized in one step by simply solving a system of linear equations with the multivariate least squares fitting method. Unlike gradient descent training algorithms and other training algorithms such as genetic algorithms, the new training algorithm with direct hyperparameter computation trains the PairNet quickly because it does not require slow training over a large number of epochs. Initial simulation results show that the shallow PairNet is much faster than traditional ANNs. In terms of accuracy, the PairNet may not always achieve the lowest training MSE, but it can achieve lower testing MSEs than traditional ANNs.

In addition, the divide-and-conquer approach used by Algorithm 1 is effective and efficient for building local PairNet models on local n-dimensional subspaces. For big data mining applications, partitioning a big data space into many small data subspaces is useful, since each local PairNet covering a small data subspace is built more quickly, using fewer data points, than a global PairNet covering the whole big data space.

6 Future Work

More robust simulations with much more complex data sets and more inputs will be done to further evaluate the PairNet and to further compare it with traditional ANNs. Additionally, a new PairNet with a new activation function for the Layer-4 neuron will be created for classification applications and evaluated on commonly used benchmark classification problems. The PairNet can also be optimized to reduce the training and testing MSEs through model selection that optimizes the partitioned local n-dimensional subspaces.

Although the PairNet is a shallow neural network, it is actually a wide neural network if n is large, because both the second layer and the third layer have 2^n neurons while the first layer has 2n neurons. Thus, the PairNet suffers from the curse of dimensionality; however, we will develop advanced divide-and-conquer methods to address it. The preliminary simulations applied a random data partitioning method to divide a whole n-dimensional space into many n-dimensional subspaces. More intelligent data partitioning methods will be created to build more effective local PairNets on optimized n-dimensional subspaces.

A significant future work is to develop more effective and faster hyperparameter optimization algorithms using parallel computing methods to find the best high-speed PairNet model, with ideal activation functions, on optimized n-dimensional subspaces for various applications in real-time machine learning and big data mining.

References

[1] Werbos, P. (1974) Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences. PhD thesis, Harvard University.

[2] Werbos, P. (1990) Backpropagation through time: what it does and how to do it. Proceedings of the IEEE 78(10): 1550–1560.

[3] Rumelhart, D. E., Hinton, G. E. & Williams, R. J. (1986) Learning representations by back-propagating errors. Nature 323: 533–536.

[4] LeCun, Y., Bengio, Y. & Hinton, G.E. (2015) Deep learning. Nature 521, pp. 436–444.

[5] Krizhevsky, A., Sutskever, I. & Hinton, G.E. (2012) ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25, pp. 1097–1105. Cambridge, MA: MIT Press.

[6] He, K., Zhang, X., Ren, S. & Sun, J.  (2016) Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778.

[7] Esteva, A., Kuprel, B., Novoa, R.A., Ko, J., Swetter, S.M., Blau, H.M. & Thrun, S. (2017) Dermatologist-level classification of skin cancer with deep neural networks. Nature 542(7639):115–118.

[8] Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed S., Anguelov D., Erhan, D., Vanhoucke, V. & Rabinovich, A. (2015) Going Deeper with Convolutions. 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1–9.

[9] Szegedy, C., Ioffe, S., Vanhoucke, V. & Alemi, A. (2017) Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17), pp. 4278–4284.

[10] Sun, Y., Xue, B., Zhang, M. & Yen, G. (2018) Automatically Designing CNN Architectures Using Genetic Algorithm for Image Classification. [Online]. Available: https://arxiv.org/pdf/1808.03818.pdf.

[11] You, Z.  & Pu, Y.  (2015) The Genetic Convolutional Neural Network Model Based on Random Sample. International Journal of u- and e- Service, Science and Technology, pp. 317–326.

[12] Ijjina, E. P.  & Mohan, C. K. (2016) Human action recognition using genetic algorithms and convolutional neural networks. Pattern Recognition, vol. 59, pp. 199–212.

[13] Fujino, S., Hatanaka, T., Mori, N. & Matsumoto, K. (2017) The evolutionary deep learning based on deep convolutional neural network for the anime storyboard recognition. International Symposium on Distributed Computing and Artificial Intelligence (DCAI 2017), pp. 278–285.

[14] Bochinski, E., Senst, T. & Sikora, T. (2017) Hyper-parameter optimization for convolutional neural network committees based on evolutionary algorithms. 2017 IEEE International Conference on Image Processing (ICIP 2017), pp. 3924–3928.

[15] Tian, H., Pouyanfar, S., Chen, J., Chen, S.-C. & Iyengar, S. S. (2018) Automatic Convolutional Neural Network Selection for Image Classification Using Genetic Algorithms. 2018 IEEE International Conference on Information Reuse and Integration (IRI 2018), pp. 444–451.

[16] Loussaief, S. & Abdelkrim, A. (2018) Convolutional Neural Network Hyper-Parameters Optimization based on Genetic Algorithms. International Journal of Advanced Computer Science and Applications, vol. 9, no. 10, pp. 252–266.

[17] Baldominos, A., Saez, Y.  & Isasi, P.  (2018) Model Selection in Committees of Evolved Convolutional Neural Networks using Genetic Algorithms. Intelligent Data Engineering and Automated Learning – IDEAL 2018, pp. 364–373.

[18] Sinha, T., Haidar, A.  & Verma, B.  (2018) Particle Swarm Optimization Based Approach for Finding Optimal Values of Convolutional Neural Network Parameters. 2018 IEEE Congress on Evolutionary Computation (CEC), pp. 1–6.

[19] Ayumi, V., Rasdi Rere, L. M., Fanany, M. I. & Arymurthy, A. M. (2016) Optimization of convolutional neural network using microcanonical annealing algorithm. 2016 International Conference on Advanced Computer Science and Information Systems (ICACSIS), pp. 506–511.

[20] Kondo, T. (1986) Revised GMDH algorithm estimating degree of the complete polynomial. Trans. Soc. Instrument and Contr. Engineers, vol. 22, no. 9, pp. 928–934.

[21] Sugeno M. & Kang G. T. (1988) Structure identification of fuzzy model, Fuzzy Sets Syst., vol. 28, pp. 15–33.

[22] Takagi, H. & Hayashi, I. (1991) NN-driven fuzzy reasoning. Int. J. Approximate Reasoning, vol. 5, no. 3, pp. 191–212.

[23] Jang, J.S.R. (1993) ANFIS: adaptive-network-based fuzzy inference system. IEEE Transactions on Systems, Man, and Cybernetics 23(3): 665–685.