The extreme learning machine (ELM) [1] is an effective solution for single-hidden-layer feedforward networks (SLFNs) due to its unique characteristics, i.e., extremely fast learning speed, good generalization performance, and universal approximation capability [2]. Thus the ELM has been widely applied in classification and regression [3].
The incremental ELM proposed in [2] achieves the universal approximation capability by adding hidden nodes one by one. However, it only updates the output weight for the newly added hidden node, and freezes the output weights of the existing hidden nodes. Accordingly, those output weights are no longer the optimal least-squares solution of the standard ELM algorithm. The inverse-free algorithm was then proposed in [4] to update the output weights of the added node and the existing nodes simultaneously, so that the updated weights are identical to the optimal solution of the standard ELM algorithm. The ELM algorithm in [4] is based on an inverse-free algorithm to compute the regularized pseudo-inverse, which was deduced from an inverse-free recursive algorithm to update the inverse of a Hermitian matrix.
Before the recursive algorithm to update the inverse was utilized in [4], it had been mentioned in the previous literature [5, 6, 7, 8, 9], while its improved version had been utilized in [9, 10]. Accordingly, from the improved recursive algorithm [9, 10], we deduce a more efficient inverse-free algorithm to update the regularized pseudo-inverse, from which we develop the proposed ELM algorithm 1. Moreover, the proposed ELM algorithm 2 computes the output weights directly from the updated inverse, to further reduce the computational complexity by avoiding the calculation of the regularized pseudo-inverse. Lastly, instead of updating the inverse, the proposed ELM algorithm 3 updates the factors of the inverse by the inverse factorization proposed in [11], since the recursive algorithm to update the inverse may introduce numerical instabilities in processor units with finite precision, which occurs only after a very large number of iterations [12].
This correspondence is organized as follows. Section II describes the ELM model. Section III introduces the existing inverse-free ELM algorithm [4]. In Section IV, we deduce the proposed inverse-free ELM algorithms, and compare the expected computational complexities of the existing and proposed algorithms. Section V evaluates the existing and proposed algorithms by numerical experiments. Finally, we draw conclusions in Section VI.
II. Architecture of the ELM
In the ELM model, the $i$-th input node, the $j$-th hidden node, and the $k$-th output node can be denoted as $x_i$, $h_j$, and $y_k$, respectively, while all the input nodes, hidden nodes, and output nodes can be denoted as $\mathbf{x} \in \Re^{p}$, $\mathbf{h} \in \Re^{n}$, and $\mathbf{y} \in \Re^{m}$, respectively. Accordingly the ELM model can be represented in the compact form
$$\mathbf{y} = \mathbf{W} f(\mathbf{A}\mathbf{x} + \mathbf{b}), \tag{1}$$
where $\mathbf{W} \in \Re^{m \times n}$ denotes the output weights, $\mathbf{A} \in \Re^{n \times p}$ denotes the input weights, $\mathbf{b} \in \Re^{n}$ denotes the biases of the hidden nodes, and the activation function $f(\bullet)$ is entry-wise, i.e., $f(\mathbf{M})_{ij} = f(m_{ij})$ for a matrix input $\mathbf{M}$. In (1), the activation function can be chosen as linear, sigmoid, Gaussian models, etc.
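To make the model concrete, the following is a minimal NumPy sketch of the forward pass in (1); the function name and the default activation are our own illustrative choices under the notation above, not part of the original algorithm.

```python
import numpy as np

def elm_forward(W, A, b, X, f=np.tanh):
    """Forward pass of the ELM model (1) for N inputs stacked in X (p x N).

    A (n x p) and b (n,) are the randomly fixed input weights and biases,
    W (m x n) is the adjustable output weight, and f acts entry-wise.
    """
    H = f(A @ X + b[:, None])  # hidden-layer outputs, n x N
    return W @ H               # network outputs, m x N
```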
Assume there are totally $N$ distinct training samples, and let $\mathbf{x}_i \in \Re^{p}$ and $\mathbf{d}_i \in \Re^{m}$ denote the $i$-th training input and the corresponding $i$-th training output, respectively, where $i = 1, 2, \cdots, N$. Then the input sequence and the output sequence in the training set can be represented as $\mathbf{X} = [\mathbf{x}_1, \mathbf{x}_2, \cdots, \mathbf{x}_N]$ and $[\mathbf{d}_1, \mathbf{d}_2, \cdots, \mathbf{d}_N]$, respectively.
In an ELM, only the output weight $\mathbf{W}$ is adjustable, while $\mathbf{A}$ (i.e., the input weights) and $\mathbf{b}$ (i.e., the biases of the hidden nodes) are randomly fixed. Denote the desired output as $\mathbf{D} = [\mathbf{d}_1, \mathbf{d}_2, \cdots, \mathbf{d}_N] \in \Re^{m \times N}$, and denote the hidden-layer output matrix computed from the training inputs as $\mathbf{H} = f(\mathbf{A}\mathbf{X} + \mathbf{b}\mathbf{1}_N^T) \in \Re^{n \times N}$. Then an ELM simply minimizes the estimation error
$$\mathbf{E} = \mathbf{D} - \mathbf{W}\mathbf{H}$$
by finding a least-squares solution $\mathbf{W}$ for the problem
$$\min_{\mathbf{W}} \left\| \mathbf{D} - \mathbf{W}\mathbf{H} \right\|_F,$$
where $\left\| \bullet \right\|_F$ denotes the Frobenius norm.
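For reference, here is a minimal sketch of the corresponding batch training step under the notation above; the ridge term $k\mathbf{I}$ is the regularization factor discussed later, and the helper name is ours.

```python
import numpy as np

def elm_train(A, b, X, D, k=1e-3, f=np.tanh):
    """Solve min_W ||D - W H||_F for the regularized ELM.

    H = f(A X + b 1^T) is the n x N hidden-layer output matrix, and the
    output weight is W = D H^T (H H^T + k I)^{-1}, where the small ridge
    term k I keeps the n x n matrix well conditioned.
    """
    H = f(A @ X + b[:, None])
    R = H @ H.T + k * np.eye(H.shape[0])
    return np.linalg.solve(R, H @ D.T).T   # equals D H^T R^{-1} since R = R^T
```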
III. The Existing Inverse-Free ELM Algorithm
In machine learning, it is a common strategy to increase the number of hidden nodes gradually until the desired accuracy is achieved. However, when this strategy is applied to the ELM directly, the matrix inverse operation in (10) for the conventional ELM will be required whenever a few or only one extra hidden node is introduced, and accordingly the algorithm will be computationally prohibitive. Therefore an inverse-free strategy was proposed in [4], to update the output weights incrementally as the hidden nodes are added. In each step, the output weights obtained by the inverse-free algorithm are identical to the solution of the standard ELM algorithm using the inverse operation.
Assume that in the ELM with $n$ hidden nodes, we add one extra hidden node, i.e., the $(n+1)$-th hidden node, which has the input weight row vector $\bar{\mathbf{a}}^T$ and the bias $\bar{b}$. Then from (5) it can be seen that the extra row $\bar{\mathbf{h}}^T = f(\bar{\mathbf{a}}^T \mathbf{X} + \bar{b}\mathbf{1}_N^T)$ needs to be added to $\mathbf{H}$, i.e.,
$$\mathbf{H}_{n+1} = \begin{bmatrix} \mathbf{H}_{n} \\ \bar{\mathbf{h}}^T \end{bmatrix}, \tag{11}$$
where $\mathbf{H}_{n}$ ($\mathbf{H}_{n+1}$) denotes $\mathbf{H}$ for the ELM with $n$ ($n+1$) hidden nodes. In (11) and what follows, we add the overline to emphasize the extra vector or scalar, which is added to the corresponding matrix or vector for the ELM with $n$ hidden nodes.
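A sketch of the growth step (11), assuming our notation; the helper name is illustrative.

```python
import numpy as np

def add_hidden_node(H, a_bar, b_bar, X, f=np.tanh):
    """Append the responses of one new random hidden node to H, as in (11).

    a_bar (p,) and b_bar are the input weight row vector and the bias of
    the added node; H grows from n x N to (n + 1) x N.
    """
    h_bar = f(a_bar @ X + b_bar)   # row of new-node responses, length N
    return np.vstack([H, h_bar])
```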
After $\mathbf{H}$ is updated by (11), the conventional ELM updates the output weights by (10), which involves an inverse operation. To avoid that inverse operation, the algorithm in [4] utilizes an inverse-free algorithm to update the regularized pseudo-inverse of $\mathbf{H}$, from which the output weights are computed by (13). In [4], the regularized pseudo-inverse $\mathbf{H}_{n+1}^{\dagger}$ (i.e., that for the ELM with $n+1$ hidden nodes) is computed from $\mathbf{H}_{n}^{\dagger}$ iteratively by (15), while its newly added column is computed by (16). Then (12) can be written as (14), where the auxiliary quantity satisfying (16) is a column vector with $N$ entries.
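The following self-contained sketch illustrates the key identity behind such inverse-free updates: when the row $\bar{\mathbf{h}}^T$ is appended to $\mathbf{H}$, the bordered matrix $\mathbf{H}_{n+1}\mathbf{H}_{n+1}^T + k\mathbf{I}$ can be inverted from the previous inverse without any new inversion, and the resulting output weight matches the standard ELM solution. The variable names, sizes and the regularization value are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n, m, k = 500, 32, 4, 1e-3
H = rng.standard_normal((n, N))        # hidden outputs for n nodes
h_bar = rng.standard_normal(N)         # responses of the added node
D = rng.standard_normal((m, N))        # desired outputs

# Inverse for n nodes, then its bordered (block-lemma) update for n + 1 nodes.
R_inv = np.linalg.inv(H @ H.T + k * np.eye(n))
r, rho = H @ h_bar, h_bar @ h_bar + k
t = R_inv @ r
s = 1.0 / (rho - r @ t)                # reciprocal of the Schur complement
R_inv_new = np.block([[R_inv + s * np.outer(t, t), -s * t[:, None]],
                      [-s * t[None, :], np.array([[s]])]])

# The output weight from the updated inverse equals the standard ELM solution.
H_new = np.vstack([H, h_bar])
W_rec = D @ H_new.T @ R_inv_new
W_std = D @ H_new.T @ np.linalg.inv(H_new @ H_new.T + k * np.eye(n + 1))
print(np.allclose(W_rec, W_std))       # True
```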
IV. Proposed Inverse-Free ELM Algorithms
Actually the inverse-free recursive algorithm defined by (22) and (23c) had been mentioned in the previous literature [5, 6, 7, 8, 9], before it was deduced in [4] by utilizing the Sherman-Morrison formula and the Schur complement. That inverse-free recursive algorithm can be regarded as an application of the block matrix inverse lemma [5, p. 30], and was called the lemma for inversion of a block-partitioned matrix [6, Ch. 14.12], [7, equation (16)]. To develop multiple-input multiple-output (MIMO) detectors, the inverse-free recursive algorithm was applied in [7, 8], and its improved version was utilized in [9, 10].
IV-A. Derivation of the Proposed ELM Algorithms
respectively, where the intermediate quantities can be computed by (27), and the remaining entries are computed from those already obtained for the ELM with $n$ hidden nodes, as in (25) and (26). The derivation of (30b) is also given in Appendix A.
[Table II: for each dataset and kernel and for several hidden node numbers, the weight errors and the output errors (on the training and testing data) of Algorithms 1, 2 and 3, together with the testing MSE.]
Since the processor units are limited in precision, the recursive algorithm utilized to update the inverse $\mathbf{R}^{-1}$ (where $\mathbf{R} = \mathbf{H}\mathbf{H}^T + k\mathbf{I}$ with the regularization factor $k$) may introduce numerical instabilities, which occurs only after a very large number of iterations [12]. Thus instead of the inverse $\mathbf{R}^{-1}$, we can also update the inverse factors [11] of $\mathbf{R}$, since usually the factorization is numerically stable [15]. The inverse factors include the unit upper-triangular $\mathbf{U}$ and the diagonal $\boldsymbol{\Lambda}$, which satisfy
$$\mathbf{R}^{-1} = \mathbf{U} \boldsymbol{\Lambda} \mathbf{U}^{T}. \tag{31}$$
From (31) we can deduce
$$\mathbf{R} = \mathbf{U}^{-T} \boldsymbol{\Lambda}^{-1} \mathbf{U}^{-1} = \mathbf{L} \boldsymbol{\Lambda}^{-1} \mathbf{L}^{T},$$
where the lower-triangular $\mathbf{L} = \mathbf{U}^{-T}$ is the conventional factor [15] of $\mathbf{R}$. The inverse factors can be computed from $\mathbf{R}$ directly by the inverse factorization in [11], which is applied recursively through (33b) and (34b).
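As a concrete reference, one way to obtain such inverse factors is from the conventional LDL factorization, as in the sketch below. This is only an illustration of the factorization (31), not the recursive inverse factorization of [11]; the SciPy-based route and the names are our own choices.

```python
import numpy as np
from scipy.linalg import ldl, solve_triangular

def inverse_ldl_factors(R):
    """Inverse factors of a symmetric positive definite R, as in (31):
    a unit upper-triangular U and a diagonal Lam with R^{-1} = U Lam U^T.

    Obtained here from the conventional factorization R = L D L^T; the
    algorithm of [11] instead updates U and Lam without triangular inverses.
    """
    L, Dm, _ = ldl(R, lower=True)     # no pivoting needed for SPD R
    U = solve_triangular(L, np.eye(L.shape[0]),
                         lower=True, unit_diagonal=True).T
    Lam = np.diag(1.0 / np.diag(Dm))  # Lam = D^{-1}
    return U, Lam

# Quick check on an SPD matrix R:
#   U, Lam = inverse_ldl_factors(R)
#   np.allclose(U @ Lam @ U.T, np.linalg.inv(R))  ->  True
```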
IV-B. Summary and Complexity Analysis of ELM Algorithms
First let us summarize the existing and proposed inverse-free ELM algorithms, which all compute the output by (6) and the estimation error by (7). In (6) and (7), the output weight $\mathbf{W}$ is required.
The existing inverse-free ELM algorithm [4] uses (15), (16) and (14) to update the regularized pseudo-inverse, from which the output weight is computed by (13). The proposed Algorithm 1 uses (21), (27), (25), (26) and (14) to update the regularized pseudo-inverse, from which the output weight is computed by (29b) and (28). The proposed Algorithm 2 uses (21), (24c) and (22) to update the inverse $\mathbf{R}^{-1}$, from which the output weight is computed by (30b) and (28). The proposed Algorithm 3 uses (21), (34b) and (33b) to update the factors of $\mathbf{R}^{-1}$, from which the output weight is computed by (30b), (36) and (28).
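To tie the summary together, here is a minimal end-to-end sketch in the spirit of Algorithm 2: it grows the network one node at a time, keeps $\mathbf{R}^{-1}$ updated recursively instead of re-inverting, and reads the output weight off the updated inverse. All names and the specific update form are our illustrative choices under the notation assumed above.

```python
import numpy as np

def incremental_elm(X, D, n_max, k=1e-3, f=np.tanh, seed=0):
    """Grow an ELM one hidden node at a time while updating R^{-1}
    recursively (a sketch in the spirit of Algorithm 2)."""
    rng = np.random.default_rng(seed)
    p, N = X.shape
    # Start from a single random hidden node.
    A, b = rng.standard_normal((1, p)), rng.standard_normal(1)
    H = f(A @ X + b[:, None])
    R_inv = np.linalg.inv(H @ H.T + k * np.eye(1))
    for _ in range(n_max - 1):
        a_bar, b_bar = rng.standard_normal(p), rng.standard_normal()
        h_bar = f(a_bar @ X + b_bar)
        r, rho = H @ h_bar, h_bar @ h_bar + k
        t = R_inv @ r
        s = 1.0 / (rho - r @ t)        # reciprocal Schur complement
        R_inv = np.block([[R_inv + s * np.outer(t, t), -s * t[:, None]],
                          [-s * t[None, :], np.array([[s]])]])
        H = np.vstack([H, h_bar])
        A, b = np.vstack([A, a_bar]), np.append(b, b_bar)
    W = D @ H.T @ R_inv                # output weight from the updated inverse
    return W, A, b
```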
In the remainder of this subsection, we compare the expected flops (floating-point operations) of the existing ELM algorithm in [4] and the proposed ELM algorithms. Obviously $(2n-1)mp \approx 2mnp$ flops are required to multiply an $m \times n$ matrix by an $n \times p$ matrix, and $mn$ flops are required to sum two matrices of size $m \times n$.
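As a small illustration of these counts (the helper names are ours):

```python
def matmul_flops(m, n, p):
    """Flops to multiply an (m x n) matrix by an (n x p) matrix:
    m*p result entries, each needing n multiplications and n - 1 additions."""
    return m * p * (2 * n - 1)

def matsum_flops(m, n):
    """Flops to sum two (m x n) matrices: one addition per entry."""
    return m * n
```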
In Table I, we compare the flops of the existing ELM algorithm [4] and the proposed ELM Algorithms 1, 2 and 3. As in [4], the flops of the existing ELM algorithm omit the lower-order terms for simplicity, since usually the ELM has large $N$ (the number of training samples) and $n$ (the number of hidden nodes). The flops of the proposed ELM algorithms omit the terms that are only linear in $N$ or $n$. Since usually $N \gg n$, it can easily be seen from Table I that with respect to the existing ELM algorithm, the proposed ELM Algorithms 1, 2 and 3 require only a fraction of the flops.
Notice that in the proposed Algorithm 1, the intermediate result computed in (27) can be utilized in (26) and (29a). The dominant computational load of the proposed Algorithm 1 comes from (21), (27), (25) and (29b). Moreover, in the proposed Algorithms 2 and 3, the dominant computational load comes from (21) and (30b).
V. Numerical Experiments
We follow the simulations in [4] to compare the existing inverse-free ELM algorithm and the proposed inverse-free ELM algorithms, on the MATLAB software platform under a Microsoft Windows server. We utilize a fivefold cross validation to partition the datasets into training and testing sets. To measure the performance, we employ the mean squared error (MSE) for regression problems, and employ four commonly used indices for classification problems, i.e., the prediction accuracy (ACC), the sensitivity (SN), the precision (PE) and the Matthews correlation coefficient (MCC). Moreover, a regularization factor $k$ is applied to avoid over-fitting.
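For reference, the four classification indices can be computed from the binary confusion matrix as in the following sketch (the helper name is ours; labels are assumed to be in {0, 1}).

```python
import numpy as np

def classification_indices(y_true, y_pred):
    """ACC, SN, PE and MCC for binary labels in {0, 1}."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    acc = (tp + tn) / (tp + tn + fp + fn)
    sn = tp / (tp + fn)                       # sensitivity (recall)
    pe = tp / (tp + fp)                       # precision
    mcc = (tp * tn - fp * fn) / np.sqrt(
        float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return acc, sn, pe, mcc
```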
For the regression problems, we consider the energy efficiency dataset [16], the housing dataset [17], the airfoil self-noise dataset [18], and the physicochemical properties of protein dataset [19]. For those datasets, different activation functions are chosen, which include the Gaussian, sigmoid, sine and triangular functions. As Table IV in [4] does, Table II shows the regression performance. In Table II, the weight error and the output error measure the deviations of the output weights and the actual outputs computed by an inverse-free ELM algorithm from those computed by the standard ELM algorithm, respectively. We set a small initial hidden node number, and utilize the existing and proposed inverse-free ELM algorithms to add the hidden nodes one by one till the maximum hidden node number is reached. Table II includes the simulation results for three representative hidden node numbers.
As observed from Table II, after the first iterations the weight errors and the output errors of all the inverse-free algorithms are negligible. For the existing inverse-free ELM algorithm and the proposed Algorithms 1 and 3, the weight error and the output error remain negligible as the number of iterations grows. However, for the proposed Algorithm 2, the weight error and the output error grow gradually with the number of iterations, since the recursive algorithm to update the inverse $\mathbf{R}^{-1}$ introduces numerical instabilities after a very large number of iterations [12]. Overall, the standard ELM, the existing inverse-free ELM algorithm and the proposed ELM Algorithms 1, 2 and 3 achieve the same testing MSEs, which are listed in the last column of Table II.
[Table III: speedups in training time over the existing inverse-free ELM algorithm, for the two hidden node numbers considered.]
|Proposed Alg. 1||3.41||3.77|
|Proposed Alg. 2||45.50||44.04|
|Proposed Alg. 3||26.28||31.04|
The speedups in training time of the proposed ELM Algorithms 1, 2 and 3 over the existing inverse-free ELM algorithm are shown in Table III, where we add just one node to reach each of two given numbers of hidden nodes (one per column), and we repeat the simulations to compute the average training time. Each speedup is the ratio between the training time of the existing ELM algorithm and that of the proposed ELM algorithm. As observed from Table III, all the proposed algorithms significantly accelerate the existing inverse-free ELM algorithm.
For the classification problems, we consider the MAGIC Gamma telescope dataset [20], the musk dataset [21], the adult dataset [22] and the diabetes dataset [19]. For each dataset, five activation functions are simulated, i.e., the Gaussian, sigmoid, hardlim, triangular and sine functions. In the simulations, the standard ELM, the existing inverse-free ELM algorithm and the proposed ELM Algorithms 1, 2 and 3 achieve the same performance, which is listed in Table IV.
Lastly, in Table V we simulate the existing and proposed algorithms on the Modified National Institute of Standards and Technology (MNIST) dataset [23] with 60000 training images and 10000 testing images, to show the performance on big data. To obtain the testing accuracy, we set a small initial hidden node number, and utilize the existing and proposed ELM algorithms to add hidden nodes one by one till the maximum hidden node number is reached. To obtain the speedups of the proposed algorithms over the existing algorithm, we compare the training time to reach the maximum node number by adding one node, and repeat the simulations to compute the average training time.
As observed from Table V, the existing and proposed inverse-free ELM algorithms bear the same testing accuracy, while all the proposed algorithms significantly accelerate the existing inverse-free ELM algorithm. Moreover, from Table V and Table III, it can be seen that usually the proposed Algorithm 2 is faster than the proposed Algorithm 3, and the proposed Algorithm 3 is faster than the proposed Algorithm 1.
VI. Conclusion
To reduce the computational complexity of the existing inverse-free ELM algorithm [4], in this correspondence we utilize the improved recursive algorithm [9, 10] to deduce the proposed ELM Algorithms 1, 2 and 3. The proposed Algorithm 1 includes a more efficient inverse-free algorithm to update the regularized pseudo-inverse. To further reduce the computational complexity, the proposed Algorithm 2 computes the output weights directly from the updated inverse $\mathbf{R}^{-1}$, and avoids computing the regularized pseudo-inverse. Lastly, instead of updating the inverse $\mathbf{R}^{-1}$, the proposed Algorithm 3 updates the factors of the inverse by the inverse factorization [11], since the inverse-free recursive algorithm to update the inverse introduces numerical instabilities after a very large number of iterations [12]. With respect to the existing ELM algorithm, the proposed ELM Algorithms 1, 2 and 3 are expected to require only a fraction of the flops. In the numerical experiments, the standard ELM, the existing inverse-free ELM algorithm and the proposed ELM Algorithms 1, 2 and 3 achieve the same performance in regression and classification, while all the proposed algorithms significantly accelerate the existing inverse-free ELM algorithm. Moreover, in the simulations, usually the proposed Algorithm 2 is faster than the proposed Algorithm 3, and the proposed Algorithm 3 is faster than the proposed Algorithm 1.
Appendix A
[Derivations of (29b) and (30b): they follow by substituting (24b) and then (21) into the corresponding expressions for the output weight; the intermediate equations are omitted here.]
-  G. B. Huang, Q. Y. Zhu, and C. K. Siew, “Extreme learning machine: Theory and applications”, Neurocomputing, vol. 70, nos. 1-3, pp. 489-501, Dec. 2006.
-  G. B. Huang, L. Chen, and C. K. Siew, “Universal approximation using incremental constructive feedforward networks with random hidden nodes”, IEEE Trans. Neural Netw., vol. 17, no. 4, pp. 879-892, Jul. 2006.
-  G. B. Huang, H. Zhou, X. Ding, and R. Zhang, “Extreme learning machine for regression and multiclass classification,” IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 42, no. 2, pp. 513-529, Apr. 2012.
-  S. Li, Z. You, H. Guo, X. Luo, and Z. Zhao, “Inverse-Free Extreme Learning Machine With Optimal Information Updating”, IEEE Trans. on Cybernetics, vol. 46, no. 5, pp. 1229-1241, May 2016.
-  H. Lütkepohl, Handbook of Matrices, New York: John Wiley & Sons, 1996.
-  T. K. Moon and W. C. Stirling, Mathematical Methods and Algorithms for Signal Processing, Prentice Hall, 2000.
-  L. Szczecinski and D. Massicotte, “Low complexity adaptation of MIMO MMSE receivers, implementation aspects”, Proc. Global Commun. Conf. (Globecom’05), St. Louis, MO, USA, Nov. 2005.
-  Y. Shang and X. G. Xia, “On fast recursive algorithms for V-BLAST with optimal ordered SIC detection”, IEEE Trans. Wireless Commun., vol. 8, pp. 2860-2865, June 2009.
-  H. Zhu, W. Chen, and F. She, “Improved fast recursive algorithms for V-BLAST and G-STBC with novel efficient matrix inversions”, Proc. IEEE Int. Conf. Commun., pp. 211-215, 2009.
-  H. Zhu, W. Chen, B. Li, and F. Gao, “A Fast Recursive Algorithm for G-STBC”, IEEE Trans. on Commun., vol. 59, no. 8, Aug. 2011.
-  H. Zhu, W. Chen, and B. Li, “Efficient Square-Root and Division Free Algorithms for Inverse Factorization and the Wide-Sense Givens Rotation with Application to V-BLAST”, IEEE Vehicular Technology Conference (VTC), 2010 Fall, 6-9 Sept., 2010.
-  J. Benesty, Y. Huang, and J. Chen, “A fast recursive algorithm for optimum sequential signal detection in a BLAST system”, IEEE Trans. Signal Process., pp. 1722-1730, July 2003.
-  Y. Miche, M. van Heeswijk, P. Bas, O. Simula, and A. Lendasse, “TROP-ELM: A double-regularized ELM using LARS and Tikhonov regularization”, Neurocomputing, vol. 74, no. 16, pp. 2413-2421, 2011.
-  Y. Miche et al., “OP-ELM: Optimally pruned extreme learning machine”, IEEE Trans. Neural Netw., vol. 21, no. 1, pp. 158-162, Jan. 2010.
-  G. H. Golub and C. F. Van Loan, Matrix Computations, third ed. Baltimore, MD: Johns Hopkins Univ. Press, 1996.
-  A. Tsanas and A. Xifara, “Accurate quantitative estimation of energy performance of residential buildings using statistical machine learning tools”, Energy Build., vol. 49, pp. 560-567, Jun. 2012.
-  R. Setiono and H. Liu, “A connectionist approach to generating oblique decision trees”, IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 29, no. 3, pp. 440-444, Jun. 1999.
-  R. López-González, “Neural networks for variational problems in engineering”, Ph.D. dissertation, Dept. Comput. Lang. Syst., Tech. Univ. Catalonia, Barcelona, Spain, Sep. 2008.
-  M. Lichman, UCI Machine Learning Repository, School Inf. Comput. Sci., Univ. California, Irvine, CA, USA, 2013. [Online]. Available: http://archive.ics.uci.edu/ml
-  R. K. Bock et al., “Methods for multidimensional event classification: A case study using images from a Cherenkov gamma-ray telescope”, Nucl. Instrum. Methods Phys. Res. A, vol. 516, pp. 511-528, Jan. 2004.
-  T. G. Dietterich, R. H. Lathrop, and T. L. Perez, “Solving the multiple instance problem with axis-parallel rectangles”, Artif. Intell., vol. 89, nos.1-2, pp. 31-71, 1997.
-  R. Kohavi, “Scaling up the accuracy of Naive-Bayes classifiers: A decision-tree hybrid”, Proc. 2nd Int. Conf. Knowl. Disc. Data Min., Portland, OR, USA, 1996, pp. 202-207.
-  Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition”, Proc. IEEE, vol. 86, no. 11, pp. 2278-2324, Nov. 1998.