Efficient Inverse-Free Algorithms for Extreme Learning Machine Based on the Recursive Matrix Inverse and the Inverse LDL' Factorization

11/12/2019, by Hufei Zhu et al., Shenzhen University

The inverse-free extreme learning machine (ELM) algorithm proposed in [4] was based on an inverse-free algorithm to compute the regularized pseudo-inverse, which was deduced from an inverse-free recursive algorithm to update the inverse of a Hermitian matrix. Before that recursive algorithm was applied in [4], its improved version had been utilized in the previous literature [9], [10]. Accordingly, from the improved recursive algorithm [9], [10], we deduce a more efficient inverse-free algorithm to update the regularized pseudo-inverse, from which we develop the proposed inverse-free ELM algorithm 1. Moreover, the proposed ELM algorithm 2 further reduces the computational complexity: it computes the output weights directly from the updated inverse and avoids computing the regularized pseudo-inverse. Lastly, instead of updating the inverse, the proposed ELM algorithm 3 updates the LDL^T factors of the inverse by the inverse LDL^T factorization [11], to avoid the numerical instabilities that can occur after a very large number of iterations [12]. With respect to the existing ELM algorithm, the proposed ELM algorithms 1, 2 and 3 are expected to require only (8+3)/M, (8+1)/M and (8+1)/M of the complexity, respectively, where M is the output node number. In the numerical experiments, the standard ELM, the existing inverse-free ELM algorithm and the proposed ELM algorithms 1, 2 and 3 achieve the same performance in regression and classification, while all three proposed algorithms significantly accelerate the existing inverse-free ELM algorithm.

I Introduction

The extreme learning machine (ELM) [1] is an effective solution for single-hidden-layer feedforward networks (SLFNs) due to its unique characteristics, i.e., extremely fast learning speed, good generalization performance, and universal approximation capability [2]. Thus the ELM has been widely applied in classification and regression [3].

The incremental ELM proposed in [2] achieves the universal approximation capability by adding hidden nodes one by one. However, it only updates the output weight for the newly added hidden node, and freezes the output weights of the existing hidden nodes. Accordingly those output weights are no longer the optimal least-squares solution of the standard ELM algorithm. Then the inverse-free algorithm was proposed in [4] to update the output weights of the added node and the existing nodes simultaneously, and the updated weights are identical to the optimal solution of the standard ELM algorithm. The ELM algorithm in [4] was based on an inverse-free algorithm to compute the regularized pseudo-inverse, which was deduced from an inverse-free recursive algorithm to update the inverse of a Hermitian matrix.

Before the recursive algorithm to update the inverse was utilized in [4], it had been mentioned in the previous literature [5, 6, 7, 8, 9], while its improved version had been utilized in [9, 10]. Accordingly, from the improved recursive algorithm [9, 10], we deduce a more efficient inverse-free algorithm to update the regularized pseudo-inverse, from which we develop the proposed ELM algorithm 1. Moreover, the proposed ELM algorithm 2 computes the output weights directly from the updated inverse, to further reduce the computational complexity by avoiding the calculation of the regularized pseudo-inverse. Lastly, instead of updating the inverse, the proposed ELM algorithm 3 updates the LDL^T factors of the inverse by the inverse LDL^T factorization proposed in [11], since the recursive algorithm to update the inverse may introduce numerical instabilities in processor units with finite precision, which occur only after a very large number of iterations [12].

This correspondence is organized as follows. Section II describes the ELM model. Section III introduces the existing inverse-free ELM algorithm [4]. In Section IV, we deduce the proposed inverse-free ELM algorithms, and compare the expected computational complexities of the existing and proposed algorithms. Section V evaluates the existing and proposed algorithms by numerical experiments. Finally, we draw conclusions in Section VI.

II Architecture of the ELM

In the ELM model, the -th input node, the -th hidden node, and the -th output node can be denoted as , , and , respectively, while all the input nodes, hidden nodes, and output nodes can be denoted as , , and , respectively. Accordingly the ELM model can be represented in a compact form as

(1)

and

(2)

where , , , and the activation function is entry-wise, i.e., for a matrix input . In (1), the activation function can be chosen as linear, sigmoid, Gaussian models, etc.

Assume there are distinct training samples in total, and let and denote the -th training input and the corresponding -th training output, respectively, where . Then the input sequence and the output sequence in the training set can be represented as

(3)

and

(4)

respectively. We can substitute (3) into (1) to obtain

(5)

where is the value sequence of all hidden nodes, and is the Kronecker product [4]. Then we can substitute (5) and (4) into (2) to obtain the actual training output sequence

(6)

In an ELM, only the output weight is adjustable, while (i.e., the input weights) and (i.e., the biases of the hidden nodes) are randomly fixed. Denote the desired output as . Then an ELM simply minimizes the estimation error

(7)

by finding a least-squares solution for the problem

(8)

where denotes the Frobenius norm.

For the problem (8), the unique minimum norm least-squares solution is [1]

(9)

To avoid over-fitting, the popular Tikhonov regularization [13, 14] can be utilized to modify (9) into

(10)

where denotes the regularization factor. Obviously (9) is just the special case of (10) with . Thus in what follows, we only consider (10) for the ELM with Tikhonov regularization.
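To make the regularized solution in (10) concrete, the following is a minimal NumPy sketch of a regularized ELM, assuming the common conventions implied above (hidden-layer output matrix with one row per hidden node and one column per training sample, random input weights and biases, an entry-wise sigmoid activation); the function names, the dimension ordering and the default regularization value are illustrative assumptions, not the paper's.

```python
import numpy as np

def elm_train(X, T, l, lam=1e-3, seed=0):
    """Minimal regularized-ELM sketch.
    X: d x N training inputs, T: M x N desired outputs, l: hidden node number,
    lam: Tikhonov regularization factor (illustrative value)."""
    rng = np.random.default_rng(seed)
    d, N = X.shape
    W = rng.standard_normal((l, d))            # random input weights (fixed)
    b = rng.standard_normal((l, 1))            # random hidden biases (fixed)
    H = 1.0 / (1.0 + np.exp(-(W @ X + b)))     # entry-wise sigmoid, H is l x N
    # Regularized least-squares output weights, cf. (10):
    # beta = T H^T (H H^T + lam I)^{-1}, solved without forming the inverse.
    beta = np.linalg.solve(H @ H.T + lam * np.eye(l), H @ T.T).T   # M x l
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Actual output sequence, cf. (6)."""
    H = 1.0 / (1.0 + np.exp(-(W @ X + b)))
    return beta @ H                            # M x N
```

Setting lam to 0 in the sketch recovers (9) whenever the regularized Gram matrix remains invertible; the paper only considers the regularized case (10).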

III The Existing Inverse-Free ELM Algorithm

In machine learning, it is a common strategy to increase the hidden node number gradually until the desired accuracy is achieved. However, when this strategy is applied to the ELM directly, the matrix inverse operation in (10) for the conventional ELM will be required whenever a few or only one extra hidden node is introduced, and accordingly the algorithm will be computationally prohibitive. Thus an inverse-free strategy was proposed in [4] to update the output weights incrementally as hidden nodes are added. In each step, the output weights obtained by the inverse-free algorithm are identical to the solution of the standard ELM algorithm using the inverse operation.

Assume that in the ELM with hidden nodes, we add one extra hidden node, i.e., the hidden node , which has the input weight row vector and the bias . Then from (5) it can be seen that the extra row needs to be added to , i.e.,

(11)

where () denotes for the ELM with hidden nodes. In , , and what follows, we add the overline to emphasize the extra vector or scalar, which is added to the matrix or vector for the ELM with hidden nodes.

After is updated by (11), the conventional ELM updates the output weights by (10), which involves an inverse operation. To avoid that inverse operation, the algorithm in [4] utilizes an inverse-free algorithm to update

(12)

that is the regularized pseudo-inverse of , and then substitutes (12) into (10) to compute the output weights by

(13)

In [4], (i.e., for the ELM with hidden nodes) is computed from iteratively by

(14)

where

(15)

and , the column of , is computed by

(16)

Let

(17)

and

(18)

Then we can write (12) as

(19)

From (17) we have , into which we substitute (11) to obtain

(20)

where , a column vector with entries, satisfies

(21)

The inverse-free recursive algorithm computes by equations (11), (16), (13) and (14) in [4], which can be written as

(22)

and

(23a)
(23b)
(23c)

respectively. Notice that in (22) and (23c), is a column vector with entries, and is a scalar.

IV Proposed Inverse-Free ELM Algorithms

Actually the inverse-free recursive algorithm defined by (22) and (23c) had been mentioned in the previous literature [5, 6, 7, 8, 9], before it was deduced in [4] by utilizing the Sherman-Morrison formula and the Schur complement. That inverse-free recursive algorithm can be regarded as an application of the block matrix inverse lemma [5, p. 30], and was called the lemma for inversion of a block-partitioned matrix [6, Ch. 14.12], [7, equation (16)]. To develop multiple-input multiple-output (MIMO) detectors, the inverse-free recursive algorithm was applied in [7, 8], and its improved version was utilized in [9, 10].

TABLE I: Comparison of Flops among the Existing and Proposed ELM Algorithms (columns: flops for updating the intermediate results and for updating the output weights; rows: the existing algorithm and the proposed algorithms 1, 2 and 3)

IV-A Derivation of Proposed ELM Algorithms

In the improved version [9, 10], equation (23c) has been simplified into [9, equation (20)]

(24a)
(24b)
(24c)

Accordingly we can utilize (24c) to simplify (16) and (15) into

(25)

and

(26)

respectively, where can be computed by

(27)

Moreover, from (25) and (26) we can deduce an efficient algorithm to update the output weight , i.e.,

(28)

where

(29a)
(29b)

The derivation of (25)-(29b) is in Appendix A.

To further reduce the computational complexity, we can update the unique inverse by (21), (24c) and (22), and update the output weight by (28) where

(30a)
(30b)

are computed from and in . The derivation of (30b) is also in Appendix A.
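As a hedged, self-contained illustration of the idea behind the proposed algorithm 2 (the paper's exact recursion (28)-(30) is not reproduced here), the sketch below appends a hidden node, updates the inverse Q = (H H^T + lam I)^{-1} by the block-inverse step, and then obtains the output weights directly from Q, so the regularized pseudo-inverse is never formed; all names are illustrative.

```python
import numpy as np

def add_node_update_weights(Q, H, T, h, lam):
    """Append hidden-node row h, update Q = inv(H H^T + lam*I) recursively,
    and recompute the output weights as beta = T H^T Q.
    Note: the paper's algorithm 2 updates the weights themselves recursively
    at lower cost; this sketch only shows that no pseudo-inverse is needed."""
    v = H @ h
    b = Q @ v
    tau = 1.0 / (float(h @ h) + lam - v @ b)
    Q_new = np.block([[Q + tau * np.outer(b, b), (-tau * b)[:, None]],
                      [(-tau * b)[None, :],       np.array([[tau]])]])
    H_new = np.vstack([H, h])
    beta_new = (Q_new @ (H_new @ T.T)).T        # M x (l+1) output weights
    return H_new, Q_new, beta_new

# self-check against the batch solution (10)
rng = np.random.default_rng(3)
H = rng.standard_normal((4, 30)); T = rng.standard_normal((2, 30))
h = rng.standard_normal(30); lam = 1e-2
Q = np.linalg.inv(H @ H.T + lam * np.eye(4))
H_new, Q_new, beta_new = add_node_update_weights(Q, H, T, h, lam)
assert np.allclose(beta_new,
                   T @ H_new.T @ np.linalg.inv(H_new @ H_new.T + lam * np.eye(5)))
```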

Dataset + Kernel | Nodes | Weight Error: [4] / Alg. 1 / Alg. 2 / Alg. 3 | Output Error (training): [4] / Alg. 1 / Alg. 2 / Alg. 3 | Output Error (testing): [4] / Alg. 1 / Alg. 2 / Alg. 3 | Testing MSE
Airfoil + Gaussian | 3 | 6e-16 / 8e-16 / 6e-16 / 6e-16 | 8e-15 / 1e-14 / 1e-14 / 1e-14 | 4e-15 / 7e-15 / 5e-15 / 5e-15 | 4.8e-2
Airfoil + Gaussian | 100 | 2e-11 / 3e-11 / 1e-8 / 2e-11 | 5e-12 / 8e-12 / 4e-9 / 5e-12 | 3e-12 / 5e-12 / 2e-9 / 2e-12 | 1.1e-2
Airfoil + Gaussian | 500 | 2e-9 / 6e-10 / 4e-6 / 2e-10 | 1e-10 / 5e-11 / 3e-7 / 2e-11 | 7e-11 / 3e-11 / 2e-7 / 1e-11 | 7.7e-3
Energy + Sigmoid | 3 | 2e-14 / 1e-14 / 1e-14 / 7e-15 | 7e-14 / 5e-14 / 4e-14 / 4e-14 | 3e-14 / 2e-14 / 2e-14 / 2e-14 | 3.0e-2
Energy + Sigmoid | 100 | 3e-11 / 5e-11 / 4e-8 / 2e-11 | 5e-12 / 6e-12 / 5e-9 / 3e-12 | 3e-12 / 4e-12 / 3e-9 / 1e-12 | 5.0e-3
Energy + Sigmoid | 500 | 2e-9 / 3e-10 / 1e-6 / 1e-10 | 1e-10 / 2e-11 / 6e-8 / 7e-12 | 6e-11 / 1e-11 / 4e-8 / 4e-12 | 3.7e-3
Housing + Sine | 3 | 3e-16 / 4e-16 / 7e-16 / 5e-16 | 2e-15 / 3e-15 / 5e-15 / 4e-15 | 1e-15 / 2e-15 / 3e-15 / 2e-15 | 8.6e-2
Housing + Sine | 100 | 2e-12 / 3e-12 / 6e-10 / 1e-12 | 1e-12 / 9e-13 / 3e-10 / 5e-13 | 1e-12 / 3e-12 / 6e-10 / 7e-13 | 7.3e-3
Housing + Sine | 500 | 4e-10 / 6e-11 / 4e-8 / 2e-11 | 5e-11 / 7e-12 / 6e-9 / 3e-12 | 4e-10 / 7e-11 / 4e-8 / 3e-11 | 5.4e-3
Protein + Triangular | 3 | 2e-15 / 3e-15 / 8e-16 / 9e-16 | 5e-14 / 6e-14 / 3e-14 / 3e-14 | 2e-14 / 3e-14 / 1e-14 / 1e-14 | 1.8e-1
Protein + Triangular | 100 | 2e-11 / 2e-11 / 2e-9 / 3e-11 | 4e-11 / 5e-11 / 4e-9 / 6e-11 | 2e-11 / 2e-11 / 2e-9 / 3e-11 | 5.6e-2
Protein + Triangular | 500 | 2e-9 / 1e-9 / 3e-6 / 1e-9 | 1e-9 / 1e-9 / 2e-6 / 1e-9 | 9e-10 / 7e-10 / 1e-6 / 6e-10 | 4.9e-2
TABLE II: Experimental Results of the Existing and Proposed Algorithms for Regression Problems

Since the processor units are limited in precision, the recursive algorithm utilized to update may introduce numerical instabilities, which occur only after a very large number of iterations [12]. Thus instead of the inverse of (i.e., ), we can also update the inverse LDL^T factors [11] of , since usually the LDL^T factorization is numerically stable [15]. The inverse LDL^T factors include the upper-triangular and the diagonal , which satisfy

(31)

From (31) we can deduce

(32)

where the lower-triangular is the conventional factor [15] of .

The inverse LDL^T factors can be computed from directly by the inverse LDL^T factorization in [11], i.e.,

(33a)
(33b)

where

(34a)
(34b)

We can show that in (34a) and in (24b) satisfy

(35)

and in (34b) is equal to in (24a), by substituting (31) into (34a) and (34b), respectively. After updating and , we compute the output weight by (30b),

(36)

and (28), where (36) is deduced by substituting (35) into (30a).
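Since the factors in (31)-(36) do not render above, the following hedged NumPy sketch illustrates the relation between the conventional LDL^T factorization of the regularized Gram matrix R = H H^T + lam I and its inverse LDL^T factors, and shows how the output weights can be obtained from those factors without ever forming the inverse. It builds the factors directly from a Cholesky factorization rather than by the recursive update of [11], and the names R, L, V and Dvec are illustrative.

```python
import numpy as np

def inverse_ldl(H, lam):
    """Return a unit upper-triangular V and a diagonal vector Dvec with
    V @ diag(Dvec) @ V.T = inv(H @ H.T + lam*I), obtained from the
    conventional LDL^T factorization R = L diag(d) L^T via V = inv(L).T."""
    l = H.shape[0]
    R = H @ H.T + lam * np.eye(l)
    C = np.linalg.cholesky(R)          # R = C C^T, C lower-triangular
    d = np.diag(C) ** 2                # conventional LDL^T diagonal of R
    L = C / np.diag(C)                 # unit lower-triangular LDL^T factor of R
    V = np.linalg.inv(L).T             # unit upper-triangular inverse factor
    Dvec = 1.0 / d                     # diagonal of the inverse factorization
    return V, Dvec

def weights_from_factors(V, Dvec, H, T):
    """Output weights beta = T H^T V diag(Dvec) V^T, i.e. (10) evaluated
    through the inverse LDL^T factors instead of the explicit inverse."""
    return (V @ (Dvec[:, None] * (V.T @ (H @ T.T)))).T

# self-check
rng = np.random.default_rng(2)
H = rng.standard_normal((5, 40)); T = rng.standard_normal((3, 40)); lam = 1e-3
V, Dvec = inverse_ldl(H, lam)
R = H @ H.T + lam * np.eye(5)
assert np.allclose(V @ np.diag(Dvec) @ V.T, np.linalg.inv(R))
assert np.allclose(weights_from_factors(V, Dvec, H, T), T @ H.T @ np.linalg.inv(R))
```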

IV-B Summary and Complexity Analysis of ELM Algorithms

Firstly let us summarize the existing and proposed inverse-free ELM algorithms, which all compute the output by (6), and compute the estimation error by (7). In (6) and (7), the output weight is required.

The existing inverse-free ELM algorithm [4] uses (15), (16) and (14) to update the regularized pseudo-inverse , from which the output weight is computed by (13). The proposed Algorithm 1 uses (21), (27), (25), (26) and (14) to update the regularized pseudo-inverse , from which the output weight is computed by (29b) and (28). The proposed Algorithm 2 uses (21), (24c) and (22) to update the unique inverse , from which the output weight is computed by (30b) and (28). The proposed Algorithm 3 uses (21), (34b) and (33b) to update the factors of , from which the output weight is computed by (30b), (36) and (28).

Dataset + Kernel | Nodes | Speedup of Alg. 1 | Speedup of Alg. 2 | Speedup of Alg. 3
Airfoil + Gaussian | 100 | 2.43 | 7.99 | 5.66
Airfoil + Gaussian | 500 | 2.61 | 3.96 | 2.54
Energy + Sigmoid | 100 | 2.30 | 4.47 | 3.47
Energy + Sigmoid | 500 | 2.51 | 2.32 | 1.55
Housing + Sine | 100 | 2.73 | 4.64 | 3.32
Housing + Sine | 500 | 2.77 | 1.92 | 1.41
Protein + Triangular | 100 | 2.54 | 19.04 | 16.28
Protein + Triangular | 500 | 2.66 | 22.09 | 19.29
TABLE III: Speedups in Training Time of the Proposed Algorithms over the Existing Algorithm

In the remainder of this subsection, we compare the expected flops (floating-point operations) of the existing ELM algorithm in [4] and the proposed ELM algorithms. Obviously flops are required to multiply a matrix by a matrix, and flops are required to sum two matrices in size  [4].

In Table I, we compare the flops of the existing ELM algorithm [4] and the proposed ELM algorithms 1, 2 and 3. As in [4], the flops of the existing ELM algorithm do not include the entries for simplicity, since usually the ELM has large (the number of training examples) and (the number of hidden nodes). The flops of the proposed ELM algorithms do not include the entries that are or . Since usually , it can easily be seen from Table I that with respect to the existing ELM algorithm, the proposed ELM algorithms 1, 2 and 3 only require about , and of flops, respectively.

Dataset | Kernel | Mean/Variance | Training: ACC, SN, PE, MCC | Testing: ACC, SN, PE, MCC
MAGIC Gaussian Mean 0.8645 0.9472 0.8584 0.6975 0.8618 0.9459 0.8561 0.6914
Variance 0.0019 0.0018 0.0018 0.0045 0.0068 0.0058 0.0064 0.0153
Sigmoid Mean 0.8602 0.9468 0.8536 0.6877 0.8588 0.9458 0.8525 0.6844
Variance 0.0019 0.0019 0.0018 0.0044 0.0065 0.0049 0.0063 0.0146
Hardlim Mean 0.8312 0.9277 0.8315 0.6202 0.8270 0.9249 0.8284 0.6104
Variance 0.0038 0.0046 0.0045 0.0088 0.0069 0.0066 0.0083 0.0147
Triangular Mean 0.8592 0.9419 0.8555 0.6852 0.8561 0.9398 0.8532 0.6780
Variance 0.0023 0.0025 0.0024 0.0052 0.0060 0.0051 0.0066 0.0131
Sine Mean 0.8640 0.9487 0.8569 0.6966 0.8620 0.9475 0.8552 0.6919
Variance 0.0017 0.0016 0.0016 0.0040 0.0068 0.0058 0.0061 0.0152
Musk Gaussian Mean 0.9453 0.6791 0.9522 0.7767 0.9412 0.6613 0.9396 0.7586
Variance 0.0031 0.0193 0.0097 0.0135 0.0070 0.0321 0.0196 0.0238
Sigmoid Mean 0.9474 0.6925 0.9539 0.7862 0.9432 0.6745 0.9412 0.7679
Variance 0.0030 0.0181 0.0097 0.0128 0.0068 0.0308 0.0189 0.0231
Hardlim Mean 0.9351 0.6185 0.9397 0.7309 0.9299 0.5969 0.9214 0.7075
Variance 0.0036 0.0216 0.0128 0.0161 0.0076 0.0341 0.0247 0.0268
Triangular Mean 0.9447 0.6751 0.9528 0.7744 0.9406 0.6579 0.9390 0.7561
Variance 0.0032 0.0191 0.0099 0.0137 0.0069 0.0318 0.0196 0.0232
Sine Mean 0.9462 0.6889 0.9479 0.7808 0.9419 0.6722 0.9326 0.7620
Variance 0.0025 0.0145 0.0088 0.0105 0.0067 0.0301 0.0173 0.0218
Adult Gaussian Mean 0.8362 0.9321 0.8612 0.5309 0.8359 0.9307 0.8626 0.5259
Variance 0.0010 0.0018 0.0015 0.0034 0.0012 0.0020 0.0016 0.0041
Sigmoid Mean 0.8316 0.9313 0.8569 0.5160 0.8311 0.9297 0.8582 0.5101
Variance 0.0014 0.0026 0.0023 0.0051 0.0017 0.0027 0.0023 0.0060
Hardlim Mean 0.8208 0.9314 0.8457 0.4786 0.8200 0.9298 0.8466 0.4711
Variance 0.0023 0.0038 0.0034 0.0085 0.0026 0.0039 0.0034 0.0094
Triangular Mean 0.8367 0.9338 0.8607 0.5318 0.8366 0.9327 0.8620 0.5270
Variance 0.0009 0.0018 0.0015 0.0032 0.0012 0.0019 0.0015 0.0040
Sine Mean 0.8377 0.9340 0.8616 0.5349 0.8377 0.9330 0.8630 0.5307
Variance 0.0008 0.0016 0.0014 0.0028 0.0011 0.0017 0.0014 0.0035
Diabetes Gaussian Mean 0.7973 0.6010 0.7572 0.5339 0.7681 0.5604 0.7048 0.4663
Variance 0.0103 0.0251 0.0199 0.0239 0.0308 0.0668 0.0697 0.0684
Sigmoid Mean 0.7889 0.5746 0.7504 0.5124 0.7738 0.5548 0.7233 0.4781
Variance 0.0091 0.0209 0.0166 0.0207 0.0312 0.0655 0.0703 0.0693
Hardlim Mean 0.7673 0.5380 0.7124 0.4602 0.7340 0.4892 0.6515 0.3819
Variance 0.0159 0.0529 0.0278 0.0402 0.0348 0.0811 0.0775 0.0800
Triangular Mean 0.7964 0.5994 0.7558 0.5317 0.7674 0.5579 0.7046 0.4645
Variance 0.0103 0.0249 0.0193 0.0238 0.0313 0.0677 0.0709 0.0704
Sine Mean 0.7972 0.5912 0.7633 0.5327 0.7721 0.5560 0.7174 0.4742
Variance 0.0096 0.0228 0.0184 0.0220 0.0306 0.0662 0.0690 0.0679
TABLE IV: Experimental Results of the Existing and Proposed Algorithms for Classification Problems

Notice that in the proposed ELM algorithm 1, computed in (27) can be utilized in (26) and (29a). The dominant computational load of the proposed ELM algorithm 1 comes from (21), (27), (25) and (29b), of which the flops are , , and , respectively. Moreover, in the proposed ELM algorithms 2 and 3, the dominant computational load comes from (21) and (30b), of which the flops are and , respectively.

V Numerical Experiments

We follow the simulations in [4] to compare the existing inverse-free ELM algorithm and the proposed inverse-free ELM algorithms on the MATLAB software platform under a Microsoft Windows Server with GB of RAM. We utilize fivefold cross validation to partition the datasets into training and testing sets. To measure the performance, we employ the mean squared error (MSE) for regression problems, and four commonly used indices for classification problems, i.e., the prediction accuracy (ACC), the sensitivity (SN), the precision (PE) and the Matthews correlation coefficient (MCC). Moreover, the regularization factor is set to to avoid over-fitting.
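For reference, the four classification indices can be computed from the binary confusion counts as in the sketch below, assuming their standard definitions (the paper does not restate the formulas):

```python
import math

def classification_indices(tp, tn, fp, fn):
    """Standard ACC, SN (sensitivity), PE (precision) and MCC from
    binary confusion counts; returns 0 for MCC when it is undefined."""
    acc = (tp + tn) / (tp + tn + fp + fn)
    sn = tp / (tp + fn)
    pe = tp / (tp + fp)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return acc, sn, pe, mcc
```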

For the regression problem, we consider the energy efficiency dataset [16], the housing dataset [17], the airfoil self-noise dataset [18], and the physicochemical properties of protein dataset [19]. For those datasets, different activation functions are chosen, including Gaussian, sigmoid, sine and triangular. As in Table IV of [4], Table II shows the regression performance. In Table II, the weight error and the output error are defined as and , respectively, where and are computed by an inverse-free ELM algorithm, and and are computed by the standard ELM algorithm. We set the initial hidden node number to , and utilize the existing and proposed inverse-free ELM algorithms to add the hidden nodes one by one till the hidden node number reaches . Table II includes the simulation results for the hidden node numbers 3, 100 and 500.

As observed from Table II, after iteration (i.e., the node number ), the weight error and the output error are less than . For the existing inverse-free ELM algorithm and the proposed algorithms 1 and 3, the weight error and the output error are less than after iterations (i.e., the node number ), and are not greater than after iterations (i.e., the node number ). However, for the proposed algorithm 2, the weight error and the output error are not greater than after iterations, and are not greater than after iterations, since the recursive algorithm to update introduces numerical instabilities after a very large number of iterations [12]. Overall, the standard ELM, the existing inverse-free ELM algorithm and the proposed ELM algorithms 1, 2 and 3 achieve the same testing MSEs, which are listed in the last column of Table II.

Algorithm | Gaussian: Accuracy, Speedup | Sigmoid: Accuracy, Speedup | Hardlim: Accuracy, Speedup | Triangular: Accuracy, Speedup | Sine: Accuracy, Speedup
Existing Alg.
Proposed Alg. 1 3.41 3.77
Proposed Alg. 2 45.50 44.04
Proposed Alg. 3 26.28 31.04
TABLE V: Experimental Results of the Existing and Proposed Algorithms on MNIST Dataset

The speedups in training time of the proposed ELM algorithms 1, 2 and 3 over the existing inverse-free ELM algorithm are shown in Table III, where we add just one node to reach 100 and 500 nodes, respectively, and we run repeated simulations to compute the average training time. The speedups are computed as the ratio between the training time of the existing ELM algorithm and that of the proposed ELM algorithm. As observed from Table III, all the proposed algorithms significantly accelerate the existing inverse-free ELM algorithm.

For the classification problem, we consider the MAGIC Gamma telescope dataset [20], the musk dataset [21], the adult dataset [22] and the diabetes dataset [19]. For each dataset, five activation functions are simulated, i.e., Gaussian, sigmoid, Hardlim, triangular and sine. In the simulations, the standard ELM, the existing inverse-free ELM algorithm and the proposed ELM algorithms 1, 2 and 3 achieve the same performance, which is listed in Table IV.

Lastly, in Table V we simulate the existing and proposed algorithms on the Modified National Institute of Standards and Technology (MNIST) dataset [23] with training images and testing images, to show the performance on big data. To give the testing accuracy, we set the initial hidden node number to , and utilize the existing and proposed ELM algorithms to add hidden nodes one by one till the hidden node number reaches . To give the speedups of the proposed algorithms over the existing algorithm, we compare the training time to reach nodes by adding one node, and run repeated simulations to compute the average training time.

As observed from Table V, the existing and proposed inverse-free ELM algorithms achieve the same testing accuracy, while all the proposed algorithms significantly accelerate the existing inverse-free ELM algorithm. Moreover, from Table V and Table III, it can be seen that usually the proposed algorithm 2 is faster than the proposed algorithm 3, and the proposed algorithm 3 is faster than the proposed algorithm 1.

VI Conclusion

To reduce the computational complexity of the existing inverse-free ELM algorithm [4], in this correspondence we utilize the improved recursive algorithm [9, 10] to deduce the proposed ELM algorithms 1, 2 and 3. The proposed algorithm 1 includes a more efficient inverse-free algorithm to update the regularized pseudo-inverse. To further reduce the computational complexity, the proposed algorithm 2 computes the output weights directly from the updated inverse, and avoids computing the regularized pseudo-inverse. Lastly, instead of updating the inverse, the proposed ELM algorithm 3 updates the LDL^T factors of the inverse by the inverse LDL^T factorization [11], since the inverse-free recursive algorithm to update the inverse may introduce numerical instabilities after a very large number of iterations [12]. With respect to the existing ELM algorithm, the proposed ELM algorithms 1, 2 and 3 are expected to require only , and of flops, respectively. In the numerical experiments, the standard ELM, the existing inverse-free ELM algorithm and the proposed ELM algorithms 1, 2 and 3 achieve the same performance in regression and classification, while all the proposed algorithms significantly accelerate the existing inverse-free ELM algorithm. Moreover, in the simulations, usually the proposed algorithm 2 is faster than the proposed algorithm 3, and the proposed algorithm 3 is faster than the proposed algorithm 1.

Appendix A Derivation of (25), (26), (27), (28), (29b) and (30b)

Substitute (11) and (18) into (12) to obtain

(37)

Substitute (24c) into (22), which is then substituted into (37) to obtain , i.e.,

(38)

To deduce (25), denote the second entry in the right side of (38) as

(39)

into which substitute (24b) to obtain

(40)

and then substitute (19) into (40).

To deduce (26), substitute (24b) into the first entry in the right side of (38), and denote it as , i.e.,

(41)

Then substitute (39) into (41) to obtain

(42)

into which substitute (21) to obtain

(43)

Finally we need to substitute (19) into (43).

To deduce (27), substitute (21) into (24a) to obtain , into which substitute (19).

By substituting (26) into (14) and substituting (14) into (13), we can deduce (28) where satisfies (29b) and

(44)

into which substitute (29b) and (13) to deduce (29a).

To deduce (30a), substitute (19) into (29a) to obtain , into which substitute (21) to obtain

(45)

and then substitute (24b) into (45). Moreover, to deduce (30b), substitute (25) into (29b) to obtain

(46)

into which substitute (13).

References

  • [1] G. B. Huang, Q. Y. Zhu, and C. K. Siew, “Extreme learning machine: Theory and applications,” Neurocomputing, vol. 70, nos. 1-3, pp. 489-501, Dec. 2006.
  • [2] G. B. Huang, L. Chen, and C. K. Siew, “Universal approximation using incremental constructive feedforward networks with random hidden nodes”, IEEE Trans. Neural Netw., vol. 17, no. 4, pp. 879-892, Jul. 2006.
  • [3] G. B. Huang, H. Zhou, X. Ding, and R. Zhang, “Extreme learning machine for regression and multiclass classification,” IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 42, no. 2, pp. 513-529, Apr. 2012.
  • [4] S. Li, Z. You, H. Guo, X. Luo, and Z. Zhao, “Inverse-Free Extreme Learning Machine With Optimal Information Updating”, IEEE Trans. on Cybernetics, vol. 46, no. 5, pp. 1229-1241, May 2016.
  • [5] H. Lütkepohl, Handbook of Matrices, New York: John Wiley & Sons, 1996.
  • [6] T. K. Moon and W. C. Stirling, Mathematical Methods and Algorithms for Signal Processing, Prentice Hall, 2000.
  • [7] L. Szczecinski and D. Massicotte, “Low complexity adaptation of MIMO MMSE receivers, implementation aspects”, Proc. Global Commun. Conf. (Globecom’05), St. Louis, MO, USA, Nov. 2005.
  • [8] Y. Shang and X. G. Xia, “On fast recursive algorithms for V-BLAST with optimal ordered SIC detection”, IEEE Trans. Wireless Commun., vol. 8, pp. 2860-2865, June 2009.
  • [9] H. Zhu, W. Chen, and F. She, “Improved fast recursive algorithms for V-BLAST and G-STBC with novel efficient matrix inversions”, Proc. IEEE Int. Conf. Commun., pp. 211-215, 2009.
  • [10] H. Zhu, W. Chen, B. Li, and F. Gao, “A Fast Recursive Algorithm for G-STBC”, IEEE Trans. on Commun., vol. 59, no. 8, Aug. 2011.
  • [11] H. Zhu, W. Chen, and B. Li, “Efficient Square-Root and Division Free Algorithms for Inverse Factorization and the Wide-Sense Givens Rotation with Application to V-BLAST”, IEEE Vehicular Technology Conference (VTC), 2010 Fall, 6-9 Sept., 2010.
  • [12] J. Benesty, Y. Huang, and J. Chen, “A fast recursive algorithm for optimum sequential signal detection in a BLAST system”, IEEE Trans. Signal Process., pp. 1722-1730, July 2003.
  • [13] Y. Miche, M. van Heeswijk, P. Bas, O. Simula, and A. Lendasse, “TROP-ELM: A double-regularized ELM using LARS and Tikhonov regularization”, Neurocomputing, vol. 74, no. 16, pp. 2413-2421, 2011.
  • [14] Y. Miche et al., “OP-ELM: Optimally pruned extreme learning machine”, IEEE Trans. Neural Netw., vol. 21, no. 1, pp. 158-162, Jan. 2010.
  • [15] G. H. Golub and C. F. Van Loan, Matrix Computations, third ed. Baltimore, MD: Johns Hopkins Univ. Press, 1996.
  • [16] A. Tsanas and A. Xifara, “Accurate quantitative estimation of energy performance of residential buildings using statistical machine learning tools”, Energy Build., vol. 49, pp. 560-567, Jun. 2012.
  • [17] R. Setiono and H. Liu, “A connectionist approach to generating oblique decision trees”, IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 29, no. 3, pp. 440-444, Jun. 1999.
  • [18] R. López-González, “Neural networks for variational problems in engineering”, Ph.D. dissertation, Dept. Comput. Lang. Syst., Tech. Univ. Catalonia, Barcelona, Spain, Sep. 2008.
  • [19] M. Lichman, UCI Machine Learning Repository, School Inf. Comput. Sci., Univ. California, Irvine, CA, USA, 2013. [Online]. Available: http://archive.ics.uci.edu/ml
  • [20] R. K. Bock et al., “Methods for multidimensional event classification: A case study using images from a Cherenkov gamma-ray telescope”, Nucl. Instrum. Methods Phys. Res. A, vol. 516, pp. 511-528, Jan. 2004.
  • [21] T. G. Dietterich, R. H. Lathrop, and T. Lozano-Pérez, “Solving the multiple instance problem with axis-parallel rectangles”, Artif. Intell., vol. 89, nos. 1-2, pp. 31-71, 1997.
  • [22] R. Kohavi, “Scaling up the accuracy of Naive-Bayes classifiers: A decision-tree hybrid”, Proc. 2nd Int. Conf. Knowl. Disc. Data Min., Portland, OR, USA, 1996, pp. 202-207.
  • [23] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition”, Proc. IEEE, vol. 86, no. 11, pp. 2278-2324, Nov. 1998.