Efficient Ridge Solutions for the Incremental Broad Learning System on Added Inputs by Updating the Inverse or the Inverse Cholesky Factor of the Hermitian Matrix in the Ridge Inverse

11/12/2019
by Hufei Zhu, et al.

This brief proposes two BLS algorithms that improve the existing BLS algorithm for newly added inputs in [7]. The proposed algorithms avoid computing the ridge inverse: they obtain the ridge solution (i.e., the output weights) from the inverse or the inverse Cholesky factor of the Hermitian matrix inside the ridge inverse. The proposed BLS algorithm 1 updates the inverse of the Hermitian matrix by the matrix inversion lemma [12]. The proposed BLS algorithm 2 updates the upper-triangular inverse Cholesky factor of the Hermitian matrix by multiplying it with an upper-triangular intermediate matrix, which is computed by a Cholesky factorization or an inverse Cholesky factorization. Assume the newly added input matrix corresponding to the added inputs is p × k, where p is the number of added training samples and k is the total number of nodes. When p > k, the inverse of a sum of matrices [11] is utilized to compute the intermediate variables by a smaller matrix inverse in the proposed algorithm 1, or by a smaller inverse Cholesky factorization in the proposed algorithm 2. Since the Hermitian matrix in the ridge inverse is usually smaller than the ridge inverse itself, the proposed algorithms 1 and 2 require fewer floating-point operations (flops) than the existing BLS algorithm, as verified by theoretical flop counts. In numerical experiments, for the case of p > k, the speedups in additional training time of the proposed BLS algorithms 1 and 2 over the existing algorithm are 1.95-5.43 and 2.29-6.34, respectively; for the case of p < k, the speedups are 8.83-10.21 and 2.28-2.58, respectively.
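To make the linear-algebra flow concrete, here is a minimal NumPy sketch of the kind of update behind the proposed algorithm 1, assuming the ridge solution W = (AᴴA + λI)⁻¹AᴴY and writing Q = AᴴA + λI for the Hermitian matrix; the function and variable names are illustrative, not the paper's. For p < k, the matrix inversion lemma reduces the update of Q⁻¹ to a p × p inverse:

```python
import numpy as np

def add_inputs_alg1(Q_inv, B, A_x, Y_x):
    """Matrix-inversion-lemma update for p added inputs (rows of A_x, p x k).

    Q_inv : current (A^H A + lam*I)^{-1}, k x k
    B     : running cross-term A^H Y, k x c
    Returns the updated Q_inv, B, and the new output weights W.
    """
    p = A_x.shape[0]
    T = A_x @ Q_inv                                      # p x k
    S = np.eye(p) + T @ A_x.conj().T                     # p x p "capacitance" matrix
    Q_inv = Q_inv - T.conj().T @ np.linalg.solve(S, T)   # matrix inversion lemma
    B = B + A_x.conj().T @ Y_x                           # accumulate A^H Y
    W = Q_inv @ B                                        # ridge solution, no ridge inverse
    return Q_inv, B, W
```

For p > k, the paper instead uses the inverse of a sum of matrices so that only a smaller k × k inverse is needed; in the sketch above, that regime would amount to forming (Q + A_xᴴA_x)⁻¹ directly from the k × k Hermitian matrix.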
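Similarly, here is a hedged sketch of the inverse Cholesky update in the spirit of the proposed algorithm 2, under the same assumptions: F is the upper-triangular inverse Cholesky factor with F Fᴴ = Q⁻¹, and the upper-triangular intermediate matrix V comes from a k × k Cholesky factorization (the smaller factorization used when p > k); again, the names are illustrative.

```python
import numpy as np

def add_inputs_alg2(F, B, A_x, Y_x):
    """Update the upper-triangular F with F @ F^H = (A^H A + lam*I)^{-1}.

    The new factor is F' = F V, where V V^H = (I + (A_x F)^H (A_x F))^{-1},
    i.e. V is the inverse Cholesky factor of a k x k Hermitian matrix.
    """
    k = F.shape[0]
    G = A_x @ F                                 # p x k
    M = np.eye(k) + G.conj().T @ G              # k x k, Hermitian positive definite
    L = np.linalg.cholesky(M)                   # lower-triangular Cholesky factor of M
    V = np.linalg.inv(L).conj().T               # upper-triangular inverse Cholesky factor
    F = F @ V                                   # product of upper triangulars stays upper
    B = B + A_x.conj().T @ Y_x                  # accumulate A^H Y
    W = F @ (F.conj().T @ B)                    # ridge solution from the factor
    return F, B, W
```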
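A quick check that both sketches agree with a direct recomputation of the ridge solution on the enlarged input matrix (purely illustrative, reusing the two functions above):

```python
rng = np.random.default_rng(0)
k, p, c, lam = 8, 3, 2, 0.1
A = rng.standard_normal((20, k)); Y = rng.standard_normal((20, c))
A_x = rng.standard_normal((p, k)); Y_x = rng.standard_normal((p, c))

Q_inv = np.linalg.inv(A.T @ A + lam * np.eye(k))
B = A.T @ Y
_, _, W1 = add_inputs_alg1(Q_inv, B, A_x, Y_x)
F = np.linalg.inv(np.linalg.cholesky(A.T @ A + lam * np.eye(k))).T
_, _, W2 = add_inputs_alg2(F, B, A_x, Y_x)

A2 = np.vstack([A, A_x]); Y2 = np.vstack([Y, Y_x])
W_ref = np.linalg.solve(A2.T @ A2 + lam * np.eye(k), A2.T @ Y2)
assert np.allclose(W1, W_ref) and np.allclose(W2, W_ref)
```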

