Extreme Learning Machine design for dealing with unrepresentative features

12/04/2019 ∙ by Nicolás Nieto, et al. ∙ Universidad Nacional del Litoral

Extreme Learning Machines (ELMs) have become a popular tool in the field of Artificial Intelligence due to their very high training speed and generalization capabilities. Another advantage is that they have a single hyper-parameter that must be tuned up: the number of hidden nodes. Most traditional approaches dictate that this parameter should be chosen smaller than the number of available training samples in order to avoid over-fitting. In fact, it has been proved that choosing the number of hidden nodes equal to the number of training samples yields a perfect training classification with probability 1 (w.r.t. the random parameter initialization). In this article we argue that in spite of this, in some cases it may be beneficial to choose a much larger number of hidden nodes, depending on certain properties of the data. We explain why this happens and show some examples to illustrate how the model behaves. In addition, we present a pruning algorithm to cope with the additional computational burden associated to the enlarged ELM. Experimental results using electroencephalography (EEG) signals show an improvement in performance with respect to traditional ELM approaches, while diminishing the extra computing time associated to the use of large architectures.


1 Introduction

The use of random weights in Neural Networks was first proposed by Schmidt et al. (1992). The idea was then reintroduced under the concept of Extreme Learning Machines (ELMs) by Huang et al. (2004), and has been widely used by the Machine Learning community ever since (Huang et al. (2011); Wong et al. (2014); Zhang et al. (2018); Song and Zhang (2013)), mainly because the training process is very fast and the resulting models have good generalization capabilities.

Another quite appealing aspect of ELMs is that, unlike most machine learning techniques, they have only one hyper-parameter that must be tuned: the number of hidden nodes, which we shall denote by $M$. Furthermore, Huang and Babri (1998) proved that given a training dataset consisting of $N$ samples, the model can learn them exactly using $M = N$ hidden nodes. Nevertheless, perfect classification over a training set usually entails a loss of generalization capabilities. In fact, it is widely accepted that given $N$ training samples, $N$ is an upper bound for the number of nodes that can be chosen for the hidden layer (Huang and Babri (1998)).

In this work we argue that while choosing the number of hidden nodes $M$ close to the number of training samples $N$ is indeed a bad idea, choosing $M \gg N$ may result in significantly better performance. In the next section we explain why such a choice might improve performance, and run an experiment to attest it. A post-training pruning method, intended to mitigate the extra computational burden in the training stage that comes from choosing a larger number of hidden nodes, is described in Section 4.

2 Extreme Learning Machines

For simplicity, we shall consider an ELM within the context of a binary classification problem. We point out, however, that all the results presented in this work remain valid for multi-class problems.

Given an arbitrary vector $x \in \mathbb{R}^d$, the ELM classification output is given by

$$o(x) = \operatorname{sign}\left( w^T \sigma(Wx + b) \right), \qquad (1)$$

where $W \in \mathbb{R}^{M \times d}$ is the matrix associated to the hidden layer ($M$ being the number of hidden nodes), $\sigma$ is an activation function, $b \in \mathbb{R}^M$ is the bias vector, and $w \in \mathbb{R}^M$ is the weight vector connecting the hidden layer to the output. Here and in the sequel, the action of $\sigma$ on a vector or a matrix is meant to be its component-wise evaluation.

The training process of an ELM consists of two main steps. First, the entries of $W$ and $b$ are randomly chosen as independent realizations of an absolutely continuous random variable (usually with uniform distribution in $[-1, 1]$). The second step consists of finding an appropriate vector $w$, which can be done as described below.

Let us consider a dataset consisting of $N$ training samples $x_n \in \mathbb{R}^d$, $n = 1, \ldots, N$, stacked in a matrix $X \in \mathbb{R}^{d \times N}$. Let $y \in \{-1, 1\}^N$ be the desired output vector, where $y_n$ gives account for the class of $x_n$.

Let us define

$$H \doteq \sigma(WX + b\mathbf{1}), \qquad (2)$$

where $\mathbf{1} \in \mathbb{R}^{1 \times N}$ is a row vector with all its elements equal to 1. Then, training the ELM weight vector amounts to solving the linear system

$$H^T w = y. \qquad (3)$$

Note that $H^T \in \mathbb{R}^{N \times M}$, meaning that the linear system is over-determined if $M < N$, i.e. if the number $M$ of hidden nodes is less than the number of training samples. In this case, we can define the vector $\hat{w}$ as a minimal-norm least-squares approximate solution of (3). That is,

$$\hat{w} = (H^T)^\dagger y, \qquad (4)$$

where $(H^T)^\dagger$ is the Moore-Penrose generalized inverse of $H^T$.
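Equations (1)–(4) translate almost directly into NumPy. The following is a minimal sketch for illustration only (the tanh activation, the uniform initialization in $[-1, 1]$ and the toy data are our choices, not prescribed by the text):

```python
import numpy as np

rng = np.random.default_rng(0)

def train_elm(X, y, M):
    """Train an ELM with M hidden nodes.
    X: (d, N) matrix of N training samples; y: (N,) labels in {-1, +1}."""
    d, _ = X.shape
    # Step 1: random hidden layer, entries i.i.d. uniform in [-1, 1].
    W = rng.uniform(-1.0, 1.0, size=(M, d))
    b = rng.uniform(-1.0, 1.0, size=(M, 1))
    # Equation (2): H = sigma(WX + b1), sigma applied component-wise (tanh here).
    H = np.tanh(W @ X + b)
    # Equation (4): minimal-norm least-squares solution of H^T w = y.
    w = np.linalg.pinv(H.T) @ y
    return W, b, w

def elm_predict(W, b, w, X):
    """Equation (1): sign of the output-layer response."""
    return np.sign(np.tanh(W @ X + b).T @ w)

# Toy usage: four 2-D points, two per class.
X = np.array([[0.0, 0.1, 1.0, 1.1],
              [0.0, 0.1, 1.0, 1.1]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
W, b, w = train_elm(X, y, M=20)
print(elm_predict(W, b, w, X))
```

Since here $M = 20 > N = 4$, the system $H^T w = y$ admits an exact solution with probability 1, so the training labels are recovered perfectly.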

When $M < N$, the ELM's generalization capability is associated with the fact that least-squares solutions assign larger weights to the columns of $H^T$ which are most relevant for classification purposes. However, when $M = N$, system (3) has a unique solution (with probability 1, as shown in Huang and Babri (1998)). Hence, in this case, the solution is forced to take all columns of $H^T$ into account, even those which are irrelevant for classification purposes. This is a classic case of overfitting.

Although it is theoretically true that $M = N$ implies that the matrix $H^T$ has (with probability 1) $N$ linearly independent column vectors, this does not necessarily mean that the feature space is well represented. To illustrate this, let us consider the following simple example. Suppose that there are two columns of $H^T$, $h_i$ and $h_j$ with $i \neq j$, such that $\|h_i - h_j\| \le \varepsilon$ for a small $\varepsilon > 0$. Although these two vectors are strictly different, for all practical purposes they clearly encode the same feature information. More formally, while $M = N$ ensures that $H^T$ is invertible, it can still present very small singular values, which is a reflection of a poor representation of the feature space. As shown in Horn and Johnson (1990), adding random columns to a matrix increases its smallest non-zero singular value, which in a context of normalized data indicates a good representation of the feature space.
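This effect is easy to verify numerically. The snippet below (dimensions and perturbation size are arbitrary choices) builds a square matrix with two nearly identical columns, then checks that appending random columns increases its smallest non-zero singular value:

```python
import numpy as np

rng = np.random.default_rng(1)

# Square matrix with two nearly identical columns: invertible (with
# probability 1), yet its smallest singular value is tiny.
N = 50
A = rng.standard_normal((N, N))
A[:, 1] = A[:, 0] + 1e-8 * rng.standard_normal(N)
s_min_before = np.linalg.svd(A, compute_uv=False).min()

# Append random columns (playing the role of extra hidden nodes): by
# singular-value interlacing, every singular value can only grow or stay
# the same, since A_wide A_wide^T = A A^T + (extra)(extra)^T.
A_wide = np.hstack([A, rng.standard_normal((N, 200))])
s_min_after = np.linalg.svd(A_wide, compute_uv=False)[:N].min()

print(s_min_before, s_min_after)
```

The first value is near machine precision, while the second is of order one: the widened matrix represents the feature space far better even though the original was already invertible.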

3 Changing the network size

In light of the above discussion, we argue that, in order to optimize network performance, $M$ should be chosen differently. Our proposal is to train the ELM choosing the number of hidden nodes $M$ much larger than the number of training samples $N$, followed by an a-posteriori pruning method to reduce the size of the model. Although working with a large number of hidden nodes may increase the computing time, this approach gets rid of the need to train the network several times in order to validate $M$. Hence, with an appropriate choice of the model, the final training time can end up being reduced, as shown in the next section.

We now present a simple experiment using artificially generated data to illustrate the potential advantage of choosing $M \gg N$. Let us consider a binary classification problem in which the data points of the two classes are distributed as depicted in Figure 1. The two displayed coordinates are clearly enough for classification. In an ideal situation, any additional coordinate added to the data points, taking random values independently of the class, should not be taken into account by the classifier. In a real problem, however, the data points might be highly contaminated with this kind of uninformative "junk" features (coordinates), which are a-priori indistinguishable from the representative ones. Hence, a question immediately arises: should the same kind of ELM architecture be used for this kind of problem?

Figure 1: Synthetic class distribution

Let us take a look at Figure 2, where the average test accuracy of an ELM is plotted for several choices of the number of hidden nodes $M$. The purple line corresponds to the test accuracy obtained using the data as displayed in Figure 1, while the other two correspond to the accuracies obtained using the data points contaminated with different numbers of junk features. One can immediately see that, while a small choice of $M$ is optimal in the two-coordinate case, the more unrepresentative features the data contains, the more convenient it becomes to choose $M > N$.

Figure 2: Mean test accuracy on synthetic data when adding Junk Features (JF). Shading illustrates the standard deviation.

One might wonder, then, if this is just the result of the random choice of junk features, or if it is a scenario one might often expect in practical problems. To shed some light on this matter, we show an experiment using the DaSalla dataset (DaSalla et al. (2009)), which contains electroencephalography (EEG) signals recorded from three different subjects using 64 electrodes with a sampling frequency of 256 Hz. Here we used the EEG patterns related to imagination of mouth movement involved in the pronunciation of two vowels (/A/ and /U/).

Figure 3 illustrates the train and test accuracy of the ELM as a function of $M$. The experiments were performed using 70% of the data for training and 30% for testing, with 50 random initializations and 20 cross validations. As can be seen, the test accuracy first grows with $M$ until reaching a local maximum, after which it decays to a global minimum at $M = N$. However, the test accuracy starts to grow again as $M$ increases further. These results coincide with the findings in Belkin et al. (2019), where a similar behaviour is reported in the context of traditional multi-layer perceptrons.

Figure 3: Mean accuracy obtained for training and testing data, as a function of the hidden layer size $M$. Shading illustrates the standard deviation of the results.

The overfitting observed in Figures 2 and 3 when $M \approx N$ can be explained by the fact that, when training under this condition, we are forcing the network to take into account all the features, even those irrelevant for classification. To corroborate that this is in fact the reason, we performed an experiment consisting of adding a disconnected neuron to the ELM (before training with the DaSalla dataset). That is, a column of random elements having no correlation with the classes was stacked to the right of the matrix $H^T$. Figure 4 depicts the absolute value of the weight that the model assigns to the disconnected neuron as a function of the number of neurons $M$. As seen, this weight remains small until $M$ approaches $N$, which supports our previous hypothesis. It is timely to observe, however, that the weight assigned to the fake neuron starts to decay again after this point. This means that for $M > N$, the ELM becomes capable of neglecting the value of the disconnected neuron, thus avoiding overfitting.

Figure 4: Mean absolute value of the weight of the neuron observing a fake feature, as a function of the hidden layer size (shading accounts for standard deviation over 50 initializations and 20 cross validations).
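The disconnected-neuron experiment can be reproduced on synthetic data. The sketch below is a hypothetical re-implementation (the synthetic dataset, the tanh activation and the sizes are our own choices); a single run is noisy, whereas Figure 4 averages over many initializations and cross validations, so no particular shape of the curve is asserted here:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic binary problem: 2 informative coordinates plus junk features.
N, d_junk = 100, 30
y = np.where(np.arange(N) < N // 2, -1.0, 1.0)
informative = y + 0.3 * rng.standard_normal((2, N))   # rows track the class
X = np.vstack([informative, rng.standard_normal((d_junk, N))])
d = X.shape[0]

def junk_weight(M):
    """Train an ELM with M hidden nodes plus one 'disconnected' neuron
    (a random column appended to H^T) and return the |weight| assigned
    to that neuron."""
    W = rng.uniform(-1, 1, (M, d))
    b = rng.uniform(-1, 1, (M, 1))
    H_T = np.tanh(W @ X + b).T                # (N, M)
    fake = rng.standard_normal((N, 1))        # column uncorrelated with y
    w = np.linalg.pinv(np.hstack([H_T, fake])) @ y
    return abs(w[-1])

# Probe a small, a critical (M = N) and a large hidden layer.
for M in (10, N, 4 * N):
    print(M, junk_weight(M))
```

Averaging `junk_weight` over many runs for a grid of `M` values reproduces the qualitative behaviour described above: the fake neuron's weight peaks near $M = N$ and shrinks again for $M \gg N$.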

As we have shown, in certain cases an ELM can benefit from choosing $M \gg N$. Yet this has the downside of increasing the network size. In order to find a compromise between the ELM size and its performance, one could use a validation method and retrain the network, increasing the number of neurons until the change in performance is small enough to be neglected. Nevertheless, the computational burden associated to solving (4) increases with $M$, and the sensitivity of the method with respect to the random initialization might require several trials, making this idea unfeasible in practice. Hence, in order to make ELMs of practical use for real data such as in the previous examples, in the next section we introduce a pruning method that allows reducing the network size after a single computation of (4) with $M = M_{max}$. Comparisons between this pruning method, the standard setting, and a forward validation scheme, in terms of performance and computational cost, will also be shown.

4 Relevance-based pruning

Given that our proposal entails choosing $M$ significantly larger than $N$, it proves useful to have a method for reducing the ELM dimension. As observed in the description of Figure 4, the weight of a neuron is proportional to the relevance of the corresponding feature. Hence, it is reasonable to "throw away" the neurons whose associated weights $|w_m|$ are small enough. Given that the process of discarding a neuron and testing the performance of the resulting ELM is computationally inexpensive, the proposed pruning method begins with a large initial number of hidden nodes $M_{max}$, and then discards one (or a few) at a time until the performance exhibits a significant drop. We shall refer to the resulting method as Relevance-Based Pruning (RBP). The steps are shown in Algorithm 1. In the next section we show some experiments comparing RBP's performance against the traditional ELM approach.

  Set $M = M_{max}$.
  Initialize the elements of $W$ and $b$ randomly with distribution $U(-1, 1)$.
  $H \leftarrow \sigma(WX + b\mathbf{1})$.
  $w \leftarrow (H^T)^\dagger y$.
  Permute the entries of $w$ so that $|w_1| \ge |w_2| \ge \ldots \ge |w_M|$.
  Perform the same permutation on the columns of $H^T$.
  Let $a_0 \leftarrow$ accuracies of the ELM over the validation folds, $a \leftarrow a_0$.
  while mean($a$) $\ge$ mean($a_0$) $- \delta$ do
     Remove the last element from $w$.
     Remove the last column from $H^T$.
     Let $a \leftarrow$ accuracies of the pruned ELM over the validation folds.
  end while
Algorithm 1: Relevance-Based Pruning (RBP), with prescribed tolerance $\delta$
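A compact NumPy sketch of Algorithm 1 might look as follows. The tolerance `tol` (allowed drop in validation accuracy) and the pruning `step` are assumed knobs, since the precise stopping rule is a design choice; note that the pseudo-inverse is computed only once:

```python
import numpy as np

rng = np.random.default_rng(3)

def relevance_based_pruning(X, y, X_val, y_val, M_max, tol=0.02, step=1):
    """Sketch of Relevance-Based Pruning (Algorithm 1). `tol` and `step`
    are assumed knobs; the pseudo-inverse is computed only once."""
    d, _ = X.shape
    W = rng.uniform(-1, 1, (M_max, d))
    b = rng.uniform(-1, 1, (M_max, 1))
    w = np.linalg.pinv(np.tanh(W @ X + b).T) @ y   # single pseudo-inverse

    # Sort hidden nodes by decreasing relevance |w_m|.
    order = np.argsort(-np.abs(w))
    w, W, b = w[order], W[order], b[order]

    def val_acc(M):
        Hv = np.tanh(W[:M] @ X_val + b[:M]).T
        return np.mean(np.sign(Hv @ w[:M]) == y_val)

    a0 = val_acc(M_max)
    M = M_max
    # Keep dropping the least relevant nodes while accuracy stays within tol.
    while M > step and val_acc(M - step) >= a0 - tol:
        M -= step
    return W[:M], b[:M], w[:M], M, a0

# Hypothetical usage: separable 2-D structure plus junk coordinates.
N = 80
perm = rng.permutation(N)
y = np.where(np.arange(N) < N // 2, -1.0, 1.0)[perm]
X = np.vstack([y + 0.2 * rng.standard_normal((2, N)),
               rng.standard_normal((10, N))])
Wp, bp, wp, M_pruned, a0 = relevance_based_pruning(
    X[:, :60], y[:60], X[:, 60:], y[60:], M_max=300, step=5)
print(M_pruned)
```

By construction, the pruned network's validation accuracy is guaranteed to stay within `tol` of the full network's, while the hidden layer typically shrinks considerably.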

5 Experiments

5.1 Experimental setting

Two EEG datasets were used for the experiments. EEG data typically presents high levels of noise, and the relevant classification information is believed to be encoded in a particular subset of the features. The first is the DaSalla dataset, already described in Section 3. The second is a P300-based BCI dataset (Ledesma-Ramirez et al. (2010)), consisting of 3780 EEG trials (630 with P300) acquired from 25 subjects using 10 channels at 256 Hz.

5.2 Performance analysis

In order to compare RBP against the standard "growing" ELM approach, we propose the following experimental setting: consider the two datasets and a maximum ELM size $M_{max}$. For the traditional method, the steps followed are shown in Algorithm 2.

  Set $M = 1$.
  while $M \le M_{max}$ do
     Initialize the elements of $W$ and $b$ randomly with distribution $U(-1, 1)$.
     $H \leftarrow \sigma(WX + b\mathbf{1})$.
     $w \leftarrow (H^T)^\dagger y$.
     Compute the accuracy of the ELM using $w$ and $H$.
     $M \leftarrow M + 1$.
  end while
Algorithm 2: Standard ELM growing scheme
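For contrast, the growing scheme of Algorithm 2 can be sketched as follows. Unlike RBP, every candidate size requires a fresh random initialization and a fresh pseudo-inverse, which is where the extra computing time comes from (the logarithmic grid and toy data below are our own assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)

def forward_growing(X, y, X_val, y_val, grid):
    """Sketch of Algorithm 2: train an ELM from scratch for every
    candidate size M in `grid` and keep the best validation accuracy."""
    d, _ = X.shape
    best_acc, best_model = -1.0, None
    for M in grid:
        # Fresh initialization and a fresh pseudo-inverse at every step.
        W = rng.uniform(-1, 1, (M, d))
        b = rng.uniform(-1, 1, (M, 1))
        w = np.linalg.pinv(np.tanh(W @ X + b).T) @ y
        acc = np.mean(np.sign(np.tanh(W @ X_val + b).T @ w) == y_val)
        if acc > best_acc:
            best_acc, best_model = acc, (W, b, w)
    return best_acc, best_model

# Hypothetical usage on toy data, with a logarithmic grid of sizes.
N = 80
y = np.where(rng.random(N) < 0.5, -1.0, 1.0)
X = np.vstack([y + 0.2 * rng.standard_normal((2, N)),
               rng.standard_normal((5, N))])
grid = [2 ** k for k in range(1, 9)]       # 2, 4, ..., 256
acc, (W, b, w) = forward_growing(X[:, :60], y[:60], X[:, 60:], y[60:], grid)
print(acc)
```

Each grid point pays the full cost of equation (4), whereas RBP solves (4) once at $M_{max}$ and only re-evaluates cheap truncated predictions.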

For illustrating the results of the RBP method, Algorithm 1 was run with a large value of $M_{max}$.

Using 50 random initializations over 20 cross validations for every value of $M$, we ran the proposed experiment on a randomly chosen subject of the DaSalla dataset, with a 70/30 train/test split. The resulting average test accuracies for the standard growing method (Algorithm 2) and for RBP are depicted in Figure 5A. To validate the pruning criterion, the results obtained using a random pruning scheme (RND) are also shown.

An analogous test was made using a (randomly chosen) subject from the P300 database. Results are shown in Figure 5B. Since in this case the dataset and feature space are much larger, instead of taking increments of 1 on the values of $M$, a logarithmic grid was used. Also, given that the dataset is highly unbalanced, performance was evaluated with the area under the ROC curve (AUC). The results correspond to five cross validations over 20 random initializations.

Figure 5: Average performance obtained with the standard forward (FWD), Random Pruning (RND) and Relevance-Based Pruning (RBP) methods for (a) one subject of the DaSalla dataset and (b) one subject of the P300 database. Shading accounts for standard deviation.

As seen in Figure 5, RBP performs better than the forward scheme, and its average performance is (somewhat) monotonically increasing with $M$. This means that choosing a large $M_{max}$, followed by a reduction of the network size by means of RBP (until a significant decay in validation performance is observed), will yield a result at least as good as that of the forward scheme. Thus, we can choose $M_{max}$ as large as the computational cost we are willing to pay allows, and then run RBP to reduce the network size without losing classification performance.

5.3 Validation experiments

In order to validate our proposal, we shall make comparisons between three different settings for each dataset:

STD: (traditional approach) the best result obtained with $M \le N$.

FWD: start with $M = 1$ and increase $M$ over a logarithmic grid until the validation performance levels off or the maximum value $M_{max}$ is reached (see Algorithm 2).

RBP: start with $M = M_{max}$ and decrease $M$ over a logarithmic grid until the first time the performance shows a significant (prescribed) reduction. The stopping criterion, the grid and $M_{max}$ are set equal to those in FWD.

For our first experiment we take the full DaSalla dataset. We run twenty cross validations, splitting the data into 70%, 20% and 10% for training, validation and testing, respectively. Table 1 shows the obtained results, from which two main observations can be made: first, both methods that allow for an ELM with a larger number of hidden nodes result in better performance in terms of accuracy; second, between those two approaches, RBP requires much less computing time for a comparable network size.

Method Accuracy time [ms]
STD
FWD
RBP
Table 1: Experimental results using the DaSalla database.

The second experiment was performed using the P300-based BCI dataset. The data was split exactly as in the first experiment for training, validation and testing. For every subject, we performed five cross validations over twenty random initializations. Results are depicted in Table 2, and those obtained for the first five subjects are illustrated in Figure 6.

Method AUC time [s]
STD
FWD
RBP
Table 2: Experimental results using the P300 database.
Figure 6: Results obtained with traditional (STD), forward (FWD) and Relevance-Based Pruning (RBP) methods on the P300 database.

As can be seen, the AUC values obtained by FWD and RBP, i.e. the methods allowing $M > N$, are significantly larger than those obtained with STD. While FWD and RBP show no significant difference in AUC, RBP requires much less computing time than the former. Additionally, RBP yields a much smaller network size, resulting in a more compact ELM.

From a practical point of view, if the number of hidden nodes is restricted, then RBP is the best choice because it yields better classification performance with the same number of neurons. On the other hand, if only classification performance matters, then RBP is again more appropriate, since it yields the same performance as FWD in less or comparable training time, with a more compact network.

6 Conclusions and future work

In this work we have shown that there are certain classification problems for which ELMs can benefit from a non-traditional design. Insights into why standard approaches are suboptimal (depending on the type of data) were provided, along with a new pruning method to deal with this type of problem.

Results show that using a large number of hidden neurons can be beneficial and that the Relevance-Based Pruning method provides a time-efficient way to reduce the ELM size without jeopardizing performance. Furthermore, implementation is very simple and the benefits can be significant in some cases.

In the future, we shall tackle the problem of choosing an appropriate maximum ELM size ($M_{max}$) depending on the problem. Also, we intend to incorporate this method in the context of regularized ELMs and explore its potential as a feature selection tool.

References

  • M. Belkin, D. Hsu, S. Ma, and S. Mandal (2019) Reconciling modern machine-learning practice and the classical bias–variance trade-off. Proceedings of the National Academy of Sciences 116 (32), pp. 15849–15854.
  • C. S. DaSalla, H. Kambara, M. Sato, and Y. Koike (2009) Single-trial classification of vowel speech imagery using common spatial patterns. Neural Networks 22 (9), pp. 1334–1339.
  • R. A. Horn and C. R. Johnson (1990) Topics in Matrix Analysis. Cambridge University Press.
  • G. Huang and H. A. Babri (1998) Upper bounds on the number of hidden neurons in feedforward networks with arbitrary bounded nonlinear activation functions. IEEE Transactions on Neural Networks 9 (1), pp. 224–229.
  • G. Huang, D. H. Wang, and Y. Lan (2011) Extreme learning machines: a survey. International Journal of Machine Learning and Cybernetics 2 (2), pp. 107–122.
  • G. Huang, Q. Zhu, C. Siew, et al. (2004) Extreme learning machine: a new learning scheme of feedforward neural networks. In Proceedings of the IEEE International Joint Conference on Neural Networks, vol. 2, pp. 985–990.
  • C. Ledesma-Ramirez, E. Bojorges-Valdez, O. Yáñez-Suarez, C. Saavedra, L. Bougrain, and G. G. Gentiletti (2010) An open-access P300 speller database. In Fourth International Brain-Computer Interface Meeting, Asilomar, California, USA.
  • W. F. Schmidt, M. A. Kraaijveld, and R. P. Duin (1992) Feed forward neural networks with random weights. In International Conference on Pattern Recognition, pp. 1–4.
  • Y. Song and J. Zhang (2013) Automatic recognition of epileptic EEG patterns via extreme learning machine and multiresolution feature extraction. Expert Systems with Applications 40 (14), pp. 5477–5489.
  • P. K. Wong, Z. Yang, C. M. Vong, and J. Zhong (2014) Real-time fault diagnosis for gas turbine generator systems using extreme learning machine. Neurocomputing 128, pp. 249–257.
  • Y. Zhang, Y. Wang, G. Zhou, J. Jin, B. Wang, X. Wang, and A. Cichocki (2018) Multi-kernel extreme learning machine for EEG classification in brain-computer interfaces. Expert Systems with Applications 96, pp. 302–310.