1 Introduction
With the development of the information society, various industries produce a great deal of data, and how to analyze these data effectively has become an urgent problem (1). Classification, as a basic form of data analysis, has attracted much attention from scholars (2; 3). In 2006, the extreme learning machine (ELM) was proposed by Huang as a new classification method (4; 5; 6; 7), and in recent years ELM has been studied extensively (8; 9; 10). Zhang et al. proposed a privileged-knowledge extreme learning machine called ELM+ for radar signal recognition (11). In practical applications, many classification tasks provide privileged knowledge, but the traditional ELM (4; 5) does not take advantage of it. ELM+ makes full use of privileged knowledge to map input data into a feature space and a correction space; it uses the traditional ELM and the privileged knowledge to obtain a corrected hidden layer output matrix and output layer weights, and the solution of ELM+ is obtained by solving the corrected optimization problem. Aiming at classification in blind domain adaptation, Uzair et al. developed a new ELM model named AELM (12)
. To cope with the large difference between the distributions of the training data and the target-domain data, AELM uses a multiple-classifier system. A global classifier is trained on the whole training set and can classify data of all classes; the data are then divided into c subsets (where c is the number of classes), and c local classifiers are trained on these subsets. When a new sample arrives, the algorithm applies all (c+1) classifiers; the local classifier whose output has the smallest squared error with respect to the global classifier provides the final classification. Deng et al. proposed a fast and accurate kernel-based supervised extreme learning machine referred to as RKELM (13). RKELM introduces a kernel function and a random selection mechanism to improve performance: to reduce the size of the hidden layer output matrix, the support vectors of RKELM are randomly selected from the training set, and their number is limited to less than the number of hidden layer neurons. The traditional ELM is fast but easily affected by noise, while sparse representation classification (SRC) resists noise well but is comparatively slow. Cao et al. therefore designed a new extreme learning machine method based on adaptive sparse representation for image classification, called EASRC
(14). EASRC combines ELM with SRC and employs a regularization mechanism to improve generalization performance; for optimal selection of the regularization parameter it adopts the leave-one-out (LOO) cross-validation scheme. In addition, to reduce computational complexity, EASRC uses singular value decomposition (SVD) for dimensionality reduction. Li et al. proposed an extreme learning machine with a transfer learning mechanism called TLELM (15). Different from the traditional ELM, TLELM requires that the difference between the old-domain knowledge and the new-domain knowledge be as small as possible; the output weights of the new ELM are obtained by solving the corresponding optimization problem. For imbalanced and big data classification, Wang et al. proposed a distributed weighted extreme learning machine referred to as DWELM (16), which draws on the MapReduce framework and a sample-weighting method to handle big and imbalanced data. Aiming at data-stream classification with concept drift, Mirza et al. designed a meta-cognitive online sequential extreme learning machine called MOSELM (17). MOSELM is a development of OSELM (18); it uses online sequential learning to process the data stream and handle concept drift (19; 20; 21; 22). The sliding window is adjusted adaptively according to the classification accuracy: a correctly classified sample is deleted from the sliding window, while a misclassified sample is added back to the window and the SMOTE algorithm (23) is then executed to retrain the classifier. Rough set is a mathematical tool for data analysis proposed by Z. Pawlak (24)
. Because it can deal with imprecise, inconsistent and incomplete information and can eliminate redundant attributes from feature sets without any preliminary or prior information, it has been widely used in recent years in pattern recognition, image processing, biological data analysis, expert systems and other fields (25; 26; 27). Some researchers have investigated methods that combine rough set with neural networks to obtain better classification models (28). Kothari et al. applied rough set theory to the architecture of an unsupervised neural network (29); the proposed algorithm uses the Kohonen learning rule to train the network. Azadeh proposed an integrated data envelopment analysis (DEA), artificial neural network and rough set algorithm for the assessment of personnel efficiency (30); it first uses rough set to determine many candidate reducts, then calculates the performance of a neural network for each reduct, and the best reduct is selected from the ANN results through DEA. Ahn et al. proposed a hybrid intelligent system combining rough set with an artificial neural network to predict the failure of firms (31), and designed a new reduction algorithm called 2D reduction; in the 2D reduction method, rough set is used to eliminate irrelevant or redundant attributes, the samples are then scanned to delete those with inconsistent decisions, and finally association rules are extracted from the data. In Ahn's hybrid classification model, if a new instance matches some association rules, the instance is classified by those rules; otherwise, the algorithm uses the data reduced by the 2D reduction method to train a classifier for this instance. Xu et al. introduced a rough rule granular extreme learning machine called RRGELM (32); RRGELM uses rough set to extract association rules, and the number of neurons in the hidden layer is decided by the number of association rules; the input layer weights are not randomly generated but are determined by whether instances are covered by the association rules. The above works have promoted the development of rough set and neural networks. However, those models either utilize rough set only to reduce attributes, or the cost of training the rough neural networks is too large.
Inspired by the above models, a new classification method combining extreme learning machine with rough set, referred to as RELM, is proposed in this paper. For RELM, the input weights and the biases of the hidden layer are randomly generated, and the training set is divided into two parts, the upper approximation set and the lower approximation set, which are used to train the upper approximation neurons and the lower approximation neurons. In addition, attribute reduction is introduced to eliminate the influence of redundant attributes on the classification results. Because the input weights and hidden layer biases are randomly generated and the output weights can be analytically determined, RELM overcomes some disadvantages of conventional neural networks and has a fast training speed. The contributions of this paper are as follows:

A new extreme learning machine is designed in this paper. Different from the traditional ELM (4), RELM utilizes rough set to divide the data into an upper approximation set and a lower approximation set, and then uses these two sets to train the upper boundary neurons and lower boundary neurons correspondingly. Every neuron of RELM contains two neurons: an upper boundary neuron and a lower boundary neuron. The final classification result is decided by the two kinds of neurons.

Attribute reduction is introduced for RELM. Rough set has obvious advantages in attribute reduction: it can preprocess data according to the data itself and does not need any prior knowledge. By using rough set, RELM can remove redundant attributes without any information loss and improve the performance of the proposed extreme learning machine algorithm.

A new method for determining the number of neurons in hidden layer is proposed. In RELM, the number of neurons in hidden layer is determined by the sizes of positive region and boundary region. It can reduce the blindness of selecting the number of neurons in hidden layer.

Rough set is used to guide the learning process of ELM. Traditional algorithms often keep rough set and neural networks separate and do not fuse ELM with rough set very well. Different from existing algorithms, which only use rough set to reduce attributes or to determine the number of neurons in the hidden layer (31; 32), RELM uses the division of the data produced by rough set to train different kinds of neurons; it combines rough set with extreme learning machine more closely.
The rest of this paper is organized as follows. In Section 2, the basic concepts and principles of ELM and rough set are reviewed. Section 3 introduces the proposed algorithm and describes the implementation process and principle of RELM. In Section 4, RELM and the comparison algorithms are evaluated on data sets and the results are analysed in detail. Finally, the conclusions are stated in Section 5.
2 Preliminaries
In this section, we give a description of the necessary preparatory knowledge for this paper. First, we look into the essence of the extreme learning machine and introduce the steps of ELM. Then rough set is reviewed, and we describe its basic concepts and principles in detail.
2.1 The model of ELM
ELM is a single-hidden-layer feedforward neural network; the input weights and the biases of the hidden layer nodes are randomly selected, and the output weights can be analytically determined by the least-squares method. ELM is very fast and has good generalization ability (9).
For N distinct samples $(x_j, t_j)$, where $x_j \in R^n$ and $t_j \in R^m$, the output of ELM with L hidden layer nodes is (33; 34):

$f_L(x_j) = \sum_{i=1}^{L} \beta_i g(w_i \cdot x_j + b_i) = o_j, \quad j = 1, \ldots, N$ (1)

where $w_i$ is the input weight vector connecting the input layer nodes with the i-th hidden node, $b_i$ is the bias of the i-th hidden node, $\beta_i$ is the output weight vector of the i-th hidden node, and $g(\cdot)$ is the activation function of the hidden nodes, a nonlinear piecewise continuous function. From the literature (4; 35), ELM can approximate any target function with zero error, i.e., $\sum_{j=1}^{N} \|o_j - t_j\| = 0$, so Eq.(1) can be written compactly as:

$H\beta = T$ (2)
$H$ is the output matrix of the hidden layer nodes:

$H = \begin{pmatrix} g(w_1 \cdot x_1 + b_1) & \cdots & g(w_L \cdot x_1 + b_L) \\ \vdots & \ddots & \vdots \\ g(w_1 \cdot x_N + b_1) & \cdots & g(w_L \cdot x_N + b_L) \end{pmatrix}_{N \times L}$ (3)

$\beta$ is the output weight matrix of the hidden nodes and $T$ is the target matrix of ELM, where

$\beta = (\beta_1^T, \ldots, \beta_L^T)^T_{L \times m}, \quad T = (t_1^T, \ldots, t_N^T)^T_{N \times m}$ (4)
The smallest norm least-squares solution of Eq.(2) (36) is

$\hat{\beta} = H^{\dagger} T$ (5)

$H^{\dagger}$ is the Moore–Penrose generalized inverse of $H$ (22); it can be calculated by the orthogonal projection method, the orthogonalization method or singular value decomposition (SVD) (37).
In order to reduce the influence of an ill-conditioned matrix on the calculation results and improve robustness, a ridge parameter is used in ELM (38; 39; 40). The optimization problem of ELM can be described as

$\min_{\beta} \ \frac{1}{2}\|\beta\|^2 + \frac{C}{2}\sum_{i=1}^{N}\|\xi_i\|^2 \quad \text{s.t.} \quad h(x_i)\beta = t_i^T - \xi_i^T, \ i = 1, \ldots, N$ (6)

where C is the penalty factor and $\xi_i$ is the residual between the target value and the real output of the i-th sample. According to the KKT conditions, if $N \geq L$, the solution of Eq.(6) can be expressed as (40):
$\beta = \left(\frac{I}{C} + H^T H\right)^{-1} H^T T$ (7)
So the output of ELM is
$f(x) = h(x)\left(\frac{I}{C} + H^T H\right)^{-1} H^T T$ (8)
If $N < L$, the solution of Eq.(6) can be expressed as
$\beta = H^T \left(\frac{I}{C} + H H^T\right)^{-1} T$ (9)
So the output of ELM is
$f(x) = h(x) H^T \left(\frac{I}{C} + H H^T\right)^{-1} T$ (10)
For a binary classification problem, the decision result of ELM is:

$\mathrm{label}(x) = \mathrm{sign}\big(h(x)\beta\big)$ (11)
For a multiclass classification problem, the decision result of ELM is

$\mathrm{label}(x) = \arg\max_{i \in \{1, \ldots, m\}} f_i(x)$ (12)

where $f(x) = (f_1(x), f_2(x), \ldots, f_m(x))$.
From the above descriptions, the main steps of ELM are summarized as follows: randomly generate the input weights and hidden layer biases, compute the hidden layer output matrix $H$, and analytically calculate the output weights $\beta$.
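These steps can be sketched in a few lines of numpy. This is a minimal illustration under our own naming conventions (the helpers `elm_train` and `elm_predict`, and the choice of a sigmoid activation, are assumptions for the sketch, not the authors' implementation):

```python
import numpy as np

def sigmoid(Z):
    return 1.0 / (1.0 + np.exp(-Z))

def elm_train(X, T, L, C=1000.0, seed=0):
    """Train a ridge-regularized ELM.
    X: (N, n) inputs; T: (N, m) one-hot targets; L: number of hidden neurons."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    W = rng.uniform(-1.0, 1.0, size=(n, L))   # random input weights, kept fixed
    b = rng.uniform(-1.0, 1.0, size=L)        # random hidden biases
    H = sigmoid(X @ W + b)                    # hidden layer output matrix, Eq.(3)
    N = X.shape[0]
    if N >= L:                                # Eq.(7)
        beta = np.linalg.solve(np.eye(L) / C + H.T @ H, H.T @ T)
    else:                                     # Eq.(9)
        beta = H.T @ np.linalg.solve(np.eye(N) / C + H @ H.T, T)
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Class labels by Eq.(12): argmax over the output neurons."""
    return np.argmax(sigmoid(X @ W + b) @ beta, axis=1)
```

Only the output weights `beta` are learned; the random `W` and `b` are never updated, which is what makes the training a single linear solve.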
2.2 The basics of rough set theory
Rough set is a useful tool for the study of imprecise and uncertain knowledge. Rough set can analyze categorical data according to the data itself and does not need any prior knowledge; it has been successfully applied in attribute reduction (41). Rough set holds that objects with the same attribute values should belong to the same decision class; otherwise, the principle of classification consistency is violated (42).
Let $S = (U, A, V, f)$ be an information system (43), where $U$ is a nonempty finite set of objects (the universe), $A$ is an attribute set, $V_a$ is the value set of attribute $a$, $V = \bigcup_{a \in A} V_a$ is the value set of the attribute set $A$, and $f$ is an information function: for $x \in U$ and $a \in A$, $f(x, a) \in V_a$. For an information system with $A = C \cup D$ and $C \cap D = \emptyset$, where $C$ is a condition attribute set and $D$ is a decision attribute set, the information system is also named a decision table. The following definitions are from the literature (44; 45; 46).
Definition 2.1.
For an information system $S = (U, A, V, f)$, let $B \subseteq A$; the relation determined by the attribute set B on the universe is defined as:

$IND(B) = \{(x, y) \in U \times U \mid \forall a \in B, \ f(x, a) = f(y, a)\}$ (13)

IND(B) is called the indiscernibility relation (an equivalence relation).
It is known that objects x and y have the same values on all attributes in B if $(x, y) \in IND(B)$; we also say that x and y are indiscernible with respect to the attribute set B.
Definition 2.2.
For $x \in U$, the equivalence class of x under the B-indiscernibility relation is defined as follows:

$[x]_B = \{y \in U \mid (x, y) \in IND(B)\}$ (14)

The partition of U generated by IND(B) is denoted by U/IND(B), or abbreviated as U/B.
From Definition 2.2, it is known that an equivalence class is a set of objects that have the same values on all attributes in B; if $y \in [x]_B$, then $[y]_B = [x]_B$, and $[x]_B \cap [y]_B = \emptyset$ for $y \notin [x]_B$.
Definition 2.3.
Let $S = (U, A, V, f)$ be an information system, $B \subseteq A$, and let IND(B) be the equivalence relation on the universe U generated by B; $(U, IND(B))$ is called an approximation space. For $X \subseteq U$, the lower approximation and upper approximation of X are defined as:

$\underline{B}X = \{x \in U \mid [x]_B \subseteq X\}, \quad \overline{B}X = \{x \in U \mid [x]_B \cap X \neq \emptyset\}$ (15)
Rough set divides the universe into three regions: positive region, negative region and boundary region; it can be seen as Fig.1.
The B-positive region of X is:

$POS_B(X) = \underline{B}X$ (16)

The B-negative region of X is:

$NEG_B(X) = U - \overline{B}X$ (17)

The B-boundary region of X is:

$BND_B(X) = \overline{B}X - \underline{B}X$ (18)
$\underline{B}X$ means the basic concepts of U that can certainly be assigned to X; $\overline{B}X$ means the basic concepts of U that can possibly be assigned to X; for $BND_B(X)$, we cannot be sure whether the basic concepts of U belong to X or not. If the boundary region of X is empty, which indicates $\underline{B}X = \overline{B}X$, the set X is crisp; if $BND_B(X) \neq \emptyset$, the set X is rough (47). The more knowledge the attributes contain, the crisper the concepts become; the boundary region represents the uncertainty degree of knowledge, and the larger the boundary region is, the greater the uncertainty of knowledge will be (48). In order to measure the uncertainty of a rough set, the following definition is introduced.
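The approximations and regions of Eqs.(15)–(18) follow directly from the partition U/B. The following is a small illustrative sketch (the function names and the dict-based data representation are our own assumptions, not code from the paper):

```python
from collections import defaultdict

def blocks_of(U, B):
    """Partition U/B: group object indices by their value tuple on attribute set B.
    U is a list of dicts mapping attribute name -> categorical value."""
    groups = defaultdict(set)
    for i, x in enumerate(U):
        groups[tuple(x[a] for a in B)].add(i)
    return list(groups.values())

def approximations(U, B, X):
    """Lower and upper approximation of X (a set of indices), Eq.(15)."""
    lower, upper = set(), set()
    for block in blocks_of(U, B):
        if block <= X:      # [x]_B subset of X -> certainly in X
            lower |= block
        if block & X:       # [x]_B meets X     -> possibly in X
            upper |= block
    return lower, upper

def regions(U, B, X):
    """Positive, negative and boundary regions, Eqs.(16)-(18)."""
    lower, upper = approximations(U, B, X)
    universe = set(range(len(U)))
    return lower, universe - upper, upper - lower
```

An equivalence block contributes to the lower approximation only when it fits entirely inside X, and to the boundary when it straddles X, matching the crisp/rough distinction above.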
Definition 2.4.
(49) For an information system $S = (U, A, V, f)$, let $B \subseteq C$, where C is the condition attribute set and D is the decision attribute set, and let $U/D = \{X_1, X_2, \ldots, X_k\}$. The approximation quality of B for D is defined as:

$\gamma_B(D) = \frac{\left|\bigcup_{X_i \in U/D} \underline{B}X_i\right|}{|U|}$ (19)

The approximation precision is:

$\alpha_B(X) = \frac{|\underline{B}X|}{|\overline{B}X|}$ (20)
where $|\cdot|$ denotes the cardinality of a set.
$\gamma_B(D)$ is also called the dependence degree of D on B. If $\gamma_B(D) = 1$, D is completely dependent on B; if $0 < \gamma_B(D) < 1$, D is partially dependent on B.
Definition 2.5.
(50) Given a decision table $S = (U, C \cup D, V, f)$, where C is the condition attribute set and D is the decision attribute set, let $B \subseteq C$. For $a \in B$, if $\gamma_{B - \{a\}}(D) = \gamma_B(D)$, it is said that a is redundant for D; otherwise, a is essential for D.
It is obvious that removing the attributes of B that are redundant relative to D will not change the approximation ability of B with respect to D.
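The approximation quality of Eq.(19) and the redundancy test of Definition 2.5 can be sketched as follows (again an illustration with our own helper names, not code from the paper):

```python
from collections import defaultdict

def blocks_of(U, B):
    """Partition U/B over a list of attribute dicts."""
    groups = defaultdict(set)
    for i, x in enumerate(U):
        groups[tuple(x[a] for a in B)].add(i)
    return list(groups.values())

def gamma(U, B, D):
    """Approximation quality gamma_B(D) = |POS_B(D)| / |U|, Eq.(19):
    the fraction of objects whose B-block fits inside one decision class."""
    decision_classes = blocks_of(U, D)
    pos = set()
    for block in blocks_of(U, B):
        if any(block <= X for X in decision_classes):
            pos |= block
    return len(pos) / len(U)

def is_redundant(U, B, D, a):
    """Definition 2.5: attribute a is redundant in B for D when removing it
    leaves the approximation quality unchanged."""
    return gamma(U, [x for x in B if x != a], D) == gamma(U, B, D)
```

Dropping a redundant attribute never shrinks the positive region, which is exactly why reduction loses no approximation ability.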
3 The model of RELM
In this section, we give an introduction to the structure of RELM and then describe its basic principles. How to train the hidden nodes of RELM using rough set theory can also be found in this section. The detailed execution steps of RELM are given in Algorithm 3.
3.1 The structure of RELM
RELM is a development of ELM (4), but different from the conventional ELM, the neurons of RELM are rough neurons, which are trained on the data divided by rough set. Rough set divides a universe into two distinct parts: the lower approximation set and the upper approximation set. For RELM, each neuron contains two neurons: an upper approximation neuron, trained on the upper approximation set, and a lower approximation neuron, trained on the lower approximation set. The input weights and biases of both kinds of neurons are randomly generated, and their output weights are analytically determined as in ELM. Although the training processes of the two kinds of neurons are relatively independent, the classification result of RELM is decided jointly by the outputs of the upper approximation neurons and the lower approximation neurons. It can be seen that RELM closely combines ELM with rough set, and the division of the universe by rough set is used to guide the learning process of RELM. So RELM is a kind of ELM based on uncertainty measure and can effectively analyze imprecise, inconsistent and incomplete information. The structure of RELM is shown in Fig.2.
3.2 Rough extreme learning machine for data classification
In the RELM algorithm, the most important step is training the rough neurons. From Fig.2, it is known that there are two kinds of neurons, lower approximation neurons and upper approximation neurons, and each rough neuron actually contains one of each. For a training data set $S = (U, A \cup D, V, f)$, A is the condition attribute set and D is the decision attribute set. The universe U is divided into two parts by rough set, the lower approximation set $\underline{U}$ and the upper approximation set $\overline{U}$:

$\underline{U} = \bigcup_{X_i \in U/D} \underline{A}X_i, \quad \overline{U} = \bigcup_{X_i \in U/D} \overline{A}X_i$ (21)
Let L be the number of neurons in the hidden layer, $\underline{W}$ the input weights connecting the input neurons with the lower approximation neurons, and $\underline{b}$ the biases of the lower approximation neurons. The lower approximation neurons of RELM are trained on $\underline{U}$. According to ELM theory, if $N_1 \geq L$, the output weights of the lower approximation neurons are:

$\underline{\beta} = \left(\frac{I}{C} + \underline{H}^T \underline{H}\right)^{-1} \underline{H}^T \underline{T}$ (22)

where $N_1$ is the number of samples in $\underline{U}$, C is the ridge parameter, $\underline{H}$ is the output matrix of the lower approximation neurons in the hidden layer, $\underline{T}$ is the target matrix of $\underline{U}$ and I is an identity matrix. $\underline{H}$ and $\underline{T}$ in Eq.(22) are as follows:

$\underline{H} = \begin{pmatrix} g(\underline{w}_1 \cdot x_1 + \underline{b}_1) & \cdots & g(\underline{w}_L \cdot x_1 + \underline{b}_L) \\ \vdots & \ddots & \vdots \\ g(\underline{w}_1 \cdot x_{N_1} + \underline{b}_1) & \cdots & g(\underline{w}_L \cdot x_{N_1} + \underline{b}_L) \end{pmatrix}_{N_1 \times L}, \quad \underline{T} = (t_1^T, \ldots, t_{N_1}^T)^T$ (23)
If $N_1 < L$, the output weights of the lower approximation neurons are:

$\underline{\beta} = \underline{H}^T \left(\frac{I}{C} + \underline{H}\,\underline{H}^T\right)^{-1} \underline{T}$ (24)
Let $\overline{W}$ be the input weights connecting the input neurons with the upper approximation neurons and $\overline{b}$ the biases of the upper approximation neurons. The upper approximation neurons of RELM are trained on $\overline{U}$. So if $N_2 \geq L$, the output weights of the upper approximation neurons are:

$\overline{\beta} = \left(\frac{I}{C} + \overline{H}^T \overline{H}\right)^{-1} \overline{H}^T \overline{T}$ (25)

where $N_2$ is the number of samples in the upper approximation set. $\overline{H}$ and $\overline{T}$ are:

$\overline{H} = \begin{pmatrix} g(\overline{w}_1 \cdot x_1 + \overline{b}_1) & \cdots & g(\overline{w}_L \cdot x_1 + \overline{b}_L) \\ \vdots & \ddots & \vdots \\ g(\overline{w}_1 \cdot x_{N_2} + \overline{b}_1) & \cdots & g(\overline{w}_L \cdot x_{N_2} + \overline{b}_L) \end{pmatrix}_{N_2 \times L}, \quad \overline{T} = (t_1^T, \ldots, t_{N_2}^T)^T$ (26)
If $N_2 < L$, the output weights of the upper approximation neurons are:

$\overline{\beta} = \overline{H}^T \left(\frac{I}{C} + \overline{H}\,\overline{H}^T\right)^{-1} \overline{T}$ (27)
Suppose the test data set is $U_t$, and $\underline{H}_t$ and $\overline{H}_t$ are the output matrices of the lower approximation neurons and the upper approximation neurons for $U_t$, correspondingly. The output target matrices of the two kinds of hidden layer neurons are:

$\underline{T}_t = \underline{H}_t\,\underline{\beta}, \quad \overline{T}_t = \overline{H}_t\,\overline{\beta}$ (28)

So the target matrix of RELM for $U_t$ is

$T_t = c\,\underline{T}_t + (1 - c)\,\overline{T}_t$ (29)

where c is a weight to balance the output matrices of the lower approximation neurons and the upper approximation neurons; $\underline{T}_t$ is the output target matrix of the lower approximation neurons and $\overline{T}_t$ is the output target matrix of the upper approximation neurons.
In order to eliminate the influence of redundant attributes on the classification result, attribute reduction is introduced in RELM. From rough set theory, it is known that rough set can remove redundant attributes without any empirical knowledge. For the training data set, C is the condition attribute set, D is the decision attribute set and $B \subseteq C$; for $a \in C - B$, the significance of the attribute a for the decision attribute set D based on the condition attribute set B is:

$Sig(a, B, D) = \gamma_{B \cup \{a\}}(D) - \gamma_B(D)$ (30)

The greater the value of $Sig(a, B, D)$ is, the more important a is for D; so $Sig(a, B, D)$ can be used to select non-redundant attributes in the attribute reduction procedure.
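A significance-driven reduction can be sketched as a greedy forward selection: repeatedly add the most significant attribute until the dependence degree of the full condition attribute set is reached. This greedy scheme is a common heuristic and is offered only as an illustration; it is not claimed to be the paper's exact reduction algorithm:

```python
from collections import defaultdict

def blocks_of(U, B):
    """Partition U/B over a list of attribute dicts."""
    groups = defaultdict(set)
    for i, x in enumerate(U):
        groups[tuple(x[a] for a in B)].add(i)
    return list(groups.values())

def gamma(U, B, D):
    """Approximation quality gamma_B(D), Eq.(19)."""
    decision_classes = blocks_of(U, D)
    pos = set()
    for block in blocks_of(U, B):
        if any(block <= X for X in decision_classes):
            pos |= block
    return len(pos) / len(U)

def significance(U, B, D, a):
    """Sig(a, B, D) = gamma_{B + {a}}(D) - gamma_B(D), Eq.(30)."""
    return gamma(U, B + [a], D) - gamma(U, B, D)

def forward_reduct(U, C_attrs, D):
    """Greedily add the most significant attribute until the dependence
    degree of the full condition attribute set is reached."""
    target = gamma(U, C_attrs, D)
    B = []
    while gamma(U, B, D) < target:
        best = max((a for a in C_attrs if a not in B),
                   key=lambda a: significance(U, B, D, a))
        B.append(best)
    return B
```

Because gamma is monotone non-decreasing as attributes are added, the loop terminates after at most |C| additions.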
The number of neurons in the hidden layer plays an important role in RELM. If the number of hidden layer neurons L is too large, overfitting may occur; if L is too small, underfitting may appear. RELM therefore uses the division of the data by rough set to determine the number of neurons in the hidden layer. For a data set, the larger the positive region is, the better the ability of the attribute set to distinguish the data, so it is better for RELM to choose a small L; the larger the boundary region is, the worse the attribute set divides the data, and there is a tendency to choose a large L. The number of neurons in the hidden layer L is decided as

$L = \left\lceil \lambda_1 |POS_C(D)| + \lambda_2 |BND_C(D)| \right\rceil$ (31)

where $\lambda_1$ and $\lambda_2$ are parameters predefined by the user, which are the weights of the positive region and the boundary region, correspondingly, for determining L. It is obvious that L is decided according to the division of the data itself; in other words, this decreases the dependence on empirical knowledge, so determining the number of hidden layer neurons by Eq.(31) can reduce the blindness of selecting L to a certain extent.
From the above descriptions, the steps of RELM are summarized in Algorithm 3.
From Algorithm 3, it can be seen that the steps of RELM and ELM are very different. The neurons of RELM are rough neurons, which are trained on the division of the data produced by rough set, so RELM is a classification method based on uncertainty measure; such methods have a clear advantage in dealing with inconsistent and incomplete information, and the number of neurons in the hidden layer is also decided by the information provided by rough set, which does not require much empirical knowledge. By utilizing the outputs of the upper approximation neurons and lower approximation neurons, RELM can make full use of the information provided by rough set to obtain a better classification result.
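The core of RELM's training and prediction, Eqs.(22)–(29), can be sketched as follows. The index sets `lower_idx` and `upper_idx` stand for the rough-set division of Eq.(21); the helper names and the sigmoid activation are our own choices for the sketch, not the authors' code:

```python
import numpy as np

def sigmoid(Z):
    return 1.0 / (1.0 + np.exp(-Z))

def fit_output_weights(X, T, W, b, C):
    """Ridge solution for one group of neurons, Eqs.(22)/(24)."""
    H = sigmoid(X @ W + b)
    N, L = H.shape
    if N >= L:
        return np.linalg.solve(np.eye(L) / C + H.T @ H, H.T @ T)
    return H.T @ np.linalg.solve(np.eye(N) / C + H @ H.T, T)

def relm_train(X, T, lower_idx, upper_idx, L, C=1000.0, seed=0):
    """lower_idx / upper_idx index the rough-set division of Eq.(21);
    the two neuron groups share the architecture but train on different subsets."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    Wl, bl = rng.uniform(-1, 1, (n, L)), rng.uniform(-1, 1, L)
    Wu, bu = rng.uniform(-1, 1, (n, L)), rng.uniform(-1, 1, L)
    beta_l = fit_output_weights(X[lower_idx], T[lower_idx], Wl, bl, C)
    beta_u = fit_output_weights(X[upper_idx], T[upper_idx], Wu, bu, C)
    return (Wl, bl, beta_l), (Wu, bu, beta_u)

def relm_predict(X, lower, upper, c=0.5):
    (Wl, bl, beta_l), (Wu, bu, beta_u) = lower, upper
    T_low = sigmoid(X @ Wl + bl) @ beta_l     # lower approximation neurons
    T_up = sigmoid(X @ Wu + bu) @ beta_u      # upper approximation neurons
    return np.argmax(c * T_low + (1 - c) * T_up, axis=1)   # Eq.(29) + argmax
```

Note that each of the two neuron groups is itself trained exactly like an ordinary ridge-regularized ELM; the rough-set contribution is in which samples each group sees and in the weighted combination of their outputs.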
4 Experiment and results
In this section, we demonstrate the effectiveness of the proposed method and the comparison algorithms on 19 data sets. To verify the capabilities of our algorithm, we choose CELM, CSELM, DELM, MELM, RandSampleELM and SELM as comparison algorithms; all algorithms are executed on the MATLAB R2017a platform. The configuration of the computer is: Windows 7 OS, 8 GB RAM, Intel i3-2120 dual-core CPU.
4.1 Data set descriptions
In the experiments, there are 16 real data sets from the UCI repository (http://archive.ics.uci.edu/ml/datasets.html) and 3 synthetic data sets generated by the MOA platform (http://moa.cms.waikato.ac.nz/) (51). The descriptions of the 16 real data sets can be found on the UCI website, so we only give a brief introduction to the 3 synthetic data sets. The information of all data sets is given in Table 1.
Hyperplane data set: for a d-dimensional space, a hyperplane is defined as $\sum_{i=1}^{d} w_i x_i = w_0$, where $x_i$ is the i-th coordinate of a sample x and $w_i$ is the corresponding weight. If $\sum_{i=1}^{d} w_i x_i \geq w_0$, x is marked as a positive sample; otherwise x is marked as a negative sample. There is 10% noise in the Hyperplane data set.

Waveform data set: there are 3 classes and 21 attributes in the data set. The goal of the task is to differentiate the three types of waveform. There is 2% noise in the Waveform data set.

STAGGER data set: each sample has 3 attributes: size ∈ {small, medium, large}, color ∈ {red, green, blue} and shape ∈ {circle, square, triangle}. The concepts of the data are: (size = small ∧ color = red), (color = green ∨ shape = circle) and (size = medium ∨ size = large).
Data set  Attributes  Numerical attributes  Categorical attributes  Samples  Type 

Horse  26  9  17  300  mixed 
glass  9  9  0  214  numerical 
biodeg  41  17  24  500  mixed 
haberman  3  0  3  301  numerical 
lungcancer  31  31  0  57  categorical 
votes  6  6  0  435  categorical 
Germany  24  3  21  500  mixed 
Echocardiogram  12  4  8  131  mixed 
Tic  9  0  9  958  categorical 
parkinsons  22  22  0  195  numerical 
yeast  8  8  0  500  numerical 
vehicle  18  18  0  846  numerical 
pima  8  1  7  500  mixed 
segment  19  19  0  500  numerical 
Hepatitis  19  1  18  156  mixed 
STAGGER  3  0  3  500  categorical 
adult  13  5  8  500  mixed 
Hyperplane  40  40  0  500  numerical 
Waveform  21  21  0  500  numerical 
Because rough set can only analyse categorical data, the numerical and mixed data sets are discretized. The discretization method is equal-interval discretization, and the number of intervals is set to the number of labels in the data set.
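Equal-interval (equal-width) discretization as described above can be sketched as follows; this is a simple illustration with our own function name, binning each numerical column into k equal-width intervals where k is the number of class labels:

```python
import numpy as np

def equal_interval_discretize(col, k):
    """Split a numerical column into k equal-width intervals and return bin
    indices 0..k-1 (k is set to the number of class labels in the data set)."""
    lo, hi = float(np.min(col)), float(np.max(col))
    if hi == lo:                               # constant column: single interval
        return np.zeros(len(col), dtype=int)
    edges = np.linspace(lo, hi, k + 1)[1:-1]   # k-1 interior cut points
    return np.digitize(col, edges)             # values >= last edge land in bin k-1
```

Applied column by column to the numerical attributes, this yields the categorical table that the rough-set computations require.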
4.2 The comparison results of RELM with other ELM algorithms
In order to test the efficiency of RELM, in this section we choose CELM (52), CSELM, DELM, MELM, RandSampleELM and SELM as comparison algorithms (53). For RELM, c = 0.5; the activation function is chosen from sigmoid, radbas, tribas, sine and hardlim. The other parameters of RELM and the comparison algorithms are listed in Tables 2 and 3. The test results (accuracy, mean±std) and time overhead are also shown in Tables 2 and 3.
Data set  CELM  CSELM  DELM  MELM  RandSampleELM  SELM  RELM  L  C  function 

Horse  0.6470±0.02163  0.0970±0.0221  0.6650±0.0877  0.6680±0.1223  0.4630±0.1671  0.6480±0.00632  0.6550±0.0513  2  100  hardlim 
glass  0.7430±0.0219  0.0980±0.0225  0.7530±0.1011  0.7450±0.1032  0.7410±0.3956  0.6450±0.0401  0.6160±0.0474  20  100  hardlim 
biodeg  0.6537±0.0163  0.0437±0.0033  0.6559±0.0127  0.6605±0.0245  0.4846±0.5076  0.6690±0.0057  0.6614±0.0913  5  100  hardlim 
haberman  0.7188±0.0069  0.0317±0.0160  0.7128±0.0104  0.7128±0.0224  0.6227±0.4778  0.6940±0.0031  0.7435±0.0490  5  100  tribas 
lungcancer  0.3818±0.0575  1±0  0.4364±0.0717  0.3636±0.1212  0.4363±0.1858  0.3636±0.0383  0.3182±0.1154  20  100  tribas 
votes  0.7745±0.0298  0.6579±0.0022  0.7965±0.0327  0.9276±0.01810  0.5489±0.0146  0.9331±0.0033  0.9355±0.1044  1000  100  radbas 
Germany  0.6874±0.0025  0.4131±0.0101  0.6841±0.0047  0.6832±0.0019  0.3892±0.1956  0.6805±0.0104  0.6958±0.0197  2  1000  tribas 
Echocardiogram  0.7386±0.0482  0.0113±0.0120  0.7227±0.0869  0.6795±0.0783  0.1977±0.3127  0.6727±0.0618  0.7409±0.0565  2  1000  tribas 
Tic  0.6571±0.0179  0.4934±0.0252  0.6475±0.0243  0.6553±0.0190  0.4356±0.0190  0.6559±0.0154  0.6603±0.0114  5  1000  sigmoid 
parkinsons  0.8061±0.050  0±0  0.8076±0.0424  0.7769±0.0621  0.1692±0.3458  0.7676±0.0419  0.8169±0.0816  800  1000  hardlim 
yeast  0.5400±0.0189  0.0255±0.0079  0.5478±0.0307  0.6791±0.0156  0.5640±0.1990  0.7058±0.0209  0.6024±0.1589  1000  1000  hardlim 
vehicle  0.7322±0.1120  0.0716±0.0119  0.7478±0.0674  0.7457±0.0641  0.5042±0.4190  0.7592±0.0188  0.7691±0.0217  2  1000  radbas 
pima  0.6609±0.0320  0.1625±0.0235  0.6742±0.0461  0.6656±0.0409  0.2613±0.1118  0.6367±0.0276  0.6820±0.0201  5  1000  tribas 
segment  0.8551±0.0209  0.0239±0.0085  0.8551±0.0304  0.8616±0.0282  0.3233±0.3480  0.8598±0.0303  0.8658±0.0302  5  1000  hardlim 
Hepatitis  0.6807±0.0553  0.1134±0.0400  0.6865±0.0588  0.7692±0.0351  0.6135±0.1987  0.6653±0.0943  0.7769±0.0553  100  1000  tribas 
STAGGER  0.9898±0.0057  0.6203±0.0328  1±0  0.5065±0.0360  0.6281±0.0351  0.5059±0.0251  1±0  100  1000  hardlim 
adult  0.7197±0.0297  0.0161±0.0080  0.6880±0.0396  0.7293±0.0203  0.6024±0.5134  0.7491±0.0356  0.7682±0.0217  100  1000  tribas 
Data set  CELM  CSELM  DELM  MELM  RandSampleELM  SELM  RELM  

Horse  0.0147  0.0087  0.0091  0.0099  0.0156  0.0087  18.0397 
glass  0.0033  0.0629  0.0051  0.0092  0.0046  0.0080  0.2921 
biodeg  0.0072  0.3173  0.0119  0.0142  0.0188  0.0059  90.9761 
haberman  0.0072  0.0191  0.0053  0.0059  0.0073  0.0043  0.035092 
lungcancer  0.0031  0.0027  0.0033  0.0036  0.0048  0.0033  10.5244 
votes  0.8055  0.1512  0.1584  0.1735  2.9999  0.1204  4.0429 
Germany  0.0123  0.0091  0.0083  0.0164  0.0095  0.0090  39.0233 
Echocardiogram  0.0020  0.0107  0.0033  0.0029  0.0020  0.0026  0.6670 
Tic  0.0092  0.0563  0.0087  0.0085  0.0108  0.0087  3.435405 
parkinsons  0.0792  0.0204  0.0621  0.0916  1.9678  0.0508  2.3083 
yeast  0.2022  0.3253  0.2162  0.1776  2.9894  0.1848  0.5858 
vehicle  0.0068  0.0142  0.0073  0.0075  0.0098  0.0088  3.7902 
pima  0.0078  0.0091  0.0085  0.0073  0.0100  0.0062  0.0059 
segment  0.0069  0.2842  0.0069  0.0050  0.0082  0.0072  3.0836 
Hepatitis  0.0089  0.0142  0.0076  0.0078  0.0407  0.0069  4.9032 
STAGGER  0.0095  0.0099  0.0082  0.0141  0.0358  0.0114  0.0407 
adult  0.0107  0.0432  0.0092  0.0103  0.0382  0.0087  3.3013 
From Table 2, it is obvious that RELM is better than CELM, CSELM, DELM, MELM, RandSampleELM and SELM on most data sets; RELM obtains the highest accuracy on 12 data sets and the second best on the biodeg data set; it only loses to DELM, CSELM, MELM and SELM on the Horse, glass, biodeg, lungcancer and yeast data sets. The results indicate that the approximation ability of RELM is effective for classification tasks. Table 3 shows the time overhead of RELM and the comparison algorithms. By analyzing the data, it is known that the proposed algorithm does not have much advantage in time overhead and is not the least time-consuming algorithm on most data sets; in other words, RELM is a time-consuming algorithm. According to ELM theory, ELM itself is fast, so most of the time is consumed by the rough set method. Combined with the data in Table 2, this shows that rough set can improve the performance of the proposed algorithm; therefore, for classification tasks with low real-time requirements, it is worth considering using RELM.
4.3 The effect of activation function on the performance of the algorithm
To test the effect of the activation function on the performance of RELM, we choose sigmoid, radbas, tribas, sine and hardlim as activation functions. Every data set is tested with the 5 activation functions, and RELM is evaluated on 17 data sets. The numbers of neurons and the test results are shown in Table 4.
Data set  sigmoid  radbas  tribas  sine  hardlim  L  

Horse  0.6650±0.0375  0.5900±0.0956  0.6510±0.0387  0.6210±0.0927  0.6580±0.0537  20 
glass  0.8808±0.0329  0.8644±0.0332  0.8575±0.0377  0.6986±0.3069  0.8836±0.0378  90 
biodeg  0.5934±0.0351  0.5341±0.0669  0.5599±0.0895  0.5719±0.1030  0.5664±0.07752  100 
haberman  0.6683±0.1556  0.6228±0.1969  0.7297±0.0371  0.5485±0.1089  0.7267±0.0234  50 
lungcancer  0.3636±0.1050  0.3091±0.1497  0.2909±0.0939  0.3000±0.1425  0.2818±0.1246  100 
votes  0.6296±0.1351  0.5807±0.0657  0.5800±0.1611  0.5552±0.0696  0.5676±0.1328  20 
Germany  0.6985±0.0171  0.6153±0.1639  0.6686±0.0988  0.5153±0.0987  0.7021±0.0148  30 
Echocardiogram  0.5909±0.1680  0.6500±0.1189  0.6909±0.0559  0.4796±0.1156  0.7205±0.0372  50 
Tic  0.6591±0.0289  0.6525±0.0143  0.6653±0.0161  0.5844±0.1329  0.6528±0.0115  50 
parkinsons  0.7431±0.0513  0.3862±0.2113  0.2231±0.0364  0.5477±0.1395  0.6046  30 
yeast  0.7413±0.0266  0.7365±0.0385  0.7270±0.0405  0.6755±0.1635  0.7497±0.0211  100 
vehicle  0.7461±0.0203  0.7379±0.0260  0.7507±0.0197  0.5766±0.1410  0.7415±0.0231  25 
pima  0.5850±0.1197  0.6204±0.0450  0.6437±0.0279  0.4970±0.0881  0.6383±0.0323  75 
segment  0.8689±0.0192  0.8593±0.0273  0.8695±0.0292  0.4479±0.2288  0.8665±0.0312  45 
Hepatitis  0.8192±0.0603  0.8154±0.0427  0.8135±0.0301  0.7058±0.2277  0.4213±0.2840  80 
STAGGER  0.5539±0.0971  0.4808±0.02526  0.5222±0.1179  0.6647±0.1967  0.4773±0.0536  50 
adult  0.7767±0.0245  0.7713±0.0359  0.7689±0.0287  0.5359±0.1859  0.7731±0.0252  50 
From Table 4, it can be concluded that the accuracies of RELM differ with different activation functions. When the activation function is sigmoid, RELM obtains 7 best accuracies; with tribas, 5 best accuracies; with hardlim, 4 best accuracies. The test results of the proposed algorithm are not very good on these experimental data sets when radbas or sine is chosen as the activation function, because radbas achieves no best accuracy and sine only one. For every data set, the standard deviations of the accuracies also vary with the activation function; for example, on the Horse data set the standard deviation is 0.0375 with sigmoid but 0.0956 with radbas, and similar situations can be found on the other data sets. From the test results, it is obvious that the activation function has a great impact on the performance of RELM; the activation functions giving the best results differ across data sets, so the choice of activation function depends on the experimental data set. If users do not have much empirical knowledge about choosing activation functions, sigmoid, tribas or hardlim seems to be a good initial selection for these experimental data sets.

4.4 The effect of the number of neurons in hidden layer on the performance of RELM
In order to test the effect of the number of hidden layer nodes on the performance of RELM, we set the activation function of RELM as sigmoid, set the ridge parameter C as 1000, and vary the number of neurons in the hidden layer from 1 to 1000. The test accuracies of RELM with different numbers of neurons in the hidden layer are shown in Table 5.
Data set \ L  5  50  200  500  800  1000 

Horse  0.6100±0.1505  0.5370±0.1245  0.4640±0.1342  0.4770±0.1084  0.5040±0.1424  0.5030±0.1216 
glass  0.7246±0.3046  0.7739±0.2346  0.5562±0.3631  0.4836±0.3667  0.4972±0.3663  0.5507±0.3667 
biodeg  0.5389±0.1559  0.5503±0.1160  0.4904±0.1169  0.4868±0.0945  0.4353±0.0727  0.5072±0.1196 
haberman  0.5891±0.1929  0.6267±0.2079  0.5069±0.1772  0.5356±0.1924  0.4970±0.2082  0.4327±0.1956 
lungcancer  0.2000±0.0717  0.2818±0.1512  0.3091±0.0636  0.3909±0.0963  0.2636±0.1572  0.3455±0.1032 
votes  0.6779±0.2021  0.6207±0.2469  0.6269±0.2145  0.5069±0.2220  0.5910±0.2294  0.6227±0.2320 
Germany  0.5264±0.2160  0.6701±0.1807  0.5653±0.1987  0.5599±0.1573  0.4491±0.1947  0.4719±0.1459 
Echocardiogram  0.7318±0.1354  0.6182±0.2096  0.6386±0.1447  0.6091±0.1827  0.5955±0.2221  0.6114±0.2745 
Tic  0.6563±0.0304  0.6531±0.0256  0.6500±0.0177  0.6588±0.0311  0.6550±0.0287  0.6438±0.0231 
parkinsons  0.7569±0.0422  0.5677±0.2581  0.4339±0.2316  0.5785±0.2181  0.5431±0.1860  0.5369±0.216 
yeast  0.4635±0.2474  0.6994±0.1549  0.6222±0.2063  0.6713±0.1699  0.5922±0.2214  0.5431±0.2452 
vehicle  0.7545±0.0238  0.5006±0.2552  0.6246±0.2233  0.6515±0.2149  0.6060±0.2230  0.5497±0.2287 
pima  0.5904±0.0765  0.5246±0.1099  0.4683±0.1204  0.5389±0.1197  0.5410±0.1155  0.4994±0.1026 
segment  0.8120±0.1916  0.6761±0.2353  0.3635±0.2873  0.4922±0.2505  0.6365±0.2100  0.5150±0.2019 
Hepatitis  0.7942±0.0666  0.7808±0.0446  0.8173±0.0408  0.8096±0.0492  0.7981±0.0187  0.8000±0.0317 
STAGGER  0.7168±0.1956  0.5425±0.1100  0.5150±0.0314  0.5042±0.0176  0.5204±0.0287  0.5168±0.0360 
adult  0.7671±0.0234  0.7892±0.0243  0.7521±0.0821  0.5467±0.2901  0.6006±0.2756  0.6497±0.2323 
From Table 5, it is clear that the number of neurons in the hidden layer has a significant impact on the performance of RELM: different numbers of hidden neurons yield different test results. Because the experimental data sets are not very large, most data sets obtain their best results with a small L. Analyzing the test results, it can be concluded that the performance of the proposed algorithm does not increase monotonically with the number of hidden neurons. If the number of neurons in the hidden layer is too large or too small, the performance of the algorithm decreases. The reason for this phenomenon is that if L is too large, the structure of RELM becomes too complex and RELM may overfit the training data; if L is too small, RELM cannot effectively approximate the target classification model and underfitting may appear. Combining Tables 2 and 5, it can be found that RELM tends to choose a small L while still guaranteeing its performance, which indicates that the method for determining the number of hidden-layer neurons is efficient.
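The overfitting/underfitting trade-off above follows directly from how ELM is trained: the hidden layer is random and only the output weights are solved analytically, so L directly controls model capacity. The following minimal numpy sketch of a standard ridge-regularized ELM (the classical algorithm of Huang et al. (4; 5), not the authors' full RELM with rough-set neurons) illustrates varying L; the toy data and all names are illustrative assumptions.

```python
import numpy as np

def elm_train(X, Y, L, C=1000.0, seed=None):
    """Basic ELM: random hidden layer, analytically solved output weights."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1.0, 1.0, size=(X.shape[1], L))  # random input weights
    b = rng.uniform(-1.0, 1.0, size=L)                # random biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))            # sigmoid hidden outputs
    # Ridge-regularized least squares: beta = (H^T H + I/C)^(-1) H^T Y
    beta = np.linalg.solve(H.T @ H + np.eye(L) / C, H.T @ Y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return np.argmax(H @ beta, axis=1)

# Toy data echoing the Table 5 setup: vary L and observe training accuracy.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
Y = np.eye(2)[y]                                      # one-hot targets
for L in (5, 50, 200):
    W, b, beta = elm_train(X, Y, L, C=1000.0, seed=1)
    acc = float(np.mean(elm_predict(X, W, b, beta) == y))
```

Since only beta is fitted, increasing L enlarges the random feature space; past a point the extra capacity fits noise, matching the degradation seen for large L in Table 5.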
4.5 The research about the reduction mechanism of RELM
In order to test the reduction mechanism of RELM, we execute the proposed algorithm and RELM without the reduction mechanism (denoted URELM) on 14 data sets. The number of neurons in the hidden layer L is 150, the activation function is hardlim, and C is set to 1000. RELM and URELM are each run 10 times, and the results are shown in Tables 6 and 7 and Fig.3.
From Fig.3, it can be seen that the performances of RELM and URELM fluctuate. The main reason is that the input weights and biases are randomly generated while the number of hidden-layer neurons L is fixed; this randomness causes the fluctuation of RELM's performance. Analyzing the curves in Fig.3, RELM is better than URELM in most cases. The results in Table 6 are the average accuracies over the 10 runs. From Table 6, it is obvious that the performance of RELM is significantly better than that of URELM on all experimental data sets, which indicates that the reduction mechanism can improve the performance of RELM. The data in Table 7 are the reduction results of RELM. From Table 7, RELM can remove redundant attributes without changing the discriminating ability of the condition attributes, so it can be concluded that the reduction mechanism is effective for RELM. However, the reduction mechanism produces an abnormal result on the Hepatitis data set: Hepatitis has 19 attributes, but all condition attributes are removed as redundant. The reason is that the dependence of the decision attributes on the condition attributes is not very large; in other words, the relative positive region does not decrease when any single condition attribute is removed, so the reduction algorithm removes all condition attributes.
Algorithms  RELM  URELM 

Horse  0.694±0.1625  0.5460±0.2375 
glass  0.8656±0.0528  0.8095±0.2398 
haberman  0.6228±0.1476  0.5762±0.1881 
votes  0.7297±0.1336  0.6055±0.1600 
Echocardiogram  0.6796±0.1195  0.6591±0.1496 
Germany  0.6102±0.1702  0.5916±0.1911 
parkinsons  0.6492±0.1714  0.5292±0.2032 
yeast  0.6623±0.1245  0.6089±0.1513 
vehicle  0.6731±0.1624  0.6246±0.2179 
pima  0.5659±0.0974  0.5125±0.0947 
segment  0.7653±0.1893  0.7006±0.2297 
STAGGER  1.0000±0.0000  0.9886±0.0302 
adult  0.7563±0.0233  0.6755±0.2278 
Hepatitis  0.7885±0.0395  0.5346±0.2457 
Data sets  Before reduction  After reduction  Reduced dimensions  Reduced dimensions/Before reduction 

Horse  26  3  23  0.8846 
glass  9  4  5  0.5556 
haberman  3  2  1  0.3333 
votes  16  9  7  0.4375 
Echocardiogram  12  9  3  0.2500 
Germany  24  7  17  0.7083 
parkinsons  22  7  15  0.6818 
yeast  8  8  0  0.0000 
vehicle  18  13  5  0.2778 
pima  8  8  0  0.0000 
segment  19  11  8  0.4211 
STAGGER  3  2  1  0.3333 
adult  13  12  1  0.0769 
Hepatitis  19  0  19  1.0000 
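The reduction results in Table 7 follow the classical positive-region notion from rough set theory. As a hedged illustration of the general idea (a greedy sketch under simplifying assumptions, not the authors' exact reduction algorithm), an attribute is treated as redundant if removing it leaves the relative positive region unchanged; when no single attribute is indispensable, as observed on Hepatitis, every attribute can end up removed:

```python
def positive_region(rows, cond_idx, dec_idx):
    """Indices of objects whose condition-equivalence class is pure in the decision."""
    groups = {}
    for i, row in enumerate(rows):
        groups.setdefault(tuple(row[j] for j in cond_idx), []).append(i)
    pos = set()
    for members in groups.values():
        if len({rows[i][dec_idx] for i in members}) == 1:  # consistent class
            pos.update(members)
    return pos

def reduct(rows, n_cond, dec_idx):
    """Greedily drop every attribute whose removal keeps the positive region intact."""
    kept = list(range(n_cond))
    full_pos = positive_region(rows, kept, dec_idx)
    for a in range(n_cond):
        trial = [j for j in kept if j != a]
        if positive_region(rows, trial, dec_idx) == full_pos:
            kept = trial  # attribute a is redundant
    return kept

# Toy decision table: the decision depends only on the first condition attribute,
# so the second one should be removed as redundant.
rows = [(0, 0, 'a'), (0, 1, 'a'), (1, 0, 'b'), (1, 1, 'b')]
kept = reduct(rows, n_cond=2, dec_idx=2)
```

If the initial positive region is already small (low dependency of the decision on the conditions), removing any attribute may leave it unchanged, which reproduces the all-attributes-removed behaviour seen on Hepatitis.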
4.6 The impact of data dimensions on time overhead
In order to test the impact of data dimensions on the time overhead, we choose Horse, biodeg, lungcancer, Germany, parkinsons, vehicle, segment, Hyperplane and Waveform as experimental data sets; the activation function is radbas, the number of hidden-layer neurons L is 650, and C is 1000. The BP algorithm (2) with the rough set reduction method (denoted RS+BP) is used as the comparison algorithm. The time overhead is shown in Fig.4–Fig.12.
Fig.4–Fig.12 show the time overhead of RELM and RS+BP; the subgraph in each figure is a local view of the trend of RELM's time overhead. From Fig.4–Fig.12, it can be seen that the time overhead of both RELM and RS+BP increases with the data dimension. RS+BP is a very time-consuming algorithm, and its time overhead is much larger than that of RELM. Because RELM utilizes the ELM mechanism, its cost stays at a lower level, although it also shows a clear increasing trend. In a word, the time complexity of RELM is less sensitive to the data dimension than that of RS+BP.
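The mild growth of ELM-style training cost with dimension can be seen from the computation itself: the input dimension d only affects the n-by-d times d-by-L product that builds the hidden output H, while the dominant L-by-L solve for the output weights is independent of d. A hypothetical micro-benchmark sketch (not the authors' experimental code; the sizes and the radbas-style activation are assumptions):

```python
import time
import numpy as np

def elm_fit_seconds(n, d, L=650, C=1000.0, seed=0):
    """Wall-clock time of one ELM fit: hidden mapping plus an L-by-L ridge solve."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, d))
    Y = rng.normal(size=(n, 2))
    t0 = time.perf_counter()
    W = rng.uniform(-1.0, 1.0, size=(d, L))
    b = rng.uniform(-1.0, 1.0, size=L)
    H = np.exp(-((X @ W + b) ** 2))                    # radbas-style activation
    np.linalg.solve(H.T @ H + np.eye(L) / C, H.T @ Y)  # output weights
    return time.perf_counter() - t0

# Growing the input dimension d changes only the X @ W term;
# the L-by-L solve that dominates the fit is untouched.
times = [elm_fit_seconds(500, d) for d in (10, 100, 500)]
```

Only the hidden-mapping term scales with d, which is consistent with RELM's time curve growing far more slowly than that of RS+BP, where iterative BP training is repeated over the higher-dimensional inputs.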
5 Conclusion and future work
In this paper, we proposed a new extreme learning machine algorithm with the rough set method, called RELM. RELM utilizes the data division produced by rough sets to train upper approximation neurons and lower approximation neurons, and the output weights can be analytically determined; the final classification result is decided by the two kinds of neurons. In addition, an attribute reduction method is introduced to remove redundant attributes. The experimental results show that RELM is an effective algorithm. However, in some experiments the performance of RELM appears unstable, and the variances of the accuracies are somewhat large on some data sets. From Fig.4–Fig.12, it can also be seen that the time cost of RELM increases almost exponentially. Therefore, how to improve the stability of RELM's performance and further reduce its time complexity will be the research directions of our future work.
Acknowledgments
This work was supported by the National Natural Science Fund of China (Nos. 61672130, 61602082, 61370200), the Open Program of the State Key Laboratory of Software Architecture (No. SKLSAOP1701), the China Postdoctoral Science Foundation (No. 2015M581331), the Foundation of the LiaoNing Educational Committee (No. 201602151) and the MOE Research Center for Online Education (No. 2016YB121).
References
 (1) X. Wu, X. Zhu, G. Q. Wu, W. Ding, Data mining with big data, IEEE Transactions on Knowledge and Data Engineering 26 (1) (2013) 97–107.
 (2) J. Han, M. Kamber, J. Pei, Data Mining Concept and Techniques, Morgan Kaufmann Publishers, 2012.
 (3) D. S. Parker, Empirical comparisons of various voting methods in bagging, in: ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2003, pp. 595–600.
 (4) G. B. Huang, Q. Y. Zhu, C. K. Siew, Extreme learning machine: Theory and applications, Neurocomputing 70 (2006) 489–501.
 (5) G. B. Huang, H. Zhou, X. Ding, R. Zhang, Extreme learning machine for regression and multiclass classification, IEEE Transactions on Systems, Man, and Cybernetics, Part B 42 (2) (2012) 513–529.
 (6) G. B. Huang, X. Ding, H. Zhou, Optimization method based extreme learning machine for classification, Neurocomputing 74 (2010) 155–163.
 (7) G. B. Huang, L. Chen, Enhanced random search based incremental extreme learning machine, Neurocomputing 71 (2008) 3460–3468.

 (8) J. Zhang, S. Ding, N. Zhang, Z. Shi, Incremental extreme learning machine based on deep feature embedded, International Journal of Machine Learning and Cybernetics 7 (1) (2016) 111–120.
 (9) J. Tang, C. Deng, G. B. Huang, Extreme learning machine for multilayer perceptron, IEEE Transactions on Neural Networks and Learning Systems 27 (4) (2017) 809–821.
 (10) L. Cornejo-Bueno, A. Aybar-Ruiz, S. Jiménez-Fernández, E. Alexandre, J. C. Nieto-Borge, S. Salcedo-Sanz, A grouping genetic algorithm extreme learning machine approach for optimal wave energy prediction, in: Evolutionary Computation, 2016, pp. 3817–3823.
 (11) W. Zhang, H. Ji, G. Liao, Y. Zhang, A novel extreme learning machine using privileged information, Neurocomputing 168 (2015) 823–828.
 (12) M. Uzair, A. Mian, Blind domain adaptation with augmented extreme learning machine features, IEEE Transactions on Cybernetics 47 (3) (2017) 651–660.
 (13) W.-Y. Deng, Y.-S. Ong, Q.-H. Zheng, A fast reduced kernel extreme learning machine, Neural Networks 76 (2016) 29–38.
 (14) J. Cao, K. Zhang, M. Luo, C. Yin, X. Lai, Extreme learning machine and adaptive sparse representation for image classification, Neural Networks 81 (2016) 91–102.
 (15) X. Li, W. Mao, W. Jiang, Extreme learning machine based transfer learning for data classification, Neurocomputing 174 (2016) 203–210.
 (16) Z. Wang, J. Xin, S. Tian, G. Yu, A grouping genetic algorithm extreme learning machine approach for optimal wave energy prediction, in: Proceedings of ELM2015 Volume 1: Theory, Algorithms and Applications (I), Springer International Publishing, 2016, pp. 319–332.
 (17) B. Mirza, Z. Lin, Meta-cognitive online sequential extreme learning machine for imbalanced and concept-drifting data classification, Neural Networks 80 (2016) 79–94.
 (18) G. B. Huang, N. Y. Liang, H. J. Rong, P. Saratchandran, N. Sundararajan, On-line sequential extreme learning machine, in: IASTED International Conference on Computational Intelligence, Calgary, Alberta, Canada, July, 2005, pp. 232–237.
 (19) A. Bifet, M. Pechenizkiy, A. Bouchachia, A survey on concept drift adaptation, Acm Computing Surveys 46 (4) (2014) 44:1–44:37.
 (20) S. Xu, J. Wang, A fast incremental extreme learning machine algorithm for data streams classification, Expert Systems with Applications 65 (2016) 332–344.

 (21) N. Lu, G. Zhang, J. Lu, Concept drift detection via competence models, Artificial Intelligence 209 (1) (2014) 11–28.
 (22) S. Xu, J. Wang, Dynamic extreme learning machine for data stream classification, Neurocomputing 238 (2017) 433–449.
 (23) N. V. Chawla, K. W. Bowyer, L. O. Hall, W. P. Kegelmeyer, SMOTE: synthetic minority over-sampling technique, Journal of Artificial Intelligence Research 16 (1) (2002) 321–357.
 (24) Z. Pawlak, Rough sets, International Journal of Computer and Information Sciences 11 (5) (1982) 341–356.
 (25) S. An, Q. Hu, W. Pedrycz, P. Zhu, E. C. C. Tsang, Data-distribution-aware fuzzy rough set model and its application to robust classification, IEEE Transactions on Cybernetics 46 (12) (2016) 3073–3085.
 (26) J. Meng, J. Zhang, Y. Luan, Gene selection integrated with biological knowledge for plant stress response using neighborhood system and rough set theory, IEEE/ACM Transactions on Computational Biology and Bioinformatics 12 (2) (2015) 433–444.
 (27) Y. Kim, D. Enke, Developing a rule change trading system for the futures market using rough set analysis, Expert Systems with Applications 59 (2016) 165–173.
 (28) P. Lingras, Comparison of neo-fuzzy and rough neural networks, Information Sciences 110 (1998) 207–215.
 (29) A. Kothari, A. Keskar, R. Chalasani, S. Srinath, Rough neuron based neural classifier, in: First International Conference on Emerging Trends in Engineering and Technology, 2008, pp. 624–628.
 (30) A. Azadeh, M. Saberi, R. T. Moghaddam, L. Javanmardi, An integrated data envelopment analysis-artificial neural network-rough set algorithm for assessment of personnel efficiency, Expert Systems with Applications 38 (2011) 1364–1373.
 (31) B. S. Ahn, S. S. Cho, C. Y. Kim, The integrated methodology of rough set theory and artificial neural network for business failure prediction, Expert Systems with Applications 18 (2000) 65–74.

 (32) X. Xu, G. Wang, S. Ding, X. Jiang, Z. Zhao, A new method for constructing granular neural networks based on rule extraction and extreme learning machine, Pattern Recognition Letters 67 (2015) 138–144.
 (33) G. Feng, G. B. Huang, Q. Lin, R. Gay, Error minimized extreme learning machine with growth of hidden nodes and incremental learning, IEEE Transactions on Neural Networks 20 (8) (2009) 1352–1357.
 (34) F. Sun, G. B. Huang, Q. M. J. Wu, S. Song, D. C. Wunsch II, Efficient and rapid machine learning algorithms for big data and dynamic varying systems, IEEE Transactions on Systems, Man, and Cybernetics: Systems 47 (10) (2017) 2625–2626.
 (35) G. G. Wang, M. Lu, Y. Q. Dong, X. J. Zhao, Self-adaptive extreme learning machine, Neural Computing and Applications 27 (2) (2016) 291–303.
 (36) G. B. Huang, M. B. Li, L. Chen, C. K. Siew, Incremental extreme learning machine with fully complex hidden nodes, Neurocomputing 71 (2008) 576–583.
 (37) X. Zhang, Matrix Analysis and Applications, Tsinghua University Press, 2008.
 (38) E. Cambria, G. B. Huang, L. L. C. Kasun, H. Zhou, C. M. Vong, J. Lin, J. Yin, Z. Cai, Q. Liu, K. Li, Extreme learning machines, IEEE Intelligent Systems 28 (6) (2013) 30–59.
 (39) G. B. Huang, D. H. Wang, Y. Lan, Extreme learning machines: a survey, International Journal of Machine Learning and Cybernetics 2 (2) (2011) 107–122.
 (40) G. Huang, G. B. Huang, S. Song, K. You, Trends in extreme learning machines: A review, Neural Networks 61 (2015) 32–48.
 (41) H. Chen, T. Li, C. Luo, S. J. Horng, G. Wang, A decision-theoretic rough set approach for dynamic data mining, IEEE Transactions on Fuzzy Systems 23 (6) (2015) 1958–1970.
 (42) Z. Pawlak, Rough set approach to knowledge-based decision support, European Journal of Operational Research 99 (1997) 48–57.

 (43) W. Shu, H. Shen, Incremental feature selection based on rough set in dynamic incomplete data, Pattern Recognition 47 (2014) 3890–3906.
 (44) M. S. Lazo-Cortés, J. F. Martínez-Trinidad, J. A. Carrasco-Ochoa, G. Sanchez-Díaz, On the relation between rough set reducts and typical testors, Information Sciences 294 (2015) 152–163.
 (45) Y. Yao, Y. She, Rough set models in multigranulation spaces, Information Sciences 327 (2016) 40–56.
 (46) X. Y. Luan, Z. P. Li, T. Z. Liu, A novel attribute reduction algorithm based on rough set and improved artificial fish swarm algorithm, Neurocomputing 174 (2016) 522–529.
 (47) Y. Chen, Y. Xue, Y. Ma, F. Xu, Measures of uncertainty for neighborhood rough sets, Knowledge-Based Systems 120 (2017) 226–235.
 (48) Z. Ma, J. Li, J. Mi, Some minimal axiom sets of rough sets, Information Sciences 312 (2015) 40–54.
 (49) Q. Zhang, Q. Xie, G. Wang, A survey on rough set theory and its applications, CAAI Transactions on Intelligence Technology 1 (4) (2016) 323–333.
 (50) Y. Yao, X. Zhang, Class-specific attribute reducts in rough set theory, Information Sciences 418 (2017) 601–618.
 (51) A. Bifet, G. Holmes, R. B. Kirkby, B. Pfahringer, Moa: Massive online analysis, Journal of Machine Learning Research 11 (2010) 1601–1604.
 (52) W. Zhu, J. Miao, L. Qing, Constrained extreme learning machine: A novel highly discriminative random feedforward neural network, in: International Joint Conference on Neural Networks, 2014, pp. 800–807.
 (53) W. Zhu, J. Miao, L. Qing, Constrained extreme learning machine: A novel highly discriminative random feedforward neural network, arXiv:1501.06115v2.