A Graph-Based Semi-Supervised k Nearest-Neighbor Method for Nonlinear Manifold Distributed Data Classification

06/03/2016, by Enmei Tu et al.

k Nearest Neighbors (kNN) is one of the most widely used supervised learning algorithms for classifying Gaussian distributed data, but it does not achieve good results when applied to nonlinear manifold distributed data, especially when only a very limited number of labeled samples is available. In this paper, we propose a new graph-based kNN algorithm which can effectively handle both Gaussian distributed data and nonlinear manifold distributed data. To achieve this goal, we first propose a constrained Tired Random Walk (TRW) by constructing an R-level nearest-neighbor strengthened tree over the graph, and then compute a TRW matrix for similarity measurement purposes. After this, the nearest neighbors are identified according to the TRW matrix and the class label of a query point is determined by the sum of the TRW weights of its nearest neighbors. To deal with online situations, we also propose a new algorithm to handle sequential samples based on local neighborhood reconstruction. Comparison experiments are conducted on both synthetic and real-world data sets to demonstrate the validity of the proposed new kNN algorithm and its improvements over other versions of the kNN algorithm. Given the widespread appearance of manifold structures in real-world problems and the popularity of the traditional kNN algorithm, the proposed manifold version of kNN shows promising potential for classifying manifold-distributed data.


1 Introduction

k Nearest Neighbors (kNN) [9, 52, 37, 53] is one of the most popular classification algorithms and has been widely used in many fields, such as intrusion detection [27], gene classification [26], semiconductor fault detection [18], very large database manipulation [23], nuclear magnetic resonance spectral interpretation [24] and the prediction of basal area diameter [31], because it is simple but effective and generally obtains good results in many tasks. One main drawback of traditional kNN is that it does not take manifold distribution information into account, which can cause bias and result in poor performance. The problem becomes even worse when only a very small number of labeled samples is available. To illustrate this, an example is shown in Figure 1(a), in which there are two one-dimensional manifolds (the outer arch and the interior reflected S) corresponding to two classes, respectively. Each class has only 3 labeled samples, indicated by the colored triangles and circles. Black dots are unlabeled samples. Figures 1(b)-(d) show the 1NN, 2NN and 3NN classification results produced by traditional kNN, respectively. We can see that although the data have an apparent manifold distribution, traditional kNN incorrectly classifies many samples because it ignores the manifold information.

(a) Toy manifold data set
(b) k=1
(c) k=2
(d) k=3
Figure 1: Results of traditional kNN classification with k = 1, 2, 3 on manifold distributed data, in which the colored shapes (red triangles and green dots) are labeled samples and the black dots are unlabeled samples.

To improve the performance of traditional kNN, several new kNN algorithms have been proposed. Hastie and Tibshirani [17] proposed an adaptive kNN algorithm which computes a local metric for each sample and uses the Mahalanobis distance to find the nearest neighbors of a query point. Hechenbichler and Schliep [19] introduced a weighting scheme that attaches different importance to the nearest neighbors according to their distances to the query point. To reduce the effect of unbalanced training set sizes across classes, Tan [39] used different weights for different classes according to the number of labeled samples in each class. There are also other improvements and weighting schemes for different tasks [25, 8, 32, 51, 10, 16, 12].

However, none of these new kNN algorithms takes manifold structure into consideration explicitly. For high-dimensional data, such as face images, documents and video sequences, the nearest neighbors of a point found by traditional kNN algorithms can be very far away in terms of the geodesic distance between them, because the dimension of the underlying manifold is usually much lower than that of the data space [36, 41, 2]. There have also been attempts to make kNN adaptive to manifold data. In Turaga and Chellappa's paper [46], geodesic distance is used to directly replace the standard Euclidean distance in traditional kNN, but geodesic distance can be computed with good accuracy only if the manifold is sampled with sufficiently many points; furthermore, it tends to be very sensitive to the short-circuit phenomenon. Ma et al. [30] proposed a weighted manifold kNN using Locally Linear Embedding (LLE) techniques, but LLE tends to be unstable under local changes on the manifold. Percus and Martin [34] studied the general nearest-neighbor distance metric on closed manifolds, but their method needs to know the analytical form of the manifold exactly and is thus unsuitable for most real-world applications.

In this paper, we propose a novel graph-based kNN algorithm which can effectively handle both traditional Gaussian distributed data and nonlinear manifold distributed data. To do so, we first present a manifold similarity measure, the constrained tired random walk, and then modify the traditional kNN algorithm to adopt this new measure. To deal with online situations, we also propose a new algorithm to handle sequential samples based on a local neighborhood reconstruction method. Experimental results on both synthetic and real-world data sets are presented to demonstrate the validity of the proposed method.

The remainder of this paper is organized as follows: Section 2 reviews the tired random walk model. Section 3 presents a new constrained tired random walk model and Section 4 describes the graph-based kNN algorithm. Section 5 proposes a sequential algorithm for online samples. Simulation and comparison results are presented in Section 6, followed by conclusions in Section 7.

2 Review of tired random walk

Assume a training set $X_l = \{x_1, x_2, \dots, x_l\}$ contains $l$ labeled samples and the class label of $x_i$ is $y_i \in \{1, 2, \dots, C\}$, $i = 1, \dots, l$, where $d$ is the feature length (so $x_i \in \mathbb{R}^d$) and $C$ is the class number. There are also $u$ unlabeled samples to be classified, $X_u = \{x_{l+1}, \dots, x_{l+u}\}$. Denote $n = l + u$ and $X = X_l \cup X_u$. We also use $X$ for the sample matrix, whose columns are the samples in $X$, and $\|\cdot\|_F$ to denote the Frobenius norm.

In differential geometry, a manifold can be defined from an intrinsic or an extrinsic point of view [38, 3]. But in data processing studies, such as dimension reduction [5, 35, 33] and manifold learning [48, 1, 47], it is helpful to consider a manifold as a distribution embedded in a higher-dimensional Euclidean space, i.e. to adopt an extrinsic view in its ambient space. Borrowing the concept of intrinsic dimension from [4], we give a formal definition of a manifold data set as follows.
Definition: A data set is considered to be manifold distributed if its intrinsic dimension is less than its data space dimension.

For more information on the intrinsic dimension of a data set and how it can be estimated from data samples, we refer readers to [4, 35]. To determine whether a data set has a manifold distribution, one can simply estimate its intrinsic dimension and compare it with the data space dimension, i.e. the length of a sample vector.
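The comparison just described can be sketched in code. The snippet below is only a crude illustration, not one of the estimators of [4, 35]: it uses a PCA explained-variance threshold as a stand-in for a proper intrinsic dimension estimate, and the threshold value, function name and example data are our own choices.

```python
import numpy as np
from sklearn.decomposition import PCA

def looks_manifold_distributed(X, var_threshold=0.95):
    """Crude check of the definition above: compare a (rough) intrinsic
    dimension estimate with the ambient dimension (the sample vector length).
    A PCA variance threshold is used only as a stand-in for the proper
    estimators surveyed in [4, 35]; such a linear proxy can overestimate the
    intrinsic dimension of curved manifolds."""
    d_ambient = X.shape[1]
    ratios = PCA().fit(X).explained_variance_ratio_
    d_intrinsic = int(np.searchsorted(np.cumsum(ratios), var_threshold)) + 1
    return d_intrinsic < d_ambient, d_intrinsic, d_ambient

# Example: 500 points on a 1-D curve (a circle) embedded in 3-D space.
t = np.random.uniform(0, 2 * np.pi, 500)
X = np.c_[np.cos(t), np.sin(t), np.zeros_like(t)]
print(looks_manifold_distributed(X))   # estimated dimension (2) < ambient (3)
```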

The similarity measure is an important factor when processing manifold distributed data, because traditional distance metrics (such as the Euclidean distance) are not a proper measure on manifolds [57, 54, 56]. Recent studies have also shown that the classical random walk is not a useful measure for large sample sizes or high-dimensional data because it does not take any global properties of the data into account [29]. The tired random walk (TRW) model was proposed in Tu's paper [42] and has been demonstrated to be an effective similarity measure on nonlinear manifolds [49, 55], because it takes global geometrical structure information into consideration.

Recall that on a weighted undirected graph, the classical random walk transition matrix is $P = D^{-1}W$, where $W$ is the graph adjacency matrix and $D$ is a diagonal matrix with entries $D_{ii} = \sum_j W_{ij}$. Now imagine that a tired random walker walks continuously through the edges of the graph, but becomes more tired after each step and finally stops when all its energy is exhausted, i.e. the transition probability of the random walk is reduced by a fixed ratio (e.g. 0.01) after each step and finally approaches 0. Denoting by $\alpha \in (0, 1)$ the fraction of strength retained after each step, after $t$ steps the tired random walk transition probability matrix becomes $(\alpha P)^t$. Now consider Figure 2: the tired random walker starts from a vertex $v_i$ and its destination is a vertex $v_j$ on the graph. It may walk through any path that connects $v_i$ and $v_j$, with an arbitrary number of steps, before its strength is used up: a short direct path, a longer roundabout path, or even one that revisits intermediate vertices, and at each intermediate vertex it moves to a strongly connected neighbor with higher probability than to a weakly connected one. All these walking paths reflect the underlying geometrical structure of the graph, and hence the distribution of the data. Therefore, a good similarity measure between $v_i$ and $v_j$ should take (globally) all possible paths and an arbitrary number of steps (potentially infinitely many) into consideration, rather than only considering (locally) a single path or a single step as the classical random walk [29] does. This is the fundamental difference between the tired random walk and the classical random walk, and it makes the tired random walk more robust and effective, especially for manifold distributed data, as will be demonstrated in the experimental section. Mathematically, the accumulated transition probability of the tired random walk between vertices $v_i$ and $v_j$ is $\sum_{t=0}^{\infty} [(\alpha P)^t]_{ij}$, so for all vertices the accumulated transition probability matrix becomes $\sum_{t=0}^{\infty} (\alpha P)^t$. As the eigenvalues of $P$ lie in $[-1, 1]$ and $0 < \alpha < 1$, the series converges and the tired random walk matrix is

$$P_{trw} = \sum_{t=0}^{\infty} (\alpha P)^t = (I - \alpha P)^{-1}. \qquad (1)$$
Figure 2: Tired random walk on weighted undirected graph.

Because $P_{trw}$ takes all possible paths into account, it captures the global geometrical structure information of the underlying manifold and is thus more effective and robust for describing manifold similarity.
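To make equation (1) concrete, the following sketch computes the TRW matrix from a graph adjacency matrix with NumPy. The Gaussian-kernel graph construction and the value of the tiredness parameter alpha are illustrative assumptions here, not settings prescribed by the paper (the paper's constrained graph is built in Section 3 / Table 1).

```python
import numpy as np

def gaussian_adjacency(X, sigma=1.0):
    """An assumed, common graph construction: Gaussian kernel weights over the
    samples (rows of X), with zeroed diagonal."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    return W

def tired_random_walk_matrix(W, alpha=0.99):
    """Equation (1): P_trw = sum_t (alpha * P)^t = (I - alpha * P)^-1,
    with P = D^-1 W the classical random walk transition matrix."""
    d = W.sum(axis=1)
    P = W / d[:, None]                     # row-stochastic transition matrix
    return np.linalg.inv(np.eye(W.shape[0]) - alpha * P)

# Example usage:
# X = np.random.rand(100, 2)
# P_trw = tired_random_walk_matrix(gaussian_adjacency(X, sigma=0.2))
```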

3 A constrained tired random walk model

In this section, we extend the TRW model to a constrained setting. For classification purposes, labeled samples provide not only class distribution information but also constraint information: samples with the same class label form must-link pairs and samples with different class labels form cannot-link pairs. In most existing supervised learning algorithms, only the class information is utilized and the constraint information is discarded. Here we incorporate the constraint information into the TRW model by modifying the weights of the graph edges between labeled samples, because constraint information has been demonstrated to be useful for performance improvement [45, 15, 11, 58]. The class information will be utilized in the next section in the proposed new kNN algorithm.

We first construct an $R$-level nearest-neighbor strengthened tree for each labeled sample as follows:

  1. Set the labeled sample as the first-level node (the tree root node, level $r = 1$) and its nearest neighbors as the second-level nodes.

  2. For each node in level $r$, set its nearest neighbors as its level-$(r+1)$ descendants. If any node in level $r+1$ already appears in an ancestor level, remove it from level $r+1$.

  3. If the tree has fewer than $R$ levels, increase $r$ by 1 and go to step 2.

where $R$ is a user-specified parameter that defines the depth of the tree. Then for each pair of samples $(x_i, x_j)$, the corresponding graph edge weight $W_{ij}$ is set according to the rules in Table 1.

Steps
1 $W_{ij} = 1$ if $x_i$ and $x_j$ have the same class label.
2 $W_{ij} = 0$ if $x_i$ and $x_j$ have different class labels.
3 $W_{ij} = \exp\left(-\|x_i - x_j\|^2 / (2\sigma^2)\right)$ if at least one of $x_i$, $x_j$ is unlabeled.
4 $W_{ij}$ is strengthened by the strengthening parameter $\gamma$ if $x_i$ is a node in level $r$ and
$x_j$ is a child of $x_i$ in level $r+1$ of the strengthened tree.
Table 1: Constrained graph construction

Here $\sigma$ is the Gaussian kernel width parameter and $\gamma$ is the strengthening parameter. Note that for the Gaussian kernel, step 1 is equivalent to merging two same-label samples together (zero distance between them) and step 2 is equivalent to separating two different-label samples infinitely far from each other (infinite distance between them). The connections from a labeled sample to its nearest neighbors are strengthened by $\gamma$ in step 4. This spreads the 'hard' constraints of steps 1 and 2 to farther neighborhoods on the graph in the form of soft constraints and thus gives these constraints a wider influence. The motivation for constructing the strengthened tree is inspired by neural network reservoir structure analysis techniques, in which information has been shown to spread from input neurons to interior neurons in the reservoir along a tree-structured path [43].

The selection of the strengthening parameter $\gamma$ is based on the following conditions:

  • the strengthened weight should be positive and less than the weight of the must-link constraint.

  • the strengthening effect should be positive and should decay along the levels of the strengthened tree.

Mathematically, the conditions are

(2)

As a result, $\gamma$ should satisfy

(3)

In all our experiments, we used a single fixed value of $\gamma$, which gives good results for both synthetic and real-world data.
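Putting Section 3 together, the sketch below builds a constrained graph in the spirit of Table 1. The hard must-link/cannot-link weights and the Gaussian kernel follow the table directly; the exact form in which $\gamma$ enters step 4 is not reproduced here, so the level-decaying boost and all default parameter values (and names such as constrained_graph, k_tree) are placeholders that merely respect the conditions of equations (2)-(3).

```python
import numpy as np

def constrained_graph(X, labels, sigma=1.0, gamma=2.0, R=2, k_tree=5):
    """Sketch of the constrained graph of Table 1.
    X: (n, d) samples as rows; labels: integer array, labels[i] is the class of
    sample i or -1 if it is unlabeled. sigma: Gaussian kernel width; gamma:
    strengthening parameter; R: tree depth; k_tree: neighbors per tree node."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))          # step 3: Gaussian kernel weights
    np.fill_diagonal(W, 0.0)

    lab = np.flatnonzero(labels >= 0)
    for i in lab:                                  # steps 1-2: hard constraints
        for j in lab:
            if i != j:
                W[i, j] = 1.0 if labels[i] == labels[j] else 0.0

    for root in lab:                               # step 4: R-level strengthened tree
        visited, frontier = {root}, [root]
        for level in range(1, R + 1):
            nxt = []
            for node in frontier:
                for nb in np.argsort(d2[node])[1:k_tree + 1]:
                    if nb in visited:              # drop nodes seen in ancestor levels
                        continue
                    visited.add(nb)
                    nxt.append(nb)
                    if labels[node] >= 0 and labels[nb] >= 0:
                        continue                   # leave hard-constraint edges untouched
                    # Assumed boost: scales the edge up by a factor that decays with
                    # the level and stays below the must-link weight of 1 (cf. Eqs. (2)-(3)).
                    W[node, nb] = W[nb, node] = min(0.99, gamma ** (1.0 / level) * W[node, nb])
            frontier = nxt
    return W
```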

4 A new graph-based kNN classification algorithm on nonlinear manifolds

Here we present a graph-based kNN algorithm for nonlinear manifold data classification. The procedure of the algorithm is summarized in Table 2.

Steps
1 Input the labeled set $X_l$ with its labels, the unlabeled set $X_u$, and the parameter $k$.
2 Construct a constrained graph according to Table 1.
3 Compute the TRW matrix $P_{trw}$ using equation (1).
4 Evaluate the samples' similarity according to equation (4).
5 Find the $k$ nearest neighbors of an unlabeled sample using equation (5).
6 Determine its class label according to equation (6).
Table 2: A graph-based kNN algorithm

Specifically, given the TRW matrix $P_{trw}$, the TRW weight between samples $x_i$ and $x_j$ is defined as

$$w_{ij} = \frac{1}{2}\left[ (P_{trw})_{ij} + (P_{trw})_{ji} \right]. \qquad (4)$$

Note that while the similarity defined by the matrix $P_{trw}$ between two samples is not necessarily symmetric ($P$ is not symmetric and thus its matrix series is also not symmetric), the weight defined in equation (4) is indeed a symmetric measure. For each unlabeled sample $x \in X_u$, we find its $k$ nearest neighbors in $X_l$ by

$$\mathcal{N}_k(x) = \mathop{\arg\max}_{S \subseteq X_l,\ |S| = k}\ \sum_{x_j \in S} w(x, x_j), \qquad (5)$$

i.e. the $k$ labeled samples with the largest TRW weights to $x$.

Instead of counting the number of labeled samples from each class as in the classical kNN, we sum the TRW weights of the labeled samples of each class, and the class label of the unlabeled sample is determined by

$$\hat{y}(x) = \mathop{\arg\max}_{c \in \{1, \dots, C\}}\ \sum_{x_j \in \mathcal{N}_k(x),\ y_j = c} w(x, x_j). \qquad (6)$$
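A compact sketch of the classification step (Table 2, steps 4-6) follows, assuming the TRW matrix of equation (1) has already been computed. The averaging used to symmetrize the weights and the array conventions (function name mknn_classify, index arrays) are our own reading and choices, not code from the paper.

```python
import numpy as np

def mknn_classify(P_trw, labeled_idx, y_l, query_idx, k=3):
    """Classify one unlabeled sample with the graph-based kNN of Table 2.
    P_trw: (n, n) TRW matrix over all samples (Eq. (1)); labeled_idx: indices of
    the labeled samples; y_l: array of their class labels; query_idx: index of
    the sample to classify."""
    # Eq. (4): symmetrized TRW weight between the query and every labeled sample
    # (the 1/2 averaging is an assumed form of the symmetrization).
    w = 0.5 * (P_trw[query_idx, labeled_idx] + P_trw[labeled_idx, query_idx])

    # Eq. (5): the k labeled samples with the largest TRW weights
    nn = np.argsort(w)[::-1][:k]

    # Eq. (6): sum the neighbors' weights per class and pick the largest sum
    classes = np.unique(y_l)
    scores = [w[nn][y_l[nn] == c].sum() for c in classes]
    return classes[int(np.argmax(scores))]
```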

It is worth mentioning that because the proposed semi-supervised mkNN utilizes class label information only in the classification stage, and it uses the same value of $k$ equally for all classes, mkNN naturally avoids the so-called class bias problem of many semi-supervised algorithms (such as [59, 28, 44, 20]), which is caused by unbalanced numbers of labeled samples per class in $X_l$ and needs to be corrected by various re-weighting schemes [50, 59]. Furthermore, with the algorithm described in the next section, mkNN immediately becomes a supervised classifier and enjoys the advantages of both semi-supervised learning (classifying abundant unlabeled samples with only a tiny number of labeled samples) and supervised learning (classifying new samples immediately without repeating the whole learning process).

5 A sequential method to handle online samples

For a single unlabeled sample, if we compute its TRW weights using equations (1) and (4), we have to add it to $X$ and recompute the matrix $P_{trw}$; the computational cost for one sample is thus too high. This is the so-called transductive learning problem, a common drawback of many existing algorithms [21, 22, 7, 13]. (Transductive learning is the opposite of inductive learning. Inductive learning means the learning algorithm, such as an SVM, explicitly learns a model in data space that partitions the space into several regions; the model can then be applied directly to unseen samples to obtain their class labels. Transductive learning, in contrast, does not build any model: it performs one-time learning on a fixed data set, and whenever the data set changes (for example, existing samples are changed or new samples are added), the whole learning process has to be repeated to assign new class labels.) To address this problem, we propose a new method based on rapid neighborhood reconstruction, in which a local neighborhood is first constructed in sample space and then the TRW weights are reconstructed in the same local neighborhood at very low computational cost.

Given a new sample $x$, it has been shown that $x$ can be well reconstructed by its nearest neighbors on the manifold if there are sufficient data points sampled from the manifold [36, 6, 40]. Thus, it is reasonable to assume that the neighborhood relationships, and hence the TRW weights of $x$ in equation (4), follow the same geometrical distribution as the samples themselves. So, to compute the weights of $x$ without explicitly recomputing the matrix $P_{trw}$ in equation (4), we first find the nearest neighbors of $x$ in $X$, collected as the columns of a matrix $X_N$, and then minimize the local reconstruction error by solving the following constrained quadratic optimization problem

$$\min_{w}\ \|x - X_N w\|^2 \quad \text{s.t.}\quad w \ge 0,\ \mathbf{1}^T w = 1, \qquad (7)$$

where $\mathbf{1}$ is a vector with all entries equal to 1. Note that the entries of $w$ are nonnegative and must sum to 1, so $w$ is expected to be sparse [28]. Problem (7) is a constrained quadratic optimization problem and can be solved very efficiently with many publicly available toolboxes; we use the OPTI optimization toolbox (http://www.i2c2.aut.ac.nz/Wiki/OPTI/). One should note that finding nearest neighbors in $X$ is quite different from finding them in $X_l$: the former is the basis of many nearest-neighbor operations (such as constructing the nearest-neighbor graph in [41, 36] and the $R$-level strengthened tree in Section 3 of this paper), while the latter is the basis of the classical kNN classifier. Because $X$ contains many instances, sampled densely from the underlying data distribution, an instance's local neighborhood in $X$ is usually very small, and Euclidean distance remains valid within this small range since any manifold can be locally well approximated by a Euclidean space [3]. The instances in $X_l$, however, are few and usually not densely sampled from the manifold, so nearest neighbors in $X_l$ can be very far apart and Euclidean distance is no longer a suitable closeness measure there. This is why a new similarity (or closeness) measure is needed, which is one of the main contributions of this paper. After obtaining the optimal $w$, the TRW weights between $x$ and its nearest labeled samples in $X_l$ can be computed by solving the following optimization problem

$$\hat{f} = \mathop{\arg\min}_{f \ge 0}\ \| f - F_N w \|^2, \qquad (8)$$

where $f$ collects the TRW weights between $x$ and the labeled samples in $X_l$, and the columns of $F_N$ contain the TRW weights (from equation (4)) between the samples in $X_N$ and the labeled samples in $X_l$.

This process is illustrated in Figure 3. Figure 3(a) corresponds to equation (7), which computes the reconstruction weights $w$, and Figure 3(b) corresponds to equation (8), which computes the TRW weights $f$.

(a) Computing reconstruction weight using equation (7)
(b) Computing TRW weight using equation (8)
Figure 3: The process of computing sequential TRW weight.

It is easy to see that the optimal solution of problem (8) is simply the result of a nonnegative projection operation

$$\hat{f} = \max(0, F_N w) = F_N w. \qquad (9)$$

The second equality holds because both $F_N$ and $w$ are nonnegative and therefore their product is also nonnegative.

One should note that with this sequential learning strategy, the proposed mkNN can be treated as an inductive classifier, whose model consists of both the matrix $P_{trw}$ and the data samples that have been classified so far. Whenever a new sample arrives, the model can quickly give its class label by the steps in Table 3, without repeating the whole learning process of Table 2.

Steps
1 Input the new sample $x$, the processed samples $X$, their TRW weights, and $k$.
2 Find the nearest neighbors of $x$ in $X$.
3 Use equations (7) and (8) to compute the TRW weights of $x$.
4 Use equation (6) to classify it.
Table 3: The procedure of the sequential manifold kNN algorithm
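The sequential procedure of Table 3 can be sketched as follows, with SciPy's SLSQP solver standing in for the OPTI toolbox used in the paper. The variable names (X, F, k_recon) and the final vote are our own conventions; the sketch only follows equations (7)-(9) under those assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def sequential_classify(x_new, X, F, y_l, k_recon=5, k=3):
    """Sketch of Table 3. x_new: (d,) new sample; X: (n, d) previously processed
    samples as rows; F: (n, l) matrix whose i-th row holds the TRW weights
    (Eq. (4)) between sample i and the l labeled samples; y_l: labels of the
    labeled samples."""
    # Step 2: nearest neighbors of x_new in sample space (Euclidean is valid locally)
    dists = np.linalg.norm(X - x_new, axis=1)
    nbrs = np.argsort(dists)[:k_recon]
    Xn = X[nbrs]                                        # (k_recon, d)

    # Eq. (7): nonnegative, sum-to-one reconstruction weights
    obj = lambda w: np.sum((x_new - Xn.T @ w) ** 2)
    cons = ({'type': 'eq', 'fun': lambda w: np.sum(w) - 1.0},)
    w0 = np.full(k_recon, 1.0 / k_recon)
    w = minimize(obj, w0, method='SLSQP',
                 bounds=[(0.0, None)] * k_recon, constraints=cons).x

    # Eqs. (8)-(9): nonnegative projection, which reduces to F[nbrs].T @ w here
    f = np.maximum(0.0, F[nbrs].T @ w)                  # (l,)

    # Eq. (6): weighted vote among the k labeled samples with largest weights
    nn = np.argsort(f)[::-1][:k]
    classes = np.unique(y_l)
    scores = [f[nn][y_l[nn] == c].sum() for c in classes]
    return classes[int(np.argmax(scores))]
```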

6 Experimental results

In this section, we report experimental results on both synthetic and real-world data sets. The comparison algorithms include the traditional k nearest neighbors (kNN), the weighted kNN (wkNN), the geodesic kNN (gkNN) proposed by Turaga and Chellappa [46], and our manifold kNN (mkNN). For kNN and wkNN, the only parameter is $k$. For gkNN and mkNN, there is one additional parameter each, namely the number of nearest neighbors used for computing the geodesic distance in gkNN and the kernel width $\sigma$ in mkNN. We tune these two parameters by grid search and choose the values that produce the minimal 2-fold cross-validation error rate.

6.1 Experimental results on synthetic data sets

We first conduct experiments on the three synthetic data sets shown in Figure 4 to demonstrate the superiority of mkNN over the other algorithms. For each data set in Figure 4, the three red triangles and the three green dots are the labeled samples of the two classes, respectively. Note that all three data sets contain some ambiguous points (or bridging points) in the gap between the two classes, making the classification even more challenging. Experimental results on these data sets are shown in Figures 5 to 7.

(a) data set 1
(b) data set 2
(c) data set 3
Figure 4: Three synthetic data sets, in which the colored shapes are labeled samples and the black dots are unlabeled samples
(a) kNN
(b) wkNN
(c) gkNN
(d) mkNN
Figure 5: Experimental results on synthetic data sets. (a)-(d): results of kNN, wkNN, gkNN and mkNN on data set 1.
(a) kNN
(b) wkNN
(c) gkNN
(d) mkNN
Figure 6: Experimental results on synthetic data sets. (a)-(d): results of kNN, wkNN, gkNN and mkNN on data set 2.
(a) kNN
(b) wkNN
(c) gkNN
(d) mkNN
Figure 7: Experimental results on synthetic data sets. (a)-(d): results of kNN, wkNN, gkNN and mkNN on data set 3.

From these results, we can see that because kNN uses the Euclidean distance to determine the class label, and Euclidean distance is not a proper similarity measure on the manifold, the results given by traditional kNN are quite erroneous. By introducing a weight scheme, wkNN performs better than kNN, but the improvement is still quite limited. gkNN gives much better results because geodesic distance is a valid similarity measure on the manifold. However, as mentioned in Section 1, geodesic distance tends to be sensitive to the short-circuit phenomenon (i.e. the noisy points lying between the two classes), so gkNN still misclassifies many samples due to the existence of noisy points. In contrast, mkNN achieves the best results, because TRW takes all possible graph paths into consideration; it thus embodies the global geometrical structure information of the manifold and is much more effective and robust to noisy points.

Figure 8 plots the mean error rate of each algorithm over 10 runs on these data sets as $k$ changes from 1 to 10. In each run, the labeled samples are randomly selected from each class and the rest of the samples in the data set are treated as unlabeled samples to form the testing set.

(a) Dataset 1
(b) Dataset 2
(c) Dataset 3
Figure 8: Experimental results on synthetic data sets. Mean error rate on the ordinate and $k$ on the abscissa.

From the results in Figure 8 we can see that mkNN performs significantly better than the other versions of the kNN algorithm. It is interesting to note that on these manifold distributed data sets, while the mean error rates of kNN and wkNN show no obvious reduction as $k$ increases, the mean error rates of gkNN and mkNN decrease quickly. This indicates that gkNN and mkNN are able to exploit the information contained in the labeled samples in a more "efficient" way, because they take the manifold structure information into account. Again, mkNN achieves the lowest error rate.

In order to demonstrate the effectiveness of the weight reconstruction method, we run the algorithm on these data sets to reconstruct each sample and its TRW weights from its nearest neighbours using equations (7) and (8), respectively. The relative mean square error (RMSE) of a reconstruction is computed by $\mathrm{RMSE}(G, \hat{G}) = \|G - \hat{G}\|_F^2 / \|G\|_F^2$, where $G$ is the ground truth and $\hat{G}$ is the reconstructed result. The results are shown in Table 4.

data set 1 data set 2 data set 3
RMSE (samples, Eq. (7)) 0.0724 0.2734 0.1606
RMSE (TRW weights, Eq. (8)) 0.5075 1.0524 1.9061
Table 4: Reconstruction error on the synthetic data sets (%).

From these results we can see that the reconstruction method of equations (7) and (8) produces a very accurate approximation to the true samples and weights, respectively, on nonlinear manifold data sets.

6.2 Experimental results on real-world data sets

We also conduct experiments on six real-world data sets from the UCI data repository (UC Irvine Machine Learning Repository: http://archive.ics.uci.edu/ml/), which contains real application data collected in various fields and is widely used to test the performance of machine learning algorithms. The information on these six data sets is listed in Table 5 ($n$: number of samples; $d$: feature dimension; $C$: class number).

usps segmentation banknote pendigits multifeature satlog
n 9298 2086 1348 10992 2000 6435
d 256 19 4 16 649 36
C 10 6 2 10 10 6
Table 5: Information on the experimental data sets.

On each data set, we let $k$ change from 1 to 10. For each $k$, we run each algorithm 10 times, with a training set of labeled samples randomly selected from each class and a testing set containing all the remaining samples. The final error rate is the mean of the 10 error rates and is shown in Figure 9.

From Figure 9, we can see that the proposed mkNN outperforms the other algorithms in terms of both accuracy and stability. The error curves of mkNN decrease much more quickly than those of the other algorithms, which indicates that mkNN is able to utilize fewer labeled samples to achieve better accuracy. This is very important in applications where only a very small number of labeled samples is available, because labeled samples are usually expensive to obtain, i.e. they need to be annotated by humans with expert knowledge and/or special equipment, which takes a lot of time. Therefore, a classifier that can accurately classify abundant unlabeled samples given only a small number of labeled samples is of great practical value.

From Figure 9, we can also conclude that: (1) by introducing a weight scheme into the traditional kNN algorithm, weighted kNN (wkNN) generally outperforms traditional kNN; (2) the performance of geodesic kNN (gkNN) shows a relatively large improvement over both kNN and wkNN in most cases, because geodesic distance is a valid measure on the manifold; (3) the manifold kNN (mkNN) almost always achieves the smallest error rate, because the constrained TRW is a more effective and robust measure of the global geometrical structure of the manifold and, at the same time, mkNN takes both the class information and the constraint information into account.

(a) usps
(b) segmentation
(c) banknote
(d) pendigits
(e) multifeature
(f) satlog
Figure 9: Experimental results on six real-world data sets. Mean error rate on the ordinate and $k$ on the abscissa.

6.3 Experimental results of the comparison with other traditional supervised classifiers

We conduct experiments to compare the performance of the proposed manifold kNN algorithm with other popular supervised classifiers on the six real-world data sets. The baseline algorithms are: Support Vector Machine (SVM), Artificial Neural Network (ANN), Naive Bayes (NB) and Decision Tree (DT). (For SVM we use the libsvm toolbox: https://www.csie.ntu.edu.tw/~cjlin/libsvm/; for the other algorithms we use the MATLAB toolboxes.) The configurations of the baseline algorithms are: a radial basis kernel for SVM; a three-layer network trained with back-propagation for ANN; kernel smoothing density estimation for NB; a binary classification tree with a merging-pruning strategy based on validation error for DT. We adopt a grid-search strategy to tune the parameters of each algorithm, and the parameters are set to produce the lowest 2-fold cross-validation error rate.
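For concreteness, the tuning protocol can be sketched as below using scikit-learn stand-ins for the libsvm/MATLAB toolboxes actually used in the paper; the parameter grid, the synthetic data and the variable names are illustrative only.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

# Illustrative data in place of the UCI sets used in the paper.
X, y = make_classification(n_samples=600, n_features=20, n_classes=3,
                           n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.1, stratify=y, random_state=0)

# Grid search with 2-fold cross validation, as in the tuning protocol above.
param_grid = {'C': [0.1, 1, 10, 100], 'gamma': [1e-3, 1e-2, 1e-1, 1]}
search = GridSearchCV(SVC(kernel='rbf'), param_grid, cv=2, scoring='accuracy')
search.fit(X_train, y_train)
print('best params:', search.best_params_)
print('test error rate:', 1.0 - search.score(X_test, y_test))
```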

To examine the capability of classifying plenty of unlabeled samples with only a few labeled ones, we randomly choose three labeled samples from each class to form the training set, and the remaining samples are treated as unlabeled samples to form the testing set. Each algorithm runs 10 times and the final result is the average of the 10 error rates. The experimental results (mean error ± standard deviation) are shown in Table 6.

SVM ANN NB DT mkNN
usps 77.35±4.6 40.96±4.3 34.73±4.9 73.96±8.4 19.37±3.9
segmentation 66.54±8.7 42.54±3.8 50.83±4.3 49.55±2.6 24.18±4.5
banknote 70.01±14.7 23.03±6.2 28.37±7.8 44.43±0.4 9.73±6.9
pendigits 49.41±12.3 44.02±27.4 42.36±6.2 80.83±11.3 12.51±2.3
multifeature 82.89±6.4 50.53±24.2 26.89±2.9 76.14±2.9 15.36±2.3
satlog 57.26±17.1 30.15±4.9 28.81±5.9 72.55±13.5 21.07±8.0
Table 6: Experimental results on real-world data sets (error rate: %)

From these results, we can see that mkNN significantly outperforms the traditional supervised classifiers when given only three labeled samples per class. Similar to traditional kNN, these traditional supervised classifiers are incapable of exploiting the manifold structure information of the underlying data distribution, so their accuracies are very low when the number of labeled samples is very small. Furthermore, when the labeled samples are randomly selected, their positions vary greatly in data space; sometimes they are not uniformly distributed and thus cannot cover the whole data distribution well. As a result, the performance of the traditional classifiers also varies greatly. In contrast, because mkNN adopts the tired random walk to measure manifold similarity, which reflects the global geometrical information of the underlying manifold structure and is robust to local changes [42], it achieves much better results.

To further investigate the performance of these algorithms when different numbers of training samples are provided, we carry out experiments with different training set sizes. For each data set, we let the number of labeled samples per class vary from 1 to 40 and randomly choose that many labeled samples from each class to form the training set. The rest of the data set is treated as unlabeled samples to test each algorithm's performance. For each training set size, every algorithm runs 10 times on each data set. The mean error rate and the standard deviation of the 10 runs are shown in Figures 10 and 11, respectively.

From Figure 10 we can see that when the number of labeled samples is small, the error rates of the traditional algorithms are very large (as also indicated in Table 6, which contains the results for three labeled samples per class). As the number of labeled samples increases, the error rates decrease, and this trend slows down after about 15 labeled samples per class. The proposed mkNN achieves the lowest error rate for both small and large numbers of labeled samples, and the improvement is especially obvious and significant when the number of labeled samples is small. From Figure 11, we can also see that the performance of mkNN is relatively steady in most cases. One should note that, on the left of each plot in Figure 11, the small standard deviation values of the decision tree (DT) and SVM are due to the fact that their error rates are always very high in each run (e.g. the error rates of SVM on the usps data set concentrate closely around 95%).

(a) usps
(b) segmentation
(c) banknote
(d) pendigits
(e) multifeature
(f) satlog
Figure 10: Experimental results on six real-world data sets. Mean error rate on the ordinate and the number of labeled samples per class on the abscissa.
(a) usps
(b) segmentation
(c) banknote
(d) pendigits
(e) multifeature
(f) satlog
Figure 11: Experimental results on six real-world data sets. Standard deviation of the error rates on the ordinate and the number of labeled samples per class on the abscissa.

6.4 Experimental results of time complexity

To examine the effectiveness and efficiency of the proposed sequential learning strategy of Section 5, we conduct experiments on three real-world data sets (banknote, satlog and pendigits) to show the difference in mkNN's performance with and without the sequential learning algorithm. Each data set is divided into three subsets: a training set, a validation set and an online set. The training set is fixed to contain 10 labeled samples per class. We conduct 10 experiments, with the online set size changing from 100 to 1000 in steps of 100; the validation set contains the remaining samples. First, mkNN runs on the training and validation sets to learn the class distribution. Thereafter, samples are drawn one at a time from the online set as newly arriving samples. For sequential mkNN, the previous learning result is used to classify each new sample according to equations (7) and (8). For standard mkNN, the new sample is added to the validation set and the whole learning process is repeated to classify it. The experimental results are shown in Figure 12. (The configuration of our computer: 16GB RAM, dual-core 3.7GHz Intel Xeon CPU and MATLAB 2015 academic version.)

From these results, we can see that while the classification accuracies of standard mkNN and sequential mkNN are comparable, the time cost of sequential mkNN is dramatically lower for online classification (e.g., on the satlog data set, to classify 1000 sequentially arriving samples, standard mkNN takes about 9100 seconds, whereas sequential mkNN uses only about 14 seconds to achieve a similar result). Therefore, the sequential algorithm has great merit for solving the online classification problem and can potentially be applied to a wide range of transductive learning algorithms to make them inductive.

We also conduct experiments to compare the time complexity of all the baseline algorithms with that of the proposed mkNN on these three data sets. Each data set is split into training and testing parts three times independently, with splitting ratios of 10%, 25% and 50%, respectively. For each split, every algorithm runs 10 times and the mean time cost is recorded. Table 7 shows the experimental results (the second column shows the splitting ratio).

(a) Satlog data set
(b) Banknote data set
(c) Pendigits data set
Figure 12: Experimental results of the comparison between standard mkNN and sequential mkNN for classifying online samples on real-world data sets.
data set split (%) SVM DT ANN NB kNN wkNN gkNN mkNN
banknote 10 0.01 0.14 0.23 0.09 0.03 0.02 1.23 0.19
banknote 25 0.01 0.19 0.24 0.09 0.04 0.03 1.23 0.20
banknote 50 0.02 0.29 0.27 0.08 0.05 0.05 1.26 0.20
satlog 10 0.47 0.43 0.59 7.14 0.13 0.14 31.99 6.01
satlog 25 1.78 1.16 1.58 7.56 0.24 0.29 32.55 6.08
satlog 50 5.12 3.11 4.01 7.61 0.31 0.40 32.82 6.30
pendigits 10 0.44 0.78 0.70 8.70 0.22 0.28 149.14 25.27
pendigits 25 0.86 2.28 2.65 8.46 0.41 0.59 146.89 26.11
pendigits 50 1.55 6.09 5.06 6.93 0.54 0.80 145.71 25.35
Table 7: Comparison of overall time for training and testing on three data sets (seconds)

From this table we can see that the algorithms fall into two categories according to their time costs: one category contains SVM, DT, ANN, kNN and wkNN, whose time costs are closely related to the training set size; the other category includes NB, gkNN and mkNN, whose time costs depend more on the overall data set size. SVM, kNN and wkNN are generally faster than the others. Although mkNN is the second slowest, it is still much faster than gkNN and its overall time does not grow as the training set size increases. It should be mentioned that although the classification accuracy of mkNN is much better than that of kNN and the other traditional supervised classifiers, its computational complexity is also higher than that of traditional kNN, being dominated by the matrix inversion in equation (1). Directly computing $(I - \alpha P)^{-1}$ is costly, since $I - \alpha P$ is not a symmetric positive definite matrix and thus an LU decomposition has to be used. One way to speed up the matrix inversion is to convert it into the inversion of a symmetric positive definite matrix and then adopt the Cholesky decomposition, whose computational cost is about half that of the LU decomposition [14]. Noting that $I - \alpha P = D^{-1}(D - \alpha W)$, so that $(I - \alpha P)^{-1} = (D - \alpha W)^{-1} D$, where $M = D - \alpha W$ is a symmetric positive definite matrix ($M$ is clearly symmetric because $W$ is symmetric; let $W \leftarrow (W + W^T)/2$ if it is not. For positive definiteness, the spectral radius of $\alpha P$ is in $(0, 1)$ and $\alpha P = \alpha D^{-1}W$ is similar to $\alpha D^{-1/2} W D^{-1/2}$, so the spectral radius of $\alpha D^{-1/2} W D^{-1/2}$ is also in $(0, 1)$; hence all eigenvalues of $D^{-1/2} M D^{-1/2} = I - \alpha D^{-1/2} W D^{-1/2}$ are positive and $M$ is positive definite.), we can first invert $M$ using the Cholesky decomposition and then compute $(I - \alpha P)^{-1} = M^{-1} D$ with very small additional cost. Our future work will focus on further reducing the computational complexity.
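A sketch of the Cholesky-based computation just described follows, assuming NumPy/SciPy; it relies only on the identity $(I - \alpha P)^{-1} = (D - \alpha W)^{-1} D$ derived above, and the function name and default alpha are our own choices.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def trw_matrix_cholesky(W, alpha=0.99):
    """Compute P_trw = (I - alpha*P)^-1 via a Cholesky factorization.
    Uses I - alpha*P = D^-1 (D - alpha*W), so (I - alpha*P)^-1 = (D - alpha*W)^-1 D,
    where M = D - alpha*W is symmetric positive definite for symmetric W."""
    d = W.sum(axis=1)
    M = np.diag(d) - alpha * W              # symmetric positive definite
    c, low = cho_factor(M)
    return cho_solve((c, low), np.diag(d))  # solves M * P_trw = D

# Sanity check against the direct LU-based inverse:
# W = np.random.rand(50, 50); W = 0.5 * (W + W.T)
# P = W / W.sum(axis=1, keepdims=True)
# assert np.allclose(trw_matrix_cholesky(W), np.linalg.inv(np.eye(50) - 0.99 * P))
```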

7 Conclusions

In this paper we proposed a new nearest-neighbor algorithm, mkNN, to classify nonlinear manifold distributed data as well as traditional Gaussian distributed data when only a very small number of labeled samples is available. We also presented an algorithm to address the high computational cost of classifying online data with mkNN and other transductive algorithms. The superiority of mkNN has been demonstrated by substantial experiments on both synthetic and real-world data sets. Given the widespread appearance of manifold structures in real-world problems and the popularity of the traditional kNN algorithm, the proposed manifold version of kNN shows promising potential for classifying manifold-distributed data.

Acknowledgements: The authors wish to thank the anonymous reviewers for reading the entire manuscript and offering many useful suggestions. This research is partly supported by NSFC, China (No. 61572315) and the 973 Plan, China (No. 2015CB856004). The research is also partly supported by the YangFan Project (Grant No. 14YF1411000) of the Shanghai Municipal Science and Technology Commission, the Innovation Program (Grant No. 14YZ131) and the Excellent Youth Scholars program (Grant No. sdl15101) of the Shanghai Municipal Education Commission, and the Science Research Foundation of Shanghai University of Electric Power (Grant No. K2014-032).

References

  • Bengio et al. [2005] Bengio, Y., Larochelle, H., Vincent, P., 2005. Non-local manifold parzen windows. In: Advances in neural information processing systems. pp. 115–122.
  • Beyer et al. [1999] Beyer, K., Goldstein, J., Ramakrishnan, R., Shaft, U., 1999. When is “nearest neighbor” meaningful? In: Database Theory—ICDT’99. Springer, pp. 217–235.
  • Boothby [2003] Boothby, W. M., 2003. An introduction to differentiable manifolds and Riemannian geometry. Vol. 120. Gulf Professional Publishing.
  • Camastra and Staiano [2016] Camastra, F., Staiano, A., 2016. Intrinsic dimension estimation: Advances and open problems. Information Sciences 328, 26–41.
  • Chahooki and Charkari [2014] Chahooki, M. A. Z., Charkari, N. M., 2014. Shape classification by manifold learning in multiple observation spaces. Information Sciences 262, 46–61.
  • Chen et al. [2013] Chen, Y., Zhang, J., Cai, D., Liu, W., He, X., 2013. Nonnegative local coordinate factorization for image representation. Image Processing, IEEE Transactions on 22 (3), 969–979.
  • Collobert et al. [2006] Collobert, R., Sinz, F., Weston, J., Bottou, L., 2006. Large scale transductive svms. The Journal of Machine Learning Research 7, 1687–1712.
  • Cost and Salzberg [1993] Cost, S., Salzberg, S., 1993. A weighted nearest neighbor algorithm for learning with symbolic features. Machine learning 10 (1), 57–78.
  • Cover and Hart [1967] Cover, T. M., Hart, P. E., 1967. Nearest neighbor pattern classification. Information Theory, IEEE Transactions on 13 (1), 21–27.
  • Dudani [1976] Dudani, S. A., 1976. The distance-weighted k-nearest-neighbor rule. Systems, Man and Cybernetics, IEEE Transactions on (4), 325–327.
  • Fu et al. [2015] Fu, Z., Lu, Z., Ip, H. H., Lu, H., Wang, Y., 2015. Local similarity learning for pairwise constraint propagation. Multimedia Tools and Applications 74 (11), 3739–3758.
  • Gao et al. [2016] Gao, Y., Liu, Q., Miao, X., Yang, J., 2016. Reverse k-nearest neighbor search in the presence of obstacles. Information Sciences 330, 274–292.
  • Goldberg et al. [2010] Goldberg, A., Recht, B., Xu, J., Nowak, R., Zhu, X., 2010. Transduction with matrix completion: Three birds with one stone. In: Advances in neural information processing systems. pp. 757–765.
  • Golub and Van Loan [2012] Golub, G. H., Van Loan, C. F., 2012. Matrix computations. Vol. 3. JHU Press.
  • Gong et al. [2014] Gong, C., Fu, K., Wu, Q., Tu, E., Yang, J., 2014. Semi-supervised classification with pairwise constraints. Neurocomputing 139, 130–137.
  • Han et al. [2001] Han, E.-H. S., Karypis, G., Kumar, V., 2001. Text categorization using weight adjusted k-nearest neighbor classification. Springer.
  • Hastie and Tibshirani [1996] Hastie, T., Tibshirani, R., 1996. Discriminant adaptive nearest neighbor classification. Pattern Analysis and Machine Intelligence, IEEE Transactions on 18 (6), 607–616.
  • He and Wang [2007] He, Q. P., Wang, J., 2007. Fault detection using the k-nearest neighbor rule for semiconductor manufacturing processes. Semiconductor manufacturing, IEEE transactions on 20 (4), 345–354.
  • Hechenbichler and Schliep [2004] Hechenbichler, K., Schliep, K., 2004. Weighted k-nearest-neighbor techniques and ordinal classification.
  • Ji et al. [2014]

    Ji, P., Zhao, N., Hao, S., Jiang, J., 2014. Automatic image annotation by semi-supervised manifold kernel density estimation. Information Sciences 281, 648–660.

  • Joachims [1999] Joachims, T., 1999. Transductive inference for text classification using support vector machines. In: ICML. Vol. 99. pp. 200–209.
  • Joachims et al. [2003] Joachims, T., et al., 2003. Transductive learning via spectral graph partitioning. In: ICML. Vol. 3. pp. 290–297.
  • Kolahdouzan and Shahabi [2004] Kolahdouzan, M., Shahabi, C., 2004. Voronoi-based k nearest neighbor search for spatial network databases. In: Proceedings of the Thirtieth international conference on Very large data bases-Volume 30. VLDB Endowment, pp. 840–851.
  • Kowalski and Bender [1972] Kowalski, B., Bender, C., 1972. k-nearest neighbor classification rule (pattern recognition) applied to nuclear magnetic resonance spectral interpretation. Analytical Chemistry 44 (8), 1405–1411.
  • Li et al. [2003] Li, B., Yu, S., Lu, Q., 2003. An improved k-nearest neighbor algorithm for text categorization. arXiv preprint cs/0306099.
  • Li et al. [2001]

    Li, L., Darden, T. A., Weingberg, C., Levine, A., Pedersen, L. G., 2001. Gene assessment and sample classification for gene expression data using a genetic algorithm/k-nearest neighbor method. Combinatorial chemistry & high throughput screening 4 (8), 727–739.

  • Liao and Vemuri [2002] Liao, Y., Vemuri, V. R., 2002. Use of k-nearest neighbor classifier for intrusion detection. Computers & Security 21 (5), 439–448.
  • Liu et al. [2010] Liu, W., He, J., Chang, S.-F., 2010. Large graph construction for scalable semi-supervised learning. In: Proceedings of the 27th international conference on machine learning (ICML-10). pp. 679–686.
  • Luxburg et al. [2010] Luxburg, U. V., Radl, A., Hein, M., 2010. Getting lost in space: Large sample analysis of the resistance distance. In: Advances in Neural Information Processing Systems. pp. 2622–2630.
  • Ma et al. [2010] Ma, L., Crawford, M. M., Tian, J., 2010. Local manifold learning-based-nearest-neighbor for hyperspectral image classification. Geoscience and Remote Sensing, IEEE Transactions on 48 (11), 4099–4109.
  • Maltamo and Kangas [1998] Maltamo, M., Kangas, A., 1998. Methods based on k-nearest neighbor regression in the prediction of basal area diameter distribution. Canadian Journal of Forest Research 28 (8), 1107–1115.
  • Nene et al. [1997] Nene, S., Nayar, S. K., et al., 1997. A simple algorithm for nearest neighbor search in high dimensions. Pattern Analysis and Machine Intelligence, IEEE Transactions on 19 (9), 989–1003.
  • Nowakowska et al. [2016] Nowakowska, E., Koronacki, J., Lipovetsky, S., 2016. Dimensionality reduction for data of unknown cluster structure. Information Sciences 330, 74–87.
  • Percus and Martin [1998] Percus, A. G., Martin, O. C., 1998. Scaling universalities of kth-nearest neighbor distances on closed manifolds. Advances in Applied Mathematics 21 (3), 424–436.
  • Petraglia et al. [2015] Petraglia, A., et al., 2015. Dimensional reduction in constrained global optimization on smooth manifolds. Information Sciences 299, 243–261.
  • Roweis and Saul [2000] Roweis, S. T., Saul, L. K., 2000. Nonlinear dimensionality reduction by locally linear embedding. Science 290 (5500), 2323–2326.
  • Samanthula et al. [2015] Samanthula, B. K., Elmehdwi, Y., Jiang, W., 2015. K-nearest neighbor classification over semantically secure encrypted relational data. Knowledge and Data Engineering, IEEE Transactions on 27 (5), 1261–1273.
  • Spivak [1970] Spivak, M., 1970. A comprehensive introduction to differential geometry, vol. 2. I (Boston, Mass., 1970).
  • Tan [2005] Tan, S., 2005. Neighbor-weighted k-nearest neighbor for unbalanced text corpus. Expert Systems with Applications 28 (4), 667–671.
  • Tao et al. [2015]

    Tao, D., Cheng, J., Lin, X., Yu, J., 2015. Local structure preserving discriminative projections for rgb-d sensor-based scene classification. Information Sciences 320, 383–394.

  • Tenenbaum et al. [2000] Tenenbaum, J. B., De Silva, V., Langford, J. C., 2000. A global geometric framework for nonlinear dimensionality reduction. Science 290 (5500), 2319–2323.
  • Tu et al. [2014]

    Tu, E., Cao, L., Yang, J., Kasabov, N., 2014. A novel graph-based k-means for nonlinear manifold clustering and representative selection. Neurocomputing 143, 109–122.

  • Tu et al. [2016] Tu, E., Kasabov, N., Yang, J., 2016. Mapping temporal variables into the neucube for improved pattern recognition, predictive modeling, and understanding of stream data.
  • Tu et al. [2013] Tu, E., Yang, J., Fang, J., Jia, Z., Kasabov, N., 2013. An experimental comparison of semi-supervised learning algorithms for multispectral image classification. Photogrammetric Engineering & Remote Sensing 79 (4), 347–357.
  • Tu et al. [2015] Tu, E., Yang, J., Kasabov, N., Zhang, Y., 2015. Posterior distribution learning (pdl): A novel supervised learning framework using unlabeled samples to improve classification performance. Neurocomputing 157, 173–186.
  • Turaga and Chellappa [2010]

    Turaga, P., Chellappa, R., 2010. Nearest-neighbor search algorithms on non-euclidean manifolds for computer vision applications. In: Proceedings of the Seventh Indian Conference on Computer Vision, Graphics and Image Processing. ACM, pp. 282–289.

  • Vemulapalli et al. [2013] Vemulapalli, R., Pillai, J. K., Chellappa, R., 2013. Kernel learning for extrinsic classification of manifold features. In: Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on. IEEE, pp. 1782–1789.
  • Vincent and Bengio [2002] Vincent, P., Bengio, Y., 2002. Manifold parzen windows. In: Advances in neural information processing systems. pp. 825–832.
  • Wang et al. [2015] Wang, H., Wu, J., Yuan, S., Chen, J., 2015. On characterizing scale effect of chinese mutual funds via text mining. Signal Processing.
  • Wang et al. [2013] Wang, J., Jebara, T., Chang, S.-F., 2013. Semi-supervised learning using greedy max-cut. The Journal of Machine Learning Research 14 (1), 771–800.
  • Weinberger and Saul [2009] Weinberger, K. Q., Saul, L. K., 2009. Distance metric learning for large margin nearest neighbor classification. The Journal of Machine Learning Research 10, 207–244.
  • Wu et al. [2008] Wu, X., Kumar, V., Quinlan, J. R., Ghosh, J., Yang, Q., Motoda, H., McLachlan, G. J., Ng, A., Liu, B., Philip, S. Y., et al., 2008. Top 10 algorithms in data mining. Knowledge and information systems 14 (1), 1–37.
  • Xie et al. [2016] Xie, J., Gao, H., Xie, W., Liu, X., Grant, P. W., 2016. Robust clustering by detecting density peaks and assigning points based on fuzzy weighted k-nearest neighbors. Information Sciences 354, 19–40.
  • Yi et al. [2016] Yi, S., Jiang, N., Feng, B., Wang, X., Liu, W., 2016. Online similarity learning for visual tracking. Information Sciences.
  • Yin and Zaki [2015] Yin, H., Zaki, S. M., 2015. A self-organising multi-manifold learning algorithm. In: Bioinspired Computation in Artificial Systems. Springer, pp. 389–398.
  • Yu and Kim [2016] Yu, J., Kim, S. B., 2016. Density-based geodesic distance for identifying the noisy and nonlinear clusters. Information Sciences 360, 231–243.
  • Yu et al. [2014] Yu, J., Tao, D., Li, J., Cheng, J., 2014. Semantic preserving distance metric learning and applications. Information Sciences 281, 674–686.
  • Zhou et al. [2015] Zhou, Y., Liu, B., Xia, S., Liu, B., 2015. Semi-supervised extreme learning machine with manifold and pairwise constraints regularization. Neurocomputing 149, 180–186.
  • Zhu et al. [2003] Zhu, X., Ghahramani, Z., Lafferty, J., et al., 2003. Semi-supervised learning using gaussian fields and harmonic functions. In: ICML. Vol. 3. pp. 912–919.