Nearest Neighbors (NN) [9, 52, 37, 53] is one of the most popular classification algorithms and has been widely used in many fields, such as intrusion detection, gene classification, semiconductor fault detection, very large database manipulation, nuclear magnetic resonance spectral interpretation and the prediction of basal area diameter, because it is simple yet effective and generally obtains good results in many tasks. One main drawback of the traditional NN is that it does not take manifold distribution information into account, which can cause bias and result in poor performance. The problem becomes even worse when only a very small number of labeled samples is available. To illustrate this, an example is shown in Figure 1(a), in which two one-dimensional manifolds (the outer arch and the interior reflected S) correspond to two classes, respectively. Each class has only 3 labeled samples, indicated by the colored triangles and circles. Black dots are unlabeled samples. Figures 1(b)-(d) show the 1NN, 2NN and 3NN classification results produced by the traditional NN, respectively. We can see that although the data have an apparent manifold distribution, the traditional NN incorrectly classifies many samples because it ignores the manifold information.
To improve the performance of the traditional NN, several new NN algorithms have been proposed. Hastie et al. proposed an adaptive NN algorithm which computes a local metric for each sample and uses the Mahalanobis distance to find the nearest neighbors of a query point. Hechenbichler and Schliep introduced a weighting scheme that attaches different importance to the nearest neighbors according to their distances to the query point. To reduce the effect of unbalanced training set sizes across classes, Tan used different weights for different classes based on the number of labeled samples in each class. There are also other improvements and weighting schemes for different tasks [25, 8, 32, 51, 10, 16, 12].
However, none of these new NN algorithms takes manifold structure into consideration explicitly. For high-dimensional data, such as face images, documents and video sequences, the nearest neighbors of a point found by traditional NN algorithms can be very far away in terms of the geodesic distance between them, because the dimension of the underlying manifold is usually much lower than that of the data space [36, 41, 2]. There have also been attempts to make NN adaptive to manifold data. In Turaga and Chellappa's paper, geodesic distance directly replaces the standard Euclidean distance in traditional NN, but geodesic distance can be computed with good accuracy only if the manifold is sampled with sufficiently many points. Furthermore, geodesic distance tends to be very sensitive to the short-circuit phenomenon. Li proposed a weighted manifold NN using Local Linear Embedding (LLE) techniques, but LLE tends to be unstable under local changes on the manifold. Percus and Olivier studied the general nearest-neighbor distance metric on closed manifolds, but their method needs to know the exact analytical form of the manifold and is thus unsuitable for most real-world applications.
In this paper, we propose a novel graph-based NN algorithm which can effectively handle both traditional Gaussian distributed data and nonlinear manifold distributed data. To do so, we first present a manifold similarity measure method, the constrained tired random walk, and then we modify the traditional NN algorithm to adopt the new measuring method. To deal with online situations, we also propose a new algorithm to handle sequential samples based on a local neighborhood reconstruction method. Experimental results on both synthetic and real-world data sets are presented to demonstrate the validity of the proposed method.
The remainder of this paper is organized as follows: Section 2 reviews the tired random walk model. Section 3 presents a new constrained tired random walk model and Section 4 describes the graph-based NN algorithm. Section 5 proposes a sequential algorithm for online samples. The simulation and comparison results are presented in Section 6, followed by conclusions in Section 7.
2 Review of tired random walk
Assume a training set $X_l = \{x_1, x_2, \ldots, x_l\}$ contains $l$ labeled samples, and the class label of $x_i$ is $y_i \in \{1, 2, \ldots, C\}$, where $x_i \in \mathbb{R}^d$, $d$ is the feature length and $C$ is the class number. There are also $u$ unlabeled samples to be classified, $X_u = \{x_{l+1}, \ldots, x_{l+u}\}$. Denote $n = l + u$ and $X = X_l \cup X_u$. We also use $X$ for the sample matrix, whose columns are the samples in $X$. We use $\|\cdot\|_F$ to denote the Frobenius norm.
In differential geometry studies, a manifold can be defined from an intrinsic or extrinsic point of view [38, 3]. But in data processing studies, such as dimension reduction [5, 35, 33] and manifold learning [48, 1, 47], it is helpful to consider a manifold as a distribution embedded in a higher-dimensional Euclidean space, i.e. to adopt an extrinsic view in its ambient space. Borrowing the concept of intrinsic dimension, we give a formal definition of a manifold data set as follows.
Definition: A data set is considered to be manifold distributed if its intrinsic dimension is less than its data space dimension.
For more information on the intrinsic dimension of a data set and how it can be estimated from data samples, we refer readers to [4, 35]. To determine whether a data set has a manifold distribution, one can simply estimate its intrinsic dimension and compare it with the data space dimension, i.e. the length of a sample vector.
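As a rough, self-contained illustration of this check (a linear PCA-based estimate, far cruder than the dedicated estimators surveyed in [4, 35]; the function name and the 95% variance threshold are our own choices):

```python
import numpy as np

def pca_intrinsic_dim(X, var_ratio=0.95):
    """Crude intrinsic-dimension estimate: the number of principal
    components needed to retain `var_ratio` of the total variance.
    X is an (n_samples, n_features) array."""
    Xc = X - X.mean(axis=0)
    # Singular values of the centered data give per-component variance.
    s = np.linalg.svd(Xc, compute_uv=False)
    var = s**2 / np.sum(s**2)
    return int(np.searchsorted(np.cumsum(var), var_ratio) + 1)

# Data lying (noiselessly) on a 1-D line embedded in 3-D space:
t = np.linspace(0, 1, 100)
X = np.stack([t, 2 * t, -t], axis=1)
print(pca_intrinsic_dim(X))   # -> 1, less than 3: manifold-distributed
```

A PCA estimate only detects linear structure; for curved manifolds the nonlinear estimators cited above are the appropriate tools.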
The similarity measure is an important factor when processing manifold-distributed data, because traditional distance metrics (such as Euclidean distance) are not a proper measure on manifolds [57, 54, 56]. Recent studies have also shown that the classical random walk is not a useful measure for large-sample or high-dimensional data, because it does not take any global properties of the data into account. The tired random walk (TRW) model was proposed in Tu's paper and has been demonstrated to be an effective similarity measure on nonlinear manifolds [49, 55], because it takes global geometrical structure information into consideration.
Recall that on a weighted undirected graph, the classical random walk transition matrix is $P = D^{-1}W$, where $W$ is the graph adjacency (weight) matrix and $D$ is a diagonal matrix with entries $D_{ii} = \sum_j W_{ij}$. Now imagine that a tired random walker walks continuously through the edges of the graph, but becomes more tired after each step and finally stops when all its energy is exhausted, i.e. the transition probability of the random walk is reduced by a fixed ratio (e.g. 0.01) after each step and finally approaches 0. After $t$ steps, the tired random walk transition probability matrix becomes $(\alpha P)^t$, where $\alpha \in (0, 1)$ is the strength reduction rate. Now consider Figure 2, in which the tired random walker starts from a vertex $v_i$ with destination vertex $v_j$. It may walk through any path that connects $v_i$ and $v_j$, with an arbitrary number of steps, before its strength is used up. All these walking paths reflect the underlying geometrical structure of the graph, and hence the distribution of the data. Therefore, a good similarity measure between $v_i$ and $v_j$ should take (globally) all possible paths with an arbitrary (potentially infinite) number of steps into consideration, rather than only (locally) a single path or a single step as the classical random walk does. This is the fundamental difference between the tired random walk and the classical random walk, and it makes the tired random walk more robust and effective, especially for manifold-distributed data, as will be demonstrated in the experimental section. Mathematically, the accumulated transition probability of the tired random walk between $v_i$ and $v_j$ is $\sum_{t=0}^{\infty}\left[(\alpha P)^t\right]_{ij}$. For all vertices, the accumulated transition probability matrix becomes $\sum_{t=0}^{\infty}(\alpha P)^t$. As the eigenvalues of $P$ lie in $[-1, 1]$ and $\alpha \in (0, 1)$, the series converges and the tired random walk matrix is
$$P_{trw} = \sum_{t=0}^{\infty}(\alpha P)^t = (I - \alpha P)^{-1}. \qquad (1)$$
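The closed form of the TRW matrix is straightforward to compute numerically. A minimal sketch in Python/NumPy (the function name and the toy graph are ours; $\alpha$ is left as a parameter):

```python
import numpy as np

def tired_random_walk(W, alpha=0.9):
    """Accumulated TRW transition matrix P_trw = (I - alpha * P)^{-1},
    where P = D^{-1} W is the classical random-walk transition matrix.
    W: (n, n) nonnegative symmetric weight matrix; alpha in (0, 1)."""
    d = W.sum(axis=1)
    P = W / d[:, None]                      # row-normalize: P = D^{-1} W
    n = W.shape[0]
    return np.linalg.inv(np.eye(n) - alpha * P)

# Tiny example: a 3-node path graph.
W = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
P_trw = tired_random_walk(W, alpha=0.5)
# By the geometric series, P_trw also equals sum_t (alpha * P)^t.
```

Summing the truncated series $\sum_t (\alpha P)^t$ reproduces the inverse, which is a handy sanity check on any implementation.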
Because $P_{trw}$ takes all possible paths into account, it captures the global geometrical structure of the underlying manifold and has thus been demonstrated to be a more effective and robust description of manifold similarity.
3 A constrained tired random walk model
In this section, we further extend the model into a constrained situation. For classification purposes, we find that labeled samples provide not only class distribution information, but also constraint information, i.e., samples which have the same class labels are must-link pairs and samples which have different class labels are cannot-link pairs. In most of the existing supervised learning algorithms, only class information is utilized but constraint information is discarded. Here we include constraint information into the TRW model by modifying the weights of graph edges between the labeled samples, because constraint information has been demonstrated to be useful for performance improvement [45, 15, 11, 58]. Class information will be utilized in the next section for the proposed new NN algorithm.
We first construct an $h$-level nearest-neighbor strengthened tree for each labeled sample $x_i$ as follows:

(i) Set $x_i$ as the first-level node (the tree root, $l = 1$) and its $k$ nearest neighbors as the second-level nodes.

(ii) For each node in level $l$, set its $k$ nearest neighbors as its level-$(l+1)$ descendants. If any node in level $l+1$ already appears in an ancestor level, remove it from level $l+1$.

(iii) If $l + 1 < h$, increase $l$ by 1 and go to (ii).

where $h$ is a user-specified parameter that defines the depth of the tree. Then for each pair of samples $(x_i, x_j)$, the corresponding graph edge weight $W_{ij}$ is set according to the rules in Table 1.
|1| $W_{ij} = 1$ | if $x_i$ and $x_j$ have the same class label. |
|2| $W_{ij} = 0$ | if $x_i$ and $x_j$ have different class labels. |
|3| $W_{ij} = \exp\left(-\|x_i - x_j\|^2/(2\sigma^2)\right)$ | if at least one of $x_i$, $x_j$ is unlabeled. |
|4| $W_{ij} = \gamma^{1/l}\exp\left(-\|x_i - x_j\|^2/(2\sigma^2)\right)$ | if $x_i$ is a node in level $l$ and $x_j$ is a child of $x_i$ in level $l+1$ in the strengthened tree. |
$\sigma$ is the Gaussian kernel width parameter and $\gamma$ is the strengthening parameter. Note that for the Gaussian kernel, step 1 is equivalent to merging two same-label samples together (zero distance between them) and step 2 is equivalent to separating two different-label samples infinitely far from each other (infinite distance between them). The connections from a labeled sample to its nearest neighbors are strengthened by the factor $\gamma^{1/l}$ in step 4. This spreads the 'hard' constraints of steps 1 and 2 to farther neighborhoods on the graph in the form of soft constraints, and thus gives these constraints a wider influence. The construction of the strengthened tree is inspired by neural-network reservoir structure analysis techniques, in which information has been shown to spread from input neurons to interior neurons in the reservoir along a tree-structured path.
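The graph construction of Table 1 together with the strengthened tree can be sketched as follows. This is a hypothetical reading, not the authors' reference code: the decay factor `gamma ** (1.0 / lvl)`, the capping of strengthened weights at 1, and all parameter names (`sigma`, `gamma`, `levels`, `k`) are our own assumptions for illustration:

```python
import numpy as np

def constrained_weights(X, y, sigma=1.0, gamma=1.5, levels=2, k=3):
    """Sketch of Table-1 edge weights: Gaussian weights by default,
    1 for must-link pairs, 0 for cannot-link pairs, and edges along
    each labeled sample's neighbor tree boosted (one plausible reading).
    X: (n, d) samples; y: length-n labels with -1 marking unlabeled."""
    d2 = ((X[:, None, :] - X[None, :, :])**2).sum(-1)
    W = np.exp(-d2 / (2 * sigma**2))                  # rule 3 (default)
    labeled = np.where(y >= 0)[0]
    for i in labeled:                                 # rules 1 and 2
        for j in labeled:
            if i != j:
                W[i, j] = 1.0 if y[i] == y[j] else 0.0
    for i in labeled:                                 # rule 4 (assumed form)
        frontier, seen = [i], {i}
        for lvl in range(1, levels + 1):
            nxt = []
            for p in frontier:
                nbrs = np.argsort(d2[p])[1:k + 1]     # k nearest neighbors
                for q in nbrs:
                    if q not in seen:
                        boost = gamma ** (1.0 / lvl)  # decays with depth
                        W[p, q] = min(1.0, boost * W[p, q])
                        W[q, p] = W[p, q]
                        seen.add(q)
                        nxt.append(q)
            frontier = nxt
    return W
```

Capping at 1 keeps every strengthened weight below the must-link weight, matching the parameter conditions discussed next.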
The selection of the parameter $\gamma$ is based on the following conditions:

(i) the strengthened weight should be positive and less than the weight of the must-link constraint;

(ii) the strengthening effect should be positive and decay along the strengthened tree levels.

Mathematically, the conditions are
$$0 < \gamma^{1/l}\exp\left(-\frac{\|x_i - x_j\|^2}{2\sigma^2}\right) < 1, \qquad 1 < \gamma^{1/(l+1)} < \gamma^{1/l}.$$
As a result, $\gamma$ should satisfy
$$1 < \gamma < \min_{(i,j)}\exp\left(\frac{l\,\|x_i - x_j\|^2}{2\sigma^2}\right),$$
where the minimum is taken over the edges strengthened in Table 1.
In all our experiments, we used a single value of $\gamma$, which gives good results for both synthetic and real-world data.
4 A new graph-based NN classification algorithm on nonlinear manifold
Here we present a graph-based NN algorithm for nonlinear manifold data classification. The procedure of the algorithm is summarized in Table 2.
|1| Input $X_l$, $X_u$ and $k$. |
|2| Construct a constrained graph according to Table 1. |
|3| Compute the matrix $P_{trw}$ using equation (1). |
|4| Evaluate the samples' similarity according to equation (4). |
|5| Find the $k$ nearest neighbors of an unlabeled sample using equation (5). |
|6| Determine the class label according to equation (6). |
Specifically, given the matrix $P_{trw}$, the TRW weight between samples $x_i$ and $x_j$ is defined as
$$w_{ij} = \frac{1}{2}\left[(P_{trw})_{ij} + (P_{trw})_{ji}\right]. \qquad (4)$$
Note that while the similarity measure defined by the matrix $P_{trw}$ between two samples is not necessarily symmetric ($P$ is not symmetric, and thus its matrix series is not symmetric either), the weight defined in equation (4) is indeed a symmetric measure. For each unlabeled sample $x_j \in X_u$, we can find its $k$ nearest neighbors in $X_l$ as the set
$$\mathcal{N}_k(x_j) = \{x_i \in X_l : w_{ij} \text{ is among the } k \text{ largest weights}\}. \qquad (5)$$
Instead of counting the number of labeled samples from each class as in the classical NN, we sum the TRW weights of the labeled samples of each class, and the class label of the unlabeled sample is determined by
$$y_j = \mathop{\arg\max}_{c \in \{1,\ldots,C\}}\ \sum_{x_i \in \mathcal{N}_k(x_j),\ y_i = c} w_{ij}. \qquad (6)$$
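A minimal sketch of the weighted voting rule in equations (5) and (6), assuming the TRW weights of one unlabeled sample with respect to the labeled set have already been computed (function and variable names are ours):

```python
import numpy as np

def mnn_classify(w_col, y_l, k=3):
    """Classify one unlabeled sample from its TRW weights to the labeled set.
    w_col: length-l vector of TRW weights to the labeled samples (eq. 4);
    y_l: labels of those samples. Picks the k labeled samples with the
    largest weights (eq. 5) and sums the weights per class (eq. 6)."""
    nn = np.argsort(w_col)[::-1][:k]          # k most similar labeled samples
    score = {}
    for i in nn:
        score[y_l[i]] = score.get(y_l[i], 0.0) + w_col[i]
    return max(score, key=score.get)

# Two weak class-1 neighbors lose to one strong class-0 neighbor.
w = np.array([0.9, 0.2, 0.25])
y = np.array([0, 1, 1])
print(mnn_classify(w, y, k=3))   # -> 0, since 0.9 > 0.2 + 0.25
```

Note how this differs from plain majority voting: with counting, class 1 would win 2 votes to 1.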
It is worth mentioning that because the proposed semi-supervised mNN utilizes class label information only in the classification stage, and uses the same $k$ value equally for all classes, mNN naturally avoids the so-called class bias problem of many semi-supervised algorithms (such as [59, 28, 44, 20]), which is caused by unbalanced labeled samples of each class in $X_l$ and needs to be re-balanced by various weighting schemes [50, 59]. Furthermore, with the algorithm described in the next section, mNN immediately becomes a supervised classifier and enjoys the advantages of both semi-supervised learning (classifying abundant unlabeled samples with only a tiny number of labeled samples) and supervised learning (classifying new samples immediately without repeating the whole learning process).
5 A sequential method to handle online samples
For a single unlabeled sample, if we compute its TRW weights using equations (1) and (4), we have to add it to $X$ and recompute the matrix $P_{trw}$. As a result, the computational cost for one sample is too high. This is in fact the so-called transductive learning problem, a common drawback of many existing algorithms [21, 22, 7, 13]. (Transductive learning is the opposite of inductive learning. An inductive algorithm, such as SVM, explicitly learns a model in data space that partitions the space into different regions; the model can then be applied directly to unseen samples to obtain class labels. A transductive algorithm builds no such model: it performs one-time learning on a fixed data set, and whenever the data set changes, for example when existing samples are changed or new samples are added, the whole learning process has to be repeated to assign new class labels.) To attack this problem, we propose a new method based on rapid neighborhood reconstruction, in which a local neighborhood is first constructed in sample space and then the TRW weights are reconstructed in the same local neighborhood at very small computational cost.
Given a new sample $x$, it has been shown that $x$ can be reconstructed well by its nearest neighbors on the manifold if there are sufficient data points sampled from the manifold [36, 6, 40]. Thus, it is also reasonable to assume that the neighborhood relationships, and hence the TRW weights of sample $x$ in equation (4), have the same geometrical distribution as the samples themselves. So, to compute the weights of $x$ without explicitly recomputing the matrix $P_{trw}$ in equation (4), we first find the nearest neighbors of $x$ in $X$, collected as the columns of a matrix $N$. (One should note that finding nearest neighbors in $X$ is quite different from finding them in $X_l$. The former is the basis of many nearest-neighbor operations, such as constructing the nearest-neighbor graphs in [41, 36] and the $h$-level strengthened tree in Section 3 of this paper; the latter is the basis of the classical NN classifier. Because $X$ contains many instances, sampled densely from the underlying data distribution, an instance's local neighborhood in $X$ is usually very small, and Euclidean distance is still valid in this small range because any manifold can be locally well approximated by a Euclidean space. However, the instances in $X_l$ are few and usually not densely sampled from the manifold, so nearest neighbors in $X_l$ can be very far apart, and Euclidean distance is no longer suitable for measuring the closeness of points in $X_l$. This is why new similarity measures are needed, which is one of the main contributions of this paper.) We then minimize the local reconstruction error by solving the following constrained quadratic optimization problem
$$\min_{c}\ \|x - Nc\|^2 \quad \text{s.t.}\ c \ge 0,\ \mathbf{1}^T c = 1, \qquad (7)$$
where $\mathbf{1}$ is a vector with all entries equal to 1. Note that the entries of $c$ are nonnegative and must sum to 1, so $c$ is expected to be sparse. Problem (7) is a constrained quadratic optimization problem and can be solved very efficiently with many publicly available toolboxes; we use the OPTI optimization toolbox (http://www.i2c2.aut.ac.nz/Wiki/OPTI/). After obtaining the optimal $c$, the TRW weights between $x$ and the labeled samples in $X_l$ can be computed by solving the following optimization problem
$$\min_{w \ge 0}\ \|w - W_N c\|^2, \qquad (8)$$
where the columns of $W_N$ are the TRW weight vectors (with respect to the labeled samples in $X_l$) of the nearest neighbors of $x$ in $X$.
It is easy to see that the optimal solution of problem (8) is simply the result of a nonnegative projection operation
$$w = \max(0, W_N c) = W_N c. \qquad (9)$$
The second equality holds because both $W_N$ and $c$ are nonnegative, and therefore their product is also nonnegative.
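Equations (7)-(9) can be sketched with a simple projected-gradient solver in place of the OPTI toolbox (the simplex-projection routine and the step-size choice are standard tools but are our own substitution, not the paper's implementation):

```python
import numpy as np

def simplex_projection(v):
    """Euclidean projection of v onto {c : c >= 0, sum(c) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def reconstruct_weights(x, N, W_N, n_iter=500):
    """Sketch of eqs. (7)-(9): find simplex-constrained coefficients c
    minimizing ||x - N c||^2 by projected gradient descent, then
    reconstruct the TRW weights as w = W_N c (eq. 9).
    N: (d, k) neighbor matrix; W_N: (l, k) neighbors' TRW weight columns."""
    k = N.shape[1]
    c = np.full(k, 1.0 / k)
    step = 1.0 / (np.linalg.norm(N, 2)**2 + 1e-12)  # 1/L for this quadratic
    for _ in range(n_iter):
        grad = N.T @ (N @ c - x)
        c = simplex_projection(c - step * grad)
    return N @ c, W_N @ c       # local reconstruction and reconstructed w
```

Because $c$ lies on the simplex and $W_N$ is nonnegative, the returned $w$ needs no further clipping, exactly as equation (9) states.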
One should note that with this sequential learning strategy, the proposed mNN can be treated as an inductive classifier, whose model consists of both the matrix $P_{trw}$ and the data samples that have been classified so far. Whenever a new sample arrives, the model can quickly give its class label by the three steps in Table 3, without repeating the whole learning process of Table 2.
6 Experimental results
In this section, we report the experimental results on both synthetic and real-world data sets. The comparison algorithms include the traditional nearest neighbors (NN), the weighted nearest neighbors (wNN), the geodesic NN (gNN) proposed by Turaga and Chellappa, and our manifold nearest neighbors (mNN). For NN and wNN, the only parameter is $k$. For gNN and mNN, there is one more parameter each: the number of nearest neighbors for computing geodesic distance in gNN, and the kernel width $\sigma$ in mNN. We tune these two parameters by grid search and choose the values that produce the minimal 2-fold cross-validation error rate.
6.1 Experimental results on synthetic data sets
We first conduct experiments on three synthetic data sets, shown in Figure 4, to demonstrate the superiority of mNN over the other algorithms. For each data set in Figure 4, the three red triangles and the three green dots are the labeled samples of the two classes, respectively. Note that all three data sets contain some ambiguous points (or bridging points) in the gap between the two classes, making the classification even more challenging. Experimental results on these data sets are shown in Figures 5 to 7.
From these results, we can see that because NN uses Euclidean distance to determine the class label, and Euclidean distance is not a proper similarity measure on the manifold, the results given by the traditional NN are quite erroneous. By introducing a weighting scheme, wNN can perform better than NN, but the improvement is still quite limited. gNN obtains much better results because geodesic distance on the manifold is a valid similarity measure. However, as mentioned in Section 1, geodesic distance tends to be sensitive to the short-circuit phenomenon (here, the noise points lying between the two classes), so gNN still misclassifies many samples. In contrast, mNN achieves the best results: because TRW takes all possible graph paths into consideration, it embodies the global geometrical structure of the manifold and is much more effective and robust to noisy points.
Figure 8 plots the mean error rate of each algorithm over 10 runs on these data sets, as the number of labeled samples per class changes from 1 to 10. In each run, the labeled samples are randomly selected from each class and the rest of the samples in the data set are treated as unlabeled samples to form the testing set.
From the results in Figure 8 we can see that mNN performs significantly better than the other NN variants. It is interesting to note that on these manifold-distributed data sets, while the mean error rates of NN and wNN show no obvious reduction as the number of labeled samples increases, the mean error rates of gNN and mNN decrease quickly. This indicates that gNN and mNN are able to exploit the information contained in the labeled samples more efficiently, because they take the manifold structure into account. Again, mNN achieves the lowest error rate.
In order to demonstrate the effectiveness of the weight reconstruction method, we run the algorithm on these data sets to reconstruct each sample and its TRW weights from its nearest neighbors using equations (7) and (8), respectively. The relative mean square error (RMSE) of the reconstruction is computed by
$$\mathrm{RMSE} = \frac{\|v - \hat{v}\|^2}{\|v\|^2},$$
where $v$ is the ground truth and $\hat{v}$ is the reconstructed result. The results are shown in Table 4.
6.2 Experimental results on real-world data sets
We also conduct experiments on six real-world data sets from the UCI data repository (http://archive.ics.uci.edu/ml/), which contains real application data collected in various fields and is widely used to test the performance of different machine learning algorithms. The information on these six data sets is listed in Table 5 ($n$: number of samples; $d$: feature dimension; $C$: class number).
On each data set, we let the number of labeled samples per class change from 1 to 10. For each value, we run each algorithm 10 times, with a training set containing the labeled samples randomly selected from each class and a testing set containing all the remaining samples. The final error rate is the mean of the 10 error rates and is shown in Figure 9.
From Figure 9, we can see that the proposed mNN outperforms the other algorithms in terms of both accuracy and stability. The error curves of mNN decrease much more quickly than those of the other algorithms, which indicates that mNN is able to use fewer labeled samples to achieve better accuracy. This is very important in applications where only a very small number of labeled samples is available, because labeled samples are usually expensive to obtain: they need to be annotated by humans with expert knowledge and/or special equipment, which takes a lot of time. Therefore, it is of great practical value that a classifier is capable of accurately classifying abundant unlabeled samples given only a small number of labeled samples.
From Figure 9, we can also conclude that: (1) by introducing a weighting scheme into the traditional NN algorithm, weighted NN (wNN) can generally outperform traditional NN; (2) the performance of geodesic NN (gNN) shows a relatively large improvement over both NN and weighted NN in most cases, because geodesic distance is a valid measure on manifolds; (3) the manifold NN (mNN) almost always achieves the smallest error rate, because the constrained TRW is a more effective and robust measure of the global geometrical structure on manifolds and, meanwhile, mNN takes both the class information and the constraint information into account.
6.3 Experimental results of the comparison with other traditional supervised classifiers
We conduct experiments to compare the performance of the proposed manifold NN algorithm with other popular supervised classifiers on the six real-world data sets. The baseline algorithms are: Support Vector Machine (SVM), Artificial Neural Networks (ANN), Naive Bayes (NB) and Decision Tree (DT). For SVM we use the libsvm toolbox (https://www.csie.ntu.edu.tw/~cjlin/libsvm/); for the other algorithms we use MATLAB toolboxes. The configurations of the baseline algorithms are: a radial basis kernel for SVM; a three-layer network trained with back-propagation for ANN; kernel smoothing density estimation for NB; and a binary classification tree with a merging-pruning strategy according to validation error for DT. We adopt a grid-search strategy to tune the parameters of each algorithm, and the parameters are set to produce the lowest 2-fold cross-validation error rate.
To examine the capability of classifying plenty of unlabeled samples with only a few labeled samples, we randomly choose three labeled samples from each class to form the training set, and the remaining samples are treated as unlabeled samples to form the testing set. Each algorithm runs 10 times and the final result is the average of the 10 error rates. The experimental results (mean error rate ± standard deviation) are shown in Table 6.
From these results, we can see that mNN significantly outperforms these traditional supervised classifiers, given only three labeled samples per class. Similar to the traditional NN, traditional supervised classifiers are incapable of exploiting the manifold structure of the underlying data distribution, so their accuracies are very low when the number of labeled samples is very small. Furthermore, when the labeled samples are randomly selected, their positions vary greatly in the data space. Sometimes they are not uniformly distributed and thus cannot cover the whole data distribution well. As a result, the performance of the traditional classifiers also varies greatly. In contrast, because mNN adopts the tired random walk to measure manifold similarity, which reflects the global geometrical information of the underlying manifold structure and is robust to local changes, it can achieve much better results.
To further investigate the performance of these algorithms under different numbers of training samples, we carry out experiments with different training set sizes. For each data set, we let the number of labeled samples per class vary from 1 to 40 and randomly choose that many labeled samples from each class to form the training set. The rest of the data set is treated as unlabeled samples to test each algorithm's performance. For each training set size, every algorithm runs 10 times on each data set. The mean error rate and standard deviation of the 10-run results are shown in Figures 10 and 11, respectively.
From Figure 10 we can see that while the number of labeled samples is small, the error rates of the traditional algorithms are very large (as also indicated in Table 6, which contains the results for three labeled samples per class). As the number of labeled samples increases, the error rates decrease. This trend becomes slower beyond 15 labeled samples per class. The proposed mNN achieves the lowest error rate in both the small and the large labeled-sample regimes. The improvement is especially obvious and significant when the number of labeled samples is small. From Figure 11, we can also see that the performance of mNN is relatively steady in most cases. One should note that on the left of each plot in Figure 11, the small standard deviation values of decision tree (DT) and SVM are due to the fact that their error rates are consistently very high in each run (e.g. the error rates of SVM on the usps data set concentrate closely around 95%).
6.4 Experimental results of time complexity
To examine the effectiveness and efficiency of the proposed sequential learning strategy of Section 5, we conduct experiments on three real-world data sets, including banknote and satlog, to show the difference in mNN's performance with and without the sequential learning algorithm. Each data set is divided into three subsets: a training set, a validation set and an online set. The training set is fixed to contain 10 labeled samples per class. We conduct 10 experiments, with the online set size changing from 100 to 1000 with step size 100; the validation set contains the rest of the samples. First, mNN runs on the training and validation sets to learn the class distribution. Thereafter, samples are drawn from the online set one at a time as newly arriving samples. For sequential mNN, the previous learning result is used to classify each new sample according to equations (7) and (8). For standard mNN, the new sample is added to the validation set and the whole learning process is repeated to classify it. The experimental results (obtained on a computer with 16GB RAM, a dual-core 3.7GHz Intel Xeon CPU and MATLAB 2015 academic version) are shown in Figure 12.
From these results, we can see that while the classification accuracy of standard mNN and sequential mNN are comparable, the time cost of sequential mNN reduces dramatically for online classification (e.g., for satlog data set, to classify 1000 sequentially coming samples, standard mNN takes about 9100 seconds but sequential mNN uses only about 14 seconds to achieve a similar result). Therefore, the sequential algorithm has great merit in solving the online classification problem and can be potentially applied to a wide range of transductive learning algorithms to make them inductive.
We also conduct experiments to compare the time complexity of all the baseline algorithms with the proposed mNN on these three data sets. Each data set is split into training and testing parts three times independently, with splitting ratios 10%, 25% and 50%, respectively. For each split, every algorithm runs 10 times and the mean time cost is recorded. Table 7 shows the experimental results (the second column shows the splitting ratio).
From this table we can see that the algorithms fall into two categories according to their time costs: one category contains SVM, DT, ANN, NN and wNN, whose time costs are closely related to the training set size. The other category includes NB, gNN and mNN, whose time costs depend more on the overall data set size. SVM, NN and wNN are generally faster than the others. Although mNN is the second slowest, it is still much faster than gNN and its overall time does not grow as the training set size increases. It should be mentioned that although the classification accuracy of mNN is much better than that of NN and the other traditional supervised classifiers, the computational complexity of mNN, dominated by the matrix inverse in equation (1), is also higher than that of the traditional NN. Directly computing $P_{trw} = (I - \alpha P)^{-1}$ is expensive, since $I - \alpha P$ is not a symmetric positive definite matrix and thus an LU decomposition has to be used. One way to speed up the matrix inverse is to convert it into the inverse of a symmetric positive definite matrix and then adopt the Cholesky decomposition, whose computational cost is just half that of the LU decomposition. Noting that $P_{trw} = (I - \alpha D^{-1}W)^{-1} = (D - \alpha W)^{-1}D$, where $D - \alpha W$ is a symmetric positive definite matrix ($D - \alpha W$ is apparently symmetric because $W$ is symmetric; for positive definiteness, note that $D - \alpha W = D^{1/2}(I - \alpha D^{-1/2}WD^{-1/2})D^{1/2}$, and $D^{-1/2}WD^{-1/2}$ is similar to $P = D^{-1}W$, whose spectral radius is at most 1, so all eigenvalues of $I - \alpha D^{-1/2}WD^{-1/2}$ are at least $1 - \alpha > 0$), we can first invert the matrix $D - \alpha W$ using the Cholesky decomposition and then compute $P_{trw}$ with very small additional cost. Our future work will focus on further reduction of the computational complexity.
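The Cholesky-based speed-up can be sketched as follows (NumPy's generic `solve` stands in for dedicated triangular solvers; the function name is ours, and $W$ is assumed symmetric):

```python
import numpy as np

def trw_via_spd(W, alpha=0.9):
    """P_trw = (I - alpha * D^{-1} W)^{-1} = (D - alpha * W)^{-1} D.
    With W symmetric and alpha in (0, 1), D - alpha*W is symmetric
    positive definite, so a Cholesky factorization applies: factor
    once, then two triangular solves instead of a general LU inverse."""
    D = np.diag(W.sum(axis=1))
    A = D - alpha * W
    L = np.linalg.cholesky(A)         # A = L L^T; succeeds iff A is SPD
    Y = np.linalg.solve(L, D)         # forward substitution: L Y = D
    return np.linalg.solve(L.T, Y)    # back substitution: L^T X = Y
```

The result matches the direct inverse $(I - \alpha D^{-1}W)^{-1}$, confirming the identity used above.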
In this paper we proposed a new nearest-neighbor algorithm, mNN, to classify nonlinear manifold distributed data as well as traditional Gaussian distributed data, given a very small amount of labeled samples. We also presented an algorithm to attack the problem of high computational cost for classifying online data with mNN and other transductive algorithms. The superiority of the mNN has been demonstrated by substantial experiments on both synthetic data sets and real-world data sets. Given the widespread appearance of manifold structures in real-world problems and the popularity of the traditional NN algorithm, the proposed manifold version NN shows promising potential for classifying manifold-distributed data.
Acknowledgements: The authors wish to thank the anonymous reviewers for reading the entire manuscript and offering many useful suggestions. This research is partly supported by NSFC, China (No. 61572315) and the 973 Plan, China (No. 2015CB856004). The research is also partly supported by the YangFan Project (Grant No. 14YF1411000) of the Shanghai Municipal Science and Technology Commission, the Innovation Program (Grant No. 14YZ131) and the Excellent Youth Scholars program (Grant No. sdl15101) of the Shanghai Municipal Education Commission, and the Science Research Foundation of Shanghai University of Electric Power (Grant No. K2014-032).
- Bengio, Y., Larochelle, H., Vincent, P., 2005. Non-local manifold Parzen windows. In: Advances in Neural Information Processing Systems, pp. 115–122.
- Beyer, K., Goldstein, J., Ramakrishnan, R., Shaft, U., 1999. When is "nearest neighbor" meaningful? In: Database Theory—ICDT'99. Springer, pp. 217–235.
- Boothby, W. M., 2003. An Introduction to Differentiable Manifolds and Riemannian Geometry. Vol. 120. Gulf Professional Publishing.
- Camastra, F., Staiano, A., 2016. Intrinsic dimension estimation: Advances and open problems. Information Sciences 328, 26–41.
- Chahooki, M. A. Z., Charkari, N. M., 2014. Shape classification by manifold learning in multiple observation spaces. Information Sciences 262, 46–61.
- Chen, Y., Zhang, J., Cai, D., Liu, W., He, X., 2013. Nonnegative local coordinate factorization for image representation. IEEE Transactions on Image Processing 22 (3), 969–979.
- Collobert, R., Sinz, F., Weston, J., Bottou, L., 2006. Large scale transductive SVMs. The Journal of Machine Learning Research 7, 1687–1712.
- Cost, S., Salzberg, S., 1993. A weighted nearest neighbor algorithm for learning with symbolic features. Machine Learning 10 (1), 57–78.
- Cover, T. M., Hart, P. E., 1967. Nearest neighbor pattern classification. IEEE Transactions on Information Theory 13 (1), 21–27.
- Dudani, S. A., 1976. The distance-weighted k-nearest-neighbor rule. IEEE Transactions on Systems, Man and Cybernetics (4), 325–327.
- Fu, Z., Lu, Z., Ip, H. H., Lu, H., Wang, Y., 2015. Local similarity learning for pairwise constraint propagation. Multimedia Tools and Applications 74 (11), 3739–3758.
- Gao, Y., Liu, Q., Miao, X., Yang, J., 2016. Reverse k-nearest neighbor search in the presence of obstacles. Information Sciences 330, 274–292.
- Goldberg, A., Recht, B., Xu, J., Nowak, R., Zhu, X., 2010. Transduction with matrix completion: Three birds with one stone. In: Advances in Neural Information Processing Systems, pp. 757–765.
- Golub, G. H., Van Loan, C. F., 2012. Matrix Computations. Vol. 3. JHU Press.
- Gong et al.  Gong, C., Fu, K., Wu, Q., Tu, E., Yang, J., 2014. Semi-supervised classification with pairwise constraints. Neurocomputing 139, 130–137.
- Han et al.  Han, E.-H. S., Karypis, G., Kumar, V., 2001. Text categorization using weight adjusted k-nearest neighbor classification. Springer.
- Hastie and Tibshirani  Hastie, T., Tibshirani, R., 1996. Discriminant adaptive nearest neighbor classification. Pattern Analysis and Machine Intelligence, IEEE Transactions on 18 (6), 607–616.
- He and Wang  He, Q. P., Wang, J., 2007. Fault detection using the k-nearest neighbor rule for semiconductor manufacturing processes. Semiconductor manufacturing, IEEE transactions on 20 (4), 345–354.
- Hechenbichler and Schliep  Hechenbichler, K., Schliep, K., 2004. Weighted k-nearest-neighbor techniques and ordinal classification.
- Ji et al.  Ji, P., Zhao, N., Hao, S., Jiang, J., 2014. Automatic image annotation by semi-supervised manifold kernel density estimation. Information Sciences 281, 648–660.
- Joachims  Joachims, T., 1999. Transductive inference for text classification using support vector machines. In: ICML. Vol. 99. pp. 200–209.
- Joachims et al.  Joachims, T., et al., 2003. Transductive learning via spectral graph partitioning. In: ICML. Vol. 3. pp. 290–297.
- Kolahdouzan and Shahabi  Kolahdouzan, M., Shahabi, C., 2004. Voronoi-based k nearest neighbor search for spatial network databases. In: Proceedings of the Thirtieth international conference on Very large data bases-Volume 30. VLDB Endowment, pp. 840–851.
- Kowalski and Bender  Kowalski, B., Bender, C., 1972. k-nearest neighbor classification rule (pattern recognition) applied to nuclear magnetic resonance spectral interpretation. Analytical Chemistry 44 (8), 1405–1411.
- Li et al.  Li, B., Yu, S., Lu, Q., 2003. An improved k-nearest neighbor algorithm for text categorization. arXiv preprint cs/0306099.
- Li et al.  Li, L., Darden, T. A., Weingberg, C., Levine, A., Pedersen, L. G., 2001. Gene assessment and sample classification for gene expression data using a genetic algorithm/k-nearest neighbor method. Combinatorial chemistry & high throughput screening 4 (8), 727–739.
- Liao and Vemuri  Liao, Y., Vemuri, V. R., 2002. Use of k-nearest neighbor classifier for intrusion detection. Computers & Security 21 (5), 439–448.
- Liu et al.  Liu, W., He, J., Chang, S.-F., 2010. Large graph construction for scalable semi-supervised learning. In: Proceedings of the 27th international conference on machine learning (ICML-10). pp. 679–686.
- Luxburg et al.  Luxburg, U. V., Radl, A., Hein, M., 2010. Getting lost in space: Large sample analysis of the resistance distance. In: Advances in Neural Information Processing Systems. pp. 2622–2630.
- Ma et al.  Ma, L., Crawford, M. M., Tian, J., 2010. Local manifold learning-based k-nearest-neighbor for hyperspectral image classification. Geoscience and Remote Sensing, IEEE Transactions on 48 (11), 4099–4109.
- Maltamo and Kangas  Maltamo, M., Kangas, A., 1998. Methods based on k-nearest neighbor regression in the prediction of basal area diameter distribution. Canadian Journal of Forest Research 28 (8), 1107–1115.
- Nene et al.  Nene, S., Nayar, S. K., et al., 1997. A simple algorithm for nearest neighbor search in high dimensions. Pattern Analysis and Machine Intelligence, IEEE Transactions on 19 (9), 989–1003.
- Nowakowska et al.  Nowakowska, E., Koronacki, J., Lipovetsky, S., 2016. Dimensionality reduction for data of unknown cluster structure. Information Sciences 330, 74–87.
- Percus and Martin  Percus, A. G., Martin, O. C., 1998. Scaling universalities of kth-nearest neighbor distances on closed manifolds. Advances in Applied Mathematics 21 (3), 424–436.
- Petraglia et al.  Petraglia, A., et al., 2015. Dimensional reduction in constrained global optimization on smooth manifolds. Information Sciences 299, 243–261.
- Roweis and Saul  Roweis, S. T., Saul, L. K., 2000. Nonlinear dimensionality reduction by locally linear embedding. Science 290 (5500), 2323–2326.
- Samanthula et al.  Samanthula, B. K., Elmehdwi, Y., Jiang, W., 2015. K-nearest neighbor classification over semantically secure encrypted relational data. Knowledge and Data Engineering, IEEE Transactions on 27 (5), 1261–1273.
- Spivak  Spivak, M., 1970. A comprehensive introduction to differential geometry, vol. 2. I (Boston, Mass., 1970).
- Tan  Tan, S., 2005. Neighbor-weighted k-nearest neighbor for unbalanced text corpus. Expert Systems with Applications 28 (4), 667–671.
- Tao et al.  Tao, D., Cheng, J., Lin, X., Yu, J., 2015. Local structure preserving discriminative projections for rgb-d sensor-based scene classification. Information Sciences 320, 383–394.
- Tenenbaum et al.  Tenenbaum, J. B., De Silva, V., Langford, J. C., 2000. A global geometric framework for nonlinear dimensionality reduction. Science 290 (5500), 2319–2323.
- Tu et al.  Tu, E., Cao, L., Yang, J., Kasabov, N., 2014. A novel graph-based k-means for nonlinear manifold clustering and representative selection. Neurocomputing 143, 109–122.
- Tu et al.  Tu, E., Kasabov, N., Yang, J., 2016. Mapping temporal variables into the neucube for improved pattern recognition, predictive modeling, and understanding of stream data.
- Tu et al.  Tu, E., Yang, J., Fang, J., Jia, Z., Kasabov, N., 2013. An experimental comparison of semi-supervised learning algorithms for multispectral image classification. Photogrammetric Engineering & Remote Sensing 79 (4), 347–357.
- Tu et al.  Tu, E., Yang, J., Kasabov, N., Zhang, Y., 2015. Posterior distribution learning (pdl): A novel supervised learning framework using unlabeled samples to improve classification performance. Neurocomputing 157, 173–186.
- Turaga and Chellappa  Turaga, P., Chellappa, R., 2010. Nearest-neighbor search algorithms on non-euclidean manifolds for computer vision applications. In: Proceedings of the Seventh Indian Conference on Computer Vision, Graphics and Image Processing. ACM, pp. 282–289.
- Vemulapalli et al.  Vemulapalli, R., Pillai, J. K., Chellappa, R., 2013. Kernel learning for extrinsic classification of manifold features. In: Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on. IEEE, pp. 1782–1789.
- Vincent and Bengio  Vincent, P., Bengio, Y., 2002. Manifold parzen windows. In: Advances in neural information processing systems. pp. 825–832.
- Wang et al.  Wang, H., Wu, J., Yuan, S., Chen, J., 2015. On characterizing scale effect of chinese mutual funds via text mining. Signal Processing.
- Wang et al.  Wang, J., Jebara, T., Chang, S.-F., 2013. Semi-supervised learning using greedy max-cut. The Journal of Machine Learning Research 14 (1), 771–800.
- Weinberger and Saul  Weinberger, K. Q., Saul, L. K., 2009. Distance metric learning for large margin nearest neighbor classification. The Journal of Machine Learning Research 10, 207–244.
- Wu et al.  Wu, X., Kumar, V., Quinlan, J. R., Ghosh, J., Yang, Q., Motoda, H., McLachlan, G. J., Ng, A., Liu, B., Philip, S. Y., et al., 2008. Top 10 algorithms in data mining. Knowledge and information systems 14 (1), 1–37.
- Xie et al.  Xie, J., Gao, H., Xie, W., Liu, X., Grant, P. W., 2016. Robust clustering by detecting density peaks and assigning points based on fuzzy weighted k-nearest neighbors. Information Sciences 354, 19–40.
- Yi et al.  Yi, S., Jiang, N., Feng, B., Wang, X., Liu, W., 2016. Online similarity learning for visual tracking. Information Sciences.
- Yin and Zaki  Yin, H., Zaki, S. M., 2015. A self-organising multi-manifold learning algorithm. In: Bioinspired Computation in Artificial Systems. Springer, pp. 389–398.
- Yu and Kim  Yu, J., Kim, S. B., 2016. Density-based geodesic distance for identifying the noisy and nonlinear clusters. Information Sciences 360, 231–243.
- Yu et al.  Yu, J., Tao, D., Li, J., Cheng, J., 2014. Semantic preserving distance metric learning and applications. Information Sciences 281, 674–686.
- Zhou et al.  Zhou, Y., Liu, B., Xia, S., Liu, B., 2015. Semi-supervised extreme learning machine with manifold and pairwise constraints regularization. Neurocomputing 149, 180–186.
- Zhu et al.  Zhu, X., Ghahramani, Z., Lafferty, J., et al., 2003. Semi-supervised learning using gaussian fields and harmonic functions. In: ICML. Vol. 3. pp. 912–919.