Clustering by the way of atomic fission

06/27/2019 ∙ by Shizhan Lu, et al.

Cluster analysis, which focuses on grouping and categorizing similar elements, is widely used in various fields of research. Inspired by the phenomenon of atomic fission, a novel density-based clustering algorithm, called fission clustering (FC), is proposed in this paper. It focuses on mining the dense families of a dataset and utilizes the information of the distance matrix to fissure the dataset into subsets. When a dataset has a few points surrounding the dense families of its clusters, a K-nearest neighbors local density indicator is applied to identify and remove the points of sparse areas, so as to obtain a dense subset constituted by the dense families of the clusters. A number of frequently used datasets were used to test the performance of this clustering approach and to compare the results with those of other algorithms. The proposed algorithm is found to outperform the other algorithms in speed and accuracy.


1 Introduction

Cluster analysis is widely used in a number of different areas, such as climate research PT , computational biology, biophysics and bioinformatics EMB ; HW , economics and finance HJD ; LG , and neuroscience GS ; AD . The basic task of clustering is to divide data into distinct groups on the basis of their similarity. Clustering methods can be categorized as density-based BD ; RA ; MR , grid-based PM , model-based CT ; MAK , partitioning MA ; LK , and hierarchical MF ; JD approaches.

Initial clustering methods tend to focus on finding the center point of every category and then assigning the other points to the nearest center. To make computers cluster data faster, some researchers, such as Schikuta SE and Ma and Chow ME , apply the grid-clustering method to divide objects part by part. Grid-clustering does not need to cluster data point by point; however, it is influenced by the size of the grid cells and cannot easily determine the number of categories.

Inspired by the phenomenon and rapid process of atomic fission, this paper proposes a fast and effective clustering algorithm, which we call fission clustering (FC). If the distances between every pair of clusters are large enough, two maximal values are applied to determine the number of categories: the maximal crack of the distance matrix and the maximal value of all the distances between objects and their nearest neighbors. Otherwise, the K-nearest neighbors method is applied to obtain a local density indicator for every object in the dataset; the objects with small indicator values are then removed, and a dense subset with large distances between every pair of clusters is obtained.

2 Related work

Clustering is a classical issue in data mining. In recent decades, a number of typical clustering algorithms have been proposed, such as DBSCAN EM , OPTICS AM (density-based); STING WW , CLIQUE AGR (grid-based); Gaussian mixture models FRC , COBWEB FDH (model-based); K-means MJ , CLARANS NRT (partitioning); and DIANA KAL , BIRCH ZHT (hierarchical).

Of the earlier methods, the most representative is K-means MJ , which divides data points into K clusters by using the Euclidean distance as the distance metric. K-means has many variants (see SR ; TG ). It is also applied as a useful tool for other methods, for instance spectral clustering ZD : spectral clustering maps the data points to a low-dimensional feature space that replaces the Euclidean space of conventional K-means clustering, and it can be reformulated as a weighted kernel K-means clustering method.

More recently, a fast algorithm that finds density peaks (DP) was proposed RA and has been widely used. It combines the advantages of both density-based and centroid-based clustering methods. Many variants of DP have since been developed, such as the gravitation-based density peaks clustering algorithm JJH , FKNN-DPC XJY and SNN-DPC LIUR . As a local density-based method, DP obtains good results in most instances. But as a centroid-based method, DP and its variants are unable to cluster points correctly when a category has more than one center.

Centroid-based methods focus on mining the centers and then assigning the other points; this kind of method clusters data point by point. However, as intelligent humans, we prefer methods that can classify data cluster by cluster or part by part.

Schikuta SE designed a grid over the data distribution area to partition data into blocks; the points in grid cells of greater density are considered to be members of the same cluster. Grid-based clustering has seen many extensions in recent years, such as a grid ranking strategy based on local density with priority-based anchor expansion DSQ , a density peaks clustering algorithm based on grid XUX and a shifting grid clustering algorithm ME . However, these grid-based methods cannot be applied to high-dimensional datasets, as the number of cells in the grid grows exponentially with the dimensionality of the data.

3 Proposed methods

In general, cluster centers are surrounded by neighbors with lower local density, and a cluster center is at a relatively large distance from other cluster centers. Based on this idea, we make the following assumption for our algorithm: every category of the dataset contains a neighbourhood composed of points with higher local density (a dense family). This assumption is satisfied by many existing simulation and real datasets.

Border points are distributed in two cases: (1) the border points of the $i$th cluster are far away from the border points of the $j$th cluster ($i \neq j$); (2) the border points of different clusters are close together. For case (2), we first apply the local density indicator for denoising and then cluster the objects.

3.1 Fission clustering algorithm

In this section, we deal with case (1) first.

To develop the algorithm further, we present the following definition that will be used throughout this article.

Definition. Let $d: S \times S \to \mathbb{R}$ be a distance (similarity) function, where $S$ is a sample set and $\mathbb{R}$ is the real number set. If there are two attained distance values $a$ and $b$ of $S$ with $a < b$ such that, for all $x_i, x_j \in S$, either $d(x_i, x_j) \le a$ or $d(x_i, x_j) \ge b$ (that is, no pairwise distance falls in the open interval $(a, b)$), then $b - a$ is called a crack of $S$.

Obviously, the maximal crack $MC(S)$ of $S$ exists for a finite dataset.

The key steps of the FC algorithm are to fissure a dataset into two subsets and to stop fissuring subsets when all the clusters are obtained. These two key steps are presented as follows.

3.1.1 Dividing datasets

Suppose $d(x_i, x_j) < d(x_i, x_k)$ if the relationship of $x_i$ and $x_j$ is closer than the relationship of $x_i$ and $x_k$. The distance (similarity) matrix of $S$ can be obtained easily and is denoted as $D$. $D^{sort}$ is obtained by sorting every row of the distance matrix $D$ in ascending order. The $k$th column of $D^{sort}$ is subtracted from the $(k+1)$th column to acquire the $k$th column of $\Delta D$, i.e., $\Delta D_{\cdot,k} = D^{sort}_{\cdot,k+1} - D^{sort}_{\cdot,k}$. Suppose $\Delta D_{i^*,k^*}$ is the maximal element of $\Delta D$; if $D_{i^*,j} \le D^{sort}_{i^*,k^*}$, then $x_j$ is placed in $S_1$; otherwise, $x_j$ is placed in $S_2$. In this way, the set $S$ is fissured into two subsets.

If there are $K$ categories of objects in $S$, the $K$ categories can be obtained step by step using the above fissuring method.
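As a concrete illustration of the dividing step, the following Python sketch performs one fission of a subset under the reading reconstructed above (row-wise sorting of the distance matrix and a split at the largest gap); the function names and the exact split rule are illustrative assumptions rather than the paper's reference implementation.

```python
import numpy as np

def fission_step(D, idx):
    """One fission of a subset into two parts: sort every row of the
    subset's distance matrix, locate the largest gap between consecutive
    sorted distances, and split the objects at that gap.

    D   : (n, n) distance matrix of the whole dataset
    idx : integer indices of the subset S_i to be fissured
    Returns two index arrays (the two new subsets)."""
    sub = D[np.ix_(idx, idx)]               # distance matrix of the subset
    D_sort = np.sort(sub, axis=1)           # sort every row in ascending order
    delta = D_sort[:, 1:] - D_sort[:, :-1]  # consecutive gaps per row (Delta D)
    r, k = np.unravel_index(np.argmax(delta), delta.shape)
    threshold = D_sort[r, k]                # objects within this radius of row r stay together
    near = sub[r, :] <= threshold
    return idx[near], idx[~near]

if __name__ == "__main__":
    # toy usage: two well-separated one-dimensional groups
    x = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.3])
    D = np.abs(x[:, None] - x[None, :])
    s1, s2 = fission_step(D, np.arange(len(x)))
    print(s1, s2)   # indices {0, 1, 2} and {3, 4, 5}
```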

3.1.2 Stop dividing datasets

Determining the number of clusters is a fundamental and difficult task of clustering analysis. In early clustering research, the number of categories was assumed to be known. A clustering approach with few input parameters is desirable as we face increasing numbers of poor-information datasets (scant or incomplete data). Many studies have addressed this difficult issue in recent decades. In this paper, the characteristics of the distance matrix are investigated, and the useful information in the matrix is applied to determine the number of categories.

We use the following notation as illustration: let $\lambda_i = d(x_i, \mathrm{nn}(x_i))$ and $\lambda = \max_{x_i \in S} \lambda_i$, where $\mathrm{nn}(x_i)$ is the nearest neighbor of $x_i$. Suppose there is a path such that $x_i$ and $x_j$ are connected for all $x_i, x_j \in S$, and the distance of every pair of connection points on the path is less than or equal to $\lambda$. This path is denoted as a $\lambda$-path. The following theorem gives an effective indicator to determine the number of categories.

Theorem. If the distance function $d$ satisfies the triangle inequality and $S$ has a $\lambda$-path, then $MC(S) \le \lambda$, where $MC(S)$ is the maximal crack of $S$.

Proof. Suppose, to the contrary, that there is a crack $b - a > \lambda$, i.e., no pairwise distance of $S$ falls in the open interval $(a, b)$, where $a$ and $b$ are attained distances; suppose $d(x_1, x_m) = b$. Since $S$ has a $\lambda$-path, $x_1$ and $x_m$ are connected along the path by points $x_1, x_2, \dots, x_m$, and the distance between every two adjacent connection points is less than or equal to $\lambda$.

Consider the distances $d(x_1, x_k)$ along this path. We have $d(x_1, x_1) = 0 \le a$ and $d(x_1, x_m) = b$, so in the finite set $S$ there must be a first point $x_k$ on the path such that $d(x_1, x_k) \ge b$; hence $d(x_1, x_{k-1}) \le a$, since no distance lies in $(a, b)$. By the triangle inequality, $d(x_1, x_k) \le d(x_1, x_{k-1}) + d(x_{k-1}, x_k) \le a + \lambda < a + (b - a) = b$.

However, $d(x_1, x_k) \ge b$, which is a contradiction. Hence, all the cracks of $S$ must be less than or equal to $\lambda$.

If the distance between every pair of clusters is much greater than $\lambda$, and every cluster has a $\lambda$-path, the inequation $MC(S_i) \le \lambda_i$ can be considered as the condition under which to stop fissuring a subset $S_i$, where $\lambda_i$ is the maximal nearest-neighbor distance within $S_i$. If all the subsets fissured from $S$ satisfy this inequation, the process of fissuring subsets is stopped, and the number of clusters is determined at the same time.

Numerous common distance functions satisfy the triangle inequality, such as the Manhattan, Euclidean, and Minkowski distances. If the densities of the clusters in the same dataset are not extremely different, the inequation $MC(S_i) \le \lambda_i$ is an effective stopping condition.
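For illustration, the sketch below computes $\lambda$ (the maximal nearest-neighbor distance) and the maximal crack of a subset from its distance matrix, and applies the stopping test $MC(S_i) \le \lambda_i$. The crack computation follows the reconstruction of the definition given above, so the helper names and details are assumptions, not the paper's code.

```python
import numpy as np

def nn_radius(D):
    """lambda: the largest distance between any object and its nearest neighbour."""
    masked = D + np.diag(np.full(D.shape[0], np.inf))   # ignore the zero diagonal
    return masked.min(axis=1).max()

def maximal_crack(D):
    """MC(S): the largest gap between consecutive values among all pairwise distances."""
    vals = np.sort(D[np.triu_indices_from(D, k=1)])
    return float(np.diff(vals).max()) if vals.size > 1 else 0.0

def keep_whole(D):
    """Stopping test of Section 3.1.2: do not fissure a subset whose
    maximal crack is not larger than its lambda value."""
    return maximal_crack(D) <= nn_radius(D)

if __name__ == "__main__":
    x = np.array([0.0, 0.1, 0.2, 5.0, 5.1])
    D = np.abs(x[:, None] - x[None, :])
    print(nn_radius(D), maximal_crack(D), keep_whole(D))  # ~0.1, ~4.6, False -> keep fissuring
```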

The details of the FC algorithm are shown as follows, where $D^{sort}_{\cdot,k}$ denotes the $k$th column of $D^{sort}$.

 

Algorithm 1: FC algorithm.

 

Input: Distance matrix $D$ of the dataset $S$.

Output: Clusters of $S$.

1. Compute $MC(S)$ and $\lambda$ of $S$.
2. $C = \{S\}$ (initial value).
3. While there is a subset $S_i \in C$ such that $MC(S_i) > \lambda_i$ do
4.   repeat
5.     Pick a subset $S_i$ with $MC(S_i) > \lambda_i$.
6.     Sort every row of the distance matrix of $S_i$ to obtain $D^{sort}$.
7.     $\Delta D_{\cdot,k} = D^{sort}_{\cdot,k+1} - D^{sort}_{\cdot,k}$, $k = 1, \dots, |S_i| - 1$.
8.     Find the maximal element $\Delta D_{i^*,k^*}$ of $\Delta D$.
9.     $\theta = D^{sort}_{i^*,k^*}$.
10.    If $D_{i^*,j} \le \theta$ then $x_j \in S_{i,1}$; otherwise, $x_j \in S_{i,2}$; replace $S_i$ in $C$ by $S_{i,1}$ and $S_{i,2}$.
11.  until every subset in $C$ satisfies $MC(S_i) \le \lambda_i$.
12. end while

 

3.2 The fission clustering algorithm with k-nearest neighbors local density indicator (FC-KNN)

The main idea of this section is to obtain, in case (2), a dense subset $A$ such that the distances between every pair of clusters in $A$ are large enough and the distances between every pair of nearest neighbors are small enough, and then to apply Algorithm 1 to split the subset $A$.

3.2.1 Obtaining the local density indicator

The aim of this subsection is to obtain a local density indicator $\rho_i$ for every object $x_i$, and then distinguish the objects of dense areas from the objects of sparse areas.

A relatively straightforward method is utilized to obtain the local density indicator $\rho_i$, shown in the following equation:

$$\rho_i = \Bigl( \sum_{x_j \in \mathrm{KNN}(x_i)} d(x_i, x_j) \Bigr)^{-1}, \qquad (1)$$

where $\mathrm{KNN}(x_i)$ is the k-nearest neighbor set of $x_i$.

Compared with objects in sparse areas, objects in dense areas have a spherical neighborhood with a smaller radius that contains the same number of neighbors. Hence, an object in a dense area obtains a larger local density indicator from equation (1). A sample is considered to belong to the dense subset if it has a larger $\rho_i$.
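The short sketch below computes the indicator for every object from a distance matrix, assuming the reciprocal-of-summed-kNN-distance form used in the reconstruction of equation (1); the exact formula of the original paper may differ.

```python
import numpy as np

def knn_density(D, k):
    """Local density indicator of equation (1) as reconstructed above:
    the reciprocal of the summed distances to the k nearest neighbours,
    so objects in dense areas obtain larger values."""
    masked = D + np.diag(np.full(D.shape[0], np.inf))   # exclude each object itself
    knn_d = np.sort(masked, axis=1)[:, :k]              # distances to the k nearest neighbours
    return 1.0 / knn_d.sum(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = np.vstack([rng.normal(0.0, 0.1, (20, 2)),     # a dense blob
                     rng.normal(3.0, 1.0, (20, 2))])    # a sparse blob
    D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    rho = knn_density(D, k=5)
    print(rho[:20].mean() > rho[20:].mean())             # True: the dense blob scores higher
```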

3.2.2 The processes of FC-KNN algorithm

The main steps of the FC-KNN algorithm are shown as follows.

Step 1. Use Algorithm 2 to obtain a dense subset $A$.
Step 2. Cluster the subset $A$ by using Algorithm 1.
Step 3. Assign the objects of $S \setminus A$ to their nearest clusters.

A simple denoising method is shown as Algorithm 2.

 

Algorithm 2: Denoising.

 

Input: Distance matrix $D$, parameters $k$ and $r$.

Output: The dense subset $A$.

Initialize: $r = 0.4$ (in general, $r$ and $k$ can be chosen from wide intervals; see Section 4.2).

1. Apply equation (1) to obtain $\rho_i$ for every object.
2. Remove the objects of $S$ that have smaller $\rho$ (a fraction $r$ of the objects); retain the other objects in $A$.
3. Compute $MC(A)$ and $\lambda_A$ of the subset $A$.
4. While $MC(A) \le \lambda_A$ do
5.   repeat
6.     Increase $r$.
7.     Remove the objects of the entire dataset that have smaller $\rho$; retain the other points in $A$.
8.     Update $MC(A)$ and $\lambda_A$ of the new subset $A$.
9.   until $MC(A) > \lambda_A$ or $r$ reaches its upper bound.
10. end while.

 

When the fission of the dense subset $A$ is complete after Step 2 of the FC-KNN process, the remaining objects in the set $S \setminus A$ need to be assigned to their right categories. A simple method is applied to assign the objects of $S \setminus A$: let $B$ be the subset that contains the already classified points and $U$ be the subset of unclassified points. If $x_b \in B$ is the nearest classified point to $x_u \in U$, i.e., $d(x_u, x_b) = \min_{y \in B} d(x_u, y)$, then $x_u$ is assigned to the category that contains $x_b$.
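A sketch of this assignment step is shown below, under the straightforward reading that each unclassified object takes the label of its nearest already-classified object; the function signature is illustrative.

```python
import numpy as np

def assign_remaining(D, labels, dense_idx, sparse_idx):
    """Step 3 of FC-KNN: every unclassified object takes the label of its
    nearest already-classified object.

    D          : (n, n) distance matrix of the whole dataset
    labels     : cluster labels of the dense subset A, aligned with dense_idx
    dense_idx  : indices of the classified points (subset B)
    sparse_idx : indices of the unclassified points (subset U)"""
    dense_idx = np.asarray(dense_idx)
    full = np.full(D.shape[0], -1, dtype=int)
    full[dense_idx] = labels
    for u in sparse_idx:
        nearest = dense_idx[np.argmin(D[u, dense_idx])]   # closest classified point
        full[u] = full[nearest]
    return full
```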

Figure 1: Results of the tuning course with the parameter $r$.

As shown in FIGURE 1, $r$ can be considered as a tuning parameter: Algorithm 2 increases the value of $r$ to remove more border points (points of sparse areas). With a small $r$, two dense families may remain connected by a few retained border points and are therefore counted as one family. When $r$ is increased, those connecting points are treated as sparse-area points and removed, so the two families separate. When $r$ is increased further, a family that is itself sparse is removed entirely, and the number of dense families decreases again.
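A minimal sketch of a single denoising pass is given below: for a fixed $r$ it retains the densest $(1-r)$ fraction of the objects as the dense subset $A$, leaving the adaptive increase of $r$ described in Algorithm 2 to the caller. The retention rule and names are assumptions for illustration.

```python
import numpy as np

def denoise(D, k, r=0.4):
    """Single denoising pass: keep the densest (1 - r) fraction of the
    objects as the dense subset A and return the removed (sparse) objects."""
    n = D.shape[0]
    masked = D + np.diag(np.full(n, np.inf))
    rho = 1.0 / np.sort(masked, axis=1)[:, :k].sum(axis=1)   # equation (1), reconstructed
    keep = int(np.ceil((1.0 - r) * n))
    dense = np.sort(np.argsort(-rho)[:keep])                 # indices of the dense subset A
    sparse = np.setdiff1d(np.arange(n), dense)               # removed border points
    return dense, sparse

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pts = np.vstack([rng.normal(0.0, 0.1, (30, 2)), rng.normal(5.0, 0.1, (30, 2)),
                     rng.uniform(-1.0, 6.0, (10, 2))])       # two dense families plus scattered noise
    D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    dense, sparse = denoise(D, k=5, r=0.2)
    print(len(dense), len(sparse))                            # 56 and 14 objects
```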

Equation (1) takes $O(n^2)$ operations given the distance matrix. Algorithm 1 splits the set $S$ (or the dense subset $A$) into smaller and smaller subsets; since each subset is smaller than the set it was fissured from, the data processing becomes faster and faster as the dividing course proceeds. The dividing course only needs to be implemented $K-1$ times to obtain $K$ clusters.

4 Experiments

In this section, we evaluate the performance of the proposed method on both simulation data and real data, and then compare it with some state-of-the-art methods that do not need the number of clusters as input. All the experiments are implemented with the same software and hardware: MATLAB R2014a on the Windows 7 operating system with an Intel Core i5-3230M 2.6 GHz CPU and 12 GB of memory.

The Euclidean distance function was applied to obtain the distance matrix in all experiments. We selected the following methods for comparison with the proposed method: the affinity propagation algorithm (AP) FBJ , the clustering by fast search and find of density peaks algorithm (DP) RA , the NK hybrid genetic algorithm (NKGA) TR and the grid-clustered rough set (GCRS) model SML .

4.1 Descriptions of Experiment data

4.1.1 Simulation data

Figure 2: The clustering results for different methods used on the Imbalance dataset. ((a), (b) and (d) were clustered by FC, AP and NKGA, respectively. DP and GCRS obtained the same result as in (c)).
Figure 3: Synthesis

First, some frequently used datasets obtained from different references are applied to test the algorithms, such as R15 VCJ , A1 IK , S1 FP , Dim2 FP1 and Dimond SS . Then two further datasets, Imbalance (FIGURE 2) and Synthesis (FIGURE 3), are constructed for supplementary tests. All the simulation data are points of two-dimensional Euclidean space.

Dataset (objects, clusters) | Number of clusters: AP / DP / NKGA / GCRS / FC-KNN | Accuracy: AP / DP / NKGA / GCRS / FC-KNN
D31 (3100, 31) | 8 / 31 / 19 / 31 / 31 | 0.2045 / 0.3474 / 0.3442 / 0.9335 / 0.9677
Flame (240, 2) | 3 / 2 / 2 / 2 / 2 | 0.7167 / 0.3000 / 0.6583 / 0.8625 / 0.9958
R15 (600, 15) | 5 / 15 / 15 / 15 / 15 | 0.1867 / 0.9800 / 0.8983 / 0.9800 / 0.9933
Dimond (2999, 9) | 15 / 9 / 5 / 9 / 9 | 0.3178 / 0.6889 / 0.5552 / 1.0000 / 1.0000
Imbalance (101, 2) | 4 / 1 / 6 / 1 / 2 | 0.6931 / 0.5545 / 0.5941 / 0.5545 / 1.0000
Synthesis (2461, 4) | 16 / 10 / 21 / 3 / 4 | 0.2751 / 0.2670 / 0.3946 / 0.5095 / 1.0000
Table 1: The description of the data and the comparison of the results of the different methods
Dataset | Instances | Features | Clusters | Detail
Iris | 150 | 4 | 3 | 50 Iris Setosa, 50 Iris Versicolour and 50 Iris Virginica
Seeds | 210 | 7 | 3 | 70 seeds each from the Kama, Rosa and Canadian varieties
Vertebral | 310 | 6 | 2 | 210 abnormal and 100 normal
Wifi | 2000 | 7 | 4 | 2000 signal records from 4 rooms, 500 records per room
Adenoma | 6 | 12488 | 2 | 6 samples: 3 ADE and 3 N1
Myeloid | 21 | 22283 | 3 | 21 samples: 12 acute promyelocytic leukemia (APL), 3 APL polyploidy and 6 acute myeloid leukemia
Table 2: The simple description of the real data

4.1.2 Real data

Several real-world datasets are applied to test the performance of the proposed method: two datasets for plant shape recognition, Iris (http://archive.ics.uci.edu/ml/datasets.php) FRA ; HF and Seeds CM ; a wireless signal dataset, Wifi RJG ; a human vertebral column dataset, Vertebral BEE ; and two gene-expression datasets, Adenoma (http://portals.broadinstitute.org/cgi-bin/cancer/datasets.cgi) SCA and Myeloid STK . The two gene-expression datasets were taken from the Cancer Program Datasets and the others were taken from the UCI repository. Simple descriptions of these real datasets are provided in Table 2.

4.2 Comparisons and Discussions

Dataset | Number of clusters: AP / DP / NKGA / GCRS / FC-KNN | Accuracy: AP / DP / NKGA / GCRS / FC-KNN | F-Score: AP / DP / NKGA / GCRS / FC-KNN
Iris | 2 / 2 / 11 / 3 / 3 | 0.5333 / 0.6667 / 0.4533 / 0.9467 / 0.9067 | 0.4329 / 0.5714 / 0.5883 / 0.9503 / 0.9168
Seeds | 2 / 3 / 2 / 3 / 3 | 0.6048 / 0.6857 / 0.3286 / 0.8524 / 0.8857 | 0.5102 / 0.8007 / 0.1649 / 0.8575 / 0.8900
Vertebral | 1 / 1 / 2 / 2 / 2 | 0.6774 / 0.6774 / 0.6645 / 0.5258 / 0.7710 | 0.4038 / 0.4038 / 0.3992 / 0.6752 / 0.7976
Wifi | 5 / 4 / 1 / 3 / 4 | 0.1405 / 0.1025 / 0.2500 / 0.7450 / 0.9355 | 0.1671 / 0.1859 / 0.1000 / 0.6777 / 0.9402
Adenoma | 1 / 2 / 2 / 4 / 2 | 0.5000 / 1.0000 / 0.6667 / 0.6667 / 1.0000 | 0.3333 / 1.0000 / 0.7273 / 0.8000 / 1.0000
Myeloid | 1 / 2 / 3 / 1 / 2 | 0.5714 / 0.6190 / 0.7143 / 0.5714 / 0.7143 | 0.2424 / 0.4896 / 0.7430 / 0.2424 / 0.6061
Table 3: The comparison of the results of the different methods

The clustering results for the simulation data and real data are shown in Table 1 and Table 3, respectively.

In FIGUREs 2 and 3, no single point can be considered the geometrical centroid of the annulus in the Synthesis dataset, and the densities of the two clusters in the Imbalance dataset differ greatly. It is therefore difficult for the AP, DP, NKGA and GCRS algorithms to determine the number of categories. DP and GCRS cannot find the second center point of the Imbalance dataset, so these two algorithms treat it as a one-cluster dataset. The proposed method aims at mining the dense family of every category rather than the center points, so it can correctly determine the number of clusters for the Imbalance and Synthesis datasets.

To evaluate and compare the performance of the clustering methods, we apply the evaluation metrics Accuracy and F-score DUL in our experiments for a comprehensive evaluation. For both measures, the higher the value, the better the clustering performance. Compared with the best results of the other algorithms, our method has relative advantages of 0.19 and 0.2625 (TABLE 3) with respect to Accuracy and F-Score on the Wifi dataset, respectively.
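For reference, one common realisation of the clustering Accuracy measure is sketched below, in which every predicted cluster is mapped to the true class that occurs most often inside it; the exact definition used in this paper follows DUL and may differ in detail.

```python
import numpy as np

def clustering_accuracy(y_true, y_pred):
    """Map every predicted cluster to the true class that occurs most often
    inside it, then count the correctly matched points.  Labels are assumed
    to be non-negative integers."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    correct = 0
    for c in np.unique(y_pred):
        members = y_true[y_pred == c]
        correct += np.bincount(members).max()   # size of the majority class in cluster c
    return correct / len(y_true)

if __name__ == "__main__":
    y_true = np.array([0, 0, 0, 1, 1, 2])
    y_pred = np.array([2, 2, 2, 0, 0, 1])       # the same partition, different label names
    print(clustering_accuracy(y_true, y_pred))  # 1.0
```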

In the description of the algorithms, the distance matrix is a significant input. The distance matrix depends on the correct selection of attributes, correct values of the selected attributes and a good distance (recognition) function. A recognition function $d_1$ is better (stronger) than $d_2$ if it separates $x_i$ and $x_j$ more clearly than $d_2$ does for all $x_i \in C_1$ and $x_j \in C_2$, where $C_1$ and $C_2$ are two clusters of $S$.

The simulation data are Euclidean space points, for which the Euclidean function is a strong recognition function, so the parameter can be set to a large value. The Euclidean function is a weak recognition function for some real datasets; for the Vertebral dataset, for example, the AP algorithm classifies all the objects into one cluster. FC-KNN can still determine the right clusters after tuning the parameter to a smaller value when the recognition function is weak.

The Adenoma dataset is a great challenge for clustering analysis, with an especially small sample size and extremely high sample dimensionality. It is very difficult to determine the center point of every cluster, but it is easy to distinguish between the points using the local density indicator; hence, FC-KNN can obtain the correct clusters.

The proposed method is robust: it obtains the same clustering result when the values of the parameters $k$ and $r$ are selected from wide intervals. When we face poor-information datasets, methods that need the number of clusters as an input are infeasible, whereas our method still works. The parameters of our algorithm are easy to set: $k$ is established from the number of samples, and $r$ can be tuned to obtain the results needed. The stronger the distance (recognition) function, the easier the selection of the parameters. All the simulation datasets obtained from different references were clustered with the same small set of values of $k$ and $r$.

In a word, our method obtains better results than the other methods with respect to the estimation of the cluster number, Accuracy, and F-Score.

5 Conclusion

The data clustering courses of many current methods are similar to the course of atomic fusion; in this paper, we have proposed a method for data clustering based on the pattern of atomic fission. Different from existing clustering methods, which focus on seeking one center point for every category, the proposed algorithm acquires the dense families of the categories. The idea of our method is to apply spherical neighborhoods instead of grid cells to cover the distribution space of the objects, and the method needs to determine only the spherical neighborhoods of the objects, no matter how many categories the dataset has. Hence, it is not influenced by the data dimension, unlike grid-based clustering. Experimental results on simulation data and real data reveal the effectiveness of the proposed method. In future research, we aim to extend the proposed algorithm in order to cluster more kinds of datasets that fall outside the scope of the preceding assumptions.

References

  • (1) T. Parsons, “Persistent earthquake clusters and gaps from slip on irregular faults,” Nat. Geosci., vol. 1, pp. 59-63, Dec 2007.
  • (2) M. B. Eisen et al, “Cluster analysis and display of genome-wide expression patterns,” Proc. Natl. Acad. Sci. U.S.A., vol. 95, pp. 14863-14868, Dec 1998.
  • (3) W. Huang et al, “Time-variant clustering model for understanding cell fate decisions,” Proc. Natl. Acad. Sci. U.S.A., vol. 111, no. 44, pp. 4797-4806, Oct 2014.
  • (4) J. D. Hamilton, “A new approach to the economic analysis of nonstationary time series and the business cycle,” Econometrica, vol. 57, no. 2, pp. 357-384, Mar 1989.
  • (5) G. Leibon, S. Pauls, D. Rockmore, R. Savell, “Topological structures in the equities market network,” Proc. Natl. Acad. Sci. U.S.A., vol. 105, no. 52, pp. 20589-20594, Dec 2008.
  • (6) S. Galbraith, J. A. Daniel, B. Vissel, “A study of clustered data and approaches to its analysis,” J. Neurosci., vol. 30, no. 32, pp. 10601-10608, Aug 2010.
  • (7) D. Allen, G. Goldstein, “Cluster Analysis in Neuropsychological Research Recent Applications,” (Springer, New York, 2013).
  • (8) D. Birant, S. T. Alp Kut, “ST-DBSCAN: an algorithm for clustering spatial-temporal data,” Data Knowl. Eng., vol. 60, no. 1, pp. 208-221, Jan 2007.
  • (9) A. Rodriguez, A. Laio, “Clustering by fast search and find of density peaks,” Science, vol. 344, no. 6191, pp. 1492-1496, Jun 2014.
  • (10) R. Mehmood et al, “Clustering by fast search and find of density peaks via heat diffusion,” Neurocomputing, vol. 208, pp. 210-217, Oct 2016.
  • (11) M. Parikh, T. Varma, “Survey on different grid based clustering algorithms,” Int. J. Adv. Res. Comput. Sci. Manag. Stud., vol. 2, no. 2, pp. 427-430, Feb 2014.
  • (12) T. Chen, N. L. Zhang, T. Liu, K. M. Poon, Y. Wang, “Model-based multidimensional clustering of categorical data,” Artif. Intell., vol. 176, no. 1, pp. 2246-2269, Jan 2012.
  • (13) A. K. Mann, N. Kaur, “Survey paper on clustering techniques,” Int. J. Sci. Eng. Technol. Res. (IJSETR), vol. 2, no. 4, pp. 803-806, Apr 2013.
  • (14) A. Mukhopadhyay et al, “Survey of multiobjective evolutionary algorithms for data mining: part II,” IEEE Trans. Evolut. Comput., vol. 18, no. 1, pp. 20-35, Feb 2014.
  • (15) K. Lahari, M. R. Murty, S. C. Satapathy, “Partition based clustering using genetic algorithm and teaching learning based optimization: performance analysis,” Adv. Intell. Syst. Comput., vol. 338, pp. 191-200, Mar 2015.
  • (16) F. Murtagh, P. Contreras, “Algorithms for hierarchical clustering: an overview,” Wiley Interdiscip. Rev. Data Min. Knowl. Discov., vol. 2, no. 1, pp. 86-97, Jan 2012.
  • (17) D. Jaeger, J. Barth, A. Niehues, C. Fufezan, “pyGCluster, a novel hierarchical clustering approach,” Bioinformatics, vol. 30, no. 6, pp. 896-898, Mar 2014.
  • (18) E. Schikuta, “Grid-clustering: a fast hierarchical clustering method for very large data sets,” in Proceedings 13th International Conference on Pattern Recognition, pp. 101-105, Aug 1996.
  • (19) E. W. M. Ma, T. W.S. Chow, “A new shifting grid clustering algorithm,” Pattern Recognition, vol. 37, no. 3, pp. 503-514, Mar 2004.
  • (20) M. Ester, H.P. Kriegel, J. Sander, and X. Xu, “A density-based algorithm for discovering clusters in large spatial databases with noise,” Data Mining Knowl. Discovery, vol. 96, no. 34, pp. 226-231, Aug 1996.
  • (21) M. Ankerst, M. M. Breunig, H.-P. Kriegel, and J. Sander, “OPTICS: Ordering points to identify the clustering structure,” in Proc. ACM SIGMOD Int. Conf. Manage. Data (SIGMOD), Philadelphia, PA, USA, pp. 49-60, May/Jun. 1999.
  • (22) W. Wang, J. Yang, and R. R. Muntz, “STING: A statistical information grid approach to spatial data mining,” in Proc. 23rd Int. Conf. Very Large Data Bases (VLDB), Athens, Greece, pp. 186-195, Aug 1997.
  • (23) R. Agrawal, J. E. Gehrke, D. Gunopulos, and P. Raghavan, “Automatic subspace clustering of high dimensional data for data mining applications,” in Proc. ACM SIGMOD Int. Conf. Manage. Data (SIGMOD), Seattle, WA, USA, pp. 94-105, Jun 1998.
  • (24) C. Fraley and A. E. Raftery, “Model-based clustering, discriminant analysis, and density estimation,” J. Amer. Statist. Assoc., vol. 97, no. 458, pp. 611-631, Jun 2002.
  • (25) D. H. Fisher, “Improving inference through conceptual clustering,” in Proc. 6th Nat. Conf. Artif. Intell. (AAAI), Seattle, WA, USA, pp. 461-465, Jul 1987.
  • (26) J. MacQueen, “Some Methods for Classification and Analysis of Multivariate Observations,” in Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, L. M. Le Cam, J. Neyman, Eds. (Univ. California Press, Berkeley, CA), vol. 1, pp. 281-297, Jan 1967.
  • (27) R. T. Ng and J. Han, “CLARANS: A method for clustering objects for spatial data mining,” IEEE Trans. Knowl. Data Eng., vol. 14, no. 5, pp. 1003-1016, Sep 2002.
  • (28) L. Kaufman and P. J. Rousseeuw, “Finding Groups in Data: An Introduction to Cluster Analysis,” Hoboken, NJ, USA: Wiley, Mar 1990.
  • (29) T. Zhang, R. Ramakrishnan, and M. Livny, “BIRCH: An efficient data clustering method for very large databases,” in Proc. ACM SIGMOD Int. Conf. Manage. Data (SIGMOD), Montreal, QC, Canada, vol. 25, no. 2, pp. 103-114, Jun 1996.
  • (30) R. Scitovski and K. Sabo, “Analysis of the k-means algorithm in the case of data points occurring on the border of two or more clusters,” Knowl. Based Syst., vol. 57, pp. 1-7, Feb 2014.
  • (31) G. Tzortzis and A. Likas, “The MinMax k-means clustering algorithm,” Pattern Recognit., vol. 47, pp. 2505-2516, Jul 2014.
  • (32) D. Zhou and C. J. C. Burges, “Spectral clustering and transductive learning with multiple views,” in Proc. 24th Int. Conf. Mach. Learn., pp. 1159-1166, Jun 2007.
  • (33) J. Jiang et al, “GDPC: Gravitation-based Density Peaks Clustering algorithm,” Physica A, vol. 502, pp. 345-355, Feb 2018.
  • (34) J. Xie et al, “Robust clustering by detecting density peaks and assigning points based on fuzzy weighted K-nearest neighbors,” Information Sciences, vol. 354, pp. 19-40, Aug 2016.
  • (35) R. Liu, H. Wang and X. Yu, “Shared-nearest-neighbor-based clustering by fast search and find of density peaks,” Information Sciences, vol. 450, pp. 200-226, Mar 2018.
  • (36) S. Dong et al, “Clustering based on grid and local density with priority-based expansion for multi-density data,” Information Sciences, vol. 468, pp. 103-116, Aug 2018.
  • (37) X. Xu, “DPCG: an efficient density peaks clustering algorithm based on grid,” Int. J. Mach. Learn. & Cyber., vol. 9, pp. 743-754, Sep 2018.
  • (38) C. J. Veenman, M. J. T. Reinders, E. Backer, “A maximum variance cluster algorithm,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 24, no. 9, pp. 1273-1280, Sep 2002.
  • (39) L. Fu, E. F. Medico, “A novel fuzzy clustering method for the analysis of DNA microarray data,” BMC Bioinform, vol. 8, no. 3, pp. 1-15, Jan 2007.
  • (40) A. Gionis, H. Mannila, P. Tsaparas, “Clustering aggregation,” ACM Transactions on Knowledge Discovery from Data (TKDD), vol. 1, no. 1, pp. 1-30, Mar 2007.
  • (41) B. J. Frey, D. Dueck, “Clustering by passing messages between data points,” Science, vol. 315, no. 5814, pp. 972-976, Feb 2007.
  • (42) R. Tinós, L. Zhao, F. Chicano, D. Whitley, “NK Hybrid Genetic Algorithm for Clustering,” IEEE Transactions on Evolutionary Computation, vol. 22, no. 5, pp. 748-761, Apr 2018.
  • (43) M. Suo et al, “Grid-clustered rough set model for self-learning and fast reduction,” Pattern Recognition Letters, vol. 106, pp. 61-68, Feb 2018.
  • (44) K. Ismo, P. Franti, “Dynamic local search for clustering with unknown number of clusters,” in: Proceedings of International Conference on Pattern Recognition, vol. 2, no. 16, pp. 240-243, Aug 2002.
  • (45) P. Franti, O. Virmajoki, “Iterative shrinking method for clustering problems,” Pattern Recognit., vol. 39, no. 5, pp. 761-775, May 2006.
  • (46) P. Franti, O. Virmajoki, V. Hautamaki, “Fast agglomerative clustering using a k-nearest neighbor graph,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 28, no. 11, pp. 1875-1881, Nov 2006.
  • (47) S. Salvador, P. Chan, “Determining the number of clusters/segments in hierarchical clustering/segmentation algorithms,” in: Proceedings of International Conference on Tools with Artificial Intelligence, ICTAI, pp. 576-584, Nov 2004.
  • (48) R. A. Fisher, “The use of multiple measurements in taxonomic problems,” Annual Eugenics, vol. 7, no. 2, pp. 179-188, Sep 1936.
  • (49) F. Huang, X. Li, S. Zhang, J. Zhang, “Harmonious Genetic Clustering,” IEEE TRANSACTIONS ON CYBERNETICS, vol. 48, no. 1, pp. 199-214, Jan 2018.
  • (50) M. Charytanowicz, J. Niewczas, P. Kulczycki, P. A. Kowalski, S. Lukasik, S. Zak, “A Complete Gradient Clustering Algorithm for Features Analysis of X-ray Images,” in: Information Technologies in Biomedicine, Ewa Pietka, Jacek Kawa (eds.), Springer-Verlag, Berlin-Heidelberg, pp. 15-24, Jan 2010.
  • (51) J. G. Rohra et al, “User Localization in an Indoor Environment Using Fuzzy Hybrid of Particle Swarm Optimization & Gravitational Search Algorithm with Neural Networks,” In Proceedings of Sixth International Conference on Soft Computing for Problem Solving, pp. 286-295, Feb 2017.
  • (52) E. Berthonnaud et al, “Analysis of the sagittal balance of the spine and pelvis using shape and orientation parameters,” Journal of Spinal Disorders & Techniques, vol. 18, no. 1, pp. 40-47, Feb 2005.
  • (53) A. Sweet-Cordero et al, “An oncogenic expression signature identified by cross-species gene-expression analysis,” Nature Genetics, vol. 37, no. 1, pp. 48-55, Dec 2004.
  • (54) K. Stegmaier et al, “Gene expression-based high-throughput screening (GE-HTS) and application to leukemia differentiation,” Nature Genetics, vol. 36, no. 3, pp. 257-263, Feb 2004.
  • (55) L. Du, Y. Pan, X. Luo, “Robust spectral clustering via matrix aggregation,” IEEE Access, Vol. 6, pp. 53661-53670, Sep 2018.