An ensemble technique is characterized by the mechanism that generates the components and by the mechanism that combines them. A common way to achieve the consensus is to let each component participate equally in the process. A problem with this approach is that poor components are likely to negatively affect the quality of the consensus result. To address this issue, alternatives have been explored in the literature to build selective classifier and cluster ensembles [1, 2], where only a subset of the components contributes to the computation of the consensus. Typically, in classifier ensembles the selection is driven by the trade-off between accuracy and diversity. Boosting, perhaps the most well-known example, achieves the consensus by weighting the components based on their accuracy. In an unsupervised scenario, such as clustering and outlier detection, defining the selection mechanism is more challenging due to the lack of ground truth. Quality and diversity have been used as measures to drive the selection of components for cluster ensembles [2].
Of the family of ensemble methods, outlier ensembles are the least studied [3, 4, 5, 6, 7, 8, 9, 10, 11]. In particular, only recently has the selection problem for outlier ensembles been discussed [4, 5], and its potential positive effect on event detection has been shown [5]. In this work, we further explore the selection issue for outlier ensembles, and define a new graph-based class of ranking selection methods, of which we detail specific instances.
To better understand the nature of the problem we want to tackle, let's consider Figure 1. Plots (a)-(f) show six ranking components generated from the WDBC data using the LOF algorithm under different conditions (see Section 5 for details). Each row corresponds to a ranking. The horizontal axis captures the data points (in a fixed order across all six rows), and the vertical axis measures the LOF scores assigned to each point. The 10 leftmost points are the actual outliers, and the red vertical bars highlight the top-10 LOF score values in the respective rankings. The four rankings (a)-(d) identify many of the outliers among the top-10 ranked points, while the rankings (e) and (f) have at most one outlier among the top-10 ranked points. Figure 1(g) shows the area under the PR curve (AUCPR) for an ensemble of 20 rankings, of which six are the ones illustrated. Rankings (a)-(d) correspond to the most accurate ones, and (e)-(f) are the two least accurate. As also observed in [5], the best rankings tend to agree on the high-scored points, but the actual scores change. As a consequence, their aggregation may produce an improved ranking. On the other hand, rankings (e) and (f) largely rank non-outliers as the highest, and including them in the aggregation process may affect the consensus ranking negatively. We aim at identifying such poor rankings and removing them from further consideration. Our technique, called Core (described in Section 4), was able to select the five top rankings from the ensemble of 20 components given in Figure 1. The consensus ranking achieved by averaging the selected five components gives an AUCPR of 0.8, while the consensus ranking achieved by averaging all 20 components gives an AUCPR of 0.2. This result is indicative of the great potential of our graph-based approach to selective outlier ensembles.
The paper is organized as follows. Section 2 discusses related work. We introduce our framework and methodology in Sections 3 and 4, and in Section 5 we present our experiments, results, and analysis. Section 6 provides a complexity analysis, and Section 7 concludes the paper with thoughts for future work.
2 Related Work
Ensemble methods have been exploited in the literature to boost the performance of classifiers, e.g. [12, 13, 14, 15, 16, 17], and more recently clustering ensembles have emerged as a technique for overcoming problems with clustering algorithms, e.g. [18, 19, 20]. It is well known that off-the-shelf clustering methods may discover different patterns in a given set of data. This is because each clustering algorithm has its own bias resulting from the optimization of different criteria. Furthermore, there is no ground truth against which the clustering result can be validated. Thus, no cross-validation technique can be carried out to tune the input parameters involved in the clustering process. Clustering ensembles offer a solution to challenges inherent to clustering arising from its ill-posed nature: they can provide more robust and stable solutions by making use of the consensus across multiple clustering results, while averaging out spurious structures that emerge due to the various biases to which each participating algorithm is tuned, or to the variance induced by different data samples.
Anomaly (or outlier) detection is another unsupervised problem that suffers from many of the same challenges as clustering. As such, many different anomaly detection techniques (e.g., density-based and distance-based, global vs. local), and multiple variations of each, have been studied [21, 22, 23, 24, 25, 26, 27, 28, 29, 30]. A comprehensive survey of these methods can be found in [31]. Invariably, all anomaly detection algorithms involve parameters that are problematic to set. Ensemble techniques can provide a framework to address these issues for anomaly detection algorithms, in a way similar to clustering ensembles. Nevertheless, the discussion on outlier ensembles has started only recently [3, 4, 5, 6, 7, 8, 9, 10, 11], and the avenue remains largely unexplored.
In this paper we focus on the problem of component selection for outlier ensembles. As discussed above with the example shown in Figure 1, poor components can negatively affect the consensus ranking. Only recently has the selection problem for outlier ensembles been discussed [4, 5], and its potential positive effect on event detection has been shown [5]. The selection models presented in [4] (DivE) and in [5] (SelectV) are both greedy selective strategies based upon a target ranking treated as pseudo ground-truth. Rankings are selected in sequence if they increase the weighted Pearson correlation of the current ensemble prediction with the target vector. SelectH [5] is also based on a pseudo ground-truth, this time for anomalies. Components that do not rank the estimated anomalies sufficiently high are candidates to be discarded. To compute the pseudo ground-truth, a mixture modeling approach is used to convert each component ranking into a binary vector. A majority vote applied to these binary lists identifies the anomalies.
In this work, we take a different approach. We tackle the problem of selecting outlier ensemble components by mapping it onto a graph mining problem, which does not use the concept of pseudo ground-truth. The main idea is to capture high quality rankings as nodes forming patterns in a graph. We advocate that such a transformation can lead to a family of new effective approaches to the selective outlier ensembles problem. The presentation of our framework follows.
3 Selective Outlier Ensembles Framework
Let $D = \{x_1, \ldots, x_n\}$ be a collection of $n$ data points. From the data, a collection of $m$ outlier score rankings is generated. Each ranking $r_j$, $j = 1, \ldots, m$, is a sequence of outlier scores, one for each data point, sorted in non-decreasing order. The collection $R = \{r_1, \ldots, r_m\}$ constitutes the ensemble.
In Algorithm 1, the Selective Outlier Ensembles framework (Soul) is presented. The Soul framework takes as input the collection of rankings (however generated), and applies a two-phase algorithm. The first step is the Ranking Selection phase, which allows plugging in any selection function that specifies which rankings to retain. The Ranking Aggregation phase enables the use of a variety of aggregators to compute a consensus ranking $r^*$. The Soul framework does not require access to the original features of the data, and is transparent to the process that generates the ensemble. Soul can therefore be used with any outlier detection algorithm that produces a ranking, and any combination thereof.
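As a concrete illustration, the two-phase structure of Soul can be sketched in a few lines of Python. This is a minimal sketch: the names soul, select, and aggregate are ours, and the score matrix is a toy example, not from the paper.

```python
import numpy as np

def soul(rankings, select, aggregate):
    """Two-phase Soul skeleton: select components, then aggregate.

    rankings : (m, n) array of outlier scores (one row per component).
    select   : callable returning the list of retained row indices.
    aggregate: callable reducing the retained rows to one score vector.
    """
    kept = select(rankings)
    return aggregate(rankings[kept])

# Example: keep everything and aggregate by averaging (the All baseline).
scores = np.array([[0.9, 0.1, 0.2],
                   [0.8, 0.3, 0.1]])
consensus = soul(scores,
                 select=lambda R: list(range(len(R))),
                 aggregate=lambda R: R.mean(axis=0))
```

Any selection function and any aggregator with these signatures can be plugged in, which is the point of keeping the two phases separate.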
The two steps Ranking Selection and Ranking Aggregation can also be merged to produce the consensus ranking $r^*$. In this work, though, we focus on the design of an effective Ranking Selection function for the components. As such, we make a distinction between the two phases, and apply only commonly used consensus functions, i.e. Maximum, Average, and Minimum [26, 27, 32], for Ranking Aggregation.
4 Ranking Selection
We define a graph-based class of ranking selection methods. A method in this class is characterized by two major steps:
1. Mapping the rankings onto a graph structure;
2. Mining the resulting graph to identify a subset of rankings.
Several approaches in the literature formulate the ensemble consensus function as a graph partitioning problem [18, 33, 34]. Our aim here is different, since we are not concerned with the aggregation step. Instead, we organize the components (rankings) in a graph, with the goal of removing poor rankings from further consideration. The challenge is to relate the quality of the rankings to the structure of the graph, in the absence of supervision.
We define two instances of the graph-based ranking selection class, called Core and Cull. In the Core approach, we map the problem of selecting ensemble components onto a community detection problem in a graph. To this end, given the ranking ensemble $R$, we construct a complete weighted graph $G = (V, E, W)$, where the vertices in $V$ correspond to the rankings, and $W$ assigns a weight $w_{ij}$ to each edge $(r_i, r_j) \in E$. In connecting the vertices with one another, we want the good rankings to form strongly connected components, so that they can emerge as a dense community. Looking at the rankings in Figure 1, we observe that pairs of good rankings are similar in their top ranked objects, while a good and a poor ranking will be dissimilar in the way they rank objects at the top. As such, a similarity measure that emphasizes the top ranked points will in general consider two good rankings as more similar than a good and a poor ranking. The weighted Kendall tau correlation measure [35] is a measure of similarity that satisfies this property. Let's consider, as an example, Figure 2(b), which shows the weighted Kendall tau similarity values between all pairs of rankings given in Figure 1. We observe high values for all pairs among rankings 1 through 4 (the good rankings), and significantly smaller values for any pair formed by a ranking 1 through 4 and rankings 19 and 20 (the poor rankings). This suggests the following definition of $W$.
For every two distinct rankings $r_i$ and $r_j$, $w_{ij} = \tau_w(r_i, r_j)$, where $\tau_w$ is the weighted Kendall tau measure; $w_{ii} = 0$ for all $i$. In order to enable the strongly connected components to emerge, we then prune the edges in $E$ by retaining only the $m$ edges with the largest weights, where $m = |V|$ is the number of vertices. We observe that a connected graph with $m$ vertices has at least $m - 1$ edges. In pruning the edges we wanted to preserve the possibility of graph connectivity, hence the choice of $m$ as the threshold on the number of edges. Figure 2(a) shows the graph obtained for the ensemble of 20 rankings of Figure 1. Notably, the five best components form a 5-clique in this case, which is also the $k$-core subgraph of the resulting $G$ with the largest $k$ ($k = 4$). A $k$-core subgraph is a maximal connected subgraph of $G$ in which all vertices have degree at least $k$. Nodes 19 and 20 (the poorest rankings) end up being isolated nodes.
We also observe that the three poor rankings 15, 16, and 17 in Figure 2(a) form a 3-clique, revealing pair-wise correlations above the threshold. Under the assumption that the ranking components are affected by diverse errors, cliques of "poor" rankings will stay small, and the $k$-core subgraph with the largest $k$ can identify the subset of rankings of good quality to provide as input to the aggregation function. We call this ranking selection algorithm Core, and summarize its steps in Algorithm 2.
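A simplified sketch of the Core pipeline follows, using SciPy's weightedtau for the top-weighted Kendall tau; unlike the definition above, for brevity this sketch does not split the pruned graph into connected components when extracting the k-core.

```python
from scipy.stats import weightedtau

def k_core(adj, k):
    # Iteratively drop vertices with degree < k until stable.
    alive = set(adj)
    changed = True
    while changed:
        drop = {v for v in alive if len(adj[v] & alive) < k}
        alive -= drop
        changed = bool(drop)
    return alive

def core_select(R):
    """Core sketch: weighted-tau graph over the m rankings, pruned to
    the m heaviest edges; return the non-empty k-core with largest k."""
    m = len(R)
    edges = sorted(
        ((weightedtau(R[i], R[j])[0], i, j)
         for i in range(m) for j in range(i + 1, m)),
        reverse=True)[:m]                 # keep only the m largest weights
    adj = {i: set() for i in range(m)}
    for _, i, j in edges:
        adj[i].add(j)
        adj[j].add(i)
    best, k = set(adj), 1
    while True:
        cur = k_core(adj, k)
        if not cur:
            return sorted(best)
        best, k = cur, k + 1

# Four agreeing components plus two discordant ones:
good = [8, 7, 6, 5, 4, 3, 2, 1]
R = [good, good, good, good,
     [1, 2, 3, 4, 5, 6, 7, 8],
     [4, 1, 8, 2, 6, 3, 7, 5]]
selected = core_select(R)  # the four agreeing components form the max core
```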
The Cull ranking selection technique takes a different approach to pruning the complete weighted graph $G$. The Core technique typically retains a minority of the ensemble components (25% on average in our experiments). In an effort to select a larger number of (good) components, rather than keeping the components in the largest $k$-core, we discard the ones deemed poor, and keep the remaining ones. To estimate the poor components we proceed as follows. For each vertex $v$ in $G$, we compute its weighted degree $d_w(v)$, i.e. the sum of the weights of its incident edges. Under the assumption that rankings make diverse errors, we expect poor components to have small weights associated with their incident edges, and therefore a low weighted degree. For example, considering the adjacency matrix in Figure 2(b), the poor components (19 and 20) have the lowest weighted degrees. In our experiments we discard the 20% of the vertices with the lowest weighted degrees. We call the resulting algorithm Cull. Cull strives to preserve a larger pool of components in comparison to Core. A summary of the steps is given in Algorithm 3.
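A minimal sketch of Cull, assuming the pairwise weighted-tau similarities have already been collected in a symmetric matrix W with a zero diagonal (the toy matrix is ours, for illustration):

```python
import numpy as np

def cull_select(W, drop_frac=0.20):
    """Cull sketch: discard the drop_frac of vertices with the lowest
    weighted degree (row sums of W); keep the rest."""
    degrees = W.sum(axis=1)
    n_drop = int(len(W) * drop_frac)
    dropped = set(np.argsort(degrees)[:n_drop].tolist())  # poorest first
    return [i for i in range(len(W)) if i not in dropped]

# Toy adjacency: vertex 3 is isolated (weighted degree 0) and is culled.
W = np.zeros((5, 5))
W[0, 4] = W[4, 0] = 5.0
W[0, 2] = W[2, 0] = 2.0
W[1, 2] = W[2, 1] = 1.0
kept = cull_select(W, drop_frac=0.20)
```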
Hierarchical versions of both Core and Cull can also be adopted. One can run Core on independent ensembles, then aggregate all the selected components in a new ensemble, and run Core again on it. We can proceed similarly for Cull. If poor components are sifted out at each level, improvements upon the Core (Cull) technique are expected. The depth of the hierarchy can be extended beyond two as well. In our experiments, we test the two-level hierarchy, and call the respective techniques Core.H and Cull.H.
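The two-level scheme can be sketched generically, for any selector with the select-indices signature used above (the toy selector below is ours, for illustration only):

```python
import numpy as np

def two_level(batches, select):
    """Two-level hierarchical sketch: run the selector on every ensemble
    in a batch, pool the survivors into a new ensemble, and run the
    selector once more on the pool."""
    pooled = np.vstack([E[select(E)] for E in batches])
    return pooled[select(pooled)]

# Toy selector: keep components whose top score exceeds a threshold.
def toy_select(E):
    return [i for i, r in enumerate(E) if r.max() > 0.5]

batches = [np.array([[0.9, 0.1], [0.2, 0.1]]),
           np.array([[0.8, 0.2], [0.3, 0.2]])]
survivors = two_level(batches, toy_select)
```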
5 Experiments
5.1 Datasets
To evaluate outlier methods, data for classification is typically used and adapted to the task of anomaly detection. The majority class, or a combination of different large classes, is considered as the inliers. The rest of the data, mostly downsampled, plays the role of outliers. For our experiments, we used datasets from two publicly available repositories. In particular, Lymphography, Shuttle, SpamBase, Waveform, WDBC, Wilt, and WPBC were generated as described in [36] (data available at: http://www.dbs.ifi.lmu.de/research/outlier-evaluation/DAMI/); Ecoli4, Pima, Segment0, Yeast2v4, and Yeast05679v4 were generated as described in [37, 38] (data available at: http://sci2s.ugr.es/keel/imbalanced.php#sub2A). SatImage is taken from the UCI repository: the majority class is used as inliers, and a portion of the rest of the data is subsampled to derive the outliers. A summary of the datasets is available in Table 1.
5.2 Ensemble construction
To evaluate ranking selection algorithms, we first need to construct an ensemble of outlier rankings. We construct homogeneous ensembles, where all components are generated using the LOF algorithm [26] as the base detector. In order to generate diverse components, we perform subsampling. It has been shown that subsampling can create diverse outlier components, and under specific conditions can improve the overall performance compared to using the entire data [8]. To encourage diversity, for each component we randomly select the subsampling rate (as a percentage of the data size) from a fixed set of values. We also select the value of the MinPts parameter of LOF from a fixed set. Once the subsample is selected, for each point in the dataset, the nearest neighbors, their distances, and the relative densities required by the LOF algorithm are calculated only with respect to the points in the subsample, using the selected MinPts value. In our experiments, we fix the ensemble size to 20.
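This construction can be sketched with scikit-learn's LocalOutlierFactor; novelty mode lets every point in the dataset be scored against the fitted subsample only. The toy data, subsampling rates, and MinPts values below are illustrative, not the ones used in our experiments.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def lof_component(X, rate, min_pts, rng):
    """One ensemble component: LOF scores for every point in X,
    computed only with respect to a random subsample of the data
    (novelty mode allows scoring points outside the fitted sample)."""
    idx = rng.choice(len(X), size=int(rate * len(X)), replace=False)
    lof = LocalOutlierFactor(n_neighbors=min_pts, novelty=True).fit(X[idx])
    return -lof.score_samples(X)  # flip the sign: higher = more anomalous

# Toy data: a Gaussian blob plus one planted outlier at index 100.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(size=(100, 2)), [[8.0, 8.0]]])
ensemble = np.array([
    lof_component(X, rate=rng.choice([0.6, 0.8]),
                  min_pts=int(rng.choice([5, 10])), rng=rng)
    for _ in range(5)
])
```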
5.3 Consensus functions
Various methods exist in the literature to unify outlier scores obtained from different base detectors [40, 41], and to merge different rank lists [42, 43]. However, since in our setting each ensemble component is generated by the LOF algorithm, no unification is needed.
Our methods focus on the design of the selection mechanism for the ensemble components. As such, we use simple consensus functions across all approaches being compared, i.e. the Maximum, Average, and Minimum functions. The use of more sophisticated aggregation functions is out of scope for the current study, and will be considered in the future. The Maximum function assigns to each point the largest score among those received by the various rankings. Average and Minimum work in the same fashion. Maximum and Average are among the most commonly used consensus functions to aggregate rankings [26, 27, 32].
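Point-wise, the three consensus functions amount to a reduction over the component axis of the score matrix, as this minimal sketch shows:

```python
import numpy as np

def consensus(R, how="average"):
    """Combine an (m, n) matrix of component scores point-wise."""
    return {"maximum": R.max, "average": R.mean, "minimum": R.min}[how](axis=0)

R = np.array([[1.0, 4.0],
              [3.0, 2.0]])
avg = consensus(R, "average")  # point-wise mean of the two components
mx = consensus(R, "maximum")   # point-wise maximum
```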
5.4 Methods and Evaluation
We compare our techniques against state-of-the-art selective outlier ensemble methods, namely SelectV and SelectH [5] (we used the code available from the authors' website), and DivE [4] (we used the implementation provided by its authors). The code of Core and Cull is publicly available (https://github.com/HamedSarvari/Graph-Based-Selective-Outlier-Ensembles). In our experiments, we also include the baseline that selects all the components (called All). Moreover, to assess the effectiveness of the ensemble, we apply the simple LOF algorithm [26] on the whole data. We ran all methods (Core, Cull, SelectV, SelectH, DivE, and All) on multiple independent ensembles, and report the average performance of each method. The LOF baseline was also applied the same number of times on each dataset, with a random choice of the MinPts parameter from the same set of values used for the ensemble components. Performance is measured using the area under the Precision-Recall (PR) curve, namely average precision. This measure was also used by the authors of SelectV and SelectH to assess their methods [5]. We observe that, although the area under the ROC curve is widely used to evaluate outlier detection methods, it has been shown that the Precision-Recall curve is more informative than ROC plots when evaluating imbalanced datasets.
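For reference, average precision can be computed with scikit-learn; the toy labels and scores below are illustrative:

```python
import numpy as np
from sklearn.metrics import average_precision_score

y = np.array([1, 1, 0, 0, 0, 0])                 # 1 marks the true outliers
good = np.array([0.9, 0.8, 0.4, 0.3, 0.2, 0.1])  # outliers ranked on top
bad = np.array([0.1, 0.2, 0.4, 0.3, 0.9, 0.8])   # outliers ranked at bottom
ap_good = average_precision_score(y, good)  # 1.0: a perfect ranking
ap_bad = average_precision_score(y, bad)    # much lower
```

Unlike the area under the ROC curve, average precision directly rewards placing the (rare) outliers at the very top of the ranking, which is why we prefer it for imbalanced data.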
To run the hierarchical versions of Core and Cull (Core.H and Cull.H, respectively), we consider batches of 20 ensembles. For each batch, we run Core (Cull) on each ensemble; we then assemble the outputs of the 20 selections in a new ensemble, and run Core (Cull) again on it. The process is repeated for multiple independent batches of 20 ensembles, and average performance is reported for each method. We also run a variant in which, for each batch of 20 ensembles, we simply assemble the 20 outputs of Core (Cull) in a new ensemble and directly apply the consensus function. These variants are called Core.U and Cull.U.
For a fair comparison, we also set up runs of SelectV, SelectH, DivE, and All, where we enable the techniques to have access to all the ensembles in each batch. That is, we generate a single ensemble of components from a given batch, and run each competitor method on it. The techniques in this setting are denoted as SelectV.U, SelectH.U, DivE.U, and All.U, respectively.
Table 2 gives the average performances (areas under the PR curve) of Core, Cull, DivE, SelectV, SelectH, and All across all datasets and for the three consensus functions. For WDBC, WPBC, Pima, Yeast05679v4, Ecoli4, Shuttle, and SpamBase, averages are computed over 400 independent ensembles. For the remaining datasets, averages are computed over 200 independent ensembles.
Table 3 gives the average performances (areas under the PR curve) of Core.H, Cull.H, Core.U, Cull.U, DivE.U, SelectV.U, SelectH.U, and All.U across all datasets and for the three consensus functions. For WDBC, WPBC, Pima, Yeast05679v4, Ecoli4, Shuttle, and SpamBase, averages are computed over 20 batches (of 20 ensembles each). For the remaining datasets, averages are computed over 10 batches.
For both tables, statistical significance is assessed using a one-way ANOVA with a post-hoc Tukey HSD test. For each dataset, boldface indicates the technique with the best performance score, and any technique that is not statistically significantly inferior to it. For each dataset, the best performance score is also underlined.
Table 2 shows that, out of the 16 datasets, Core and Cull are ranked among the top performers in 12; SelectV and SelectH in 6; DivE and All in 9; and LOF in 3. Overall, our selective techniques are superior to the state-of-the-art approaches for selective outlier ensembles (SelectV, SelectH, and DivE), and to All. In particular, Core and Cull give the best performance scores (underlined values) in 10 datasets; DivE in 1; SelectH and SelectV in 2; All in 3; and LOF in 3. Core and Cull win by a large margin.
It's known that the All technique, especially when combined with the Average consensus, is a strong baseline and hard to defeat. Our results confirm this fact. In particular, SelectV and SelectH are not competitive against All on the wide range of problems considered in our experiments. We also observe that DivE often selects all the components, and therefore reduces to All. Core and Cull emerge as the strongest competitors against All. Single LOF is among the top performers in only three cases; this supports the overall effectiveness of the constructed ensemble. It's interesting to observe that in two of these three cases (Glass and Segment0), LOF is the only top performer, indicating that the constructed ensemble did not work well for these two problems, regardless of the selective or consensus techniques used.
The results reveal an interesting fact about Core and Cull: they manifest their best behavior under different scenarios. They are among the best performing methods in 7 and 11 datasets, respectively, of which only 6 are in common. On the other hand, SelectH and SelectV appear highly correlated: they both become competitive in the same 6 datasets. The complementary nature of Core and Cull enables them to succeed in a wide range of problems, since they seem to induce different learning biases. This opens a new research path to investigate the characteristics of the datasets to which each of these methods is tuned, and suggests the potential for a hybrid approach that leverages their diversity.
Table 3 shows that Core.H and Cull.H are ranked among the top performers in 15 datasets; Core.U and Cull.U in 15 as well; DivE.U in 4; SelectV.U and SelectH.U in 12; and All.U in 12. Again, the hierarchical versions of our techniques emerge as the strongest competitors. In particular, Core.H and Cull.H give the best performance scores (underlined values) in 7 datasets; Core.U and Cull.U in 5; DivE.U in 1; SelectH.U and SelectV.U in 2; and All.U in 5.
The two-level pruning mechanisms of Core.H and Cull.H effectively prune poor components from a large pool of rankings. On the other hand, All.U deals with large ensembles, which are likely to contain a fair number of poor components, thus hurting its relative performance against the competitors. Overall, the behavior of SelectV.U and SelectH.U is comparable to All.U, while DivE.U gives the worst performance.
An insightful observation from the results in Table 3 is the strong performance of Core.U. It's among the best performers in 14 (out of 16) datasets, and its overall performance is superior to Core.H. In a way, Core.U achieves the best of both worlds: it first uses Core to discard poor components across different ensembles; it then aggregates all the selected rankings, acting like All, but on a "boosted" pool of components. Core.U is superior to (or tied with) All.U in almost all scenarios (15 out of 16), and is therefore a very promising candidate for outlier ensemble selection.
We finally observe that the best performing consensus function depends on the dataset, and to a lesser extent on the method. A deeper understanding of this behavior is in our agenda for future work.
6 Complexity Analysis
We analyze the theoretical complexity of all the considered methods. For simplicity, we omit the cost of sampling, the cost needed to compute the anomaly scores, and the cost of running the aggregation function, as these steps are common to all methods. Let $n$ be the size of each ranking and $m$ the size of the ensemble.
All: The cost of selecting all the components is simply linear in the size of the ensemble: $O(m)$.
Core and Cull: The cost of the graph construction is $O(m^2\, n \log n)$, obtained by multiplying the number of edges, $O(m^2)$, by the cost of computing the weighted tau, which is $O(n \log n)$ as reported in [35]. The rest of the computation is linear in the number of edges, i.e. $O(m^2)$. The total cost is therefore $O(m^2\, n \log n + m^2)$, which is dominated by the factor $m^2\, n \log n$.
DivE: As reported in [4], the first operation performed by DivE is the union of the top-$k$ outliers of the components, with a cost of $O(m\,k)$. The next step sorts the components by their weighted Pearson correlation with the target vector, with a cost of $O(m\,n + m \log m)$: this includes the cost of the Pearson coefficients ($O(m\,n)$) and the cost of sorting the rankings according to the computed coefficients ($O(m \log m)$). This computation is performed before the selection loop, and repeated at each of the $O(m)$ iterations of the loop over the rankings. As a result, the overall cost of DivE is $O(m^2\, n + m^2 \log m)$, which is dominated by the factor $m^2\, n$.
Select-V: As reported in [5], the first operation performed by SelectV is Unification, which converts scores to probability estimates. Even if we consider the per-score cost of Unification as constant, the total cost of performing Unification for all rankings is $O(m\,n)$. The cost of rank sorting is $O(m\,n \log n)$. The next step sorts the converted rankings using the weighted Pearson correlation, with a cost of $O(m\,n + m \log m)$, exactly as in DivE. This computation is repeated $O(m)$ times, and the final running cost of SelectV is $O(m^2\,n + m\,n \log n)$, which is dominated by the factor $m^2\,n$.
Select-H: As reported in [5], the first expensive procedure performed is the computation of MixtureModel. Its cost depends on the number of iterations $t$ (set to 100 as suggested in [5]), on the size $m$ of the ensemble, and on the length $n$ of the score vectors; the resulting cost is $O(t\,m\,n)$. The second expensive procedure is RobustRankAggregation. The subsequent loop is dominated by the number of estimated outliers, and in the worst case its cost is $O(n^2)$. The algorithm concludes with a clustering phase (k-means is the algorithm used). The final running cost of SelectH is dominated by the factor $n^2$.
The theoretical analysis shows that SelectH is dominated by the factor $n^2$, our methods Core and Cull by $m^2\,n \log n$, and DivE and SelectV by $m^2\,n$. In real-world scenarios, as the number $n$ of data points grows large, SelectH may become prohibitively expensive.
We have also computed the empirical running times. For each dataset, the average running time of all the runs for each method is recorded in Table 4. Experiments were run on a laptop with an Intel Core i7 Processor @2.80GHz and 16GB RAM. The empirical running times are consistent with the complexity analysis given above. In particular, we observe that the running times of DivE and SelectV have the same order of magnitude. Core and Cull are faster than SelectH but slower than DivE and SelectV. When the number of components increases, the running times of Core.U, Cull.U, DivE.U, and SelectV.U are almost identical; in contrast, the running time of SelectH.U increases much more rapidly than those of the other methods.
7 Conclusion
We have introduced a new graph-based class of ranking selection methods for outlier ensembles. In particular, we have defined two specific approaches, Core and Cull, and hierarchical extensions of the same. Our extensive evaluation on a variety of heterogeneous data and methods shows that our approach outperforms state-of-the-art selective outlier ensemble techniques in a number of cases. Interesting and challenging questions are open for future investigation, including a characterization of the scenarios in which Core outperforms Cull, or vice versa; studying how our selective techniques affect the accuracy/diversity trade-off; and exploring hybrid methods, different outlier detection techniques, and alternative consensus functions, and analyzing their effects in more depth.
-  N. Li and Z.-H. Zhou, “Selective ensemble of classifier chains,” in Proceedings of the International Workshop on Multiple Classifier Systems, pp. 146–156, Springer, 2013.
-  X. Z. Fern and W. Lin, “Cluster ensemble selection,” Stat. Anal. Data Min., vol. 1, pp. 128–141, Nov. 2008.
-  A. Zimek, R. J. Campello, and J. Sander, “Ensembles for unsupervised outlier detection: challenges and research questions a position paper,” ACM Sigkdd Explorations Newsletter, vol. 15, no. 1, pp. 11–22, 2014.
-  R. Schubert, R. Wojdanowski, A. Zimek, and H.-P. Kriegel, “On evaluation of outlier rankings and outlier scores,” in Proceedings of the SIAM International Conference on Data Mining, pp. 1047–1058, SIAM, 2012.
-  S. Rayana and L. Akoglu, “Less is more: Building selective anomaly ensembles with application to event detection in temporal graphs,” in Proceedings of the 2015 SIAM International Conference on Data Mining, pp. 622–630, SIAM, 2015.
-  C. C. Aggarwal and S. Sathe, “Theoretical foundations and algorithms for outlier ensembles,” SIGKDD Explor. Newsl., vol. 17, pp. 24–47, Sept. 2015.
-  C. C. Aggarwal, “Outlier ensembles: Position paper,” SIGKDD Explor. Newsl., vol. 14, pp. 49–58, Apr. 2013.
-  A. Zimek, M. Gaudet, R. J. Campello, and J. Sander, “Subsampling for efficient and effective unsupervised outlier detection ensembles,” in Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 428–436, ACM, 2013.
-  H. V. Nguyen, H. H. Ang, and V. Gopalkrishnan, “Mining outliers with ensemble of heterogeneous detectors on random subspaces,” in Proceedings of the 15th International Conference on Database Systems for Advanced Applications - Volume Part I, DASFAA’10, (Berlin, Heidelberg), pp. 368–383, Springer-Verlag, 2010.
-  B. Micenkova, B. McWilliams, and I. Assent, “Learning representations for outlier detection on a budget,” in CoRR abs/1507.08104, 2015.
-  F. T. Liu, K. M. Ting, and Z.-H. Zhou, “Isolation forest,” in Proceedings of the 2008 Eighth IEEE International Conference on Data Mining, ICDM ’08, (Washington, DC, USA), pp. 413–422, IEEE Computer Society, 2008.
-  T. G. Dietterich, “Ensemble methods in machine learning,” Multiple Classifier Systems, vol. 1857, pp. 1–15, 2000.
-  L. Breiman, “Bagging predictors,” Machine learning, vol. 24, no. 2, pp. 123–140, 1996.
-  Y. Freund and R. E. Schapire, “A decision-theoretic generalization of on-line learning and an application to boosting,” in European Conference on Computational Learning Theory, pp. 23–37, Springer, 1995.
-  L. Breiman, “Random forests,” Machine Learning, vol. 45, no. 1, pp. 5–32, 2001.
-  P. Domingos, “Bayesian averaging of classifiers and the overfitting problem,” in ICML, vol. 2000, pp. 223–230, 2000.
-  S. Džeroski and B. Ženko, “Is combining classifiers with stacking better than selecting the best one?,” Machine learning, vol. 54, no. 3, pp. 255–273, 2004.
-  A. Strehl and J. Ghosh, “Cluster ensembles—a knowledge reuse framework for combining multiple partitions,” Journal of machine learning research, vol. 3, no. Dec, pp. 583–617, 2002.
-  S. Bickel and T. Scheffer, “Multi-view clustering.,” in ICDM, vol. 4, pp. 19–26, 2004.
-  E. Muller, S. Gunnemann, I. Farber, and T. Seidl, “Discovering multiple clustering solutions: Grouping objects in different views of the data,” in Data Engineering (ICDE), 2012 IEEE 28th International Conference on, pp. 1207–1210, IEEE, 2012.
-  E. M. Knorr and R. T. Ng, “A unified notion of outliers: Properties and computation.,” in KDD, pp. 219–222, 1997.
-  S. Ramaswamy, R. Rastogi, and K. Shim, “Efficient algorithms for mining outliers from large data sets,” in ACM Sigmod Record, vol. 29, pp. 427–438, ACM, 2000.
-  K. Zhang, M. Hutter, and H. Jin, “A new local distance-based outlier detection approach for scattered real-world data,” Advances in knowledge discovery and data mining, pp. 813–822, 2009.
-  S. D. Bay and M. Schwabacher, “Mining distance-based outliers in near linear time with randomization and a simple pruning rule,” in Proceedings of the ninth ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 29–38, ACM, 2003.
-  W. Jin, A. K. Tung, and J. Han, “Mining top-n local outliers in large databases,” in Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 293–298, ACM, 2001.
-  M. M. Breunig, H.-P. Kriegel, R. T. Ng, and J. Sander, “LOF: identifying density-based local outliers,” in ACM sigmod record, vol. 29, pp. 93–104, ACM, 2000.
-  S. Papadimitriou, H. Kitagawa, P. B. Gibbons, and C. Faloutsos, “LOCI: Fast outlier detection using the local correlation integral,” in Data Engineering, 2003. Proceedings. 19th International Conference on, pp. 315–326, IEEE, 2003.
-  W. Jin, A. K. Tung, J. Han, and W. Wang, “Ranking outliers using symmetric neighborhood relationship.,” in PAKDD, vol. 6, pp. 577–593, Springer, 2006.
-  H.-P. Kriegel, P. Kröger, E. Schubert, and A. Zimek, “LoOP: local outlier probabilities,” in Proceedings of the 18th ACM conference on Information and knowledge management, pp. 1649–1652, ACM, 2009.
-  E. M. Knorr and R. T. Ng, “Algorithms for mining distance-based outliers in large datasets,” in Proceedings of the International Conference on Very Large Data Bases, pp. 392–403, Citeseer, 1998.
-  V. Chandola, A. Banerjee, and V. Kumar, “Outlier detection: A survey,” ACM Computing Surveys, 2007.
-  A. Lazarevic and V. Kumar, “Feature bagging for outlier detection,” in Proceedings of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining, pp. 157–166, ACM, 2005.
-  X. Z. Fern and C. E. Brodley, “Solving cluster ensemble problems by bipartite graph partitioning,” in Proceedings of the Twenty-first International Conference on Machine Learning, ICML ’04, (New York, NY, USA), pp. 36–, ACM, 2004.
-  C. Domeniconi and M. Al-Razgan, “Weighted cluster ensembles: Methods and analysis,” ACM Trans. Knowl. Discov. Data, vol. 2, pp. 17:1–17:40, Jan. 2009.
-  S. Vigna, “A weighted correlation index for rankings with ties,” in Proceedings of the 24th international conference on World Wide Web, pp. 1166–1176, ACM, 2015.
-  G. O. Campos, A. Zimek, J. Sander, R. J. Campello, B. Micenková, E. Schubert, I. Assent, and M. E. Houle, “On the evaluation of unsupervised outlier detection: measures, datasets, and an empirical study,” Data Mining and Knowledge Discovery, vol. 30, no. 4, pp. 891–927, 2016.
-  J. Alcalá-Fdez, L. Sánchez, S. García, M. J. del Jesus, S. Ventura, J. M. Garrell, J. Otero, C. Romero, J. Bacardit, V. M. Rivas, et al., “KEEL: a software tool to assess evolutionary algorithms for data mining problems,” Soft Computing—A Fusion of Foundations, Methodologies and Applications, vol. 13, no. 3, pp. 307–318, 2009.
-  J. Alcalá-Fdez, A. Fernández, J. Luengo, J. Derrac, S. García, L. Sánchez, and F. Herrera, “KEEL data-mining software tool: data set repository, integration of algorithms and experimental analysis framework,” Journal of Multiple-Valued Logic & Soft Computing, vol. 17, 2011.
-  C. L. Blake and C. J. Merz, “UCI repository of machine learning databases [http://www.ics.uci.edu/~mlearn/MLRepository.html]. Irvine, CA: University of California,” Department of Information and Computer Science, vol. 55, 1998.
-  J. Gao and P.-N. Tan, “Converting output scores from outlier detection algorithms into probability estimates,” in Data Mining, 2006. ICDM’06. Sixth International Conference on, pp. 212–221, IEEE, 2006.
-  H.-P. Kriegel, P. Kroger, E. Schubert, and A. Zimek, “Interpreting and unifying outlier scores,” in Proceedings of the 2011 SIAM International Conference on Data Mining, pp. 13–24, SIAM, 2011.
-  J. G. Kemeny, “Mathematics without numbers,” Daedalus, vol. 88, no. 4, pp. 577–591, 1959.
-  R. Kolde, S. Laur, P. Adler, and J. Vilo, “Robust rank aggregation for gene list integration and meta-analysis,” Bioinformatics, vol. 28, no. 4, pp. 573–580, 2012.
-  T. Saito and M. Rehmsmeier, “The precision-recall plot is more informative than the roc plot when evaluating binary classifiers on imbalanced datasets,” PloS one, vol. 10, no. 3, p. e0118432, 2015.
-  A. Chiang and Y.-R. Yeh, “Anomaly detection ensembles: In defense of the average,” in Web Intelligence and Intelligent Agent Technology (WI-IAT), 2015 IEEE/WIC/ACM International Conference on, vol. 3, pp. 207–210, IEEE, 2015.
-  S. Lloyd, “Least squares quantization in PCM,” IEEE Trans. Inf. Theor., vol. 28, pp. 129–137, 1982.