A Practitioner's Guide to Evaluating Entity Resolution Results

09/14/2015 · Matt Barnes, et al. · Carnegie Mellon University

Entity resolution (ER) is the task of identifying records belonging to the same entity (e.g. individual, group) across one or multiple databases. Ironically, it has multiple names: deduplication and record linkage, among others. In this paper we survey metrics used to evaluate ER results in order to iteratively improve performance and guarantee sufficient quality prior to deployment. Some of these metrics are borrowed from the multi-class classification and clustering domains, though key differences differentiate entity resolution from general clustering. Menestrina et al. empirically showed that rankings from these metrics often conflict with each other, which is our primary motivation for studying them. This paper provides practitioners the basic knowledge to begin evaluating their entity resolution results.


1. Introduction

Entity resolution (ER) is the task of identifying records belonging to the same entity (e.g. individual, group) across one or multiple databases. Ironically, it has multiple names: deduplication and record linkage, among others. In this paper we survey metrics used to evaluate ER results in order to iteratively improve performance and guarantee sufficient quality prior to deployment. Some of these metrics are borrowed from the multi-class classification and clustering domains, though key differences differentiate entity resolution from general clustering. Menestrina et al. empirically showed that rankings from these metrics often conflict with each other, which is our primary motivation for studying them [1]. This paper provides practitioners the basic knowledge to begin evaluating their entity resolution results.

2. Problem Statement

Our notation follows that of [1]. Consider an input set of records $I = \{a, b, c, d, e\}$, where $a, \dots, e$ are unique records. Let $E$ denote an entity resolution clustering output, where each $e \in E$ denotes a cluster of records; in our running example, $E = \{\{a, b, c\}, \{d, e\}\}$. Let $E^*$ be the true clustering, referred to as the “gold standard.” The goal of any entity resolution metric is to measure the error (or similarity) of $E$ compared to the gold standard $E^*$.

3. Pairwise Metrics

Pairwise metrics consider every pair of records as samples for evaluating performance. Let $P(E)$ denote all the intra-cluster pairs in the clustering $E$. In our example, $P(E) = \{(a, b), (a, c), (b, c), (d, e)\}$. Confusingly, some studies treat pairs only as those where a direct match was made and not matches made through transitive relations [2]. For example, [2] would exclude $(a, c)$ if the matches leading to $\{a, b, c\}$ were $a \equiv b$ and $b \equiv c$, where $\equiv$ denotes a match. We choose the former definition because it is independent of the underlying matching process – it only depends on the final entity resolution results.

Unlike many machine learning classification tasks, we never consider non-matches (i.e. inter-cluster pairs) in entity resolution metrics [3]. In conventional clustering tasks, the number of clusters is constant or sub-linear with respect to the number of records [4]. However, the number of clusters is $O(n)$ in conventional ER tasks. So though the number of intra-cluster pairs (e.g. true positives) is $O(n)$, the number of inter-cluster pairs (e.g. true negatives) is $O(n^2)$. To illustrate, consider our original example with 5 records and 2 clusters. There are 4 intra-cluster pairs and 6 inter-cluster pairs. Now, compare this to a larger database with 50 records and 20 clusters, all of comparable size to the original example. There will be approximately 40 intra-cluster pairs but nearly 1200 inter-cluster pairs. Thus, metrics using inter-cluster pairs (e.g. the false positive rate) improve quadratically with respect to the number of records in the database and provide overly optimistic results for large databases.
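To make this growth concrete, the short Python sketch below counts intra- and inter-cluster pairs for the two databases described above. The helper name pair_counts and the synthetic record labels are illustrative, not from the original paper.

```python
def pair_counts(clustering):
    """Return (intra_pairs, inter_pairs) for a clustering given as lists of records."""
    records = [r for cluster in clustering for r in cluster]
    total = len(records) * (len(records) - 1) // 2
    intra = sum(len(c) * (len(c) - 1) // 2 for c in clustering)
    return intra, total - intra

# Original example: 5 records in 2 clusters.
small = [["a", "b", "c"], ["d", "e"]]
print(pair_counts(small))   # (4, 6)

# 50 records in 20 clusters of comparable size (10 of size 3, 10 of size 2).
large = ([[f"r{i}_{j}" for j in range(3)] for i in range(10)]
         + [[f"s{i}_{j}" for j in range(2)] for i in range(10)])
print(pair_counts(large))   # (40, 1185): inter-cluster pairs dominate
```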

3.1. Pairwise Precision, Recall, and $F_1$

Using $P(E)$ as the samples, the pairwise precision and recall metric functions follow conventional machine learning definitions. The harmonic mean of these metrics leads to the most frequently used entity resolution metric, pairwise $F_1$. All these metrics are bounded to $[0, 1]$.

$\mathrm{PairPrecision}(E, E^*) = \frac{|P(E) \cap P(E^*)|}{|P(E)|}$   (1)
$\mathrm{PairRecall}(E, E^*) = \frac{|P(E) \cap P(E^*)|}{|P(E^*)|}$   (2)
$\mathrm{PairF}_1(E, E^*) = \frac{2 \cdot \mathrm{PairPrecision} \cdot \mathrm{PairRecall}}{\mathrm{PairPrecision} + \mathrm{PairRecall}}$   (3)

The benefit of pairwise metrics is their intuitive interpretation. Pairwise precision is the percentage of matches in the predicted clustering that are correct. Pairwise recall is the percentage of matches in the true clustering that are also in the predicted clustering. Unfortunately pairwise metrics may convey overly optimistic results, depending on the use case. For example, in many entity resolution tasks the end user only cares about the final entity – not the records it comprises. Mismatching two singleton entities has an insignificant impact on pairwise metrics compared to incorrectly joining or splitting two large clusters.
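As a concrete illustration, the following Python sketch computes pairwise precision, recall, and $F_1$ directly from the definitions above. The list-of-clusters representation, the function names, and the gold standard used in the example are assumptions for demonstration only.

```python
from itertools import combinations

def intra_pairs(clustering):
    """P(E): the set of unordered intra-cluster record pairs."""
    return {frozenset(p) for c in clustering for p in combinations(c, 2)}

def pairwise_prf(E, E_star):
    pred, gold = intra_pairs(E), intra_pairs(E_star)
    tp = len(pred & gold)                       # correctly predicted pairs
    precision = tp / len(pred) if pred else 1.0
    recall = tp / len(gold) if gold else 1.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

E = [["a", "b", "c"], ["d", "e"]]            # predicted clustering
E_star = [["a", "b"], ["c"], ["d", "e"]]     # hypothetical gold standard
print(pairwise_prf(E, E_star))               # (0.5, 1.0, 0.667)
```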

4. Cluster Metrics

Like the pairwise metrics, all the cluster metrics discussed here are bounded to $[0, 1]$, a convenient property when comparing across datasets and for setting quality standards.

4.1. Cluster Precision, Recall, and $F_1$

Cluster-level metrics attempt to capture a more holistic understanding of the final entities. At the extreme opposite of pairwise metrics, cluster-level precision [5] and recall [6] consider only exact cluster matches. Mathematically, cluster precision and recall are defined as $\frac{|E \cap E^*|}{|E|}$ and $\frac{|E \cap E^*|}{|E^*|}$, respectively, where $E \cap E^*$ is the set of clusters appearing identically in both clusterings. Now, mismatching two singleton entities will have the same impact as mismatching two larger clusters. Obviously, this metric has the opposite drawback – even one corrupted match will cause an entire cluster to mismatch due to the use of exact comparisons. Thus, this metric is rarely used in favor of its successor: closest cluster precision, recall, and $F_1$.
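A minimal sketch of these exact cluster-level metrics, under the same assumed list-of-clusters representation (the function name and example gold standard are illustrative):

```python
def cluster_prf(E, E_star):
    """Exact cluster-level precision, recall, and F1."""
    pred = {frozenset(c) for c in E}
    gold = {frozenset(c) for c in E_star}
    exact = len(pred & gold)                 # clusters reproduced exactly
    precision = exact / len(pred)
    recall = exact / len(gold)
    f1 = 2 * precision * recall / (precision + recall) if exact else 0.0
    return precision, recall, f1

E = [["a", "b", "c"], ["d", "e"]]
E_star = [["a", "b"], ["c"], ["d", "e"]]
print(cluster_prf(E, E_star))                # (0.5, 0.333..., 0.4)
```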

4.2. Closest Cluster Precision, Recall, and $F_1$

Closest cluster metrics correct for the previous cluster-level drawbacks by incorporating a notion of cluster similarity [7]. Using the Jaccard similarity coefficient to capture cluster similarity, the precision and recall can be expressed as

$\mathrm{ccPrecision}(E, E^*) = \frac{\sum_{e \in E} \max_{e^* \in E^*} J(e, e^*)}{|E|}$   (4)
$\mathrm{ccRecall}(E, E^*) = \frac{\sum_{e^* \in E^*} \max_{e \in E} J(e, e^*)}{|E^*|}$   (5)

where $e$ and $e^*$ are clusters in $E$ and $E^*$, respectively, and $J(\cdot, \cdot)$ is the Jaccard similarity coefficient. This metric, and many of the ones following, attempts to balance the tradeoffs of the pairwise and exact cluster metrics.
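The closest cluster metrics can be sketched in a few lines, again under the assumed list-of-clusters representation; jaccard and closest_cluster_prf are illustrative names:

```python
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def closest_cluster_prf(E, E_star):
    precision = sum(max(jaccard(e, g) for g in E_star) for e in E) / len(E)
    recall = sum(max(jaccard(g, e) for e in E) for g in E_star) / len(E_star)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

E = [["a", "b", "c"], ["d", "e"]]
E_star = [["a", "b"], ["c"], ["d", "e"]]
print(closest_cluster_prf(E, E_star))        # approximately (0.833, 0.667, 0.741)
```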

4.3. Purity and K

Cluster purity was first proposed in 1998 [8] and later extended to Average Cluster Purity (ACP) and Average Author Purity (AAP) (archaically referred to as Average Speaker Purity) [9]. The ACP and AAP are defined as

$\mathrm{ACP} = \frac{1}{N} \sum_{e \in E} \sum_{e^* \in E^*} \frac{|e \cap e^*|^2}{|e|}$   (6)
$\mathrm{AAP} = \frac{1}{N} \sum_{e \in E} \sum_{e^* \in E^*} \frac{|e \cap e^*|^2}{|e^*|}$   (7)

where $N$ is the total number of records. The $K$ measure is then defined as the geometric mean of these values, $K = \sqrt{\mathrm{ACP} \cdot \mathrm{AAP}}$. In many applications only a single purity metric is evaluated, usually something comparable to ACP. For example, [10] considers the dominant class in each cluster by defining purity as $\frac{1}{N} \sum_{e \in E} \max_{e^* \in E^*} |e \cap e^*|$. The use of this single metric is misleading and only shows one half of the precision/recall coin. As an extreme example, setting $E$ to all singleton clusters (i.e. each record in its own cluster) would achieve a perfect purity of 1, yet is clearly far from ideal.
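A short sketch of ACP, AAP, and $K$ following equations (6) and (7), with $N$ taken as the total record count; the example clusterings and names are assumptions for demonstration. The final call illustrates the singleton degenerate case discussed above.

```python
from math import sqrt

def acp_aap_k(E, E_star):
    N = sum(len(c) for c in E)               # total number of records
    overlap = lambda a, b: len(set(a) & set(b))
    acp = sum(overlap(e, g) ** 2 / len(e) for e in E for g in E_star) / N
    aap = sum(overlap(e, g) ** 2 / len(g) for e in E for g in E_star) / N
    return acp, aap, sqrt(acp * aap)

E = [["a", "b", "c"], ["d", "e"]]
E_star = [["a", "b"], ["c"], ["d", "e"]]
print(acp_aap_k(E, E_star))                  # approximately (0.733, 1.0, 0.856)

# Degenerate case: all singletons gives perfect ACP-style purity but poor AAP.
singletons = [[r] for r in "abcde"]
print(acp_aap_k(singletons, E_star))         # approximately (1.0, 0.6, 0.775)
```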

4.4. Homogeneity, Completeness, and V-Measure

Homogeneity and completeness are entropy-based metrics, somewhat analogous to precision and recall, respectively [11]. A cluster in $E$ has perfect homogeneity if all its records belong to the same cluster in $E^*$. Conversely, a cluster in $E^*$ has perfect completeness if all its records belong to the same cluster in $E$. Entropy and its conditional variation are defined as

$H(E) = -\sum_{e \in E} \frac{|e|}{N} \log \frac{|e|}{N}$   (8)
$H(E^* \mid E) = -\sum_{e \in E} \sum_{e^* \in E^*} \frac{|e \cap e^*|}{N} \log \frac{|e \cap e^*|}{|e|}$   (9)

where $N$ is the total number of records. Using these entropies, homogeneity and completeness are defined as:

$h = 1 - \frac{H(E^* \mid E)}{H(E^*)}$   (10)
$c = 1 - \frac{H(E \mid E^*)}{H(E)}$   (11)

V-Measure is defined analogously to the $F_\beta$ metric as the weighted harmonic mean of homogeneity and completeness.

$V_\beta = \frac{(1 + \beta) \cdot h \cdot c}{\beta \cdot h + c}$   (12)

where $\beta$ is a user-defined parameter, usually set to $\beta = 1$ as in the $F_1$ metric. Completeness is weighted more heavily if $\beta > 1$ and homogeneity is weighted more heavily if $\beta < 1$. Some sources use $\beta^2$ instead of $\beta$ weighting; we chose the latter due to its popularity.
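The following sketch computes homogeneity, completeness, and V-Measure from equations (8)–(12), using natural logarithms; it does not handle the degenerate single-cluster case, and the function names and example clusterings are illustrative assumptions.

```python
from math import log

def entropy(clustering, N):
    return -sum(len(c) / N * log(len(c) / N) for c in clustering)

def conditional_entropy(A, B, N):
    """H(A | B) for clusterings A and B over the same N records."""
    h = 0.0
    for b in B:
        for a in A:
            n_ab = len(set(a) & set(b))
            if n_ab:
                h -= n_ab / N * log(n_ab / len(b))
    return h

def v_measure(E, E_star, beta=1.0):
    N = sum(len(c) for c in E)
    hom = 1.0 - conditional_entropy(E_star, E, N) / entropy(E_star, N)
    com = 1.0 - conditional_entropy(E, E_star, N) / entropy(E, N)
    v = (1 + beta) * hom * com / (beta * hom + com)
    return hom, com, v

E = [["a", "b", "c"], ["d", "e"]]
E_star = [["a", "b"], ["c"], ["d", "e"]]
print(v_measure(E, E_star))                  # approximately (0.638, 1.0, 0.779)
```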

4.5. Other Metrics

The natural language processing community uses several other entity resolution metrics, which are rarely used in machine learning and database applications [12]. We refer the reader to MUC-6 [13], B³ [14], and CEAF [15].

5. Edit Distance Metrics

Edit distance metrics can be thought of similarly to string edit distance functions. They are a measure of the information lost and gained while modifying $E$ into $E^*$. Unfortunately, they do not have the convenient $[0, 1]$ bound and are thus difficult to relate to any notion of a ‘good’ score.

5.1. Variation of Information

Variation of information (VI) [16] can conveniently be expressed using the conditional entropies defined previously [11].

$\mathrm{VI}(E, E^*) = H(E^* \mid E) + H(E \mid E^*)$   (13)

An important property of VI is that it does not directly depend on $N$, only on the relative sizes of the clusters. Thus, it is acceptable to add records from new clusters to a database while continuously measuring VI performance.
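Reusing the conditional entropy from the previous section, VI can be sketched as follows (natural logarithms; names and example clusterings are illustrative):

```python
from math import log

def conditional_entropy(A, B, N):
    """H(A | B) for clusterings A and B over the same N records."""
    h = 0.0
    for b in B:
        for a in A:
            n_ab = len(set(a) & set(b))
            if n_ab:
                h -= n_ab / N * log(n_ab / len(b))
    return h

def variation_of_information(E, E_star):
    N = sum(len(c) for c in E)
    return conditional_entropy(E_star, E, N) + conditional_entropy(E, E_star, N)

E = [["a", "b", "c"], ["d", "e"]]
E_star = [["a", "b"], ["c"], ["d", "e"]]
print(variation_of_information(E, E_star))   # 0 only when the clusterings agree exactly
```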

5.2. Generalized Merge Distance

Generalized Merge Distance (GMD) is perhaps the most comprehensive metric in the sense that it can be used to directly calculate several other metrics [1]. $\mathrm{GMD}(E, E^*)$ is the minimum legal path cost of converting $E$ into $E^*$, where the costs of splitting and merging sets of records are user-defined, operation-order-independent functions $f_s$ and $f_m$. Many such functions exist, such as $f(x, y) = x \cdot y$ and $f(x, y) = k$, where $x$ and $y$ are the sizes of the record sets to split or merge and $k$ is a constant. We refer the reader to [17] for a background on operation-order-independent functions.

Menestrina et al. not only show GMD can be computed in linear time, but explicitly show how pairwise precision, recall, $F_1$, and VI can be computed using specific cost functions. Depending on the choice of cost functions, GMD is likely dependent on $N$ (the cost functions used in the VI formulation are one exception) and difficult to compare across datasets of different sizes.
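Below is a rough sketch in the spirit of that linear-time computation: each output cluster is split into its overlaps with the gold clusters, and the resulting pieces are then merged back into the gold clusters, accumulating user-supplied split and merge costs along the way. The function and variable names are assumptions, and this is a simplified reading of the algorithm in [1], not a faithful reproduction.

```python
from collections import Counter

def gmd(E, E_star, fs, fm):
    """Sketch of a single-pass Generalized Merge Distance computation."""
    gold_of = {r: i for i, g in enumerate(E_star) for r in g}   # record -> gold cluster id
    built = Counter()        # records already contributed toward each gold cluster
    cost = 0.0
    for c in E:
        parts = Counter(gold_of[r] for r in c)   # overlap of c with each gold cluster
        # cost of splitting c into its overlap parts
        seen = 0
        for size in parts.values():
            if seen:
                cost += fs(size, seen)
            seen += size
        # cost of merging each part into the gold cluster being rebuilt
        for gold_idx, size in parts.items():
            if built[gold_idx]:
                cost += fm(size, built[gold_idx])
            built[gold_idx] += size
    return cost

E = [["a", "b", "c"], ["d", "e"]]
E_star = [["a", "b"], ["c"], ["d", "e"]]
product = lambda x, y: x * y
print(gmd(E, E_star, product, product))   # with f(x, y) = x*y the cost counts pair errors
```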

6. Conclusion

Simple examples show a promising pairwise metric may have poor cluster-level performance [2]. More rigorous analysis shows this is not only possible, but common across a range of applications [1]. At an absolute minimum, we recommend evaluating with pairwise $F_1$ because of its simplicity and popularity. We also recommend the use of a cluster metric and the Generalized Merge Distance – which can conveniently be configured to calculate VI and the pairwise $F_1$ in linear time.

All the metrics discussed herein rely on the availability of a “gold standard” $E^*$. In practice, human-labeled results rarely number beyond several thousand samples. On large datasets, a relative gold standard may be obtained by foregoing blocking efficiency and running an exhaustive ER algorithm on the entire database [1]. We note, however, that doing so on databases larger than even 10,000 records is infeasible for some algorithms [7]. Further, an exhaustive approach is still only an approximation and carries no guarantees relative to the true clustering. A need exists for semi- and unsupervised evaluation metrics. Some metrics exist for a very specific subset of circumstances, but for the majority of applications the general research problem is still open [18].

References

  • [1] D. Menestrina, S. E. Whang, and H. Garcia-Molina, “Evaluating entity resolution results,” Proceedings of the VLDB Endowment, vol. 3, no. 1-2, pp. 208–219, 2010.
  • [2] M. Michelson and S. A. Macskassy, “Record linkage measures in an entity centric world,” in Proceedings of the 4th workshop on Evaluation Methods for Machine Learning, 2009.
  • [3] P. Christen and K. Goiser, “Quality and complexity measures for data linkage and deduplication,” in Quality Measures in Data Mining, pp. 127–151, Springer, 2007.
  • [4] L. Getoor and A. Machanavajjhala, “Entity resolution: theory, practice & open challenges,” Proceedings of the VLDB Endowment, vol. 5, no. 12, pp. 2018–2019, 2012.
  • [5] J. Huang, S. Ertekin, and C. L. Giles, “Efficient name disambiguation for large-scale databases,” in Knowledge Discovery in Databases: PKDD 2006, pp. 536–544, Springer, 2006.
  • [6] B. Wellner, A. McCallum, F. Peng, and M. Hay, “An integrated, conditional model of information extraction and coreference with application to citation matching,” in Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence, pp. 593–601, AUAI Press, 2004.
  • [7] O. Benjelloun, H. Garcia-Molina, D. Menestrina, Q. Su, S. E. Whang, and J. Widom, “Swoosh: a generic approach to entity resolution,” The VLDB Journal—The International Journal on Very Large Data Bases, vol. 18, no. 1, pp. 255–276, 2009.
  • [8] A. Solomonoff, A. Mielke, M. Schmidt, and H. Gish, “Clustering speakers by their voices,” in Acoustics, Speech and Signal Processing, 1998. Proceedings of the 1998 IEEE International Conference on, vol. 2, pp. 757–760, IEEE, 1998.
  • [9] J. Ajmera, H. Bourlard, and I. Lapidot, “Improved unknown-multiple speaker clustering using HMM,” tech. rep., 2002.
  • [10] C. D. Manning, P. Raghavan, and H. Schütze, Introduction to Information Retrieval, vol. 1. Cambridge University Press, 2008.
  • [11] A. Rosenberg and J. Hirschberg, “V-Measure: A Conditional Entropy-Based External Cluster Evaluation Measure.,” in EMNLP-CoNLL, vol. 7, pp. 410–420, Citeseer, 2007.
  • [12] H. Maidasani, G. Namata, B. Huang, and L. Getoor, “Entity Resolution Evaluation Measures,” 2012.
  • [13] M. Vilain, J. Burger, J. Aberdeen, D. Connolly, and L. Hirschman, “A model-theoretic coreference scoring scheme,” in Proceedings of the 6th conference on Message understanding, pp. 45–52, Association for Computational Linguistics, 1995.
  • [14] A. Bagga and B. Baldwin, “Algorithms for scoring coreference chains,” in The first international conference on language resources and evaluation workshop on linguistics coreference, vol. 1, pp. 563–566, Citeseer, 1998.
  • [15] X. Luo, “On coreference resolution performance metrics,” in Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, pp. 25–32, Association for Computational Linguistics, 2005.
  • [16] M. Meilă, “Comparing clusterings by the variation of information,” in Learning theory and kernel machines, pp. 173–187, Springer, 2003.
  • [17] M. Hosszú, “On the functional equation F(x+y,z)+F(x,y)=F(x,y+z)+F(y,z),” Periodica Mathematica Hungarica, vol. 1, no. 3, pp. 213–216, 1971.
  • [18] W. E. Winkler, “Overview of record linkage and current research directions,” in Bureau of the Census, Citeseer, 2006.