Evaluating Classifiers Without Expert Labels

12/05/2012
by Hyun Joon Jung, et al.

This paper considers the challenge of evaluating a set of classifiers, as done in shared task evaluations like the KDD Cup or NIST TREC, without expert labels. While expert labels provide the traditional cornerstone for evaluating statistical learners, limited or expensive access to experts represents a practical bottleneck. Instead, we seek methodology for estimating classifier performance which is more scalable than expert labeling yet preserves high correlation with evaluation based on expert labels. We consider two approaches: 1) using only labels automatically generated by the classifiers (blind evaluation); and 2) using labels obtained via crowdsourcing. While crowdsourcing methods are lauded for scalability, using such data for evaluation raises serious concerns given the prevalence of label noise. In regard to blind evaluation, two broad strategies are investigated. Combine & score methods infer a single pseudo-gold label set by aggregating classifier labels and then evaluate each classifier against it. Score & combine methods instead: 1) sample multiple label sets from classifier outputs, 2) evaluate classifiers on each label set, and 3) average classifier performance across label sets. When additional crowd labels are also collected, we investigate two alternative avenues for exploiting them: 1) direct evaluation of classifiers; or 2) supervision of combine & score methods. To assess the generality of our techniques, classifier performance is measured using four common classification metrics, with statistical significance tests. Finally, we measure both score and rank correlations between estimated classifier performance and actual performance according to expert judgments. Rigorous evaluation of classifiers from the TREC 2011 Crowdsourcing Track shows that reliable evaluation can be achieved without reliance on expert labels.
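To make the two blind-evaluation strategies concrete, below is a minimal Python sketch: combine & score builds a majority-vote pseudo-gold set and scores every classifier against it, while score & combine evaluates classifiers against many sampled label sets and averages the results, with Kendall's tau standing in for the rank-correlation check against expert labels. The synthetic data, the majority-vote aggregator, the per-item sampling scheme, and all variable names are illustrative assumptions, not the paper's exact procedures.

```python
import numpy as np
from scipy.stats import kendalltau

# Illustrative setup: rows = items, columns = classifiers, binary labels.
rng = np.random.default_rng(0)
n_items, n_classifiers = 1000, 5
classifier_labels = rng.integers(0, 2, size=(n_items, n_classifiers))

def accuracy(pred, gold):
    """One of the four metrics could be plugged in here; accuracy is just the simplest."""
    return float(np.mean(pred == gold))

# Combine & score: aggregate classifier labels (here: majority vote) into a
# single pseudo-gold set, then evaluate each classifier against it.
pseudo_gold = (classifier_labels.mean(axis=1) >= 0.5).astype(int)
combine_and_score = [accuracy(classifier_labels[:, j], pseudo_gold)
                     for j in range(n_classifiers)]

# Score & combine: sample multiple label sets from classifier outputs,
# score every classifier on each sampled set, then average the scores.
# Sampling scheme here (each item's label drawn from a random classifier)
# is one illustrative choice.
n_samples = 100
scores = np.zeros((n_samples, n_classifiers))
for s in range(n_samples):
    picks = rng.integers(0, n_classifiers, size=n_items)
    sampled = classifier_labels[np.arange(n_items), picks]
    scores[s] = [accuracy(classifier_labels[:, j], sampled)
                 for j in range(n_classifiers)]
score_and_combine = scores.mean(axis=0)

# Given expert labels, rank correlation (e.g. Kendall's tau) measures how
# well an estimate reproduces the true ordering of classifiers.
expert_gold = rng.integers(0, 2, size=n_items)  # stand-in for real expert labels
true_scores = [accuracy(classifier_labels[:, j], expert_gold)
               for j in range(n_classifiers)]
tau, _ = kendalltau(combine_and_score, true_scores)
print(f"rank correlation (combine & score vs. expert): tau = {tau:.2f}")
```

The paper reports results under four classification metrics and both score and rank correlations; swapping `accuracy` for another metric function changes nothing else in this sketch.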

