Evaluating Classifiers Without Expert Labels

12/05/2012
by Hyun Joon Jung, et al.

This paper considers the challenge of evaluating a set of classifiers, as done in shared task evaluations like the KDD Cup or NIST TREC, without expert labels. While expert labels provide the traditional cornerstone for evaluating statistical learners, limited or expensive access to experts represents a practical bottleneck. Instead, we seek methodology for estimating classifier performance that is more scalable than expert labeling yet preserves high correlation with evaluation based on expert labels. We consider two settings: 1) using only labels automatically generated by the classifiers themselves (blind evaluation); and 2) using labels obtained via crowdsourcing. While crowdsourcing methods are lauded for scalability, using such data for evaluation raises serious concerns given the prevalence of label noise. For blind evaluation, two broad strategies are investigated. Combine & score methods infer a single pseudo-gold label set by aggregating classifier labels and then evaluate each classifier against this pseudo-gold set. Score & combine methods, in contrast, 1) sample multiple label sets from classifier outputs, 2) evaluate classifiers on each label set, and 3) average classifier performance across label sets. When additional crowd labels are also collected, we investigate two alternative avenues for exploiting them: 1) direct evaluation of classifiers; or 2) supervision of combine & score methods. To assess the generality of our techniques, classifier performance is measured using four common classification metrics, with statistical significance tests. Finally, we measure both score and rank correlations between estimated classifier performance and actual performance according to expert judgments. Rigorous evaluation of classifiers from the TREC 2011 Crowdsourcing Track shows that reliable evaluation can be achieved without reliance on expert labels.
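As an illustration only (not the authors' implementation), the sketch below contrasts the two blind-evaluation strategies on hypothetical binary classifier outputs. It assumes majority vote as the aggregation rule for combine & score, a simple per-item random sampling scheme for score & combine, and accuracy as the evaluation metric; the classifier names and data are made up.

```python
import random
from statistics import mean

def accuracy(pred, gold):
    """Fraction of items on which pred agrees with gold."""
    return sum(p == g for p, g in zip(pred, gold)) / len(gold)

def combine_and_score(classifier_labels):
    """Combine & score: aggregate all classifiers' labels into a single
    pseudo-gold set (here, by majority vote; ties broken arbitrarily),
    then score each classifier against that pseudo-gold set."""
    n_items = len(next(iter(classifier_labels.values())))
    pseudo_gold = []
    for i in range(n_items):
        votes = [labels[i] for labels in classifier_labels.values()]
        pseudo_gold.append(max(set(votes), key=votes.count))
    return {name: accuracy(labels, pseudo_gold)
            for name, labels in classifier_labels.items()}

def score_and_combine(classifier_labels, n_samples=50, seed=0):
    """Score & combine: sample multiple label sets from the classifiers'
    outputs (here, each item's label is drawn from a randomly chosen
    classifier), score every classifier on each sampled set, then
    average scores across the sampled sets."""
    rng = random.Random(seed)
    names = list(classifier_labels)
    n_items = len(classifier_labels[names[0]])
    scores = {name: [] for name in names}
    for _ in range(n_samples):
        sampled = [classifier_labels[rng.choice(names)][i] for i in range(n_items)]
        for name in names:
            scores[name].append(accuracy(classifier_labels[name], sampled))
    return {name: mean(vals) for name, vals in scores.items()}

# Toy usage: three hypothetical classifiers labeling ten documents
# as relevant (1) or not relevant (0).
outputs = {
    "clf_a": [1, 0, 1, 1, 0, 1, 0, 0, 1, 1],
    "clf_b": [1, 0, 1, 0, 0, 1, 0, 1, 1, 1],
    "clf_c": [0, 0, 1, 1, 0, 1, 1, 0, 1, 0],
}
print(combine_and_score(outputs))
print(score_and_combine(outputs))
```

In either strategy, the estimated scores induce a ranking of the classifiers, which can then be compared against a ranking derived from expert labels via score and rank correlation, as the paper does for the TREC 2011 Crowdsourcing Track systems.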
