CEREAL: Few-Sample Clustering Evaluation

09/30/2022
by Nihal V. Nayak, et al.

Evaluating clustering quality with reliable metrics such as normalized mutual information (NMI) requires labeled data, which can be expensive to annotate. We focus on the underexplored problem of estimating clustering quality with limited labels. We adapt existing approaches from the few-sample model evaluation literature, which use a learned surrogate model to actively sub-sample the most informative data points for annotation and estimate the evaluation metric. However, we find that their estimates can be biased and rely only on the labeled data. To address these limitations, we introduce CEREAL, a comprehensive framework for few-sample clustering evaluation that extends active sampling approaches in three key ways. First, we propose novel NMI-based acquisition functions that account for the distinctive properties of clustering and the uncertainties of a learned surrogate model. Next, we use ideas from semi-supervised learning and train the surrogate model on both the labeled and unlabeled data. Finally, we pseudo-label the unlabeled data with the surrogate model. We run experiments estimating NMI in an active sampling pipeline on three datasets across vision and language. Our results show that CEREAL reduces the area under the absolute error curve by up to 57% compared to the best sampling baseline. We perform an extensive ablation study to show that our framework is agnostic to the choice of clustering algorithm and evaluation metric. We also extend CEREAL from clusterwise annotations to pairwise annotations. Overall, CEREAL can efficiently evaluate clustering with limited human annotations.
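
To make the few-sample estimation loop concrete, below is a minimal, self-contained sketch in Python with scikit-learn. It is not the authors' implementation: it assumes K-means as the clustering algorithm, a plain supervised logistic-regression surrogate in place of CEREAL's semi-supervised training, and a generic predictive-entropy acquisition function as a stand-in for the NMI-based acquisition functions proposed in the paper. The step it illustrates most directly is pseudo-labeling, where the surrogate fills in labels for the unannotated pool before NMI is computed.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import normalized_mutual_info_score

rng = np.random.default_rng(0)

# Toy data with hidden ground-truth classes; in practice these labels are
# unknown and must be purchased from annotators a few points at a time.
X, y_true = make_blobs(n_samples=1000, centers=5, cluster_std=2.0, random_state=0)
cluster_ids = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)

labeled = list(rng.choice(len(X), size=20, replace=False))  # small seed set
budget, batch_size = 200, 20

while len(labeled) < budget:
    unlabeled = np.setdiff1d(np.arange(len(X)), labeled)

    # Surrogate model predicts the (unknown) ground-truth label from features.
    # CEREAL trains it semi-supervised; a plain supervised fit is used here.
    surrogate = LogisticRegression(max_iter=1000).fit(X[labeled], y_true[labeled])

    # Pseudo-label the unlabeled pool so the NMI estimate uses every point,
    # not just the annotated ones.
    y_est = y_true.copy()  # labeled entries keep their annotator-provided labels
    y_est[unlabeled] = surrogate.predict(X[unlabeled])
    nmi_estimate = normalized_mutual_info_score(y_est, cluster_ids)
    print(f"{len(labeled):4d} labels -> estimated NMI = {nmi_estimate:.3f}")

    # Acquisition: query the points whose label the surrogate is least sure
    # about (maximum predictive entropy), a simplified stand-in for the
    # paper's NMI-based acquisition functions.
    probs = surrogate.predict_proba(X[unlabeled])
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    query = unlabeled[np.argsort(-entropy)[:batch_size]]
    labeled.extend(query.tolist())  # "annotate" the queried points

print("true NMI =", round(normalized_mutual_info_score(y_true, cluster_ids), 3))
```

The last line computes the true NMI on the fully labeled toy data, so the printed estimates at each budget can be compared against it; in a real deployment that reference value is exactly what is unavailable.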


