Sample Efficient Model Evaluation

09/24/2021
by Emine Yilmaz, et al.

Labelling data is a major practical bottleneck in training and testing classifiers. Given a collection of unlabelled data points, we address how to select which subset to label so as to best estimate test metrics such as accuracy, F_1 score, or micro/macro F_1. We consider two sampling-based approaches: the well-known Importance Sampling and a novel application of Poisson Sampling. For both approaches we derive the minimal-error sampling distributions and show how to approximate and use them to form estimators and confidence intervals. We show that Poisson Sampling outperforms Importance Sampling both theoretically and experimentally.
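
To make the two estimators concrete, here is a minimal sketch (not the paper's method) of estimating accuracy from a small labelled subset via importance sampling and via Poisson sampling with a Horvitz-Thompson estimator. The uniform proposal and inclusion probabilities, the budget `n`, and the synthetic data are assumptions for illustration; the paper instead derives minimal-error sampling distributions, which are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic pool: model predictions and (hidden) true labels for N points.
N = 10_000
true_labels = rng.integers(0, 2, size=N)
preds = np.where(rng.random(N) < 0.8, true_labels, 1 - true_labels)  # ~80% accurate model
z = (preds == true_labels).astype(float)   # per-point metric (accuracy indicator)
true_accuracy = z.mean()

# Proposal/inclusion probabilities: uniform here as a stand-in for the
# minimal-error distributions derived in the paper.
q = np.full(N, 1.0 / N)       # importance sampling proposal
n = 200                        # labelling budget
pi = np.full(N, n / N)         # Poisson inclusion probabilities, expected budget n

# Importance sampling: draw n indices i.i.d. from q, label them,
# and reweight each observation by 1 / (N * q_i).
idx = rng.choice(N, size=n, p=q)
is_estimate = np.mean(z[idx] / (N * q[idx]))

# Poisson sampling: include point i independently with probability pi_i,
# then use the Horvitz-Thompson estimator (1/N) * sum_{i in S} z_i / pi_i.
included = rng.random(N) < pi
ht_estimate = np.sum(z[included] / pi[included]) / N

print(f"true accuracy                    : {true_accuracy:.4f}")
print(f"importance sampling estimate     : {is_estimate:.4f}")
print(f"Poisson sampling (HT) estimate   : {ht_estimate:.4f}")
```

Both estimators are unbiased for the pool accuracy; the choice of q and pi controls their variance, which is what the minimal-error sampling distributions in the paper optimise.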
