Generalized Bayesian Quantification Learning
Quantification Learning is the task of prevalence estimation for a test population using predictions from a classifier trained on a different population. Commonly used quantification methods either assume perfect sensitivity and specificity of the classifier, or use the training data both to train the classifier and to estimate its misclassification rates. These methods are inappropriate in the presence of dataset shift, when the misclassification rates in the training population are not representative of those in the test population. A recent Bayesian quantification model addresses dataset shift, but only allows for single-class (categorical) predictions and assumes perfect knowledge of the true labels on a small number of instances from the test population. We propose a generalized Bayesian quantification learning (GBQL) approach that uses the entire compositional predictions from probabilistic classifiers and allows for uncertainty in the true class labels of the limited labeled test data. We use a model-free Bayesian estimating equation approach to compositional data, with Kullback-Leibler loss functions based only on a first-moment assumption. This estimating equation approach coherently links the loss functions for labeled and unlabeled test cases. We show how our method yields existing quantification approaches as special cases through different prior choices, thereby providing an inferential framework around these approaches. We also discuss an extension to an ensemble GBQL that uses predictions from multiple classifiers, yielding inference that is robust to the inclusion of a poor classifier. We outline a fast and efficient Gibbs sampler using a rounding and coarsening approximation to the loss functions. For large sample settings, we establish posterior consistency of GBQL. Empirical performance of GBQL is demonstrated through simulations and analysis of real data with evident dataset shift.
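To make the setting concrete, the following is a minimal sketch (not taken from the paper) of the quantification task and of the standard moment-based misclassification-rate correction that the abstract contrasts with GBQL. All quantities here (class counts, the misclassification matrix `M`, the sample size) are hypothetical illustration values.

```python
import numpy as np

# Illustrative quantification setting: a classifier produces compositional
# (probabilistic) predictions for an unlabeled test population, and the goal
# is to estimate the class prevalences in that population, not to label
# individual instances.

rng = np.random.default_rng(0)

K = 3                                          # number of classes (hypothetical)
true_prevalence = np.array([0.6, 0.3, 0.1])    # unknown target of inference

# Misclassification matrix: M[j, k] = E[classifier score for class k | true class j].
# Naive "classify and count" implicitly assumes M is the identity matrix.
M = np.array([[0.80, 0.15, 0.05],
              [0.10, 0.75, 0.15],
              [0.05, 0.20, 0.75]])

n = 5000
true_labels = rng.choice(K, size=n, p=true_prevalence)
# Simulated compositional predictions whose mean for class j is row M[j].
scores = np.vstack([rng.dirichlet(20 * M[j]) for j in true_labels])

# 1) Classify-and-count: biased whenever M departs from the identity.
cc_estimate = np.bincount(scores.argmax(axis=1), minlength=K) / n

# 2) Moment-based adjustment: solve mean(scores) = p @ M for the prevalence p,
#    analogous to the misclassification-rate corrections the abstract refers to.
adjusted = np.linalg.solve(M.T, scores.mean(axis=0))
adjusted = np.clip(adjusted, 0, None)
adjusted /= adjusted.sum()

print("classify-and-count:", np.round(cc_estimate, 3))
print("moment-adjusted   :", np.round(adjusted, 3))
print("true prevalence   :", true_prevalence)
```

Under dataset shift, the matrix `M` estimated from training data need not hold in the test population; GBQL instead learns the relationship between scores and true labels from the limited labeled test data, with uncertainty propagated through the Bayesian estimating-equation framework.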