Fisher consistency for prior probability shift

01/19/2017
by Dirk Tasche, et al.

We introduce Fisher consistency in the sense of unbiasedness as a desirable property for estimators of class prior probabilities. Lack of Fisher consistency can be used as a criterion to dismiss estimators that are unlikely to deliver precise estimates on test datasets under prior probability shift and more general dataset shift. The usefulness of this unbiasedness concept is demonstrated with three examples of classifier-based estimators used for quantification: Adjusted Classify & Count, the EM-algorithm and CDE-Iterate. We find that Adjusted Classify & Count and the EM-algorithm are Fisher consistent. A counter-example shows that CDE-Iterate is not Fisher consistent and, therefore, cannot be trusted to deliver reliable estimates of class probabilities.
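To make the two Fisher-consistent estimators named in the abstract concrete, here is a minimal sketch of Adjusted Classify & Count (ACC) and of the EM-style prior re-estimation of Saerens, Latinne and Decaestecker. The function names, the example scores and the rates `tpr = 0.9`, `fpr = 0.2` are illustrative assumptions, not values from the paper.

```python
import numpy as np

def adjusted_classify_and_count(test_scores, threshold, tpr, fpr):
    """Adjusted Classify & Count estimate of the positive-class prior.

    Corrects the raw Classify & Count rate using the classifier's
    true-positive and false-positive rates measured on training data.
    """
    # Raw Classify & Count: fraction of test instances predicted positive.
    pcc = np.mean(test_scores >= threshold)
    # Invert pcc = p * tpr + (1 - p) * fpr for the prior p.
    p = (pcc - fpr) / (tpr - fpr)
    # Clip to a valid probability.
    return float(np.clip(p, 0.0, 1.0))

def em_prior_estimate(posteriors, train_prior, n_iter=100):
    """EM-style re-estimation of the test-set positive-class prior.

    posteriors: training-calibrated P(y=1 | x) for each test instance.
    Each iteration reweights the posteriors to the current prior
    estimate and averages them to get the next estimate.
    """
    p = train_prior
    for _ in range(n_iter):
        num = posteriors * (p / train_prior)
        den = num + (1.0 - posteriors) * ((1.0 - p) / (1.0 - train_prior))
        p = float(np.mean(num / den))
    return p

# Illustrative data (hypothetical classifier scores on ten test instances).
scores = np.array([0.8, 0.7, 0.9, 0.3, 0.1, 0.6, 0.95, 0.2, 0.4, 0.85])

# 6 of 10 scores exceed 0.5, so pcc = 0.6 and
# ACC gives (0.6 - 0.2) / (0.9 - 0.2) = 4/7.
prior_acc = adjusted_classify_and_count(scores, threshold=0.5, tpr=0.9, fpr=0.2)

prior_em = em_prior_estimate(scores, train_prior=0.5)
```

Both estimators correct the naive "count the predicted positives" rate for the classifier's errors; the paper's point is that such corrections are unbiased (Fisher consistent) under prior probability shift, whereas CDE-Iterate is not.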


