Factorizable Joint Shift in Multinomial Classification

07/29/2022
by Dirk Tasche

Factorizable joint shift (FJS) was recently proposed as a type of dataset shift whose characteristics can be fully estimated from feature observations on the test dataset by a method called Joint Importance Aligning. For the multinomial (multiclass) classification setting, we derive a representation of factorizable joint shift in terms of the source (training) distribution, the target (test) prior class probabilities, and the target marginal distribution of the features. On the basis of this result, we propose alternatives to Joint Importance Aligning and, at the same time, point out that factorizable joint shift is not fully identifiable if no class label information on the test dataset is available and no additional assumptions are made. Further results include correction formulae for the posterior class probabilities under both general dataset shift and factorizable joint shift. In addition, we investigate the consequences of assuming factorizable joint shift for the bias caused by sample selection.
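Under FJS, the ratio of target to source joint densities factorizes into a feature factor g(x) and a class factor h(y), i.e. p_t(x, y) / p_s(x, y) = g(x) h(y). A useful consequence is that the target posterior class probabilities follow from the source posteriors by a class-wise reweighting with h(y), in which g(x) cancels. The Python sketch below illustrates this correction under the stated factorization assumption; the factors h are taken as given inputs, since, as noted above, they are not identifiable from unlabelled test features alone. This is an illustrative sketch, not the paper's estimation procedure.

```python
import numpy as np

def fjs_posterior_correction(source_posteriors: np.ndarray,
                             h: np.ndarray) -> np.ndarray:
    """Correct source posteriors P_s(y | x) into target posteriors P_t(y | x).

    Under the FJS factorization p_t(x, y) / p_s(x, y) = g(x) h(y), one gets
        P_t(y | x) = h(y) P_s(y | x) / sum_j h(j) P_s(j | x),
    so the feature factor g(x) drops out.

    source_posteriors: array of shape (n_samples, n_classes), rows sum to 1.
    h: assumed class-wise FJS factors of shape (n_classes,).
    """
    unnormalized = source_posteriors * h          # h(y) * P_s(y | x)
    return unnormalized / unnormalized.sum(axis=1, keepdims=True)

# Example: three classes, with factors up-weighting class 3 on the target data.
p_s = np.array([[0.7, 0.2, 0.1],
                [0.3, 0.4, 0.3]])
h = np.array([0.8, 1.0, 1.5])
print(fjs_posterior_correction(p_s, h))
```

Note that this has the same form as the familiar posterior correction under prior probability (label) shift, but with h(y) in place of the ratio of target to source class priors; the two coincide only in special cases.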


