HappyMap: A Generalized Multi-calibration Method

03/08/2023 ∙ by Zhun Deng, et al.
Multi-calibration is a powerful and evolving concept originating in the field of algorithmic fairness. For a predictor f that estimates the outcome y given covariates x, and for a function class 𝒞, multi-calibration requires that the predictor f(x) and outcome y are indistinguishable under the class of auditors in 𝒞. Fairness is captured by incorporating demographic subgroups into the class of functions 𝒞. Recent work has shown that, by enriching the class 𝒞 to incorporate appropriate propensity re-weighting functions, multi-calibration also yields target-independent learning, wherein a model trained on a source domain performs well on unseen, future target domains (approximately) captured by the re-weightings. Formally, multi-calibration with respect to 𝒞 bounds |𝔼_(x,y)∼𝒟[c(f(x),x)·(f(x)-y)]| for all c ∈ 𝒞. In this work, we view the term (f(x)-y) as just one specific mapping, and explore the power of an enriched class of mappings. We propose HappyMap, a generalization of multi-calibration, which yields a wide range of new applications, including a new fairness notion for uncertainty quantification (conformal prediction), a novel technique for conformal prediction under covariate shift, and a different approach to analyzing missing data, while also yielding a unified understanding of several existing, seemingly disparate algorithmic fairness notions and target-independent learning approaches. We give a single HappyMap meta-algorithm that captures all these results, together with a sufficiency condition for its success.
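The displayed bound suggests an audit-and-patch reading of multi-calibration and of its HappyMap generalization, in which the residual term (f(x)-y) is replaced by a general mapping s(f(x), y). The Python sketch below illustrates that reading only; the update rule, step size, clipping to [0,1], and the assumed monotonicity of s in f are illustrative choices of ours, not the paper's actual meta-algorithm.

```python
import numpy as np

def happymap_patch(f_vals, x, y, auditors, s, eta=0.05, tol=1e-3, max_iter=200):
    """Audit-and-patch loop in the spirit of a HappyMap-style meta-algorithm (sketch).

    f_vals   : current predictions f(x) on a sample, shape (n,), assumed in [0, 1]
    x, y     : covariates, shape (n, d), and outcomes, shape (n,)
    auditors : list of functions c(f_vals, x) -> shape (n,), playing the role of the class C
    s        : mapping s(f_vals, y) -> shape (n,); s(f, y) = f - y recovers
               ordinary multi-calibration
    """
    f_vals = f_vals.copy()
    for _ in range(max_iter):
        # Estimate E[c(f(x), x) * s(f(x), y)] for every auditor in the class.
        viols = np.array([np.mean(c(f_vals, x) * s(f_vals, y)) for c in auditors])
        i = int(np.argmax(np.abs(viols)))
        if abs(viols[i]) <= tol:
            break  # no auditor detects a correlation above tolerance
        # Patch the predictions against the detected correlation. This update
        # direction implicitly assumes s is monotone increasing in f, an
        # assumption in the spirit of the paper's sufficiency condition.
        f_vals = np.clip(
            f_vals - eta * np.sign(viols[i]) * auditors[i](f_vals, x), 0.0, 1.0
        )
    return f_vals


# Example: standard multi-calibration over two demographic groups encoded in x[:, 0],
# recovered by the choice s(f, y) = f - y with group-indicator auditors.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 2000
    x = rng.integers(0, 2, size=(n, 1)).astype(float)   # group indicator
    y = rng.binomial(1, 0.3 + 0.4 * x[:, 0])             # group-dependent outcome
    f0 = np.full(n, 0.5)                                  # uninformative initial predictor
    auditors = [lambda f, x: (x[:, 0] == 0).astype(float),
                lambda f, x: (x[:, 0] == 1).astype(float)]
    s = lambda f, y: f - y
    f1 = happymap_patch(f0, x, y, auditors, s)
```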
