HappyMap: A Generalized Multi-calibration Method
Multi-calibration is a powerful and evolving concept originating in the field of algorithmic fairness. For a predictor f that estimates the outcome y given covariates x, and for a function class $\mathcal{C}$, multi-calibration requires that the predictor f(x) and outcome y are indistinguishable under the class of auditors in $\mathcal{C}$. Fairness is captured by incorporating demographic subgroups into the class of functions $\mathcal{C}$. Recent work has shown that, by enriching the class $\mathcal{C}$ to incorporate appropriate propensity re-weighting functions, multi-calibration also yields target-independent learning, wherein a model trained on a source domain performs well on unseen, future target domains (approximately) captured by the re-weightings. Formally, multi-calibration with respect to $\mathcal{C}$ bounds $\left|\mathbb{E}_{(x,y)\sim\mathcal{D}}\left[c(f(x),x)\cdot(f(x)-y)\right]\right|$ for all $c \in \mathcal{C}$. In this work, we view the term (f(x)-y) as just one specific mapping, and explore the power of an enriched class of mappings. We propose HappyMap, a generalization of multi-calibration, which yields a wide range of new applications, including a new fairness notion for uncertainty quantification (conformal prediction), a novel technique for conformal prediction under covariate shift, and a different approach to analyzing missing data, while also yielding a unified understanding of several existing, seemingly disparate algorithmic fairness notions and target-independent learning approaches. We give a single HappyMap meta-algorithm that captures all these results, together with a sufficient condition for its success.
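To make the boosting-style idea behind the condition concrete, below is a minimal illustrative sketch (not the authors' exact algorithm) of a HappyMap-style update loop: it replaces the residual (f(x)-y) with a general mapping s(f(x),y), repeatedly searches a finite stand-in for the auditor class $\mathcal{C}$ for a function whose empirical correlation with s exceeds a tolerance, and takes a small corrective step. The function name happymap, the step size eta, the tolerance alpha, and the finite auditors list are assumptions made for illustration only.

```python
import numpy as np

def happymap(f, x, y, auditors, s, eta=0.1, alpha=0.01, max_iter=1000):
    """Sketch of a HappyMap-style meta-algorithm (illustrative, not the paper's exact procedure).

    f        : initial predictions f(x_i), values in [0, 1]
    x, y     : covariates and outcomes of an audit sample
    auditors : finite list of functions c(v, x) standing in for the class C
    s        : mapping s(v, y); choosing s(v, y) = v - y recovers multi-calibration
    """
    f = np.clip(np.asarray(f, dtype=float), 0.0, 1.0)
    for _ in range(max_iter):
        updated = False
        for c in auditors:
            cv = c(f, x)
            corr = np.mean(cv * s(f, y))      # empirical E[c(f(x), x) * s(f(x), y)]
            if abs(corr) > alpha:             # this auditor detects a violation
                # take a small corrective step in the direction that shrinks the correlation
                f = np.clip(f - eta * np.sign(corr) * cv, 0.0, 1.0)
                updated = True
                break
        if not updated:                       # no auditor fires: approximately (C, s)-calibrated
            return f
    return f

# Usage example: two group-indicator auditors and the classical mapping s(v, y) = v - y.
rng = np.random.default_rng(0)
x = rng.uniform(size=(500, 2))
y = (x[:, 0] > 0.5).astype(float)
f0 = np.full(500, 0.5)                        # deliberately miscalibrated start
auditors = [lambda v, x: (x[:, 0] > 0.5).astype(float),
            lambda v, x: (x[:, 0] <= 0.5).astype(float)]
f_cal = happymap(f0, x, y, auditors, s=lambda v, y: v - y)
```

Swapping in a different mapping s (e.g., one tied to conformal prediction scores or propensity re-weightings) is what the abstract's enriched class of mappings refers to; the loop structure itself stays the same.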