Simple and near-optimal algorithms for hidden stratification and multi-group learning

12/22/2021
by Christopher Tosh, et al.

Multi-group agnostic learning is a formal learning criterion concerned with the conditional risks of predictors within subgroups of a population. The criterion addresses recent practical concerns such as subgroup fairness and hidden stratification. This paper studies the structure of solutions to the multi-group learning problem and provides simple, near-optimal algorithms for it.
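For context, the multi-group agnostic learning criterion is commonly formalized roughly as follows (a sketch in standard notation, not quoted from the paper): given a hypothesis class $\mathcal{H}$, a collection of groups $\mathcal{G}$, and a loss $\ell$, a predictor $f$ is an $\epsilon$-approximate multi-group agnostic learner if

\[
\mathbb{E}\left[\ell(f(X), Y) \mid X \in g\right] \;\le\; \min_{h \in \mathcal{H}} \mathbb{E}\left[\ell(h(X), Y) \mid X \in g\right] + \epsilon \qquad \text{for every } g \in \mathcal{G},
\]

that is, the conditional risk of $f$ on each group is within $\epsilon$ of the best risk achievable by $\mathcal{H}$ on that group.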


