Unleashing Linear Optimizers for Group-Fair Learning and Optimization

04/11/2018
by Daniel Alabi, et al.

Most systems and learning algorithms optimize average performance or average loss, in part because of computational complexity. However, many objectives of practical interest are more complex than average loss alone. This arises, for example, when balancing performance or loss with fairness across people. We prove that, from a computational perspective, optimizing arbitrary objectives that take into account performance over a small number of groups is not significantly harder than optimizing average performance. Our main result is a polynomial-time reduction that uses a linear optimizer to optimize an arbitrary (Lipschitz continuous) function of performance over a (constant) number of possibly overlapping groups. This includes fairness objectives over small numbers of groups. We further point out that other existing notions of fairness, such as individual fairness, can be cast as convex optimization, so more standard convex techniques apply. Beyond learning, our approach applies more generally to multi-objective optimization.
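The flavor of the reduction can be illustrated with a small sketch. This is not the paper's algorithm; it is a hypothetical toy showing how a weighted *linear* (average-loss) optimizer can be reused as a black box to minimize a nonlinear objective of per-group losses, here the maximum group loss over two groups. The grid search over group weights and the least-squares oracle are assumptions made for the illustration.

```python
import numpy as np

# Toy setup: linear regression data with two (here disjoint) groups.
rng = np.random.default_rng(0)
n, d = 200, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + rng.normal(scale=0.1, size=n)
groups = [np.arange(0, 120), np.arange(120, 200)]

def weighted_linear_oracle(alpha):
    """The 'linear optimizer' black box: weighted least squares.

    alpha assigns a weight to each group; each point inherits its
    group's weight (overlapping groups would sum weights per point).
    """
    sw = np.concatenate(
        [np.full(len(g), a / len(g)) for g, a in zip(groups, alpha)]
    )
    W = np.diag(sw)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

def group_losses(w):
    """Mean squared error on each group separately."""
    return np.array([np.mean((X[g] @ w - y[g]) ** 2) for g in groups])

# Minimize a *nonlinear* function of the group losses (here: the max)
# by searching over group-weight vectors; each candidate requires only
# a single call to the linear oracle.
best_w, best_obj = None, np.inf
for a0 in np.linspace(0.01, 0.99, 50):
    w = weighted_linear_oracle(np.array([a0, 1.0 - a0]))
    obj = group_losses(w).max()
    if obj < best_obj:
        best_obj, best_w = obj, w
```

The key point mirrored from the abstract: the outer search only ever invokes the average-loss optimizer on reweighted data, so the hard work stays linear even though the objective (max of group losses) is not.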


