The Fairness-Accuracy Pareto Front

08/25/2020
by Susan Wei, et al.

Mitigating bias in machine learning is challenging, due in large part to the presence of competing objectives: a fair algorithm often comes at the cost of lower predictive accuracy, and conversely, a highly predictive algorithm may incur high bias. This work presents a methodology for estimating the fairness-accuracy Pareto front of a fully-connected feedforward neural network, for any accuracy measure and any fairness measure. Our experiments first reveal that, for training data already exhibiting disparities, a newly introduced causal notion of fairness may be capable of traversing a greater part of the fairness-accuracy space than more standard measures such as demographic parity and conditional parity. The experiments also reveal that tools from multi-objective optimisation are crucial for estimating the Pareto front efficiently (i.e., for finding more non-dominated points), relative to other sensible but ad-hoc approaches. Finally, the work highlights a possible synergy between deep learning and multi-objective optimisation. Given that deep learning is increasingly deployed in real-world decision making, the Pareto front provides a formal way to reason about these inherent conflicts.
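To illustrate what "finding non-dominated points" means in this setting, here is a minimal Python sketch (not the paper's estimation procedure, which trains a neural network via multi-objective optimisation) that filters a set of hypothetical (accuracy, fairness) pairs down to their Pareto front. It assumes both objectives are scores to be maximised; the candidate values below are made up purely for illustration.

import numpy as np

def pareto_front(points):
    """Return the non-dominated subset of (accuracy, fairness) pairs.

    Both objectives are treated as 'higher is better'; a point is dominated
    if some other point is at least as good on both objectives and strictly
    better on at least one.
    """
    points = np.asarray(points, dtype=float)
    keep = np.ones(len(points), dtype=bool)
    for i, p in enumerate(points):
        # A point q dominates p if q >= p elementwise and q > p somewhere.
        dominates_p = np.all(points >= p, axis=1) & np.any(points > p, axis=1)
        if dominates_p.any():
            keep[i] = False
    return points[keep]

if __name__ == "__main__":
    # Hypothetical (accuracy, fairness-score) pairs from separately trained models.
    candidates = [(0.91, 0.60), (0.88, 0.75), (0.85, 0.74), (0.80, 0.90)]
    print(pareto_front(candidates))
    # (0.85, 0.74) is dominated by (0.88, 0.75); the other three are non-dominated.

A multi-objective optimiser aims to cover this front as densely as possible, whereas ad-hoc approaches (e.g., sweeping a single trade-off weight) may leave parts of it unexplored.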
