Fairly Accurate: Learning Optimal Accuracy vs. Fairness Tradeoffs for Hate Speech Detection

04/15/2022
by Venelin Kovatchev, et al.

Recent work has emphasized the importance of balancing competing objectives in model training (e.g., accuracy vs. fairness, or competing measures of fairness). Such trade-offs reflect a broader class of multi-objective optimization (MOO) problems, in which optimization methods seek Pareto optimal trade-offs between competing goals. In this work, we first introduce a differentiable measure that enables direct optimization of group fairness (specifically, balancing accuracy across groups) during model training. Next, we demonstrate two model-agnostic MOO frameworks for learning Pareto optimal parameterizations over different groups of neural classification models. We evaluate our methods on the task of hate speech detection, where prior work has shown a lack of group fairness across speakers of different English dialects. Empirical results across convolutional, sequential, and transformer-based neural architectures show better accuracy vs. fairness trade-offs than prior work. More significantly, our measure enables the Pareto machinery to ensure that each architecture achieves the best possible trade-off between fairness and accuracy with respect to the dataset, given user-prescribed error tolerance bounds.
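The paper's exact differentiable fairness measure is defined in the full text; as a minimal sketch of the general idea only (not the authors' formulation), one can make "accuracy balanced across groups" differentiable by replacing hard 0/1 accuracy with the predicted probability of the true class, then penalizing the variance of that soft accuracy across groups. All function names and the variance-based penalty below are illustrative assumptions:

```python
import numpy as np

def soft_group_accuracy(probs, labels, groups):
    """Per-group 'soft accuracy': mean predicted probability assigned to the
    true class. Using probabilities instead of a hard argmax keeps the
    quantity differentiable, so it can be optimized directly."""
    accs = {}
    for g in np.unique(groups):
        mask = groups == g
        # probability of the true class for each example in group g
        p_true = np.where(labels[mask] == 1, probs[mask], 1.0 - probs[mask])
        accs[g] = p_true.mean()
    return accs

def fairness_penalty(probs, labels, groups):
    """Illustrative group-fairness penalty: variance of soft accuracy across
    groups. It is zero exactly when every group is classified equally well."""
    accs = np.array(list(soft_group_accuracy(probs, labels, groups).values()))
    return accs.var()

def combined_loss(probs, labels, groups, lam=1.0):
    """Scalarized two-objective loss: binary cross-entropy (the accuracy
    objective) plus lam times the fairness penalty. Sweeping lam traces out
    candidate points on the accuracy vs. fairness trade-off curve."""
    eps = 1e-9
    p_true = np.where(labels == 1, probs, 1.0 - probs)
    ce = -np.log(p_true + eps).mean()
    return ce + lam * fairness_penalty(probs, labels, groups)
```

In an actual training loop these operations would run on autograd tensors so gradients flow through the penalty; the linear scalarization here is only the simplest way to combine objectives, whereas the paper's MOO frameworks navigate the Pareto front under user-prescribed error tolerance bounds.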


Related research:

- 08/03/2020: Accuracy and Fairness Trade-offs in Machine Learning: A Stochastic Multi-Objective Approach
- 10/02/2021: Consider the Alternatives: Navigating Fairness-Accuracy Tradeoffs via Disqualification
- 03/15/2020: Balancing Competing Objectives with Noisy Data: Score-Based Classifiers for Welfare-Aware Machine Learning
- 02/24/2022: Trade-offs between Group Fairness Metrics in Societal Resource Allocation
- 10/04/2021: An Empirical Investigation of Learning from Biased Toxicity Labels
- 09/10/2020: Prune Responsibly
- 04/22/2022: Mostra: A Flexible Balancing Framework to Trade-off User, Artist and Platform Objectives for Music Sequencing
