Same Same, But Different: Conditional Multi-Task Learning for Demographic-Specific Toxicity Detection

02/14/2023
by Soumyajit Gupta, et al.

Algorithmic bias often arises from differential subgroup validity, in which predictive relationships vary across groups. In toxic language detection, for example, the markers of toxicity can differ markedly depending on which demographic group a comment targets. In such settings, trained models tend to be dominated by the relationships that best fit the majority group, leading to disparate performance. We propose framing toxicity detection as multi-task learning (MTL), allowing a model to specialize on the relationships relevant to each demographic group while also leveraging properties shared across groups; here, each task corresponds to identifying toxicity against a particular demographic group. However, traditional MTL requires labels for all tasks to be present for every data point. To address this, we propose Conditional MTL (CondMTL), in which the loss function considers only the training examples relevant to the given demographic group. This lets us learn group-specific representations in each branch that are not cross-contaminated by irrelevant labels. Results on synthetic and real data show that CondMTL improves predictive recall over various baselines, both in general and for the minority demographic group in particular, while maintaining similar overall accuracy.
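The conditional-loss idea can be sketched as a masked multi-task objective: each group-specific head receives gradient only from examples whose target demographic matches that head. The function below is an illustrative NumPy sketch under that reading of the abstract, not the authors' actual implementation; the function name and the one-binary-head-per-group setup are assumptions.

```python
import numpy as np

def condmtl_loss(logits, labels, group_ids, num_groups):
    """Illustrative CondMTL-style loss: head g is trained only on
    examples targeting group g, so branches are not cross-contaminated
    by labels that are irrelevant to their demographic group."""
    # logits: (batch, num_groups), one binary-toxicity score per group head
    # labels: (batch,) 0/1 toxicity labels
    # group_ids: (batch,) index of the demographic group each comment targets
    probs = 1.0 / (1.0 + np.exp(-logits))          # sigmoid per head
    y = labels[:, None].astype(float)              # broadcast labels to all heads
    bce = -(y * np.log(probs) + (1 - y) * np.log(1 - probs))
    mask = np.eye(num_groups)[group_ids]           # 1 only at each example's own head
    # Average the per-head losses over relevant (example, head) pairs only
    return (bce * mask).sum() / max(mask.sum(), 1.0)
```

In contrast, a standard MTL loss would average `bce` over all heads for every example, which requires a label for every task on every data point.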


Related research

Generalizing Fairness using Multi-Task Learning without Demographic Information (05/22/2023)
Centering the Margins: Outlier-Based Identification of Harmed Populations in Toxicity Detection (05/24/2023)
Reinforcement Guided Multi-Task Learning Framework for Low-Resource Stereotype Detection (03/27/2022)
When the Majority is Wrong: Leveraging Annotator Disagreement for Subjective Tasks (05/11/2023)
Fair Bayes-Optimal Classifiers Under Predictive Parity (05/15/2022)
Multi-Task Networks With Universe, Group, and Task Feature Learning (07/03/2019)
Behind the Mask: Demographic bias in name detection for PII masking (05/09/2022)
