Efficiently Mitigating Classification Bias via Transfer Learning

10/24/2020
by Xisen Jin, et al.

Prediction bias in machine learning models refers to unintended model behaviors that discriminate against inputs mentioning or produced by certain groups; for example, hate speech classifiers predict more false positives for neutral text mentioning specific social groups. Mitigating bias separately for each task or domain is inefficient, as it requires repetitive model training, data annotation (e.g., of demographic information), and evaluation. In pursuit of a more accessible solution, we propose the Upstream Bias Mitigation for Downstream Fine-Tuning (UBM) framework, which mitigates one or multiple bias factors in downstream classifiers via transfer learning from an upstream model. In the upstream bias mitigation stage, explanation regularization and adversarial training are applied to mitigate multiple bias factors. In the downstream fine-tuning stage, the classifier layer of the model is re-initialized, and the entire model is fine-tuned on downstream tasks in potentially novel domains without any further bias mitigation. We expect downstream classifiers to be less biased as a result of transfer learning from de-biased upstream models. We conduct extensive experiments varying the similarity between the source and target data, as well as the number of dimensions of bias (e.g., discrimination against specific social groups or dialects). Our results indicate that the proposed UBM framework can effectively reduce bias in downstream classifiers.
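The downstream fine-tuning stage described above can be sketched in a few lines. This is a minimal, hypothetical illustration (not the authors' implementation): it assumes a model is represented as a dict of named parameter lists, where "encoder.*" entries carry the de-biased upstream weights and "classifier.*" is the task head that gets re-initialized before fine-tuning.

```python
import random

def init_params(n, seed):
    """Hypothetical helper: a small random parameter vector."""
    rng = random.Random(seed)
    return [rng.uniform(-0.1, 0.1) for _ in range(n)]

def prepare_for_downstream(upstream_model, seed=0):
    """Stage-2 setup in UBM: keep the (de-biased) encoder weights,
    but re-initialize the classifier layer before fine-tuning on the
    new task or domain."""
    downstream = {}
    for name, params in upstream_model.items():
        if name.startswith("classifier."):
            # Fresh random head: the downstream label space may differ,
            # and no further bias mitigation is applied at this stage.
            downstream[name] = init_params(len(params), seed)
        else:
            # Encoder weights transfer the upstream bias mitigation.
            downstream[name] = list(params)
    return downstream

# Toy upstream model after bias-mitigated training (illustrative shapes).
upstream = {
    "encoder.layer0": init_params(4, seed=1),
    "encoder.layer1": init_params(4, seed=2),
    "classifier.out": init_params(2, seed=3),
}
downstream = prepare_for_downstream(upstream, seed=42)
```

After this step, the whole `downstream` model would be fine-tuned on the target task; the expectation in the paper is that bias reduction in the upstream encoder persists through this fine-tuning.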


Related research:

01/13/2020 · Parameter-Efficient Transfer from Sequential Behaviors for User Profiling and Recommendation
Inductive transfer learning has greatly impacted the computer vision and...

02/13/2023 · Provable Detection of Propagating Sampling Bias in Prediction Models
With an increased focus on incorporating fairness in machine learning mo...

10/26/2022 · A Robust Bias Mitigation Procedure Based on the Stereotype Content Model
The Stereotype Content Model (SCM) states that we tend to perceive minor...

08/08/2023 · From Fake to Real (FFR): A two-stage training pipeline for mitigating spurious correlations with synthetic data
Visual recognition models are prone to learning spurious correlations in...

05/06/2021 · The Authors Matter: Understanding and Mitigating Implicit Bias in Deep Text Classification
It is evident that deep text classification models trained on human data...

09/07/2022 · Power of Explanations: Towards automatic debiasing in hate speech detection
Hate speech detection is a common downstream application of natural lang...

03/20/2023 · Bias mitigation techniques in image classification: fair machine learning in human heritage collections
A major problem with using automated classification systems is that if t...
