
LOGAN: Local Group Bias Detection by Clustering

by   Jieyu Zhao, et al.

Machine learning techniques have been widely used in natural language processing (NLP). However, as many recent studies have revealed, machine learning models often inherit and amplify the societal biases present in their training data. Various metrics have been proposed to quantify biases in model predictions. In particular, several of them evaluate the disparity in model performance between protected groups and advantaged groups over the test corpus. However, we argue that evaluating bias at the corpus level is not enough to understand how biases are embedded in a model. In fact, a model with similar aggregated performance across groups on the entire dataset may still behave very differently on instances in a local region. To analyze and detect such local bias, we propose LOGAN, a new bias detection technique based on clustering. Experiments on toxicity classification and object classification tasks show that LOGAN identifies bias in local regions and allows us to better analyze the biases in model predictions.
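The core idea described in the abstract can be illustrated with a small sketch. This is not the authors' implementation: the 1-D k-means, the two-group setup, and all function names are illustrative assumptions. The point is that two groups can have identical corpus-level accuracy while showing large accuracy gaps inside individual clusters of the test data.

```python
# Minimal sketch (illustrative, NOT the official LOGAN code) of local group
# bias detection: cluster test instances, then compare per-group accuracy
# within each cluster instead of over the whole corpus.
import random


def kmeans_1d(xs, k=2, iters=20, seed=0):
    """Tiny 1-D k-means; returns a cluster label for each value in xs."""
    rng = random.Random(seed)
    # sample distinct values so the initial centers never coincide
    centers = rng.sample(sorted(set(xs)), k)
    labels = [0] * len(xs)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: abs(x - centers[c])) for x in xs]
        for c in range(k):
            members = [x for x, lab in zip(xs, labels) if lab == c]
            if members:
                centers[c] = sum(members) / len(members)
    return labels


def local_bias(xs, groups, correct, k=2):
    """Per-cluster accuracy gap (group 0 accuracy minus group 1 accuracy)."""
    labels = kmeans_1d(xs, k)
    gaps = {}
    for c in range(k):
        accs = []
        for g in (0, 1):
            hits = [correct[i] for i in range(len(xs))
                    if labels[i] == c and groups[i] == g]
            accs.append(sum(hits) / len(hits) if hits else float("nan"))
        gaps[c] = accs[0] - accs[1]
    return gaps


# Synthetic example: overall accuracy is 0.5 for both groups, yet each
# cluster shows a maximal accuracy gap in opposite directions.
xs = [0.0, 0.1, 0.2, 0.0, 0.1, 0.2, 10.0, 10.1, 10.2, 10.0, 10.1, 10.2]
groups = [0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1]
correct = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1]
print(local_bias(xs, groups, correct))
```

Corpus-level evaluation would report no disparity here, while the per-cluster view exposes a gap of ±1.0 in each local region, which is the kind of local bias the abstract argues corpus-level metrics miss.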


Speciesist Language and Nonhuman Animal Bias in English Masked Language Models

Various existing studies have analyzed what social biases are inherited ...

A Systematic Study of Bias Amplification

Recent research suggests that predictions made by machine-learning model...

Mitigating Gender Bias Amplification in Distribution by Posterior Regularization

Advanced machine learning techniques have boosted the performance of nat...

Auditing Algorithmic Fairness in Machine Learning for Health with Severity-Based LOGAN

Auditing machine learning-based (ML) healthcare tools for bias is critic...

The Impact of Presentation Style on Human-In-The-Loop Detection of Algorithmic Bias

While decision makers have begun to employ machine learning, machine lea...

TRAPDOOR: Repurposing backdoors to detect dataset bias in machine learning-based genomic analysis

Machine Learning (ML) has achieved unprecedented performance in several ...

Evaluating Debiasing Techniques for Intersectional Biases

Bias is pervasive in NLP models, motivating the development of automatic...

Code Repositories


Code for the EMNLP 2020 LOGAN paper



Code for "Local Group Bias Detection", developed as part of UCLA CS263 (Natural Language Processing), taught by Prof. Kai-Wei Chang
