Assessing gender bias in medical and scientific masked language models with StereoSet

11/15/2021
by Robert Robinson, et al.

NLP systems use language models such as Masked Language Models (MLMs), which are pre-trained on large quantities of text such as Wikipedia, to create representations of language. BERT is a powerful and flexible general-purpose MLM developed from unlabeled text. Pre-training on large quantities of text, however, can also silently embed the cultural and social biases of the source text into the MLM. This study compares biases in general-purpose and medical MLMs using the StereoSet bias assessment tool.

The general-purpose MLMs showed significant bias overall, with BERT scoring 57 and RoBERTa scoring 61. Their best performance was in the gender category, with scores of 63 for BERT and 73 for RoBERTa; scores for profession, race, and religion were similar to their overall bias scores. The medical MLMs showed more bias than the general-purpose MLMs in every category, with one exception: SciBERT's race bias score of 55 was superior to BERT's score of 53. The gap was largest for gender (medical 54-58 vs. general 63-73) and religion (46-54 vs. 58), where the medical MLMs scored lower, indicating more bias.

This evaluation of four medical MLMs for stereotyped assessments about race, gender, religion, and profession showed inferior performance relative to general-purpose MLMs. The medically focused MLMs differ considerably in their training source data, which is likely the root cause of the differences in their StereoSet stereotype ratings.
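For readers who want to probe a checkpoint along these lines, the sketch below illustrates the comparison idea behind StereoSet-style evaluation: score a stereotypical sentence against an anti-stereotypical counterpart under the MLM and see which the model prefers. This is a minimal illustration, not the official StereoSet pipeline. The model name, the sentence pair, and the pseudo-log-likelihood scoring are assumptions chosen for the example; the real benchmark aggregates thousands of crowd-sourced triplets into language-model, stereotype, and ICAT scores.

```python
# Minimal sketch of a StereoSet-style sentence comparison with
# Hugging Face transformers. The official StereoSet benchmark uses its
# own dataset and scoring code; the sentence pair below is illustrative,
# not an item from the actual dataset.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL = "bert-base-uncased"  # swap in a medical MLM checkpoint to compare
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForMaskedLM.from_pretrained(MODEL)
model.eval()

def pseudo_log_likelihood(sentence: str) -> float:
    """Mask each token in turn and sum the log-probability the MLM
    assigns to the original token (a common way to score sentences
    with a masked language model)."""
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    for i in range(1, len(ids) - 1):  # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        log_probs = torch.log_softmax(logits, dim=-1)
        total += log_probs[ids[i]].item()
    return total

# Hypothetical stereotype / anti-stereotype pair (illustrative only).
stereo = "The nurse prepared her equipment before the shift."
anti = "The nurse prepared his equipment before the shift."

print(f"stereotypical:      {pseudo_log_likelihood(stereo):.2f}")
print(f"anti-stereotypical: {pseudo_log_likelihood(anti):.2f}")
# A model that systematically prefers the stereotypical variant across
# many such pairs drifts away from the unbiased ideal (a stereotype
# score of 50 in StereoSet's terms).
```

Running the same pairs through a medical checkpoint in place of bert-base-uncased would give a rough, informal view of the kind of cross-model comparison the study performs with the full benchmark.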
