
Social Biases in NLP Models as Barriers for Persons with Disabilities

by Ben Hutchinson et al.

Building equitable and inclusive NLP technologies demands consideration of whether and how social attitudes are represented in ML models. In particular, representations encoded in models often inadvertently perpetuate undesirable social biases from the data on which they are trained. In this paper, we present evidence of such undesirable biases towards mentions of disability in two different English language models: toxicity prediction and sentiment analysis. Next, we demonstrate that the neural embeddings that are the critical first step in most NLP pipelines similarly contain undesirable biases towards mentions of disability. We end by highlighting topical biases in the discourse about disability which may contribute to the observed model biases; for instance, gun violence, homelessness, and drug addiction are over-represented in texts discussing mental illness.
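The kind of bias measurement the abstract describes can be illustrated with a simple perturbation probe: hold a sentence template fixed, substitute different disability phrases, and compare the score a model assigns to each variant against a neutral baseline. The sketch below uses a toy word-list scorer as a stand-in for a real toxicity or sentiment model (an assumption for self-containment; the template and phrases are likewise illustrative, not the paper's actual test set).

```python
# Perturbation-style probe: substitute disability phrases into a fixed
# template and compare scores against a neutral baseline. The scorer is
# a toy lexicon counter, NOT the models studied in the paper.

TEMPLATE = "I am a person {}."
PHRASES = ["who is deaf", "with a mental illness"]  # illustrative only

NEGATIVE_WORDS = {"mental", "illness"}  # toy lexicon, illustrative only


def toy_score(sentence: str) -> float:
    """Return a crude 'toxicity-like' score: fraction of flagged words."""
    words = sentence.lower().replace(".", "").split()
    return sum(w in NEGATIVE_WORDS for w in words) / len(words)


def score_gap(phrase: str) -> float:
    """Score difference between the perturbed sentence and the baseline."""
    baseline = toy_score(TEMPLATE.format(""))
    return toy_score(TEMPLATE.format(phrase)) - baseline


for phrase in PHRASES:
    print(f"{phrase!r}: gap = {score_gap(phrase):+.3f}")
```

With a real model in place of `toy_score`, a consistently positive gap for sentences that merely mention disability, relative to the neutral baseline, is the kind of undesirable bias the paper reports.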

