Fair is Better than Sensational: Man is to Doctor as Woman is to Doctor

05/23/2019
by Malvina Nissim, et al.

Analogies such as man is to king as woman is to X are often used to illustrate the amazing power of word embeddings. Concurrently, they have also exposed how strongly human biases are encoded in vector spaces built on natural language. While finding that queen is the answer to man is to king as woman is to X leaves us in awe, papers have also reported finding analogies deeply infused with human biases, like man is to computer programmer as woman is to homemaker, which instead leave us with worry and rage. In this work we show that, often unknowingly, embedding spaces have not been treated fairly. Through a series of simple experiments, we highlight practical and theoretical problems in previous works, and demonstrate that some of the most widely used biased analogies are in fact not supported by the data. We claim that rather than striving to find sensational biases, we should aim at observing the data "as is", which is biased enough. This should serve as a fair starting point to properly address the evident, serious, and compelling problem of human bias in word embeddings.
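As an illustration (a minimal sketch, not the authors' code), the standard analogy query can be run with gensim's KeyedVectors; the file name below is a placeholder for any pretrained word2vec-format model. Note that most_similar() excludes the query words themselves from its results, which is precisely the kind of constraint the paper examines: for "man is to doctor as woman is to X", the nearest neighbour may simply be "doctor" again, an answer the query is not allowed to return.

    # Minimal sketch of the "man is to king as woman is to X" query.
    # "embeddings.bin" is a placeholder path, not a specific resource.
    from gensim.models import KeyedVectors

    kv = KeyedVectors.load_word2vec_format("embeddings.bin", binary=True)

    # Computes X ~ king - man + woman over normalized vectors, but never
    # returns "man", "king", or "woman" themselves: the input words are
    # filtered out of the candidate list before ranking.
    print(kv.most_similar(positive=["woman", "king"], negative=["man"], topn=3))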


