Towards Reducing Bias in Gender Classification

11/16/2019
by Komal K. Teru, et al.

Societal bias against certain communities is a pervasive problem that affects many machine learning systems. This work addresses the racial bias present in many modern gender recognition systems. We learn race-invariant representations of human faces with an adversarially trained autoencoder and show that such representations lead to less biased gender classification. Using the variance in classification accuracy across races as a surrogate for the model's racial bias, we achieve a drop of over 40% in this measure.
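The abstract does not spell out the architecture, so the sketch below is only a rough illustration of the general adversarial-invariance recipe it describes, assuming a PyTorch setup with a gradient-reversal layer. Every name (RaceInvariantAE, GradReverse, bias_metric) and every layer size is a hypothetical choice for illustration, not the authors' implementation.

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    # Identity on the forward pass; flips the gradient sign on the backward
    # pass, so the encoder learns to fool the race discriminator while the
    # discriminator itself still learns to predict race.
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class RaceInvariantAE(nn.Module):
    def __init__(self, in_dim=2048, z_dim=128, n_races=4, lam=1.0):
        super().__init__()
        self.lam = lam
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(), nn.Linear(512, z_dim))
        self.decoder = nn.Sequential(
            nn.Linear(z_dim, 512), nn.ReLU(), nn.Linear(512, in_dim))
        self.race_disc = nn.Linear(z_dim, n_races)   # adversary on the code z
        self.gender_clf = nn.Linear(z_dim, 2)        # downstream task on z

    def forward(self, x):
        z = self.encoder(x)
        recon = self.decoder(z)                      # reconstruction target
        race_logits = self.race_disc(GradReverse.apply(z, self.lam))
        gender_logits = self.gender_clf(z)
        return recon, race_logits, gender_logits

def bias_metric(per_race_accuracy):
    # Population variance of per-race accuracy: the surrogate for racial
    # bias named in the abstract (lower variance = more uniform performance).
    accs = list(per_race_accuracy)
    mean = sum(accs) / len(accs)
    return sum((a - mean) ** 2 for a in accs) / len(accs)

With the reversal layer in place, a single joint loss (reconstruction + gender classification + race discrimination) can be minimized: the discriminator descends on the race loss as usual, while the encoder receives that loss's negated gradient and is pushed toward race-invariant codes.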

Related research

08/17/2022 · Deep Generative Views to Mitigate Gender Classification Bias Across Gender-Race Groups
Published studies have suggested the bias of automated face-based gender classification...

06/14/2020 · An adversarial learning algorithm for mitigating gender bias in face recognition
State-of-the-art face recognition networks implicitly encode gender information...

12/01/2021 · Are Investors Biased Against Women? Analyzing How Gender Affects Startup Funding in Europe
One of the main challenges of startups is to raise capital from investors...

06/13/2021 · User Acceptance of Gender Stereotypes in Automated Career Recommendations
Currently, there is a surge of interest in fair Artificial Intelligence...

05/19/2021 · Obstructing Classification via Projection
Machine learning and data mining techniques are effective tools to classify...

07/03/2021 · The Price of Diversity
Systemic bias with respect to gender, race and ethnicity, often unconscious...

08/12/2020 · Null-sampling for Interpretable and Fair Representations
We propose to learn invariant representations, in the data domain, to achieve...
