Bias in Multimodal AI: Testbed for Fair Automatic Recruitment

04/15/2020
by   Alejandro Peña, et al.

The presence of decision-making algorithms in society is rapidly increasing, while concerns about their transparency and the possibility that these algorithms become new sources of discrimination are growing. In fact, many relevant automated systems have been shown to make decisions based on sensitive information or to discriminate against certain social groups (e.g., certain biometric systems for person recognition). With the aim of studying how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data, we propose a fictitious automated recruitment testbed: FairCVtest. We train automatic recruitment algorithms using a set of multimodal synthetic profiles consciously scored with gender and racial biases. FairCVtest shows the capacity of the Artificial Intelligence (AI) behind such recruitment tools to extract sensitive information from unstructured data and exploit it in combination with data biases in undesirable (unfair) ways. Finally, we present a list of recent works developing techniques capable of removing sensitive information from the decision-making process of deep learning architectures. We have used one of these algorithms (SensitiveNets) to experiment with discrimination-aware learning for the elimination of sensitive information in our multimodal AI framework. Our methodology and results show how to generate fairer AI-based tools in general, and fairer automated recruitment systems in particular.
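The idea of synthetic profiles "consciously scored" with a bias can be illustrated with a minimal sketch. The attribute names, score range, and bias coefficient below are illustrative assumptions, not the paper's actual generation procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 1000
merit = rng.uniform(0.0, 1.0, n)   # merit-only score derived from CV features
gender = rng.integers(0, 2, n)     # protected attribute (0 or 1), balanced

BIAS = 0.2                         # penalty consciously applied to group 1
biased_score = np.clip(merit - BIAS * gender, 0.0, 1.0)

# The underlying merit is independent of the protected attribute...
merit_gap = merit[gender == 1].mean() - merit[gender == 0].mean()
# ...but the labels used for training now encode discrimination.
score_gap = biased_score[gender == 1].mean() - biased_score[gender == 0].mean()
print(f"merit gap: {merit_gap:+.3f}, biased score gap: {score_gap:+.3f}")
```

A model trained to reproduce `biased_score` has an incentive to infer the protected attribute from whatever correlates with it in the unstructured data, which is exactly the behavior the testbed is designed to expose.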

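As a rough intuition for what discrimination-aware learning aims at, the sketch below removes the linear direction of a feature space that is most predictive of a protected attribute before any classifier sees the data. This is a deliberately simplified linear stand-in, not the SensitiveNets algorithm; all variable names and dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 2000, 5
gender = rng.integers(0, 2, n).astype(float)
X = rng.normal(size=(n, d))
X[:, 0] += 2.0 * gender            # one feature leaks the protected attribute

g_c = gender - gender.mean()       # centered protected attribute

def leakage(features):
    """|correlation| between the best linear guess of gender and gender."""
    w, *_ = np.linalg.lstsq(features, g_c, rcond=None)
    return abs(np.corrcoef(features @ w, gender)[0, 1])

# Direction of the feature space most predictive of gender (unit norm).
w, *_ = np.linalg.lstsq(X, g_c, rcond=None)
w /= np.linalg.norm(w)

# Project that direction out of every feature vector.
X_clean = X - np.outer(X @ w, w)

print(f"leakage before: {leakage(X):.2f}, after: {leakage(X_clean):.2f}")
```

Methods like SensitiveNets pursue the same goal inside a deep representation, suppressing sensitive information during training rather than via a single linear projection.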

