Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment

02/13/2023
by   Alejandro Peña, et al.

The presence of decision-making algorithms in society is rapidly increasing, while concerns about their transparency and the possibility of these algorithms becoming new sources of discrimination are arising. There is broad consensus on the need to develop AI applications with a Human-Centric approach. Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes. All four of these Human-Centric requirements are closely related to each other. With the aim of studying how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data, we propose a fictitious case study focused on automated recruitment: FairCVtest. We train automatic recruitment algorithms using a set of multimodal synthetic profiles including image, text, and structured data, which are consciously scored with gender and racial biases. FairCVtest shows the capacity of the Artificial Intelligence (AI) behind automatic recruitment tools built this way (a common practice in many other application scenarios beyond recruitment) to extract sensitive information from unstructured data and exploit it in combination with data biases in undesirable (unfair) ways. We present an overview of recent works developing techniques capable of removing sensitive information and biases from the decision-making process of deep learning architectures, as well as commonly used databases for fairness research in AI. We demonstrate how learning approaches developed to guarantee privacy in latent spaces can lead to unbiased and fair automatic decision-making processes.
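The mechanism described above — a model trained on consciously biased scores learning to exploit a sensitive attribute through a proxy feature, and the subsequent removal of that sensitive information from the representation — can be illustrated with a small sketch. This is not the FairCVtest data or the paper's actual method; it is a minimal synthetic example in which sensitive information is removed by linear decorrelation (residualizing each feature against the sensitive attribute), one simple instance of the family of techniques the abstract surveys:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Sensitive attribute (e.g. gender); never given to the model directly.
s = rng.integers(0, 2, n)

# "Merit" features, plus a proxy feature correlated with the sensitive
# attribute (e.g. information a network could extract from a photo).
merit = rng.normal(size=(n, 3))
proxy = s + 0.3 * rng.normal(size=n)
X = np.column_stack([merit, proxy])

# Consciously biased target: scores penalise candidates with s = 1.
y = merit.sum(axis=1) - 1.0 * s

# A least-squares "recruitment" model trained on the biased scores.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ w

# The model exploits the proxy: predicted scores differ by group.
gap_biased = pred[s == 0].mean() - pred[s == 1].mean()

# Remove sensitive information: residualize each feature against the
# (centered) sensitive attribute, so every column of X_fair has zero
# sample covariance with s.
sc = (s - s.mean()).reshape(-1, 1)
coef, *_ = np.linalg.lstsq(sc, X, rcond=None)
X_fair = X - sc @ coef

# Retrain on the cleaned representation: any linear predictor built on
# X_fair now has identical group means by construction.
w_fair, *_ = np.linalg.lstsq(X_fair, y, rcond=None)
pred_fair = X_fair @ w_fair
gap_fair = pred_fair[s == 0].mean() - pred_fair[s == 1].mean()

print("score gap, biased model:", round(gap_biased, 3))
print("score gap, cleaned model:", round(gap_fair, 3))
```

In this toy setup the biased model shows a clear average score gap between the two groups, and the gap vanishes after decorrelation. Deep-learning approaches discussed in the paper pursue the same goal in learned latent spaces (e.g. via adversarial training) rather than by linear residualization.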

