Improving Fairness in Large-Scale Object Recognition by CrowdSourced Demographic Information

06/02/2022
by Zu Kim, et al.

There has been increasing awareness of ethical issues in machine learning, and fairness has become an important research topic. Most fairness efforts in computer vision have focused on human sensing applications: preventing discrimination based on people's physical attributes, such as race, skin color, or age, by increasing visual representation of particular demographic groups. We argue that ML fairness efforts should extend to object recognition as well. Buildings, artwork, food, and clothing are examples of objects that define human culture. Representing these objects fairly in machine learning datasets will lead to models that are less biased towards a particular culture and more inclusive of different traditions and values. Many research datasets exist for object recognition, but they have not carefully considered which classes should be included, or how much training data should be collected per class. To address this, we propose a simple and general approach based on crowdsourcing the demographic composition of the contributors: we define fair relevance scores, estimate them, and assign one to each class. We showcase its application to the landmark recognition domain, presenting a detailed analysis and the resulting fairer landmark rankings, which yield a much fairer coverage of the world than existing datasets. The evaluation dataset was used for the 2021 Google Landmark Challenges, which were the first of their kind to emphasize fairness in generic object recognition.
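
The abstract only sketches the scoring mechanism, so the snippet below is a minimal, hypothetical illustration of the general idea rather than the paper's method: given crowdsourced counts of how many contributions each class (e.g., a landmark) receives from each demographic group, a class's relevance is reweighted by the inverse of each group's overall share of contributions, so that classes favored by underrepresented groups are not drowned out. The data, the grouping scheme, and the inverse-share weighting are all assumptions made for illustration.

```python
from collections import defaultdict

# Hypothetical input: per-class contribution counts broken down by the
# contributors' demographic group (names are illustrative placeholders).
contributions = {
    "landmark_A": {"region_1": 900, "region_2": 10},
    "landmark_B": {"region_1": 50, "region_2": 40},
}

def group_shares(contribs):
    """Overall share of all contributions coming from each demographic group."""
    totals = defaultdict(float)
    for per_class in contribs.values():
        for group, count in per_class.items():
            totals[group] += count
    grand_total = sum(totals.values())
    return {group: count / grand_total for group, count in totals.items()}

def fair_relevance(contribs):
    """Reweight each group's contributions by the inverse of its overall share,
    so classes favored by underrepresented groups keep a competitive score."""
    shares = group_shares(contribs)
    return {
        cls: sum(count / shares[group] for group, count in per_class.items())
        for cls, per_class in contribs.items()
    }

if __name__ == "__main__":
    for cls, score in sorted(fair_relevance(contributions).items(),
                             key=lambda kv: -kv[1]):
        print(f"{cls}: fair relevance ~ {score:.1f}")
```

With these toy numbers, landmark_B's score rises relative to landmark_A because its contributions come disproportionately from the underrepresented group; ranking classes by such a reweighted score is one simple way to make per-class coverage decisions fairer.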

