Casual Conversations v2: Designing a large consent-driven dataset to measure algorithmic bias and robustness

11/10/2022
by Caner Hazirbas, et al.

Developing robust and fair AI systems requires datasets with a comprehensive set of labels that can help ensure the validity and legitimacy of relevant measurements. Recent efforts therefore focus on collecting person-related datasets that have carefully selected labels, including sensitive characteristics, and consent forms in place to use those attributes for model testing and development. Responsible data collection involves several stages, including but not limited to determining use-case scenarios, selecting categories (annotations) such that the data are fit for the purpose of measuring algorithmic bias for subgroups, and, most importantly, ensuring that the selected categories and subcategories are robust to regional diversity and inclusive of as many subgroups as possible. Meta, in a continuation of our efforts to measure AI algorithmic bias and robustness (https://ai.facebook.com/blog/shedding-light-on-fairness-in-ai-with-a-new-data-set), is working on collecting a large consent-driven dataset with a comprehensive list of categories. This paper describes our proposed design of such categories and subcategories for Casual Conversations v2.
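To make the measurement goal concrete, the sketch below shows one common way subgroup annotations of this kind can be used: computing per-subgroup accuracy and the largest accuracy gap between subgroups. This is a minimal illustration, not the paper's evaluation protocol; the function name and the toy data are assumptions made for the example.

```python
# Minimal sketch (not the paper's method): given model predictions, ground-truth
# labels, and a per-sample subgroup annotation (e.g., an age bracket or skin-tone
# category), report per-subgroup accuracy and the maximum disparity across groups.
from collections import defaultdict


def evaluate_subgroup_bias(predictions, labels, subgroups):
    """Return per-subgroup accuracy and the max accuracy gap across subgroups."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, subgroups):
        total[group] += 1
        correct[group] += int(pred == label)

    accuracy = {group: correct[group] / total[group] for group in total}
    gap = max(accuracy.values()) - min(accuracy.values())
    return accuracy, gap


if __name__ == "__main__":
    # Toy example with two annotated subgroups (illustrative values only).
    preds = [1, 0, 1, 1, 0, 1]
    truth = [1, 0, 0, 1, 0, 1]
    groups = ["A", "A", "A", "B", "B", "B"]
    per_group, max_gap = evaluate_subgroup_bias(preds, truth, groups)
    print(per_group)  # {'A': 0.667, 'B': 1.0}
    print(max_gap)    # 0.333
```

The same pattern extends to other metrics (false positive rate, recall, calibration) by swapping the per-sample comparison; the key prerequisite, which the dataset design addresses, is having consented, reliable subgroup labels to condition on.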


