Measurement as governance in and for responsible AI

09/13/2021
by Abigail Z. Jacobs, et al.

Measurement of social phenomena is everywhere, unavoidably, in sociotechnical systems. This is not (only) an academic point: fairness-related harms emerge when there is a mismatch in the measurement process between the thing we purport to be measuring and the thing we actually measure. However, the measurement process – where social, cultural, and political values are implicitly encoded in sociotechnical systems – is almost always obscured. Furthermore, this obscured process is where important governance decisions are encoded: decisions about which systems count as fair, which individuals belong in which categories, and so on. We can then use the language of measurement, and the tools of construct validity and reliability, to uncover these hidden governance decisions. In particular, we highlight two types of construct validity, content validity and consequential validity, that are useful for eliciting and characterizing the feedback loops between the measurement, social construction, and enforcement of social categories. We then explore the constructs of fairness, robustness, and responsibility in the context of governance in and for responsible AI. Together, these perspectives help us unpack how measurement acts as a hidden governance process in sociotechnical systems. Understanding measurement as governance supports a richer understanding of the governance processes already happening in AI – responsible or otherwise – revealing paths to more effective interventions.
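
To make the abstract's central claim concrete, here is a minimal, hypothetical simulation (it is not from the paper): a decision rule applied to a proxy that systematically understates the underlying construct for one group produces a fairness-related harm, even though the rule never references group membership. The "need" construct, the 0.5 proxy gap, and the 80th-percentile allocation threshold are all illustrative assumptions.

```python
# Illustrative sketch (not from the paper): a mismatch between a construct
# (what we purport to measure) and its operationalization (what we actually
# measure) produces a fairness-related harm. All names and numbers here are
# hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)        # two hypothetical groups, A=0 / B=1

# Construct: the underlying quantity we claim to measure (e.g., "need").
need = rng.normal(0.0, 1.0, n)

# Measurement: an observed proxy. For group B the proxy systematically
# understates the construct -- the measurement mismatch.
proxy = need - 0.5 * group + rng.normal(0.0, 0.3, n)

# A governance decision hidden in the measurement: allocate a resource to
# the top of the *proxy* distribution, as if it were the construct itself.
allocated = proxy > np.quantile(proxy, 0.8)
high_need = need > np.quantile(need, 0.8)

for g, name in [(0, "A"), (1, "B")]:
    mask = (group == g) & high_need
    print(f"group {name}: share of high-need people allocated = "
          f"{allocated[mask].mean():.2f}")
# Group B's high-need members are allocated far less often, although the
# decision rule never references group membership.
```

The disparity in this sketch lives entirely in the measurement step, which is where, on the abstract's account, hidden governance decisions sit.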

