Understanding bias in facial recognition technologies

10/05/2020
by David Leslie, et al.

Over the past couple of years, the growing debate around automated facial recognition has reached a boiling point. As developers have continued to swiftly expand the scope of these kinds of technologies into an almost unbounded range of applications, an increasingly strident chorus of critical voices has sounded concerns about the injurious effects of the proliferation of such systems. Opponents argue that the irresponsible design and use of facial detection and recognition technologies (FDRTs) threatens to violate civil liberties, infringe on basic human rights and further entrench structural racism and systemic marginalisation. They also caution that the gradual creep of face surveillance infrastructures into every domain of lived experience may eventually eradicate the modern democratic forms of life that have long provided cherished means to individual flourishing, social solidarity and human self-creation. Defenders, by contrast, emphasise the gains in public safety, security and efficiency that digitally streamlined capacities for facial identification, identity verification and trait characterisation may bring. In this explainer, I focus on one central aspect of this debate: the role that dynamics of bias and discrimination play in the development and deployment of FDRTs. I examine how historical patterns of discrimination have made inroads into the design and implementation of FDRTs from their very earliest moments. I then explain the ways in which the use of biased FDRTs can lead to distributional and recognitional injustices. The explainer concludes with an exploration of broader ethical questions around the potential proliferation of pervasive face-based surveillance infrastructures and makes some recommendations for cultivating more responsible approaches to the development and governance of these technologies.


