Two-Face: Adversarial Audit of Commercial Face Recognition Systems

by Siddharth D Jaiswal, et al.

Computer vision applications such as automated face detection are used for purposes ranging from unlocking smart devices to tracking potential persons of interest for surveillance. Audits of these applications have revealed that they tend to be biased against minority groups, resulting in unfair and concerning societal and political outcomes. Despite multiple studies over time, these biases have not been fully mitigated and have in fact increased for certain tasks such as age prediction. While such systems are routinely audited over benchmark datasets, it is also necessary to evaluate their robustness to adversarial inputs. In this work, we perform an extensive adversarial audit of multiple systems and datasets and make a number of concerning observations: accuracy on some tasks on the CELEBSET dataset has dropped since a previous audit, and while a bias in accuracy against individuals from minority groups persists across multiple datasets, a more worrying observation is that these biases become exorbitantly pronounced when adversarial inputs target the minority group. We conclude with a discussion of the broader societal impacts of these observations and a few suggestions on how to collectively address this issue.
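The core measurement behind such an audit can be sketched as computing per-group accuracy on clean versus adversarially perturbed inputs and comparing the disparity between the best- and worst-served groups. The following is a minimal, hypothetical illustration with synthetic results; the group labels, counts, and helper functions are assumptions for demonstration, not the paper's actual methodology or data (a real audit would query commercial APIs with benchmark and adversarial images).

```python
# Hypothetical sketch of an adversarial audit's bias measurement:
# compare the per-group accuracy gap on clean vs adversarial inputs.
# All records below are synthetic, for illustration only.
from collections import defaultdict

def group_accuracy(records):
    """records: list of (group, correct) pairs -> {group: accuracy}."""
    totals, hits = defaultdict(int), defaultdict(int)
    for group, correct in records:
        totals[group] += 1
        hits[group] += int(correct)
    return {g: hits[g] / totals[g] for g in totals}

def accuracy_gap(acc):
    """Disparity between the best- and worst-performing groups."""
    return max(acc.values()) - min(acc.values())

# Synthetic audit outcomes: (demographic group, prediction was correct)
clean = [("A", True)] * 95 + [("A", False)] * 5 + \
        [("B", True)] * 88 + [("B", False)] * 12
adversarial = [("A", True)] * 90 + [("A", False)] * 10 + \
              [("B", True)] * 60 + [("B", False)] * 40

gap_clean = accuracy_gap(group_accuracy(clean))      # 0.95 - 0.88 = 0.07
gap_adv = accuracy_gap(group_accuracy(adversarial))  # 0.90 - 0.60 = 0.30
print(f"clean gap={gap_clean:.2f}, adversarial gap={gap_adv:.2f}")
```

In this synthetic example the accuracy gap grows from 0.07 to 0.30 under adversarial inputs, mirroring the paper's observation that existing biases become far more pronounced when inputs targeting the minority group are perturbed.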



