Probing Fairness of Mobile Ocular Biometrics Methods Across Gender on VISOB 2.0 Dataset

11/17/2020
by Anoop Krishnan, et al.

Recent research has questioned the fairness of face-based recognition and attribute classification methods (such as gender and race) for dark-skinned people and women. Ocular biometrics in the visible spectrum is an alternative to face biometrics, thanks to its accuracy, security, robustness to facial expressions, and ease of use on mobile devices. Amid the COVID-19 crisis, ocular biometrics has a further advantage over face biometrics when masks are worn. However, the fairness of ocular biometrics has not been studied until now. This is the first study to explore the fairness of ocular-based authentication and gender classification methods across males and females. To this end, the VISOB 2.0 dataset, along with its gender annotations, is used for a fairness analysis of ocular biometrics methods based on ResNet-50, MobileNet-V2, and lightCNN-29 models. Experimental results suggest equivalent performance for males and females in ocular-based mobile user authentication, in terms of genuine match rate (GMR) at low false match rates (FMRs) and overall area under the curve (AUC). For instance, an average AUC of 0.96 for females and 0.95 for males was obtained with lightCNN-29. However, males significantly outperformed females in deep-learning-based gender classification models using the ocular region.
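
For readers unfamiliar with these verification metrics, the sketch below shows one way to compute the GMR at a fixed FMR and the ROC AUC from genuine and impostor match scores. It is a minimal illustration, not the authors' evaluation code: the synthetic score distributions, the 0.1% target FMR, and the function names are assumptions made for the example.

```python
# Minimal sketch (not the authors' code) of the verification metrics
# quoted above: genuine match rate (GMR) at a fixed false match rate
# (FMR), and the area under the ROC curve (AUC).
# The synthetic score distributions and the 0.1% target FMR are
# illustrative assumptions.
import numpy as np
from sklearn.metrics import roc_curve, auc

def gmr_at_fmr(genuine_scores, impostor_scores, target_fmr=1e-3):
    """Return (GMR at the target FMR, ROC AUC) from match scores."""
    labels = np.concatenate([np.ones(len(genuine_scores)),
                             np.zeros(len(impostor_scores))])
    scores = np.concatenate([genuine_scores, impostor_scores])
    # For a verification system, the ROC's false positive rate is the FMR
    # and its true positive rate is the GMR.
    fmr, gmr, _ = roc_curve(labels, scores)
    return np.interp(target_fmr, fmr, gmr), auc(fmr, gmr)

# Illustrative synthetic similarity scores (higher = more similar).
rng = np.random.default_rng(0)
genuine = rng.normal(0.80, 0.10, 5000)   # mated comparisons
impostor = rng.normal(0.40, 0.15, 5000)  # non-mated comparisons
gmr, roc_auc = gmr_at_fmr(genuine, impostor)
print(f"GMR @ FMR=0.1%: {gmr:.3f}   AUC: {roc_auc:.3f}")
```

Interpolating the GMR off the ROC curve at the requested FMR mirrors how operating points at fixed FMRs, such as those reported in the paper, are usually read.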

Related research

Investigating Fairness of Ocular Biometrics Among Young, Middle-Aged, and Older Adults (10/04/2021)
A number of studies suggest bias of the face biometrics, i.e., face reco...

Understanding Fairness of Gender Classification Algorithms Across Gender-Race Groups (09/24/2020)
Automated gender classification has important applications in many domai...

The Gender Gap in Face Recognition Accuracy Is a Hairy Problem (06/10/2022)
It is broadly accepted that there is a "gender gap" in face recognition ...

The Effect of Model Compression on Fairness in Facial Expression Recognition (01/05/2022)
Deep neural networks have proved hugely successful, achieving human-like...

GBDF: Gender Balanced DeepFake Dataset Towards Fair DeepFake Detection (07/21/2022)
Facial forgery by deepfakes has raised severe societal concerns. Several...

Gender Classification Using Gradient Direction Pattern (10/25/2013)
A novel methodology for gender classification is presented in this paper...

Gender Slopes: Counterfactual Fairness for Computer Vision Models by Attribute Manipulation (05/21/2020)
Automated computer vision systems have been applied in many domains incl...
