Probing Fairness of Mobile Ocular Biometrics Methods Across Gender on VISOB 2.0 Dataset
Recent research has questioned the fairness of face-based recognition and attribute classification methods (such as gender and race) for dark-skinned people and women. Ocular biometrics in the visible spectrum is an alternative to face biometrics, thanks to its accuracy, security, robustness against facial expression, and ease of use on mobile devices. Amid the COVID-19 crisis, ocular biometrics has a further advantage over face biometrics in the presence of a mask. However, the fairness of ocular biometrics has not yet been studied. This study is the first to explore the fairness of ocular-based authentication and gender classification methods across males and females. To this end, the VISOB 2.0 dataset, along with its gender annotations, is used for fairness analysis of ocular biometrics methods based on ResNet-50, MobileNet-V2, and lightCNN-29 models. Experimental results suggest equivalent performance for males and females in ocular-based mobile user authentication, in terms of genuine match rate (GMR) at low false match rates (FMRs) and overall area under the curve (AUC). For instance, lightCNN-29 obtained an average AUC of 0.96 for females and 0.95 for males. However, males significantly outperformed females in deep-learning-based gender classification from the ocular region.
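To make the reported metrics concrete, the sketch below shows one common way to compute AUC (as the probability that a random genuine score exceeds a random impostor score) and GMR at a fixed FMR from verification score lists. This is an illustrative implementation, not the paper's evaluation code; the function names and the tie-handling and thresholding conventions are assumptions.

```python
def auc(genuine, impostor):
    """Area under the ROC curve via the Mann-Whitney statistic:
    fraction of (genuine, impostor) pairs where the genuine score wins,
    counting ties as half a win."""
    wins = sum((g > i) + 0.5 * (g == i) for g in genuine for i in impostor)
    return wins / (len(genuine) * len(impostor))

def gmr_at_fmr(genuine, impostor, target_fmr):
    """Genuine match rate at the strictest threshold whose false match
    rate on the impostor scores does not exceed target_fmr."""
    imp = sorted(impostor, reverse=True)
    allowed = int(target_fmr * len(imp))  # max impostor scores above threshold
    thr = imp[allowed] if allowed < len(imp) else imp[-1] - 1.0
    return sum(s > thr for s in genuine) / len(genuine)

# Toy scores (hypothetical, for illustration only): higher means more similar.
gen = [0.9, 0.8, 0.7, 0.4]
imp = [0.6, 0.5, 0.3, 0.2]
print(auc(gen, imp))              # 0.875
print(gmr_at_fmr(gen, imp, 0.25)) # 0.75
```

Fairness comparisons like the one in this study would then amount to computing these statistics separately on the male and female score sets and contrasting the results.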