Explainability of the Implications of Supervised and Unsupervised Face Image Quality Estimations Through Activation Map Variation Analyses in Face Recognition Models

12/09/2021
by Biying Fu, et al.

Deriving explainability for unsupervised or statistical face image quality assessment (FIQA) methods is challenging. In this work, we propose a novel set of explainability tools that derive reasoning for different FIQA decisions and their face recognition (FR) performance implications. To avoid limiting our tools to specific FIQA methods, we base our analyses on the behavior of FR models when processing samples with different FIQA decisions. The resulting tools can be applied to any FIQA method with any CNN-based FR solution, using activation mapping to exhibit the network's activation derived from the face embedding. Because the general spatial activation maps of low- and high-quality images discriminate poorly between the two in FR models, we build our explainability tools in a higher derivative space, analyzing the variation of the FR activation maps across image sets with different quality decisions. We demonstrate our tools on four FIQA methods, presenting both inter- and intra-FIQA method analyses. Among other conclusions, the proposed tools and the analyses based on them show that high-quality images typically cause consistently low activation in the areas outside of the central face region, while low-quality images, despite generally low activation, exhibit high activation variation in such areas. Our tools also extend to analyzing single images: low-quality images tend to have an FR model spatial activation that strongly differs from what is expected of a high-quality image, and this difference also tends to appear more in areas outside of the central face region and corresponds to issues such as extreme poses and facial occlusions. The implementation of the proposed tools is accessible here [link].
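The core procedure described above (extracting a spatial activation map from a CNN-based FR backbone, then analyzing the pixel-wise variation of those maps over image sets grouped by FIQA decision) can be illustrated with a short sketch. The code below is a minimal, hedged approximation, not the authors' released implementation: the backbone interface `fr_backbone` (assumed to return a last-layer feature map of shape (1, C, H, W)), and the use of a simple channel-mean activation map in place of the paper's exact embedding-derived mapping, are assumptions made purely for illustration.

```python
import torch

# Hypothetical sketch: `fr_backbone` is assumed to be a CNN-based FR
# feature extractor returning a (1, C, H, W) feature map for a single
# (3, H_in, W_in) image tensor. This is NOT the authors' actual API.

def spatial_activation_map(fr_backbone, image):
    """Channel-wise mean of the last convolutional feature map,
    min-max normalized to [0, 1] (a simplification of the paper's
    embedding-derived activation mapping)."""
    with torch.no_grad():
        feat = fr_backbone(image.unsqueeze(0))       # (1, C, H, W)
    amap = feat.abs().mean(dim=1).squeeze(0)         # (H, W)
    amap = (amap - amap.min()) / (amap.max() - amap.min() + 1e-8)
    return amap

def activation_variation(fr_backbone, images):
    """Pixel-wise standard deviation of activation maps over a set of
    images sharing the same FIQA quality decision; this is the
    'higher derivative space' analysis of the abstract."""
    maps = torch.stack(
        [spatial_activation_map(fr_backbone, img) for img in images]
    )                                                # (N, H, W)
    return maps.std(dim=0)                           # (H, W) variation map

def single_image_deviation(fr_backbone, image, mean_hq_map):
    """Absolute deviation of one image's activation map from the mean
    map of a reference high-quality set, for single-image analysis."""
    return (spatial_activation_map(fr_backbone, image) - mean_hq_map).abs()

# Usage sketch (all set/image objects below are assumed inputs):
#   hq_maps   = torch.stack([spatial_activation_map(backbone, im)
#                            for im in high_quality_set])
#   variation = activation_variation(backbone, low_quality_set)
#   deviation = single_image_deviation(backbone, probe_image,
#                                      hq_maps.mean(dim=0))
```

Under the abstract's findings, one would expect `variation` for a low-quality set to show high values outside the central face region, and `deviation` for a low-quality probe to peak in those same areas, e.g. under extreme pose or occlusion.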
