Fairness-aware Adversarial Perturbation Towards Bias Mitigation for Deployed Deep Models

03/03/2022
by   Zhibo Wang, et al.

Prioritizing fairness is of central importance in artificial intelligence (AI) systems, especially in societal applications: hiring systems, for example, should recommend applicants from different demographic groups equally, and risk assessment systems must eliminate racial bias in criminal justice. Existing efforts towards the ethical development of AI systems have leveraged data science to mitigate biases in the training set or introduced fairness principles into the training process. A deployed AI system, however, may not allow retraining or tuning in practice. By contrast, we propose a more flexible approach, fairness-aware adversarial perturbation (FAAP), which learns to perturb input data so as to blind deployed models to fairness-related features, e.g., gender and ethnicity. The key advantage is that FAAP does not modify the deployed model's parameters or structure. To achieve this, we design a discriminator that distinguishes fairness-related attributes from the latent representations of the deployed model. Meanwhile, a perturbation generator is trained against the discriminator, such that no fairness-related features can be extracted from perturbed inputs. Extensive experimental evaluation demonstrates the effectiveness and superior performance of the proposed FAAP. In addition, FAAP is validated on real-world commercial deployments whose model parameters are inaccessible, which demonstrates its transferability and the potential for black-box adaptation.
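The abstract describes an adversarial game played against a frozen deployed model: a discriminator tries to recover the protected attribute from the model's latent features, while a perturbation generator learns input perturbations that defeat it without hurting the task prediction. The PyTorch-style sketch below illustrates one way such a training loop could be wired up; the toy module definitions, the perturbation budget `epsilon`, the KL-against-uniform confusion term, and all hyperparameters are illustrative assumptions rather than the authors' implementation.

```python
# Hypothetical sketch of the FAAP training scheme described in the abstract.
# The deployed model is frozen; only the perturbation generator and the
# fairness discriminator are trained. Architectures and hyperparameters
# below are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeployedModel(nn.Module):
    """Stand-in for the fixed, pretrained target model: x -> (features, logits)."""
    def __init__(self, d_in=32, d_feat=16, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d_in, d_feat), nn.ReLU())
        self.head = nn.Linear(d_feat, n_classes)

    def forward(self, x):
        feats = self.encoder(x)
        return feats, self.head(feats)

deployed = DeployedModel()
for p in deployed.parameters():
    p.requires_grad_(False)                      # the deployed model is never modified

generator = nn.Sequential(nn.Linear(32, 32), nn.Tanh())   # bounded perturbation direction
discriminator = nn.Sequential(nn.Linear(16, 2))           # predicts the protected attribute
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
epsilon = 0.05                                   # assumed perturbation budget

def training_step(x, y_task, a_protected):
    # Discriminator step: learn to recover the protected attribute from the
    # deployed model's features on perturbed inputs.
    with torch.no_grad():
        feats, _ = deployed(x + epsilon * generator(x))
    d_loss = F.cross_entropy(discriminator(feats), a_protected)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: preserve the task prediction while making the
    # discriminator's output uninformative (close to uniform).
    feats, logits = deployed(x + epsilon * generator(x))
    task_loss = F.cross_entropy(logits, y_task)
    log_p = F.log_softmax(discriminator(feats), dim=1)
    uniform = torch.full_like(log_p, 1.0 / log_p.size(1))
    confusion = F.kl_div(log_p, uniform, reduction="batchmean")
    g_loss = task_loss + confusion               # 1:1 weighting is an arbitrary choice
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

In this sketch only the generator and discriminator are optimized; the deployed model's parameters stay frozen, mirroring the paper's premise that the deployed system cannot be retrained or tuned. Pushing the discriminator's output toward a uniform distribution is one common way to realize the "no fairness-related features can be extracted" objective; the original method may use a different confusion loss.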

