Subverting Fair Image Search with Generative Adversarial Perturbations

05/05/2022
by   Avijit Ghosh, et al.

In this work we explore the intersection of fairness and robustness in the context of ranking: when a ranking model has been calibrated to achieve some definition of fairness, is it possible for an external adversary to make the ranking model behave unfairly without having access to the model or training data? To investigate this question, we present a case study in which we develop and then attack a state-of-the-art, fairness-aware image search engine using images that have been maliciously modified by a Generative Adversarial Perturbation (GAP) model. These perturbations attempt to cause the fair re-ranking algorithm to unfairly boost the rank of images containing people from an adversary-selected subpopulation. We present results from extensive experiments demonstrating that our attacks can successfully confer a significant unfair advantage on people from the majority class relative to fairly-ranked baseline search results. We demonstrate that our attacks are robust across a number of variables, that they have close to zero impact on the relevance of search results, and that they succeed under a strict threat model. Our findings highlight the danger of deploying fair machine learning algorithms in the wild when (1) the data necessary to achieve fairness may be adversarially manipulated, and (2) the models themselves are not robust against attacks.
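The core idea described above can be illustrated with a toy sketch. Note the hedges: the paper trains a generator network (GAP) against a real fairness-aware search engine, whereas the code below uses a per-image gradient-ascent approximation against a hypothetical linear proxy scorer (`w`, `rank_score`, `gap_perturb` are all illustrative names, not from the paper). The point it shows is the mechanism: a small, bounded perturbation added to an image can increase the score a ranker assigns it, without the attacker touching the ranking model itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear proxy ranker: score = w . x. This stands in for the
# black-box re-ranking score in the paper; the real attack has no access
# to model internals and instead trains a generator to transfer.
w = rng.normal(size=64)

def rank_score(x):
    """Score an image feature vector under the toy linear ranker."""
    return float(w @ x)

def gap_perturb(x, eps=0.05, steps=50, lr=0.01):
    """Craft an L-infinity-bounded perturbation that boosts the rank score.

    For a linear scorer the gradient of the score w.r.t. the input is just
    w, so repeated gradient-ascent steps clipped to +/-eps approximate the
    effect of a GAP-style perturbation under this toy model.
    """
    delta = np.zeros_like(x)
    for _ in range(steps):
        delta += lr * w                    # ascent step on the score
        delta = np.clip(delta, -eps, eps)  # keep the change imperceptible
    return x + delta

x = rng.normal(size=64)         # a clean "image" (feature vector)
x_adv = gap_perturb(x)          # its adversarially perturbed version

# The perturbation stays within the bound yet raises the ranking score.
assert np.max(np.abs(x_adv - x)) <= 0.05 + 1e-9
assert rank_score(x_adv) > rank_score(x)
```

Because every per-coordinate term of the score change is non-negative under this scheme, the perturbed input always ranks at least as high as the original; in the paper's setting the analogous boost is what lets the adversary's chosen subpopulation displace fairly-ranked results.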


