Exploiting Fairness to Enhance Sensitive Attributes Reconstruction

09/02/2022
by   Julien Ferry, et al.

In recent years, a growing body of work has emerged on learning machine learning models under fairness constraints, often expressed with respect to some sensitive attributes. In this work, we consider a setting in which an adversary has black-box access to a target model, and we show that information about this model's fairness can be exploited to enhance the adversary's reconstruction of the sensitive attributes of the training data. More precisely, we propose a generic reconstruction correction method that takes as input an initial guess made by the adversary and corrects it to comply with user-defined constraints (such as the fairness information) while minimizing the changes to the guess. The proposed method is agnostic to the type of target model, the fairness-aware learning method, and the adversary's auxiliary knowledge. To assess its applicability, we conducted a thorough experimental evaluation on two state-of-the-art fair learning methods, using four fairness metrics with a wide range of tolerances and three datasets of diverse sizes and sensitive attributes. The experimental results demonstrate the effectiveness of the proposed approach at improving the reconstruction of the sensitive attributes of the training set.
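To give a concrete flavor of the idea, here is a minimal, hypothetical sketch (not the paper's actual algorithm, which is a generic constrained-correction method): given the target model's predictions and an initial guess of the binary sensitive attributes, greedily flip as few entries of the guess as needed so that the statistical parity gap of the predictions with respect to the corrected attributes falls within a tolerance `eps`. The function names and the greedy strategy are illustrative assumptions.

```python
import numpy as np

def parity_gap(y_pred, s):
    # Statistical parity gap: |P(y=1 | s=1) - P(y=1 | s=0)|
    return abs(y_pred[s == 1].mean() - y_pred[s == 0].mean())

def correct_guess(y_pred, s_guess, eps):
    """Greedy sketch: minimally flip entries of s_guess until the
    parity gap of y_pred w.r.t. the corrected attributes is <= eps.
    (Illustrative only; the paper's method is more general.)"""
    s = s_guess.copy()
    while parity_gap(y_pred, s) > eps:
        best_i, best_gap = None, parity_gap(y_pred, s)
        for i in range(len(s)):
            s[i] ^= 1  # tentatively flip entry i
            # keep both groups non-empty so the gap is well defined
            if 0 < s.sum() < len(s):
                g = parity_gap(y_pred, s)
                if g < best_gap:
                    best_i, best_gap = i, g
            s[i] ^= 1  # undo the tentative flip
        if best_i is None:
            break  # no single flip improves the gap further
        s[best_i] ^= 1  # commit the best flip
    return s

# Toy example: predictions perfectly aligned with the guess (gap = 1.0)
y = np.array([1, 1, 1, 1, 0, 0, 0, 0])
s0 = np.array([1, 1, 1, 1, 0, 0, 0, 0])
s_corrected = correct_guess(y, s0, eps=0.1)
```

Knowing that the target model was trained to satisfy (near-)statistical parity thus acts as a constraint the corrected guess must respect; the real method handles arbitrary fairness metrics and tolerances rather than this single greedy heuristic.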


