Achieving Transparency Report Privacy in Linear Time

03/31/2021
by Chien-Lun Chen, et al.

An accountable algorithmic transparency report (ATR) should ideally investigate (a) the transparency of the underlying algorithm and (b) the fairness of the algorithmic decisions, while at the same time preserving data subjects' privacy. However, a provably formal study of the impact on data subjects' privacy caused by the utility of releasing an ATR (one that investigates transparency and fairness) has yet to be addressed in the literature. The broader benefit of such a study lies in the methodical characterization of privacy-utility trade-offs for the public release of ATRs, and their consequential application-specific impact on society, politics, and economics. In this paper, we first investigate and demonstrate potential privacy hazards brought on by the deployment of transparency and fairness measures in released ATRs. To preserve data subjects' privacy, we then propose a linear-time optimal-privacy scheme, built upon standard linear fractional programming (LFP) theory, for releasing ATRs, subject to constraints controlling the tolerance of privacy perturbation on the utility of transparency schemes. Subsequently, we quantify the privacy-utility trade-offs induced by our scheme, and analyze the impact of privacy perturbation on fairness measures in ATRs. To the best of our knowledge, this is the first analytical work that simultaneously addresses trade-offs among the triad of privacy, utility, and fairness, as applicable to algorithmic transparency reports.
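The abstract does not specify the paper's LFP formulation, but as background, a linear fractional program can be reduced to an ordinary linear program via the classical Charnes-Cooper transformation and solved in polynomial time. The sketch below is purely illustrative (the objective, constraints, and numbers are assumptions, not the paper's scheme) and uses SciPy's `linprog`:

```python
# Illustrative only: solving a generic linear-fractional program (LFP)
# via the Charnes-Cooper transformation, which reduces it to an LP.
# This is NOT the paper's privacy scheme; all data below are made up.
import numpy as np
from scipy.optimize import linprog

# Maximize (c^T x + alpha) / (d^T x + beta)  s.t.  A x <= b, x >= 0,
# assuming d^T x + beta > 0 over the feasible set.
c, alpha = np.array([1.0, 2.0]), 1.0
d, beta = np.array([1.0, 1.0]), 2.0
A, b = np.array([[1.0, 1.0]]), np.array([4.0])

# Charnes-Cooper substitution: y = t*x with t = 1/(d^T x + beta) gives
# an LP in (y, t): maximize c^T y + alpha*t
#   s.t.  A y - b t <= 0,   d^T y + beta*t = 1,   y >= 0, t >= 0.
obj = -np.append(c, alpha)            # linprog minimizes, so negate
A_ub = np.hstack([A, -b[:, None]])
b_ub = np.zeros(A.shape[0])
A_eq = np.append(d, beta)[None, :]
b_eq = np.array([1.0])

res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
y, t = res.x[:-1], res.x[-1]
x_opt = y / t                         # recover the LFP solution
ratio = (c @ x_opt + alpha) / (d @ x_opt + beta)
print(x_opt, ratio)                   # optimum x = (0, 4), ratio = 1.5
```

For this toy instance the optimal ratio is attained at x = (0, 4), giving (8 + 1)/(4 + 2) = 1.5; the transformation preserves optimality whenever the denominator stays positive on the feasible region.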


