DP-SGD vs PATE: Which Has Less Disparate Impact on GANs?

11/26/2021
by Georgi Ganev, et al.

Generative Adversarial Networks (GANs) are among the most popular approaches to generating synthetic data, especially images, for data sharing purposes. Given the vital importance of preserving the privacy of the individual data points in the original data, GANs are trained using frameworks with robust privacy guarantees, such as Differential Privacy (DP). However, these approaches remain largely understudied beyond single performance metrics, particularly when applied to imbalanced datasets. To this end, we systematically compare GANs trained with the two best-known DP frameworks for deep learning, DP-SGD and PATE, in different data imbalance settings from two perspectives: the size of the classes in the generated synthetic data and their classification performance. Our analyses show that applying PATE, like DP-SGD, has a disparate effect on under- and over-represented classes, but to a much milder degree, which makes it more robust. Interestingly, our experiments consistently show that for PATE, unlike DP-SGD, the privacy-utility trade-off is not monotonically decreasing but is much smoother and follows an inverted U-shape, meaning that adding a small degree of privacy actually helps generalization. However, we also identify some settings (e.g., large class imbalance) where PATE-GAN completely fails to learn some subparts of the training data.
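For context, the two DP frameworks compared here privatize GAN training in very different ways: DP-SGD clips each per-example gradient of the discriminator and adds Gaussian noise, while PATE labels queries by noisily aggregating the votes of an ensemble of teacher models. The sketch below is not the authors' implementation; it is a minimal NumPy illustration of each mechanism's core step, with hypothetical function names (dp_sgd_step, pate_noisy_vote) and parameter values.

```python
# Minimal sketch (hypothetical, NumPy only) of the two DP mechanisms compared in the paper.
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.0, lr=0.05):
    """DP-SGD core: clip each per-example gradient to L2 norm clip_norm,
    add Gaussian noise to the sum, and return a privatized parameter update."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))  # per-example L2 clipping
    summed = np.sum(clipped, axis=0)
    noise = np.random.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    noisy_mean = (summed + noise) / len(per_example_grads)
    return -lr * noisy_mean  # update to add to the model parameters

def pate_noisy_vote(teacher_votes, num_classes, laplace_scale=1.0):
    """PATE core: aggregate teacher predictions for one query with a noisy majority vote."""
    counts = np.bincount(teacher_votes, minlength=num_classes).astype(float)
    counts += np.random.laplace(0.0, laplace_scale, size=num_classes)  # Laplace noise on counts
    return int(np.argmax(counts))  # privately chosen label
```

As a usage note for the sketch: dp_sgd_step takes a list of flattened per-example gradient arrays and returns a noisy update, while pate_noisy_vote takes an integer array of teacher predictions for a single query and returns a privately aggregated label.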

Related research

09/23/2021
Robin Hood and Matthew Effects – Differential Privacy Has Disparate Impact on Synthetic Data
Generative models trained using Differential Privacy (DP) are increasing...

07/06/2021
DTGAN: Differential Private Training for Tabular GANs
Tabular generative adversarial networks (TGAN) have recently emerged to ...

04/01/2022
CTAB-GAN+: Enhancing Tabular Data Synthesis
While data sharing is crucial for knowledge development, privacy concern...

10/07/2021
Complex-valued deep learning with differential privacy
We present ζ-DP, an extension of differential privacy (DP) to complex-va...

12/28/2021
Financial Vision Based Differential Privacy Applications
The importance of deep learning data privacy has gained significant atte...

02/24/2022
Exploring the Unfairness of DP-SGD Across Settings
End users and regulators require private and fair artificial intelligenc...

04/07/2022
What You See is What You Get: Distributional Generalization for Algorithm Design in Deep Learning
We investigate and leverage a connection between Differential Privacy (D...
