DPMLBench: Holistic Evaluation of Differentially Private Machine Learning

05/10/2023
by   Chengkun Wei, et al.

Differential privacy (DP), a rigorous mathematical definition that quantifies privacy leakage, has become a well-accepted standard for privacy protection. Combined with powerful machine learning techniques, differentially private machine learning (DPML) is increasingly important. As the most classic DPML algorithm, DP-SGD incurs a significant loss of utility, which hinders DPML's deployment in practice. Many recent studies have proposed improved algorithms based on DP-SGD to mitigate this utility loss; however, these studies are isolated and do not comprehensively measure the performance of their proposed improvements. More importantly, there has been no comprehensive comparison of these DPML algorithms across utility, defensive capability, and generalizability. We fill this gap by performing a holistic measurement of improved DPML algorithms, evaluating both utility and defense capability against membership inference attacks (MIAs) on image classification tasks. We first present a taxonomy of where each improvement sits in the machine learning life cycle. Based on this taxonomy, we jointly perform an extensive measurement study of the improved DPML algorithms, and we also cover state-of-the-art label differential privacy (Label DP) algorithms in the evaluation. Our empirical results show that DP can effectively defend against MIAs, and that sensitivity-bounding techniques such as per-sample gradient clipping play an important role in that defense. We also identify improvements that maintain model utility while defending against MIAs more effectively. Experiments show that Label DP algorithms achieve less utility loss but are fragile to MIAs. To support our evaluation, we implement DPMLBench, a modular, reusable software framework that enables sensitive-data owners to deploy DPML algorithms and serves as a benchmark tool for researchers and practitioners.
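The sensitivity-bounding mechanism the abstract highlights — per-sample gradient clipping followed by calibrated Gaussian noise — is the core of a DP-SGD update step. The sketch below is a minimal illustration of that idea, not code from the paper; the function name and parameters are our own:

```python
import numpy as np

def dp_sgd_step(per_sample_grads, clip_norm, noise_mult, lr, params, rng=None):
    """One DP-SGD update: clip each per-sample gradient, average, add Gaussian noise."""
    rng = rng or np.random.default_rng()
    clipped = []
    for g in per_sample_grads:
        # Per-sample clipping bounds each example's influence (the sensitivity) to clip_norm.
        scale = min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
        clipped.append(g * scale)
    avg = np.mean(clipped, axis=0)
    # Gaussian noise calibrated to clip_norm; noise_mult governs the privacy/utility trade-off.
    noise = rng.normal(0.0, noise_mult * clip_norm / len(per_sample_grads), size=avg.shape)
    return params - lr * (avg + noise)
```

Because the noise scale is tied to the clipping norm, no single training example can move the model by more than a bounded amount — which is also why clipping helps against membership inference even before noise is added.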


Related research:

- Differentially Private ADMM Algorithms for Machine Learning (10/31/2020)
- DPIS: An Enhanced Mechanism for Differentially Private SGD with Importance Sampling (10/18/2022)
- DP-UTIL: Comprehensive Utility Analysis of Differential Privacy in Machine Learning (12/24/2021)
- On the Utility and Protection of Optimization with Differential Privacy and Classic Regularization Techniques (09/07/2022)
- Privacy, Security, and Utility Analysis of Differentially Private CPES Data (09/21/2021)
- ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models (02/04/2021)
- An Automatic Differentiation System for the Age of Differential Privacy (09/22/2021)
