Exploring Machine Learning Privacy/Utility trade-off from a hyperparameters Lens

03/03/2023
by Ayoub Arous, et al.

Machine Learning (ML) architectures have been applied to several applications that involve sensitive data, where a guarantee of users' data privacy is required. Differentially Private Stochastic Gradient Descent (DPSGD) is the state-of-the-art method for training privacy-preserving models. However, DPSGD incurs a considerable accuracy loss, leading to sub-optimal privacy/utility trade-offs. To investigate new ground for a better privacy/utility trade-off, this work asks two questions: (i) do models' hyperparameters have any inherent impact on ML models' privacy-preserving properties, and (ii) do models' hyperparameters have any impact on the privacy/utility trade-off of differentially private models? We propose a comprehensive design-space exploration of different hyperparameters, such as the choice of activation function, the learning rate, and the use of batch normalization. Interestingly, we found that utility can be improved by using bounded ReLU as the activation function while keeping the same privacy-preserving characteristics. With this drop-in replacement of the activation function, we achieve new state-of-the-art accuracy on MNIST (96.02%), Fashion-MNIST (84.76%), and CIFAR-10 (44.42%) without any modification to the learning procedure fundamentals of DPSGD.
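The sketch below is not the authors' code; it only illustrates, under assumptions, the kind of drop-in activation swap the abstract describes: a ReLU clamped to an upper bound, substituted into an otherwise unchanged model so that the DP-SGD training pipeline (per-example gradient clipping plus Gaussian noise, e.g., via a library such as Opacus) stays the same. The clipping bound of 6.0 and the small CNN architecture are illustrative choices, not values taken from the paper.

```python
import torch
import torch.nn as nn


class BoundedReLU(nn.Module):
    """ReLU whose output is clamped to [0, bound].

    Bounding activations limits the magnitude of per-example gradients
    before DP-SGD's clipping step; the bound value here is an assumption.
    """

    def __init__(self, bound: float = 6.0):
        super().__init__()
        self.bound = bound

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.clamp(x, min=0.0, max=self.bound)


def make_cnn(activation: nn.Module) -> nn.Sequential:
    # Small CNN for 28x28 grayscale inputs (e.g., MNIST / Fashion-MNIST).
    return nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=3, padding=1), activation,
        nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1), activation,
        nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(32 * 7 * 7, 10),
    )


# Baseline vs. drop-in replacement: only the activation changes, the
# DP-SGD training loop and privacy accounting are left untouched.
baseline_model = make_cnn(nn.ReLU())
bounded_model = make_cnn(BoundedReLU(bound=6.0))
```

Because the swap touches only the activation module, either model can be handed to the same differentially private optimizer configuration; any accuracy difference observed is then attributable to the activation choice rather than to a change in the privacy mechanism.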


