When approximate design for fast homomorphic computation provides differential privacy guarantees

by Arnaud Grivet Sébert et al.

While machine learning has become pervasive in fields as diverse as industry, healthcare, and social networks, privacy concerns regarding the training data have gained critical importance. In settings where several parties wish to collaboratively train a common model without jeopardizing their sensitive data, the need for a private training protocol is particularly stringent: the data must be protected both from the model's end-users and from the actors of the training phase. Differential privacy (DP) and cryptographic primitives are popular, complementary countermeasures against privacy attacks. Among these cryptographic primitives, fully homomorphic encryption (FHE) offers ciphertext malleability at the cost of time-consuming operations in the homomorphic domain. In this paper, we design SHIELD, a probabilistic approximation algorithm for the argmax operator that is fast when homomorphically executed and whose inaccuracy is exploited as a feature to ensure DP guarantees. Although SHIELD could have other applications, we focus here on one setting and seamlessly integrate it into the SPEED collaborative training framework from "SPEED: Secure, PrivatE, and Efficient Deep learning" (Grivet Sébert et al., 2021) to improve its computational efficiency. After thoroughly describing the FHE implementation of our algorithm and its DP analysis, we present experimental results. To the best of our knowledge, this is the first work in which relaxing the accuracy of a homomorphic calculation is constructively used as a degree of freedom to achieve better FHE performance.
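The abstract does not specify SHIELD's internals, but the core idea, a randomized argmax whose built-in inaccuracy yields DP, can be illustrated generically. Below is a minimal sketch of the exponential mechanism, a standard differentially private selection rule (this is an illustration of the principle, not the SHIELD algorithm itself; `noisy_argmax` and its parameters are hypothetical names chosen for this example):

```python
import math
import random

def noisy_argmax(scores, epsilon, sensitivity=1.0):
    """Randomized argmax via the exponential mechanism.

    Returns index i with probability proportional to
    exp(epsilon * scores[i] / (2 * sensitivity)), which satisfies
    epsilon-differential privacy when each score changes by at most
    `sensitivity` between neighboring datasets.
    """
    # Weight each candidate exponentially in its score.
    weights = [math.exp(epsilon * s / (2.0 * sensitivity)) for s in scores]
    total = sum(weights)
    # Sample an index proportionally to its weight.
    r = random.uniform(0.0, total)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(scores) - 1  # numerical safety fallback

# Small epsilon -> noisy selection (privacy); large epsilon -> near-exact argmax.
print(noisy_argmax([0.0, 0.0, 100.0], epsilon=10.0))
```

With a clear score gap and a large privacy budget, the true argmax is returned with overwhelming probability; shrinking `epsilon` increases the chance of returning a suboptimal index, which is exactly the accuracy/privacy trade-off the abstract describes.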


DP-Cryptography: Marrying Differential Privacy and Cryptography in Emerging Applications

Differential privacy (DP) has arisen as the state-of-the-art metric for ...

Efficient Deep Learning on Multi-Source Private Data

Machine learning models benefit from large and diverse datasets. Using s...

DP-Fast MH: Private, Fast, and Accurate Metropolis-Hastings for Large-Scale Bayesian Inference

Bayesian inference provides a principled framework for learning from com...

Hybrid Differentially Private Federated Learning on Vertically Partitioned Data

We present HDP-VFL, the first hybrid differentially private (DP) framewo...

Protecting Data from all Parties: Combining FHE and DP in Federated Learning

This paper tackles the problem of ensuring training data privacy in a fe...

A Secure Location-based Alert System with Tunable Privacy-Performance Trade-off

Monitoring location updates from mobile users has important applications...

Randomness Concerns When Deploying Differential Privacy

The U.S. Census Bureau is using differential privacy (DP) to protect con...
