Privacy-Preserving Machine Learning Training in Aggregation Scenarios

09/21/2020
by Liehuang Zhu, et al.

In the development of smart cities, the growing popularity of Machine Learning (ML), which depends on high-quality training datasets generated by diverse IoT devices, raises natural questions about the privacy guarantees that can be provided in such settings. Privacy-preserving ML training in an aggregation scenario enables a model demander to securely train ML models on sensitive IoT data gathered from personal IoT devices. Existing solutions are generally server-aided, cannot handle collusion between the servers or between the servers and data owners, and are ill-suited to the constrained environments of IoT. We propose Heda, a privacy-preserving ML training framework consisting of a library of building blocks based on partially homomorphic encryption (PHE). These building blocks enable the construction of multiple privacy-preserving ML training protocols for the aggregation scenario without the assistance of untrusted servers, while remaining secure under collusion. Rigorous security analysis demonstrates that the proposed protocols protect the privacy of each participant in the honest-but-curious model and remain secure under most collusion situations. Extensive experiments validate the efficiency of Heda, which achieves privacy-preserving ML training without loss of model accuracy.
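The abstract does not detail Heda's protocols, but the core PHE building block it relies on can be illustrated with additively homomorphic encryption. The sketch below is not Heda's actual construction; it only shows, under the assumption of the third-party python-paillier (phe) package and made-up example values, how ciphertexts from several data owners can be aggregated so that only the key holder learns the sum.

# Illustrative sketch only: additive aggregation under Paillier encryption,
# the kind of PHE primitive a framework like Heda builds on. Not Heda's protocol.
# Requires the third-party `phe` package (python-paillier).
from phe import paillier

# Key pair held by the party allowed to learn the aggregate (hypothetical role split).
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Each data owner encrypts a local statistic (e.g., one gradient component).
owner_values = [0.7, -1.2, 3.4]                  # example plaintexts, one per owner
ciphertexts = [public_key.encrypt(v) for v in owner_values]

# Anyone can add the ciphertexts; the sum is computed entirely under encryption.
encrypted_sum = ciphertexts[0]
for c in ciphertexts[1:]:
    encrypted_sum = encrypted_sum + c

# Only the key holder can decrypt, and it learns only the aggregate, not the inputs.
print(private_key.decrypt(encrypted_sum))        # about 2.9

The same additive property underlies secure aggregation of model updates: individual contributions stay encrypted end to end, and only their sum is ever revealed.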

Related research

PrivFL: Practical Privacy-preserving Federated Regressions on High-dimensional Data over Mobile Networks (04/05/2020)
Federated Learning (FL) enables a large number of users to jointly learn...

When Homomorphic Cryptosystem Meets Differential Privacy: Training Machine Learning Classifier with Privacy Protection (12/06/2018)
Machine learning (ML) classifiers are invaluable building blocks that ha...

LiPI: Lightweight Privacy-Preserving Data Aggregation in IoT (07/25/2022)
In the modern digital world, a user of a smart system remains surrounded...

Image Obfuscation for Privacy-Preserving Machine Learning (10/20/2020)
Privacy becomes a crucial issue when outsourcing the training of machine...

PrivFair: a Library for Privacy-Preserving Fairness Auditing (02/08/2022)
Machine learning (ML) has become prominent in applications that directly...

Preprint: Privacy-preserving IoT Data Sharing Scheme (09/26/2022)
Data sharing can be granted using different factors one of which is some...

Efficient Dropout-resilient Aggregation for Privacy-preserving Machine Learning (03/31/2022)
With the increasing adoption of data-hungry machine learning algorithms,...
