PPFL: Privacy-preserving Federated Learning with Trusted Execution Environments

04/29/2021
by Fan Mo, et al.

We propose and implement a Privacy-preserving Federated Learning (PPFL) framework for mobile systems to limit privacy leakages in federated learning. Leveraging the widespread presence of Trusted Execution Environments (TEEs) in high-end and mobile devices, we utilize TEEs on clients for local training, and on servers for secure aggregation, so that model/gradient updates are hidden from adversaries. Challenged by the limited memory size of current TEEs, we leverage greedy layer-wise training to train each model's layer inside the trusted area until its convergence. The performance evaluation of our implementation shows that PPFL can significantly improve privacy while incurring small system overheads at the client-side. In particular, PPFL can successfully defend the trained model against data reconstruction, property inference, and membership inference attacks. Furthermore, it can achieve comparable model utility with fewer communication rounds (0.54x) and a similar amount of network traffic (1.002x) compared to the standard federated learning of a complete model. This is achieved while only introducing up to ~15% CPU time and ~18% memory usage overhead on the client side.
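To make the greedy layer-wise idea concrete, below is a minimal, self-contained sketch (not the authors' implementation) of layer-by-layer federated training with FedAvg aggregation, written in PyTorch. TEE boundaries are indicated only in comments; the two-layer toy model, client_loaders, and all hyperparameters are illustrative assumptions, and layer "convergence" is approximated by a fixed number of rounds.

import copy
import torch
import torch.nn as nn

def fedavg(states):
    # Average client state_dicts; in PPFL this step runs inside the
    # server's TEE, so individual client updates stay hidden from the server.
    avg = copy.deepcopy(states[0])
    for k in avg:
        avg[k] = sum(s[k] for s in states) / len(states)
    return avg

def local_update(block, frozen, loader, epochs=1, lr=0.05):
    # One client's pass over its private data. In PPFL the trainable `block`
    # (current layer plus a temporary head) lives inside the client's TEE,
    # while already-converged `frozen` layers can run outside it.
    opt = torch.optim.SGD(block.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            with torch.no_grad():
                h = frozen(x)          # frozen, already-trained layers
            loss = loss_fn(block(h), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return block.state_dict()

torch.manual_seed(0)
num_classes, num_clients, rounds = 4, 3, 5
layers = [nn.Sequential(nn.Linear(16, 32), nn.ReLU()),
          nn.Sequential(nn.Linear(32, 32), nn.ReLU())]

# Toy per-client batches standing in for real on-device datasets (assumption).
client_loaders = [
    [(torch.randn(8, 16), torch.randint(0, num_classes, (8,))) for _ in range(4)]
    for _ in range(num_clients)
]

frozen = nn.Sequential()  # grows as each layer converges
for depth, layer in enumerate(layers):
    # Train only this layer (plus a throwaway classifier head), so the
    # trainable working set fits the TEE's limited memory.
    block = nn.Sequential(layer, nn.Linear(32, num_classes))
    for _ in range(rounds):
        states = [local_update(copy.deepcopy(block), frozen, loader)
                  for loader in client_loaders]
        block.load_state_dict(fedavg(states))   # secure aggregation step
    for p in layer.parameters():
        p.requires_grad_(False)
    frozen.add_module(str(depth), layer)        # freeze and move to next layer

In the actual system, the aggregation in fedavg and the training in local_update would execute inside attested enclaves (TrustZone on clients, SGX on the server, per the paper), so plaintext layer updates never leave trusted memory.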


research 11/26/2019
Federated Learning for Ranking Browser History Suggestions
Federated Learning is a new subfield of machine learning that allows fit...

research 10/17/2020
Layer-wise Characterization of Latent Information Leakage in Federated Learning
Training a deep neural network (DNN) via federated learning allows parti...

research 05/08/2019
Robust Federated Training via Collaborative Machine Teaching using Trusted Instances
Federated learning performs distributed model training using local data ...

research 03/01/2021
Federated Learning without Revealing the Decision Boundaries
We consider the recent privacy preserving methods that train the models ...

research 03/28/2022
FedVLN: Privacy-preserving Federated Vision-and-Language Navigation
Data privacy is a central problem for embodied agents that can perceive ...

research 10/04/2021
AsymML: An Asymmetric Decomposition Framework for Privacy-Preserving DNN Training and Inference
Leveraging parallel hardware (e.g. GPUs) to conduct deep neural network ...

research 03/02/2022
EnclaveTree: Privacy-preserving Data Stream Training and Inference Using TEE
The classification service over a stream of data is becoming an importan...
