A Federated Learning Framework for Privacy-preserving and Parallel Training

01/22/2020
by Tien-Dung Cao, et al.

The deployment of deep learning in practice has been hindered by two issues: the computational cost of model training and the privacy of training data such as medical or healthcare records. The large size of both learning models and datasets incurs a massive computational cost, requiring efficient approaches to speed up the training phase. While parallel and distributed learning can address the issue of computational overhead, preserving the privacy of training data and intermediate results (e.g., gradients) remains a hard problem. Enabling parallel training of deep learning models on distributed datasets while preserving data privacy is even more complex and challenging. In this paper, we develop and implement FEDF, a distributed deep learning framework for privacy-preserving and parallel training. The framework allows a model to be learned on multiple geographically distributed training datasets (which may belong to different owners) without revealing any information about the individual datasets or the intermediate results. We formally prove the convergence of the learning model when trained with the developed framework, as well as its privacy-preserving property. We carry out extensive experiments to evaluate the performance of the framework in terms of speedup ratio, approximation to the upper-bound performance (centralized training), and communication overhead between the master and training workers. The results show that the developed framework achieves a speedup of up to 9x compared to the centralized training approach while keeping the performance of the models within 4.5% of centrally trained models. The proposed framework also significantly reduces the amount of data exchanged between the master and training workers, by up to 34% compared to existing work.
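
The abstract describes a master and several training workers that learn a shared model in parallel over geographically distributed, privately held datasets, without pooling the raw data at a central site. As a rough illustration of that master/worker setting only, below is a plain federated-averaging sketch in Python; it is not the FEDF protocol from the paper, and the model, shard sizes, learning rate, round counts, and function names (local_train, federated_round) are illustrative assumptions.

# A minimal federated-averaging sketch (NOT the paper's FEDF protocol): each
# simulated worker trains on its own private shard and returns only model
# parameters; the master averages them. Raw training data never leaves a worker.
import numpy as np

rng = np.random.default_rng(0)

def local_train(w, X, y, lr=0.05, epochs=20):
    """One worker's local update: full-batch gradient descent on least squares."""
    w = w.copy()
    n = len(y)
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / n   # gradient computed on the worker only
        w -= lr * grad
    return w

def federated_round(global_w, shards):
    """Master broadcasts global_w, collects locally trained weights, averages them."""
    local_ws = [local_train(global_w, X, y) for X, y in shards]  # parallelizable across workers
    return np.mean(local_ws, axis=0)          # only parameters cross the network

# Three workers, each holding a private shard of a synthetic regression task.
true_w = np.array([2.0, -1.0])
shards = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.01 * rng.normal(size=50)
    shards.append((X, y))

w = np.zeros(2)
for _ in range(10):
    w = federated_round(w, shards)
print("learned weights:", w)  # approaches true_w without ever pooling the raw data

The sketch only conveys the division of labor between the master and the training workers; the paper's framework additionally protects the exchanged intermediate results and comes with formal convergence and privacy guarantees.
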

08/21/2023

Split Learning for Distributed Collaborative Training of Deep Learning Models in Health Informatics

Deep learning continues to rapidly evolve and is now demonstrating remar...
03/19/2021

An Experiment Study on Federated Learning Testbed

While the Internet of Things (IoT) can benefit from machine learning by ...
09/12/2023

Quality-Agnostic Deepfake Detection with Intra-model Collaborative Learning

Deepfake has recently raised a plethora of societal concerns over its po...
05/13/2021

DeepObliviate: A Powerful Charm for Erasing Data Residual Memory in Deep Neural Networks

Machine unlearning has great significance in guaranteeing model security...
03/18/2020

Predicting Performance of Asynchronous Differentially-Private Learning

We consider training machine learning models using training data located...
01/09/2020

Privacy-Preserving Deep Learning Computation for Geo-Distributed Medical Big-Data Platforms

This paper proposes a distributed deep learning framework for privacy-pr...
06/21/2019

Privacy Preserving QoE Modeling using Collaborative Learning

Machine Learning based Quality of Experience (QoE) models potentially su...
