Multiple Classification with Split Learning

08/22/2020
by Jongwon Kim, et al.

Privacy issues have been raised in the process of training deep learning models in medical, mobility, and other fields. To address this problem, we present a privacy-preserving distributed deep learning method that allows clients to learn from a variety of data without exposing it directly. We divide a single deep learning architecture into a common extractor, a cloud model, and a local classifier for distributed learning. First, the common extractor, which runs on the local clients, extracts secure features from the input data. These secure features also allow the cloud model to serve various tasks and diverse types of data, since they retain the information most important for carrying out those tasks. Second, the cloud model, which contains most of the whole training model, receives the embedded features from the many local clients and performs most of the deep learning operations, which carry a severe computing cost. After the operations in the cloud model finish, its outputs are sent back to the local clients. Finally, the local classifier determines the classification results and delivers them to the local clients. When clients train models, our method does not directly expose sensitive information to the external network. In our tests, the average performance improvement over the existing local training model was 1.11. However, in a distributed environment, there is a possibility of an inversion attack due to the exposed features. For this reason, we experimented with the common extractor to prevent data restoration. The quality of restoration of the original image was tested while adjusting the depth of the common extractor. As a result, we found that the deeper the common extractor, the lower the restoration score, which decreased to 89.74.
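The pipeline described in the abstract amounts to three modules handing activations across a client/cloud boundary: a shallow extractor on the client, a heavy backbone in the cloud, and a light classifier back on the client. The following is a minimal PyTorch sketch of that split; the module names, layer sizes, extractor depth, and the CIFAR-like input shape are illustrative assumptions rather than the paper's exact architecture.

```python
# Minimal sketch of the extractor / cloud / classifier split.
# Module names, channel widths, and the 3x32x32 input are illustrative assumptions.
import torch
import torch.nn as nn

class CommonExtractor(nn.Module):
    """Client-side extractor: turns raw inputs into the features that leave the device."""
    def __init__(self, depth=2):
        super().__init__()
        layers, in_ch = [], 3
        for _ in range(depth):  # a deeper extractor makes inversion of the features harder
            layers += [nn.Conv2d(in_ch, 32, kernel_size=3, padding=1), nn.ReLU()]
            in_ch = 32
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return self.body(x)  # only this tensor would be sent to the cloud

class CloudModel(nn.Module):
    """Cloud-side backbone: carries most of the parameters and compute."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

    def forward(self, feats):
        return self.body(feats)  # embedding returned to the client

class LocalClassifier(nn.Module):
    """Client-side head: maps the cloud's output to task-specific classes."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.head = nn.Linear(64, num_classes)

    def forward(self, z):
        return self.head(z)

# One forward pass: raw data never leaves the client, only intermediate features do.
x = torch.randn(8, 3, 32, 32)  # dummy client batch
extractor, cloud, classifier = CommonExtractor(), CloudModel(), LocalClassifier()
logits = classifier(cloud(extractor(x)))
print(logits.shape)  # torch.Size([8, 10])
```

In such a setup, only the extractor's output tensor crosses the network, which is the property the method relies on for privacy, and the extractor's depth is the knob varied when measuring how well the original image can be restored.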

Related research

FedVS: Straggler-Resilient and Privacy-Preserving Vertical Federated Learning for Split Models (04/26/2023)
In a vertical federated learning (VFL) system consisting of a central se...

Vulnerability Due to Training Order in Split Learning (03/26/2021)
Split learning (SL) is a privacy-preserving distributed deep learning me...

Privacy-Preserving Cloud-Aided Broad Learning System (01/08/2021)
With the rapid development of artificial intelligence and the advent of ...

pFedSim: Similarity-Aware Model Aggregation Towards Personalized Federated Learning (05/25/2023)
The federated learning (FL) paradigm emerges to preserve data privacy du...

Comparison of Privacy-Preserving Distributed Deep Learning Methods in Healthcare (12/23/2020)
In this paper, we compare three privacy-preserving distributed learning ...

Privacy-Preserving Collaborative Learning through Feature Extraction (12/13/2022)
We propose a framework in which multiple entities collaborate to build a...

Cloud Classification with Unsupervised Deep Learning (09/30/2022)
We present a framework for cloud characterization that leverages modern ...
