Split Learning without Local Weight Sharing to Enhance Client-side Data Privacy

12/01/2022
by Ngoc Duy Pham, et al.

Split learning (SL) aims to protect user data privacy by splitting deep models between clients and a server while keeping private data local. SL has been shown to achieve accuracy similar to that of centralized learning. In SL with multiple clients, the locally trained weights are shared among clients for local model aggregation. This paper investigates the potential data leakage caused by this local weight sharing by performing model inversion attacks. To mitigate the identified leakage, we propose and analyze privacy-enhanced SL (P-SL), i.e., SL without local weight sharing, to strengthen client-side data privacy. We also propose parallelized P-SL, which speeds up training by employing multiple servers without reducing accuracy. Finally, we investigate P-SL with late-participating clients and develop server-based cache-based training to address the forgetting phenomenon in SL. Experimental results demonstrate that P-SL reduces client-side data leakage by up to 50% compared to SL. Moreover, P-SL and its cache-based variant achieve accuracy comparable to SL under various data distributions, with lower computation and communication costs. Caching in P-SL also reduces the negative effect of forgetting, stabilizes learning, and enables effective, low-complexity training in a dynamic environment with late-arriving clients.
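For context, the sketch below illustrates the basic split-learning training step the abstract refers to: the client runs the first layers on its private data and sends only the cut-layer activations ("smashed data") to the server, which completes the forward and backward passes and returns the activation gradients so the client can update its own layers. This is a minimal PyTorch sketch under illustrative assumptions; the split point, layer sizes, and in-process tensor hand-off are not the paper's exact architecture or communication protocol.

```python
# Minimal single-step split-learning sketch (assumed toy CNN, MNIST-shaped input).
import torch
import torch.nn as nn

# Client-side part: runs on the device holding the private data.
client_model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                      # cut-layer output: 16 x 14 x 14
)

# Server-side part: completes the forward pass and computes the loss.
server_model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(16 * 14 * 14, 10),
)

client_opt = torch.optim.SGD(client_model.parameters(), lr=0.01)
server_opt = torch.optim.SGD(server_model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

def split_training_step(x, y):
    """One SL step: client forward -> server forward/backward -> client backward."""
    client_opt.zero_grad()
    server_opt.zero_grad()

    # 1. Client computes activations up to the cut layer; only these
    #    "smashed data" (and labels, in vanilla SL) leave the client.
    smashed = client_model(x)
    smashed_sent = smashed.detach().requires_grad_()   # what the server receives

    # 2. Server finishes the forward pass, computes the loss, and
    #    backpropagates through its own layers.
    out = server_model(smashed_sent)
    loss = loss_fn(out, y)
    loss.backward()
    server_opt.step()

    # 3. Server returns the gradient w.r.t. the smashed data; the client
    #    continues backpropagation locally and updates its layers.
    smashed.backward(smashed_sent.grad)
    client_opt.step()
    return loss.item()

# Example usage with dummy data.
x = torch.randn(8, 1, 28, 28)
y = torch.randint(0, 10, (8,))
print(split_training_step(x, y))
```

In multi-client SL, each client takes turns running this loop against the shared server-side model; the paper's P-SL variant keeps this procedure but omits the sharing of client-side weights among clients between training rounds.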
