Gradient Inversion Attack: Leaking Private Labels in Two-Party Split Learning

11/25/2021
by Sanjay Kariyappa, et al.

Split learning is a popular technique used to perform vertical federated learning, where the goal is to jointly train a model on the private input and label data held by two parties. To preserve the privacy of the input and label data, this technique uses a split model and only requires the exchange of intermediate representations (IR) of the inputs and gradients of the IR between the two parties during the learning process. In this paper, we propose the Gradient Inversion Attack (GIA), a label leakage attack that allows an adversarial input owner to learn the label owner's private labels by exploiting the gradient information obtained during split learning. GIA frames the label leakage attack as a supervised learning problem by developing a novel loss function using certain key properties of the dataset and models. Our attack can uncover the private label data on several multi-class image classification problems and a binary conversion prediction task with near-perfect accuracy (97.01%), demonstrating that split learning provides negligible privacy benefits to the label owner. Furthermore, we evaluate the use of gradient noise to defend against GIA. While this technique is effective for simpler datasets, it significantly degrades utility for datasets with higher input dimensionality. Our findings underscore the need for better privacy-preserving training techniques for vertically split data.
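The exchange described in the abstract can be made concrete with a short sketch. The PyTorch snippet below (not the paper's code) walks through one training step of the two-party protocol: the input owner sends the intermediate representation (IR) of a batch to the label owner, receives back the gradient of the loss with respect to that IR, and finishes backpropagation locally. The layer sizes, batch size, and the noise scale `sigma` are illustrative assumptions; the comments only mark where a label-leakage attack such as GIA would operate, and the paper's actual loss function is not reproduced here.

```python
# Minimal sketch of two-party split learning (illustrative, not the paper's implementation).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Party A (input owner): holds the inputs and the bottom model that produces the IR.
bottom = nn.Sequential(nn.Linear(32, 16), nn.ReLU())
# Party B (label owner): holds the private labels and the top model.
top = nn.Linear(16, 10)
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 32)            # party A's private inputs (batch of 8)
y = torch.randint(0, 10, (8,))    # party B's private labels
sigma = 0.0                       # gradient-noise defense: sigma > 0 perturbs the returned gradient

# --- Party A: forward pass up to the cut layer, send the IR to party B ---
ir = bottom(x)
ir_sent = ir.detach().requires_grad_(True)   # B sees only the IR, not x or the bottom model

# --- Party B: finish the forward pass and compute the loss on its private labels ---
loss = criterion(top(ir_sent), y)
loss.backward()
grad_ir = ir_sent.grad
if sigma > 0:                                 # optional defense evaluated in the paper
    grad_ir = grad_ir + sigma * torch.randn_like(grad_ir)

# --- Party A: receives grad_ir and backpropagates through the bottom model ---
ir.backward(grad_ir)

# grad_ir is a function of B's private labels; a label-leakage attack such as GIA
# exploits this by fitting its predictions so that the gradients it would produce
# match the observed grad_ir, thereby recovering y.
print(grad_ir.shape)   # torch.Size([8, 16]) -- one gradient vector per example
```

Setting `sigma > 0` corresponds to the gradient-noise defense evaluated in the paper: the label owner perturbs the returned gradient before sending it, trading label privacy against the utility degradation the abstract reports for datasets with higher input dimensionality.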


Related research

08/18/2023 - Defending Label Inference Attacks in Split Learning under Regression Setting
03/04/2022 - Differentially Private Label Protection in Split Learning
01/18/2023 - Label Inference Attack against Split Learning under Regression Setting
04/12/2021 - Practical Defences Against Model Inversion Attacks for Split Neural Networks
02/17/2021 - Label Leakage and Protection in Two-party Split Learning
10/18/2022 - Making Split Learning Resilient to Label Leakage by Potential Energy Loss
06/15/2023 - Your Room is not Private: Gradient Inversion Attack for Deep Q-Learning
