Measuring and Controlling Split Layer Privacy Leakage Using Fisher Information

09/21/2022
by Kiwan Maeng, et al.

Split learning and inference run the training/inference of a large model by splitting it across client devices and the cloud. However, such model splitting raises privacy concerns, because the activations flowing through the split layer may leak information about the clients' private input data. There is currently no good way to quantify how much private information leaks through the split layer, nor to enforce privacy up to a desired level. In this work, we propose using Fisher information as a privacy metric to measure and control this information leakage. We show that Fisher information provides an intuitive understanding of how much private information leaks through the split layer, in the form of an error bound on an unbiased reconstruction attacker. We then propose a privacy-enhancing technique, ReFIL, that enforces a user-desired level of Fisher information leakage at the split layer, achieving high privacy while maintaining reasonable utility.
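To make the metric concrete, below is a minimal sketch (not the authors' released implementation) of how Fisher information bounds an unbiased reconstruction attacker. It assumes the split-layer activation is released with additive Gaussian noise, z = f(x) + n with n ~ N(0, sigma^2 I); for that channel the Fisher information matrix of z about x is I(x) = J^T J / sigma^2, where J is the Jacobian of the client-side sub-model f at x, and the Cramer-Rao bound gives E[||x_hat(z) - x||^2] >= tr(I(x)^{-1}) for any unbiased attacker x_hat. The toy client_model, its dimensions, and the ridge term are hypothetical.

```python
import torch
from torch.autograd.functional import jacobian

torch.manual_seed(0)

# Hypothetical client-side sub-model: raw input -> split-layer activation.
client_model = torch.nn.Sequential(
    torch.nn.Linear(8, 16),
    torch.nn.ReLU(),
    torch.nn.Linear(16, 8),
)

def reconstruction_error_bound(x: torch.Tensor, sigma: float) -> float:
    """Cramer-Rao lower bound on the MSE of any unbiased attacker that
    reconstructs x from the noisy split activation z = f(x) + N(0, sigma^2 I)."""
    J = jacobian(client_model, x)          # Jacobian of f at x, shape (8, 8)
    fim = J.T @ J / sigma**2               # Fisher information of z about x
    fim += 1e-6 * torch.eye(fim.shape[0])  # ridge: ReLU can make J rank-deficient
    return torch.linalg.inv(fim).trace().item()  # tr(I(x)^{-1})

x = torch.randn(8)
for sigma in (0.01, 0.1, 1.0):
    # Larger noise -> less Fisher information -> higher attacker error floor.
    print(f"sigma={sigma}: attacker MSE >= {reconstruction_error_bound(x, sigma):.4f}")
```

Under this reading, a ReFIL-style control could calibrate sigma so that tr(I(x)^{-1}) stays above a user-chosen floor, though the abstract does not spell out ReFIL's exact mechanism.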

Related research

02/23/2021
Measuring Data Leakage in Machine-Learning Models with Fisher Information
Machine-learning models contain information about the data they were tra...

10/18/2022
Making Split Learning Resilient to Label Leakage by Potential Energy Loss
As a practical privacy-preserving learning method, split learning has dr...

08/22/2022
Split-U-Net: Preventing Data Leakage in Split Learning for Collaborative Multi-Modal Brain Tumor Segmentation
Split learning (SL) has been proposed to train deep learning models in a...

09/13/2020
Information Laundering for Model Privacy
In this work, we propose information laundering, a novel framework for e...

08/30/2023
Split Without a Leak: Reducing Privacy Leakage in Split Learning
The popularity of Deep Learning (DL) makes the privacy of sensitive data...

06/10/2022
Binarizing Split Learning for Data Privacy Enhancement and Computation Reduction
Split learning (SL) enables data privacy preservation by allowing client...

06/21/2022
The Privacy Onion Effect: Memorization is Relative
Machine learning models trained on private datasets have been shown to l...
