NoPeek: Information leakage reduction to share activations in distributed deep learning

08/20/2020
by Praneeth Vepakomma, et al.

For distributed machine learning with sensitive data, we demonstrate how minimizing the distance correlation between raw data and intermediary representations reduces leakage of sensitive raw-data patterns across client communications while maintaining model accuracy. Leakage, measured as the distance correlation between inputs and intermediate representations, is the risk that raw data can be inverted from those intermediary representations; this risk can deter client entities that hold sensitive data from using distributed deep learning services. Our method reduces the distance correlation between raw data and learned representations during training and inference on image datasets, and we demonstrate that it is resilient to such reconstruction attacks. It prevents reconstruction of raw data while retaining the information required to sustain good classification accuracy.
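The idea described above can be sketched as a regularized training objective: the usual task loss is augmented with a distance-correlation penalty between the raw inputs and the activations that would be shared. Below is a minimal PyTorch sketch under stated assumptions; the names nopeek_loss and distance_correlation, the weight alpha, and the use of the biased (V-statistic) sample distance correlation are illustrative choices, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def _double_centered_distances(x):
    # Pairwise Euclidean distance matrix of a flattened batch, double-centered
    # (subtract row and column means, add back the grand mean).
    x = x.flatten(start_dim=1)
    d = torch.cdist(x, x, p=2)
    return d - d.mean(dim=0, keepdim=True) - d.mean(dim=1, keepdim=True) + d.mean()

def distance_correlation(x, z, eps=1e-9):
    # Biased (V-statistic) sample distance correlation between batches x and z.
    A = _double_centered_distances(x)
    B = _double_centered_distances(z)
    dcov2 = (A * B).mean()      # squared distance covariance
    dvar_x2 = (A * A).mean()    # squared distance variance of x
    dvar_z2 = (B * B).mean()    # squared distance variance of z
    dcor2 = dcov2 / (torch.sqrt(dvar_x2 * dvar_z2) + eps)
    return torch.sqrt(torch.clamp(dcor2, min=0.0))

def nopeek_loss(x, z, logits, targets, alpha=0.1):
    # Illustrative combined objective: classification loss plus a leakage
    # penalty on the shared activations z (alpha is an assumed trade-off weight).
    return F.cross_entropy(logits, targets) + alpha * distance_correlation(x, z)
```

In a split-learning setting, z would be the activations at the cut layer that the client transmits, and alpha would trade off how much the representation is decorrelated from the raw input against classification accuracy.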
