Subject Granular Differential Privacy in Federated Learning

06/07/2022
by Virendra J. Marathe et al.

This paper introduces subject granular privacy in the Federated Learning (FL) setting, where a subject is an individual whose private information is embodied by several data items either confined within a single federation user or distributed across multiple federation users. We formally define the notion of subject level differential privacy for FL. We propose three new algorithms that enforce subject level DP. Two of these algorithms are based on notions of user level local differential privacy (LDP) and group differential privacy respectively. The third algorithm is based on a novel idea of hierarchical gradient averaging (HiGradAvgDP) for subjects participating in a training mini-batch. We also introduce horizontal composition of privacy loss for a subject across multiple federation users. We show that horizontal composition is equivalent to sequential composition in the worst case. We prove the subject level DP guarantee for all our algorithms and empirically analyze them using the FEMNIST and Shakespeare datasets. Our evaluation shows that, of our three algorithms, HiGradAvgDP delivers the best model performance, approaching that of a model trained using a DP-SGD based algorithm that provides a weaker item level privacy guarantee.
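To make the hierarchical gradient averaging idea concrete, the following is a minimal sketch of one possible training step in the spirit of HiGradAvgDP, not the paper's actual implementation. It assumes per-item gradients are available as NumPy arrays, that each item is tagged with a subject identifier, and that per-subject averaging plus clipping is used to bound any single subject's influence before Gaussian noise is added; the function name and parameters are hypothetical.

```python
import numpy as np

def higradavg_dp_step(item_grads, subject_ids, clip_norm=1.0,
                      noise_mult=1.0, rng=None):
    """Hypothetical sketch of subject-level hierarchical gradient averaging.

    item_grads  -- list of per-item gradient vectors (np.ndarray) in a mini-batch
    subject_ids -- subject identifier for each item (same length as item_grads)
    clip_norm   -- L2 bound applied to each subject's averaged gradient
    noise_mult  -- Gaussian noise multiplier (0 disables noise, for testing)
    """
    rng = rng if rng is not None else np.random.default_rng()

    # Level 1: group item gradients by the subject whose data produced them.
    by_subject = {}
    for g, s in zip(item_grads, subject_ids):
        by_subject.setdefault(s, []).append(g)

    # Average within each subject, then clip, so one subject's total
    # contribution to the batch gradient is bounded regardless of how
    # many of its items appear in the mini-batch.
    subject_grads = []
    for grads in by_subject.values():
        g = np.mean(grads, axis=0)
        norm = np.linalg.norm(g)
        if norm > clip_norm:
            g = g * (clip_norm / norm)
        subject_grads.append(g)

    # Level 2: average across subjects and add Gaussian noise calibrated
    # to the per-subject sensitivity clip_norm.
    avg = np.mean(subject_grads, axis=0)
    sigma = noise_mult * clip_norm / len(subject_grads)
    return avg + rng.normal(0.0, sigma, size=avg.shape)
```

Because the batch gradient is an average over clipped per-subject gradients, replacing all of one subject's items changes the result by at most `clip_norm / num_subjects` in L2 norm, which is the sensitivity the Gaussian noise is scaled against in this sketch.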
