Private Non-Convex Federated Learning Without a Trusted Server

03/13/2022
by Andrew Lowy, et al.

We study differentially private (DP) federated learning (FL) with non-convex loss functions and heterogeneous (non-i.i.d.) client data in the absence of a trusted server, both with and without a secure "shuffler" to anonymize client reports. We propose novel algorithms that satisfy local differential privacy (LDP) at the client level and shuffle differential privacy (SDP) for three classes of Lipschitz-continuous loss functions. First, we consider losses satisfying the proximal Polyak-Łojasiewicz (PL) inequality, an extension of the classical PL condition to the constrained setting. Prior work on DP PL optimization considers only the unconstrained problem with Lipschitz loss functions, which rules out many practically interesting losses, such as strongly convex losses, least squares, and regularized logistic regression. By analyzing the proximal PL setting instead, we accommodate such losses, which are Lipschitz on a restricted parameter domain, and we propose LDP and SDP algorithms that nearly attain the optimal rates known for the strongly convex, homogeneous (i.i.d.) case. Second, we provide the first DP algorithms for non-convex, non-smooth loss functions. Third, we specialize our analysis to smooth, unconstrained non-convex FL. Our bounds improve on the state of the art, even in the special case of a single client, and match the non-private lower bound in certain practical parameter regimes. Numerical experiments show that our algorithm yields better accuracy than baselines for most privacy levels.
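For context, here is a standard statement of the proximal PL condition referenced above (following Karimi, Nutini, and Schmidt, 2016), for a composite objective F(w) = f(w) + g(w) with f L-smooth and g convex, e.g., the indicator of a constraint set; the notation below follows that reference and is not necessarily the paper's own:

```latex
% Proximal PL inequality with parameter \mu > 0:
\tfrac{1}{2}\,\mathcal{D}_g(w, L) \;\ge\; \mu\,\bigl(F(w) - F^{*}\bigr),
\quad\text{where}\quad
\mathcal{D}_g(w, \alpha) := -2\alpha \min_{y}\Bigl[
  \langle \nabla f(w),\, y - w \rangle
  + \tfrac{\alpha}{2}\lVert y - w \rVert^{2}
  + g(y) - g(w) \Bigr].
% With g \equiv 0 this reduces to the classical PL condition
% \lVert \nabla f(w) \rVert^{2} \ge 2\mu\,\bigl(f(w) - f^{*}\bigr).
```

As a rough illustration of the trust model only (not the paper's algorithm), the sketch below shows one round of client-level LDP gradient aggregation with an optional shuffler: each client clips and noises its own gradient before it leaves the device, and the shuffler merely permutes the already-privatized reports. The clipping threshold, noise scale, step size, and toy least-squares data are all hypothetical and not calibrated to any particular (epsilon, delta).

```python
import numpy as np

def ldp_client_report(grad, clip_norm, noise_std, rng):
    """Privatize a gradient on-device: clip its norm, then add Gaussian
    noise, so the report is already private before reaching any server."""
    scale = min(1.0, clip_norm / max(np.linalg.norm(grad), 1e-12))
    return grad * scale + rng.normal(0.0, noise_std, size=grad.shape)

def shuffle_and_average(reports, rng):
    """The shuffler only permutes the noisy reports (anonymization);
    the untrusted server then averages them."""
    perm = rng.permutation(len(reports))
    return np.mean([reports[i] for i in perm], axis=0)

rng = np.random.default_rng(0)
d, n_clients = 5, 20
# Hypothetical heterogeneous client data: client i holds (A_i, b_i).
data = [(rng.normal(size=(10, d)), rng.normal(size=10)) for _ in range(n_clients)]

w = np.zeros(d)
for _ in range(100):
    reports = [
        ldp_client_report(A.T @ (A @ w - b) / len(b),  # local LS gradient
                          clip_norm=1.0, noise_std=0.1, rng=rng)
        for A, b in data
    ]
    w -= 0.05 * shuffle_and_average(reports, rng)
```

Since the server only computes a permutation-invariant average, shuffling leaves the update numerically unchanged; its role is in the privacy analysis, where anonymizing the reports amplifies each client's local guarantee into the stronger SDP guarantee.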


Related research

09/15/2022
Private Stochastic Optimization in the Presence of Outliers: Optimal Rates for (Non-Smooth) Convex Losses and Extension to Non-Convex Losses
We study differentially private (DP) stochastic optimization (SO) with d...

11/17/2021
Differentially Private Federated Learning on Heterogeneous Data
Federated Learning (FL) is a paradigm for large-scale distributed learni...

06/17/2021
Locally Differentially Private Federated Learning: Efficient Algorithms with Tight Risk Bounds
Federated learning (FL) is a distributed learning paradigm in which many...

06/13/2021
DP-NormFedAvg: Normalizing Client Updates for Privacy-Preserving Federated Learning
In this paper, we focus on facilitating differentially private quantized...

08/19/2021
Order Optimal One-Shot Federated Learning for non-Convex Loss Functions
We consider the problem of federated learning in a one-shot setting in w...

03/01/2022
Private Convex Optimization via Exponential Mechanism
In this paper, we study private optimization problems for non-smooth con...

11/14/2020
A Theoretical Perspective on Differentially Private Federated Multi-task Learning
In the era of big data, the need to expand the amount of data through da...
