(Amplified) Banded Matrix Factorization: A unified approach to private training

Matrix factorization (MF) mechanisms for differential privacy (DP) have substantially improved the state-of-the-art in privacy-utility-computation tradeoffs for ML applications in a variety of scenarios, but in both the centralized and federated settings there remain instances where either MF cannot be easily applied, or other algorithms provide better tradeoffs (typically, as ϵ becomes small). In this work, we show how MF can subsume prior state-of-the-art algorithms in both federated and centralized training settings, across all privacy budgets. The key technique throughout is the construction of MF mechanisms with banded matrices. For cross-device federated learning (FL), this enables multiple participations with a relaxed device-participation schema compatible with practical FL infrastructure (as demonstrated by a production deployment). In the centralized setting, we prove that banded matrices enjoy the same privacy amplification results as the ubiquitous DP-SGD algorithm, but can provide strictly better performance in most scenarios; this lets us always at least match DP-SGD, and often outperform it even at ϵ ≪ 2. Finally, b̂-banded matrices substantially reduce the memory and time complexity of per-step noise generation from 𝒪(n), where n is the total number of iterations, to a constant 𝒪(b̂), compared to general MF mechanisms.
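To make the complexity claim concrete, here is a minimal sketch (the function name, coefficients, and buffer scheme are illustrative assumptions, not taken from the paper) of streaming correlated-noise generation with a b̂-banded lower-triangular factor: the noise for step t is a linear combination of only the b̂ most recent i.i.d. Gaussian draws, so a ring buffer of size b̂ suffices regardless of the total number of iterations n.

```python
import numpy as np

def banded_noise_stream(coeffs, dim, num_steps, rng):
    """Yield correlated noise z_t = sum_j coeffs[j] * w_{t-j}, where the
    w_t are i.i.d. Gaussian draws, using a ring buffer of the last
    b_hat = len(coeffs) draws: O(b_hat) memory, independent of num_steps.
    (Illustrative sketch; coeffs would come from the banded MF factor.)"""
    b_hat = len(coeffs)
    buf = np.zeros((b_hat, dim))  # ring buffer of recent i.i.d. draws
    for t in range(num_steps):
        buf[t % b_hat] = rng.standard_normal(dim)  # fresh draw w_t
        # Combine the last min(t+1, b_hat) draws with the band coefficients.
        z = sum(coeffs[j] * buf[(t - j) % b_hat]
                for j in range(min(t + 1, b_hat)))
        yield z
```

For comparison, a general (dense lower-triangular) MF mechanism would need all n previous draws to form each step's noise; the banded structure is what collapses that to a constant-size state.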


