Privacy-Preserving Federated Learning via System Immersion and Random Matrix Encryption

04/05/2022
by Haleh Hayati, et al.

Federated learning (FL) has emerged as a privacy solution for collaborative distributed learning: clients train AI models directly on their devices instead of sharing their data with a centralized (potentially adversarial) server. Although FL preserves local data privacy to some extent, information about clients' data can still be inferred from model updates. In recent years, various privacy-preserving schemes have been developed to address this leakage; however, they often provide privacy at the expense of model performance or system efficiency, and balancing these tradeoffs is a crucial challenge when implementing FL schemes. In this manuscript, we propose a Privacy-Preserving Federated Learning (PPFL) framework built on the synergy of matrix encryption and system-immersion tools from control theory. The idea is to immerse the learning algorithm, Stochastic Gradient Descent (SGD), into a higher-dimensional system (the so-called target system) and design the dynamics of the target system so that: (1) the trajectories of the original SGD are immersed/embedded in its trajectories, and (2) it learns on encrypted data (here, via random matrix encryption). Matrix encryption is reformulated at the server as a random change of coordinates that maps the original parameters to a higher-dimensional parameter space and enforces that the target SGD converges to an encrypted version of the original SGD's optimal solution. The server decrypts the aggregated model using the left inverse of the immersion map. We show that our algorithm provides the same accuracy and convergence rate as standard FL at negligible computational cost, while revealing no information about the clients' data.
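The immersion idea in the abstract can be illustrated with a toy sketch. This is not the paper's actual protocol: for brevity, the decryption map appears inside the update rule here, whereas the full scheme designs the target dynamics so that clients never need the key. A random full-column-rank matrix `M` plays the role of the immersion/encryption map, and its Moore–Penrose pseudoinverse serves as the left inverse used for decryption. The least-squares problem, dimensions, and step size below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy least-squares problem: minimize ||A w - b||^2 over w in R^n.
n, n_tilde = 3, 6            # original and (higher) immersed dimensions
A = rng.normal(size=(20, n))
b = rng.normal(size=20)

def grad(w):
    # Gradient of the least-squares loss in the original coordinates.
    return 2 * A.T @ (A @ w - b)

# Random full-column-rank immersion map M: R^n -> R^{n_tilde}.
# M acts as the "encryption key"; its left inverse decrypts.
M = rng.normal(size=(n_tilde, n))
M_pinv = np.linalg.pinv(M)   # left inverse: M_pinv @ M == I_n

eta = 1e-3
w = np.zeros(n)              # plain SGD iterate (reference trajectory)
w_enc = M @ np.zeros(n)      # encrypted (immersed) iterate
for _ in range(500):
    w = w - eta * grad(w)
    # Target-system update, expressed in the immersed coordinates.
    w_enc = w_enc - eta * M @ grad(M_pinv @ w_enc)

# Decrypting the immersed trajectory recovers the plain SGD iterate:
# the original dynamics are embedded in the higher-dimensional system.
print(np.allclose(M_pinv @ w_enc, w))
```

Because `M_pinv @ M` is the identity, each decrypted immersed step coincides term-by-term with the plain gradient step, so the immersed system converges to `M` times the original optimum, which the server then decrypts.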


