Practical Homomorphic Aggregation for Byzantine ML

09/11/2023
by Antoine Choffrut, et al.

Due to the large-scale availability of data, machine learning (ML) algorithms are increasingly deployed in distributed topologies, where different nodes collaborate to train ML models over their individual data by exchanging model-related information (e.g., gradients) with a central server. However, distributed learning schemes are notably vulnerable to two threats. First, Byzantine nodes can single-handedly corrupt the learning process by sending incorrect information to the server, e.g., erroneous gradients. The standard approach to mitigating such behavior is to use a non-linear robust aggregation method at the server. Second, the server can violate the privacy of the nodes. Recent attacks have shown that exchanging (unencrypted) gradients enables a curious server to recover the totality of the nodes' data. The use of homomorphic encryption (HE), a gold-standard security primitive, has been extensively studied as a privacy-preserving solution to distributed learning in non-Byzantine scenarios. However, due to HE's large computational demand, especially for high-dimensional ML models, there has not yet been any attempt to design purely homomorphic operators for non-linear robust aggregators. In this work, we present SABLE, the first completely homomorphic and Byzantine-robust distributed learning algorithm. SABLE relies on a novel plaintext encoding method that enables us to implement the robust aggregator over batching-friendly BGV. Moreover, this encoding scheme also accelerates state-of-the-art homomorphic sorting, with larger security margins and smaller ciphertext sizes. We perform extensive experiments on image classification tasks and show that our algorithm achieves practical execution times while matching the ML performance of its non-private counterpart.
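The abstract references two building blocks without detail: a non-linear robust aggregator at the server, and data-oblivious (comparison-network) sorting, which homomorphic sorting requires because encrypted comparison results cannot drive branches. The sketch below is a minimal plaintext NumPy illustration of those ideas, not SABLE's homomorphic implementation: it computes a coordinate-wise median, a classic robust aggregator, via an odd-even transposition network whose compare-exchange schedule is fixed in advance. All function names here are illustrative.

import numpy as np

def compare_exchange(a, b):
    # Data-oblivious min/max: no data-dependent branching,
    # mirroring the constraint under HE where comparison
    # results stay encrypted.
    return np.minimum(a, b), np.maximum(a, b)

def oblivious_sort(rows):
    # Odd-even transposition sort over the n gradient vectors,
    # applied coordinate-wise. The sequence of compare-exchanges
    # is fixed in advance (data-independent), as a homomorphic
    # evaluation requires.
    n = len(rows)
    rows = [r.copy() for r in rows]
    for step in range(n):
        for i in range(step % 2, n - 1, 2):
            rows[i], rows[i + 1] = compare_exchange(rows[i], rows[i + 1])
    return rows

def coordinate_wise_median(gradients):
    # A classic non-linear robust aggregator: the coordinate-wise
    # median tolerates a minority of Byzantine gradients.
    s = oblivious_sort(gradients)
    n = len(s)
    if n % 2 == 1:
        return s[n // 2]
    return 0.5 * (s[n // 2 - 1] + s[n // 2])

# Example: 4 honest workers plus 1 Byzantine worker sending a huge gradient.
rng = np.random.default_rng(0)
honest = [rng.normal(0.0, 1.0, size=8) for _ in range(4)]
byzantine = [np.full(8, 1e6)]
print(coordinate_wise_median(honest + byzantine))  # stays near the honest gradients

Under HE, each compare-exchange would instead be evaluated on BGV ciphertexts; the fixed, data-independent schedule is what makes the aggregator expressible as a homomorphic circuit in the first place.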
