Secure Byzantine-Robust Machine Learning

06/08/2020
by Lie He, et al.

Increasingly, machine learning systems are being deployed to edge servers and devices (e.g., mobile phones) and trained collaboratively. Such distributed/federated/decentralized training raises a number of concerns about the robustness, privacy, and security of the procedure. While extensive work has tackled robustness, privacy, and security individually, their combination has rarely been studied. In this paper, we propose a secure two-server protocol that offers both input privacy and Byzantine robustness. In addition, the protocol is communication-efficient, fault-tolerant, and enjoys local differential privacy.
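To make the two-server idea concrete, here is a toy sketch of how input privacy and Byzantine robustness can be combined: each client additively splits its gradient into two shares, so neither server alone learns anything about the gradient, and the servers jointly reconstruct per-client gradients before applying a robust aggregation rule. This is an illustration only, not the paper's actual protocol: it masks over floats rather than a finite ring, omits the cryptographic machinery, and uses coordinate-wise median as a stand-in for the robust aggregator.

```python
import numpy as np

rng = np.random.default_rng(0)

def share(grad):
    """Split a gradient into two additive shares.

    Each share alone is indistinguishable from random noise;
    only the sum of both shares reveals the gradient.
    """
    mask = rng.standard_normal(grad.shape)
    return grad - mask, mask

def robust_aggregate(shares_server1, shares_server2):
    """Jointly reconstruct per-client gradients, then aggregate robustly.

    Coordinate-wise median tolerates a minority of Byzantine clients
    that submit arbitrary values.
    """
    grads = np.stack([a + b for a, b in zip(shares_server1, shares_server2)])
    return np.median(grads, axis=0)

# Three honest clients report gradients near 1.0; one Byzantine client
# submits a wildly wrong gradient.
client_grads = [np.ones(4), 1.1 * np.ones(4), 0.9 * np.ones(4), np.full(4, 100.0)]

shares = [share(g) for g in client_grads]
server1 = [s[0] for s in shares]  # server 1 only ever sees these shares
server2 = [s[1] for s in shares]  # server 2 only ever sees these shares

agg = robust_aggregate(server1, server2)  # close to the honest gradients
```

A plain average of the same inputs would be pulled to roughly 25.75 per coordinate by the single Byzantine client, while the median stays near the honest value of 1.0.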


