Shielding Collaborative Learning: Mitigating Poisoning Attacks through Client-Side Detection

10/29/2019
by Lingchen Zhao, et al.

Collaborative learning allows multiple clients to train a joint model without sharing their data with each other. Each client performs training locally and then submits its model updates to a central server for aggregation. Since the server has no visibility into how the updates are generated, collaborative learning is vulnerable to poisoning attacks, in which a malicious client submits a poisoned update that introduces backdoor functionality into the joint model. Existing solutions for detecting poisoned updates, however, fail to defend against recently proposed attacks, especially in the non-IID setting. In this paper, we present a novel defense scheme that detects anomalous updates in both IID and non-IID settings. Our key idea is client-side cross-validation: each update is evaluated over other clients' local data, and the server adjusts the weights of the updates based on the evaluation results when performing aggregation. To adapt to the unbalanced data distribution in the non-IID setting, a dynamic client allocation mechanism assigns detection tasks to the most suitable clients. During detection, we also protect client-level privacy, preventing malicious clients from stealing the training data of other clients, by integrating differential privacy into our design without degrading detection performance. Experimental evaluations on two real-world datasets show that our scheme is highly robust against two representative poisoning attacks.
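The sketch below illustrates the core idea of cross-validation-weighted aggregation in a simplified form; it is not the authors' exact algorithm. The helper names (evaluate_update, aggregate), the placeholder linear scoring, and the score-proportional weighting are illustrative assumptions, but they capture how updates that perform poorly on other clients' data can be down-weighted during aggregation.

```python
# Minimal sketch of client-side cross-validation with weighted aggregation.
# Assumptions: updates are flat parameter vectors, validators score updates
# with a toy linear classifier, and weights are proportional to mean scores.
import numpy as np

def evaluate_update(update, local_data, local_labels):
    """Hypothetical client-side check: return an accuracy-like score in [0, 1]
    for a candidate update evaluated on this client's own local data."""
    logits = local_data @ update            # placeholder linear model
    preds = (logits > 0).astype(int)
    return float((preds == local_labels).mean())

def aggregate(updates, validators):
    """Server-side step: weight each update by the mean score assigned by the
    validating clients, then combine the updates into a joint-model update."""
    scores = np.array([
        np.mean([evaluate_update(u, data, labels) for data, labels in validators])
        for u in updates
    ])
    # A poisoned update that generalizes badly to other clients' data
    # receives a small normalized weight.
    weights = scores / scores.sum()
    return np.average(updates, axis=0, weights=weights)

# Toy usage: three clients submit updates, two validator clients score them.
rng = np.random.default_rng(0)
updates = [rng.normal(size=5) for _ in range(3)]
validators = [(rng.normal(size=(20, 5)), rng.integers(0, 2, size=20))
              for _ in range(2)]
print(aggregate(updates, validators))
```

In the paper's full design, the validators would additionally be chosen by the dynamic client allocation mechanism and would report scores under differential privacy; both of those components are omitted from this sketch.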

