Covert Model Poisoning Against Federated Learning: Algorithm Design and Optimization

01/28/2021
by Kang Wei et al.

Federated learning (FL), as a type of distributed machine learning framework, is vulnerable to external attacks on FL models during parameter transmission. An attacker in FL may control a number of participating clients and purposely craft the uploaded model parameters to manipulate system outputs, namely, model poisoning (MP). In this paper, we propose effective MP algorithms that combat state-of-the-art defensive aggregation mechanisms (e.g., Krum and Trimmed mean) implemented at the server without being noticed, i.e., covert MP (CMP). Specifically, we first formulate MP as an optimization problem that minimizes the Euclidean distance between the manipulated model and a designated target model, subject to the constraint imposed by a defensive aggregation rule. We then develop CMP algorithms against different defensive mechanisms based on the solutions of their corresponding optimization problems. Furthermore, to reduce the optimization complexity, we propose low-complexity CMP algorithms that incur only a slight performance degradation. For the case where the attacker does not know the defensive aggregation mechanism, we design a blind CMP algorithm, in which the manipulated model is adjusted adaptively according to the aggregated model produced by the unknown defense. Our experimental results demonstrate that the proposed CMP algorithms are effective and substantially outperform existing attack mechanisms.
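The abstract does not include pseudocode, so the following is only a minimal, illustrative sketch of the two ingredients it describes: the Krum aggregation rule and the covertness idea of pushing a crafted update toward a designated target model while still being accepted by the defense. The function names krum_select and craft_covert_update, the bisection heuristic, and all parameters are assumptions made for illustration; they are not the paper's actual CMP algorithms, which solve a constrained Euclidean-distance minimization for each defense.

```python
import numpy as np

def krum_select(updates, num_malicious):
    """Krum: score each update by the sum of squared distances to its
    n - f - 2 nearest neighbours and return the index with the lowest score."""
    n = len(updates)
    k = n - num_malicious - 2                      # number of neighbours considered
    dists = np.array([[np.sum((u - v) ** 2) for v in updates] for u in updates])
    scores = []
    for i in range(n):
        neighbour_dists = np.sort(np.delete(dists[i], i))[:k]
        scores.append(neighbour_dists.sum())
    return int(np.argmin(scores))

def craft_covert_update(benign_updates, target_model, num_malicious, steps=20):
    """Toy covert-poisoning heuristic (illustrative, not the paper's algorithm):
    interpolate between a benign-looking update and the attacker's target, and
    keep the largest step toward the target that Krum still accepts."""
    benign_ref = np.mean(benign_updates, axis=0)   # attacker's estimate of honest behaviour
    best = benign_ref
    lo, hi = 0.0, 1.0                              # interpolation weight toward the target
    for _ in range(steps):                         # bisection on the weight
        lam = (lo + hi) / 2
        candidate = (1 - lam) * benign_ref + lam * target_model
        # the attacker submits identical copies from all controlled clients
        pool = list(benign_updates) + [candidate] * num_malicious
        if krum_select(pool, num_malicious) >= len(benign_updates):
            best, lo = candidate, lam              # still selected: push further
        else:
            hi = lam                               # rejected: back off
    return best

# Example usage with synthetic updates (all values are hypothetical)
rng = np.random.default_rng(0)
benign = [rng.normal(0.0, 0.1, size=10) for _ in range(8)]
target = np.full(10, 5.0)                          # attacker's designated model
crafted = craft_covert_update(benign, target, num_malicious=2)
```

Here the bisection on the interpolation weight merely stands in for the paper's constrained optimization; the actual CMP algorithms derive the crafted model from the solution of the optimization problem for each specific defense, and, in the blind case, adjust it using only the aggregated model returned by the unknown defense.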


Related research

Just-in-Time Aggregation for Federated Learning (08/20/2022)
The increasing number and scale of federated learning (FL) jobs necessit...

Loosely Coupled Federated Learning Over Generative Models (09/28/2020)
Federated learning (FL) was proposed to achieve collaborative machine le...

Federated Learning with Unreliable Clients: Performance Analysis and Mechanism Design (05/10/2021)
Owing to the low communication costs and privacy-promoting capabilities,...

Blind leads Blind: A Zero-Knowledge Attack on Federated Learning (02/07/2022)
Attacks on Federated Learning (FL) can severely reduce the quality of th...

Approximate and Weighted Data Reconstruction Attack in Federated Learning (08/13/2023)
Federated Learning (FL) is a distributed learning paradigm that enables ...

STDLens: Model Hijacking-Resilient Federated Learning for Object Detection (03/21/2023)
Federated Learning (FL) has been gaining popularity as a collaborative l...

Federated Transfer Learning with Dynamic Gradient Aggregation (08/06/2020)
In this paper, a Federated Learning (FL) simulation platform is introduc...
