Untargeted Poisoning Attack Detection in Federated Learning via Behavior Attestation

01/24/2021
by Ranwa Al Mallah, et al.

Federated Learning (FL) is a paradigm in Machine Learning (ML) that addresses issues of data privacy, security, access rights and access to heterogeneous information by training a global model using distributed nodes. Despite its advantages, there is an increased potential for cyberattacks on FL-based ML techniques that can undermine these benefits. Model-poisoning attacks on FL target the availability of the model; the adversarial objective is to disrupt the training. We propose attestedFL, a defense mechanism that monitors the training of individual nodes through state persistence in order to detect malicious workers. A fine-grained assessment of a worker's history permits the evaluation of its behavior over time and enables innovative detection strategies. We present three lines of defense that assess whether a worker is reliable by observing if the node is really training and advancing towards a goal. Our defense exposes an attacker's malicious behavior and removes unreliable nodes from the aggregation process so that the FL process converges faster. Through extensive evaluations and against various adversarial settings, attestedFL increased the accuracy of the model by 12% to 58% under different scenarios, such as attacks performed at different stages of convergence, attackers colluding, and continuous attacks.
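The abstract does not detail the attestation mechanism, but the core idea of keeping per-worker state and excluding unreliable workers from aggregation can be illustrated with a short sketch. The following is a minimal, hypothetical example, not the authors' attestedFL implementation: the server keeps a short history of each worker's updates and drops workers whose recent updates show no consistent training direction before running a plain FedAvg step. The class name, the cosine-similarity reliability test, and the thresholds are illustrative assumptions.

```python
# Minimal sketch (assumed design, not the authors' attestedFL code) of
# behavior-attestation-style filtering before FedAvg aggregation.
# Assumption: each worker submits a model update (a numpy vector) every round;
# the server keeps a per-worker history and excludes workers whose recent
# updates show no consistent direction of progress.
import numpy as np
from collections import defaultdict

class AttestationServer:
    def __init__(self, dim, history_len=5, min_cosine=0.0):
        self.global_model = np.zeros(dim)
        self.history = defaultdict(list)      # worker_id -> recent update vectors
        self.history_len = history_len
        self.min_cosine = min_cosine          # hypothetical reliability threshold

    def _reliable(self, worker_id):
        # A worker is kept only if its successive updates point, on average,
        # in a consistent direction (a crude proxy for "really training").
        hist = self.history[worker_id]
        if len(hist) < 2:
            return True                       # not enough evidence yet
        sims = []
        for prev, curr in zip(hist[:-1], hist[1:]):
            denom = np.linalg.norm(prev) * np.linalg.norm(curr)
            sims.append(float(prev @ curr) / denom if denom > 0 else 0.0)
        return float(np.mean(sims)) >= self.min_cosine

    def aggregate(self, updates):
        # updates: dict mapping worker_id -> model update vector for this round.
        for wid, upd in updates.items():
            self.history[wid].append(upd)
            self.history[wid] = self.history[wid][-self.history_len:]
        kept = [upd for wid, upd in updates.items() if self._reliable(wid)]
        if kept:                              # plain FedAvg over reliable workers only
            self.global_model += np.mean(kept, axis=0)
        return self.global_model
```

In attestedFL itself, the three lines of defense are richer than this single heuristic; the sketch only shows where a per-worker reliability check, driven by persisted history, would sit relative to the aggregation step.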


