An Analysis of Untargeted Poisoning Attack and Defense Methods for Federated Online Learning to Rank Systems

07/04/2023
by Shuyi Wang, et al.

Federated online learning to rank (FOLTR) aims to preserve user privacy by not sharing users' searchable data and search interactions, while guaranteeing high search effectiveness, especially in contexts where individual users have scarce training data and interactions. To this end, FOLTR trains learning-to-rank models in an online manner, i.e. by exploiting users' interactions with the search system (queries, clicks) rather than editorial labels, and federatively, i.e. by not aggregating interaction data in a central server for training purposes, but instead training an instance of the model on each user's device on their own private data and then sharing the model updates, not the data, across the set of users that form the federation. Existing FOLTR methods build upon advances in federated learning. While federated learning methods have been shown to be effective at training machine learning models in a distributed way without the need for data sharing, they can be susceptible to attacks that target either the system's security or its overall effectiveness. In this paper, we consider attacks on FOLTR systems that aim to compromise their search effectiveness. Within this scope, we experiment with and analyse data and model poisoning attack methods to showcase their impact on FOLTR search effectiveness. We also explore the effectiveness of defense methods designed to counteract attacks on FOLTR systems. We contribute an understanding of the effect of attack and defense methods on FOLTR systems, and we identify the key factors influencing their effectiveness.
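The untargeted model poisoning threat the abstract describes can be illustrated with a minimal sketch. The setup below is an illustrative assumption, not the paper's actual method: a toy federation of clients trains a linear pairwise ranker (RankNet-style logistic updates) by averaging model deltas, and one malicious client submits a flipped, scaled delta. An unweighted federated average has no way to filter such an update out, so overall ranking effectiveness degrades.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear ranker: score(doc) = w . features. Each client holds private
# pairwise preferences (standing in for click feedback) and shares only
# model updates, never its data. All names here are hypothetical.
DIM = 5
true_w = rng.normal(size=DIM)

def make_client_data(n=200):
    # Generate document pairs and orient them so the first element of each
    # pair is the one preferred under the ground-truth ranker.
    a = rng.normal(size=(n, DIM))
    b = rng.normal(size=(n, DIM))
    prefer_a = (a @ true_w) > (b @ true_w)
    winners = np.where(prefer_a[:, None], a, b)
    losers = np.where(prefer_a[:, None], b, a)
    return winners, losers

def local_update(w, winners, losers, lr=0.1):
    # One gradient step on the pairwise logistic (RankNet-style) loss
    # -log sigmoid(w . (winner - loser)), averaged over the client's pairs.
    diff = winners - losers
    margins = diff @ w
    grad = -(diff * (1.0 / (1.0 + np.exp(margins)))[:, None]).mean(axis=0)
    return w - lr * grad

def pairwise_accuracy(w, winners, losers):
    # Fraction of held-out pairs the model orders correctly.
    return float(((winners - losers) @ w > 0).mean())

def run_federation(num_rounds=50, num_clients=5, malicious=None, boost=5.0):
    # Plain federated averaging of per-client model deltas; the malicious
    # client (if any) sends a sign-flipped, scaled delta each round.
    w = np.zeros(DIM)
    clients = [make_client_data() for _ in range(num_clients)]
    for _ in range(num_rounds):
        deltas = []
        for i, (win, lose) in enumerate(clients):
            delta = local_update(w, win, lose) - w
            if i == malicious:
                delta = -boost * delta  # untargeted model poisoning
            deltas.append(delta)
        w = w + np.mean(deltas, axis=0)
    test_win, test_lose = make_client_data(1000)
    return pairwise_accuracy(w, test_win, test_lose)
```

Running `run_federation(malicious=None)` versus `run_federation(malicious=0)` shows the clean federation reaching high pairwise accuracy while a single poisoner drags the averaged model away from the useful direction; defenses such as robust aggregation aim to close exactly this gap.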


Related research

06/10/2021 Gradient Disaggregation: Breaking Privacy in Federated Learning by Reconstructing the User Participant Matrix
We show that aggregated model updates in federated learning may be insec...

05/11/2021 Federated Unbiased Learning to Rank
Unbiased Learning to Rank (ULTR) studies the problem of learning a ranki...

10/17/2022 Thinking Two Moves Ahead: Anticipating Other Users Improves Backdoor Attacks in Federated Learning
Federated learning is particularly susceptible to model poisoning and ba...

09/23/2020 Pocket Diagnosis: Secure Federated Learning against Poisoning Attack in the Cloud
Federated learning has become prevalent in medical diagnosis due to its ...

11/03/2022 Try to Avoid Attacks: A Federated Data Sanitization Defense for Healthcare IoMT Systems
Healthcare IoMT systems are becoming intelligent, miniaturized, and more...

04/20/2022 Is Non-IID Data a Threat in Federated Online Learning to Rank?
In this perspective paper we study the effect of non independent and ide...

06/21/2020 With Great Dispersion Comes Greater Resilience: Efficient Poisoning Attacks and Defenses for Online Regression Models
With the rise of third parties in the machine learning pipeline, the ser...
