Is Non-IID Data a Threat in Federated Online Learning to Rank?

04/20/2022
by Shuyi Wang, et al.

In this perspective paper we study the effect of non-independent and identically distributed (non-IID) data on federated online learning to rank (FOLTR) and chart directions for future work in this new and largely unexplored research area of Information Retrieval. In the FOLTR process, clients participate in a federation to jointly create an effective ranker from the implicit click signal originating in each client, without the need to share data (documents, queries, clicks). A well-known factor that affects the performance of federated learning systems, and that poses serious challenges to these approaches, is that data may be distributed across clients in a biased manner. While FOLTR systems are, in their own right, a type of federated learning system, the presence and effect of non-IID data in FOLTR has not been studied. To this end, we first enumerate possible data distribution settings that may introduce data bias across clients and thus give rise to the non-IID problem. Then, we study the impact of each setting on the performance of the current state-of-the-art FOLTR approach, Federated Pairwise Differentiable Gradient Descent (FPDGD), and we highlight which data distributions may pose a problem for FOLTR methods. We also explore how common approaches proposed in the federated learning literature to address non-IID issues fare in FOLTR. This allows us to unveil new research gaps that, we argue, future research in FOLTR should consider. This is an important contribution to the current state of the FOLTR field because, for FOLTR systems to be deployed, the factors affecting their performance, including the impact of non-IID data, need to be thoroughly understood.
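The federation described above follows the usual federated learning loop: each client updates a local copy of the ranker from its own click signal, and only model updates (never documents, queries, or clicks) reach the server, which aggregates them. A minimal sketch of that loop, assuming a FedAvg-style weighted average over linear-ranker weights (function names and the stand-in gradient are illustrative, not from the paper):

```python
import numpy as np

def local_update(weights, clicks, lr=0.1):
    # Each client nudges its local ranker copy using a gradient derived
    # from its own click signal; the raw clicks never leave the client.
    grad = np.mean(clicks, axis=0)  # stand-in for a pairwise-loss gradient
    return weights - lr * grad

def federated_average(client_weights, client_sizes):
    # The server combines the returned models, weighting each client by
    # the amount of local interaction data it contributed.
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Two clients whose click data follow different distributions (non-IID):
global_w = np.zeros(3)
clients = [np.array([[1.0, 0.0, 0.0]] * 4),   # client A: clicks skew one way
           np.array([[0.0, 0.0, 1.0]] * 8)]   # client B: a different skew
updated = [local_update(global_w, c) for c in clients]
global_w = federated_average(updated, [len(c) for c in clients])
```

Under this kind of aggregation, the skews of the two clients pull the global ranker in different directions, which is exactly why biased per-client distributions can degrade the federated model.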

