Robust Federated Learning: The Case of Affine Distribution Shifts

06/16/2020
by Amirhossein Reisizadeh, et al.

Federated learning is a distributed paradigm that aims to train models on samples spread across multiple users in a network while keeping each sample on the user's own device, both for efficiency and to protect users' privacy. In such settings, the training data is often statistically heterogeneous and manifests various distribution shifts across users, which degrades the performance of the learnt model. The primary goal of this paper is to develop a robust federated learning algorithm that achieves satisfactory performance in the presence of distribution shifts in users' samples. To this end, we first consider a structured affine distribution shift in users' data that captures the device-dependent data heterogeneity of federated settings. This perturbation model applies to various federated learning problems, such as image classification, where images undergo device-dependent imperfections, e.g. different intensity, contrast, and brightness. To address affine distribution shifts across users, we propose a Federated Learning framework Robust to Affine distribution shifts (FLRA) that is provably robust against affine Wasserstein shifts to the distribution of observed samples. To solve FLRA's distributed minimax problem, we propose a fast and efficient optimization method based on Gradient Descent Ascent (GDA) and provide convergence guarantees. We further prove generalization error bounds for the learnt classifier, showing proper generalization from the empirical distribution of samples to the true underlying distribution. We perform several numerical experiments to empirically support FLRA: an affine distribution shift indeed suffices to significantly degrade the performance of the learnt classifier on a new test user, and our proposed algorithm achieves significant gains over standard federated learning and adversarial training methods.
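As a rough sketch, the robust training problem described in the abstract can be written as a distributed minimax program. The notation below (n users with local distributions P_i, loss ℓ, model f_θ, and norm bounds ε and ρ on the affine shift) is our own reconstruction from the abstract, not the paper's exact formulation:

```latex
\min_{\theta}\ \frac{1}{n}\sum_{i=1}^{n}\
\max_{\substack{\|\Lambda_i - I\|_F \le \epsilon \\ \|\delta_i\|_2 \le \rho}}\
\mathbb{E}_{(x,y)\sim P_i}\!\left[\ell\big(f_\theta(\Lambda_i x + \delta_i),\, y\big)\right]
```

Each user i gets its own affine shift (Λ_i, δ_i), which is what makes the perturbation device-dependent rather than a single global adversary. A minimal PyTorch sketch of one gradient descent ascent round under this model follows; the function name, hyperparameters, and client representation are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn.functional as F


def gda_round(model, clients, lr_theta=0.01, lr_adv=0.1,
              ascent_steps=5, eps=0.1, rho=0.1):
    """One round of gradient descent ascent under per-client affine
    perturbations x -> Lambda x + delta (hypothetical hyperparameters)."""
    global_state = [p.detach().clone() for p in model.parameters()]
    client_params = []
    for X, y in clients:  # each client holds a local batch (X, y)
        # start each client from the current global parameters
        with torch.no_grad():
            for p, s in zip(model.parameters(), global_state):
                p.copy_(s)
        d = X.shape[1]
        Lam = torch.eye(d, requires_grad=True)      # multiplicative shift
        delta = torch.zeros(d, requires_grad=True)  # additive shift
        # inner ascent: approximate this client's worst-case affine shift
        for _ in range(ascent_steps):
            loss = F.cross_entropy(model(X @ Lam.T + delta), y)
            g_lam, g_delta = torch.autograd.grad(loss, (Lam, delta))
            with torch.no_grad():
                Lam += lr_adv * g_lam
                delta += lr_adv * g_delta
                # project back onto the assumed perturbation ball
                dev = Lam - torch.eye(d)
                if dev.norm() > eps:
                    Lam.copy_(torch.eye(d) + eps * dev / dev.norm())
                if delta.norm() > rho:
                    delta.mul_(rho / delta.norm())
        # outer descent: update the model at the worst-case shift found
        loss = F.cross_entropy(model(X @ Lam.T + delta), y)
        grads = torch.autograd.grad(loss, tuple(model.parameters()))
        client_params.append([s - lr_theta * g
                              for s, g in zip(global_state, grads)])
    # server step: average the clients' updated parameters (FedAvg-style)
    with torch.no_grad():
        for j, p in enumerate(model.parameters()):
            p.copy_(torch.stack([c[j] for c in client_params]).mean(0))
```

Each client approximately solves its inner maximization with a few projected ascent steps, then takes a descent step on the model at the worst-case shift it found, and the server averages the resulting parameters in the spirit of FedAvg. For image data, X would be flattened so that Λ acts as a d×d matrix, matching the device-dependent intensity, contrast, and brightness imperfections the abstract mentions.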


Related research

01/29/2023
FedConceptEM: Robust Federated Learning Under Diverse Distribution Shifts
Federated Learning (FL) is a machine learning paradigm that protects pri...

06/08/2023
Federated Learning under Covariate Shifts with Generalization Guarantees
This paper addresses intra-client and inter-client covariate shifts in f...

06/03/2022
On the Generalization of Wasserstein Robust Federated Learning
In federated learning, participating clients typically possess non-i.i.d...

02/12/2020
Towards Federated Learning: Robustness Analytics to Data Heterogeneity
Federated Learning allows remote centralized server training models with...

01/07/2020
FedDANE: A Federated Newton-Type Method
Federated learning aims to jointly learn statistical models over massive...

06/16/2019
Robust Federated Learning in a Heterogeneous Environment
We study a recently proposed large-scale distributed learning paradigm, ...

06/08/2021
Robust Generalization despite Distribution Shift via Minimum Discriminating Information
Training models that perform well under distribution shifts is a central...
