Robust Federated Learning: The Case of Affine Distribution Shifts
Federated learning is a distributed paradigm that aims at training models using samples distributed across multiple users in a network, while keeping the samples on the users' devices in the interest of efficiency and user privacy. In such settings, the training data is often statistically heterogeneous and manifests various distribution shifts across users, which degrades the performance of the learnt model. The primary goal of this paper is to develop a robust federated learning algorithm that achieves satisfactory performance in the presence of distribution shifts in users' samples. To achieve this goal, we first consider a structured affine distribution shift in users' data that captures the device-dependent data heterogeneity of federated settings. This perturbation model is applicable to various federated learning problems such as image classification, where the images undergo device-dependent imperfections, e.g. differences in intensity, contrast, and brightness. To address affine distribution shifts across users, we propose a Federated Learning framework Robust to Affine distribution shifts (FLRA) that is provably robust against affine Wasserstein shifts to the distribution of observed samples. To solve FLRA's distributed minimax problem, we propose a fast and efficient optimization method and provide convergence guarantees via a Gradient Descent Ascent (GDA) method. We further prove generalization error bounds for the learnt classifier, showing proper generalization from the empirical distribution of samples to the true underlying distribution. We perform several numerical experiments to empirically support FLRA. We show that an affine distribution shift indeed suffices to significantly degrade the performance of the learnt classifier for a new test user, and that our proposed algorithm achieves a significant gain compared to standard federated learning and adversarial training methods.
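To make the minimax formulation concrete, the sketch below implements one user's gradient descent ascent round in PyTorch: an inner ascent loop over an affine input perturbation (Lambda, delta), followed by one descent step on the model weights at the worst-case shift found. This is a minimal illustrative sketch, not the paper's algorithm: the function and hyperparameter names (flra_local_round, ascent_steps, eta_asc, eta_desc, gamma) are assumptions, and a quadratic proximity penalty stands in for the paper's Wasserstein-type constraint on the shift.

```python
# Minimal sketch of a local GDA round under the assumptions stated above.
import torch
import torch.nn as nn
import torch.nn.functional as F

def flra_local_round(model, x, y, ascent_steps=5,
                     eta_asc=0.1, eta_desc=0.01, gamma=1.0):
    """One user's round: inner gradient ascent on an affine shift
    (Lambda, delta), then one descent step on the model weights."""
    d = x.shape[1]
    # Affine perturbation of the inputs, initialised at the identity map:
    # x -> Lambda @ x + delta.
    Lam = torch.eye(d, requires_grad=True)
    delta = torch.zeros(d, requires_grad=True)

    # Inner maximisation: seek the worst-case affine shift. The quadratic
    # proximity penalty below is a simplification standing in for the
    # paper's Wasserstein-ball constraint on the shift magnitude.
    for _ in range(ascent_steps):
        loss = F.cross_entropy(model(x @ Lam.T + delta), y)
        penalty = gamma * ((Lam - torch.eye(d)).pow(2).sum()
                           + delta.pow(2).sum())
        g_Lam, g_delta = torch.autograd.grad(loss - penalty, (Lam, delta))
        with torch.no_grad():
            Lam += eta_asc * g_Lam
            delta += eta_asc * g_delta

    # Outer minimisation: one descent step on the weights, evaluated at
    # the worst-case shift found above.
    loss = F.cross_entropy(model(x @ Lam.detach().T + delta.detach()), y)
    model.zero_grad()
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            p -= eta_desc * p.grad
    return loss.item()

# Toy usage: a linear classifier on 10-dimensional features, two classes.
model = nn.Linear(10, 2)
x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))
flra_local_round(model, x, y)
```

In the full federated setting, each user would run such local rounds in parallel on its own data and shift variables, with a server periodically averaging the resulting model weights across users.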