Federated Learning with Label Distribution Skew via Logits Calibration

09/01/2022
by Jie Zhang, et al.

Traditional federated optimization methods perform poorly on heterogeneous data (i.e., they suffer accuracy degradation), especially when the data are highly skewed. In this paper, we investigate label distribution skew in FL, where the distribution of labels varies across clients. We first study label distribution skew from a statistical view and demonstrate, both theoretically and empirically, that previous methods based on softmax cross-entropy are not suitable: they can cause local models to heavily overfit to minority classes and missing classes. We then theoretically introduce a deviation bound to measure the deviation of the gradient after the local update. Finally, we propose FedLC (Federated learning via Logits Calibration), which calibrates the logits before the softmax cross-entropy according to the probability of occurrence of each class. FedLC applies a fine-grained calibrated cross-entropy loss to the local update by adding a pairwise label margin. Extensive experiments on federated datasets and real-world datasets demonstrate that FedLC leads to a more accurate global model and substantially improved performance. Furthermore, integrating other FL methods into our approach can further enhance the performance of the global model.
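To make the calibration idea concrete, below is a minimal PyTorch sketch of a logit-calibrated cross-entropy in the spirit of the abstract: each class's logit is shifted by a margin derived from how often that class occurs in the client's local data, so locally rare or missing classes are penalized less harshly. The function name `calibrated_cross_entropy`, the temperature `tau`, and the `count^{-1/4}` margin are illustrative assumptions, not the paper's exact pairwise-margin formulation.

```python
import torch
import torch.nn.functional as F

def calibrated_cross_entropy(logits, targets, class_counts, tau=1.0):
    """Logit-calibrated cross-entropy (illustrative sketch, not the exact FedLC loss).

    logits:       (batch, num_classes) raw model outputs
    targets:      (batch,) integer class labels
    class_counts: (num_classes,) label counts on this client's local data
    tau:          calibration strength (assumed hyperparameter)
    """
    # Clamp so classes missing from the local data do not yield an infinite margin.
    counts = class_counts.clamp(min=1).float()
    # Locally rare classes receive a larger margin; count^{-1/4} is an
    # illustrative choice in the spirit of margin-based calibration.
    margins = tau * counts.pow(-0.25)            # shape: (num_classes,)
    calibrated = logits - margins.unsqueeze(0)   # broadcast over the batch
    return F.cross_entropy(calibrated, targets)

# Example usage: a client whose local label distribution is heavily skewed.
class_counts = torch.tensor([500, 300, 5, 0, 1, 120, 0, 2, 60, 12])
logits = torch.randn(8, 10)
targets = torch.randint(0, 10, (8,))
loss = calibrated_cross_entropy(logits, targets, class_counts)
```

Because the calibration only modifies the local loss, a sketch like this drops into any FedAvg-style client update without changing the aggregation step, which is consistent with the abstract's note that FedLC can be combined with other FL methods.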


