Federated Self-Supervised Contrastive Learning via Ensemble Similarity Distillation

09/29/2021
by Haizhou Shi, et al.

This paper investigates the feasibility of learning a good representation space from unlabeled client data in the federated scenario. Existing works trivially inherit supervised federated learning methods, which neither accommodate model heterogeneity nor avoid the potential risk of privacy exposure. To tackle these problems, we first identify that self-supervised contrastive local training is more robust to non-i.i.d. data than the traditional supervised learning paradigm. We then propose FLESD, a novel federated self-supervised contrastive learning framework that supports architecture-agnostic local training and communication-efficient global aggregation. At each round of communication, the server first gathers a fraction of the clients' inferred similarity matrices on a public dataset; FLESD then ensembles these similarity matrices and trains the global model via similarity distillation. We verify the effectiveness of the proposed framework through a series of empirical experiments and show that FLESD has three main advantages over existing methods: it handles model heterogeneity, is less prone to privacy leakage, and is more communication-efficient. We will release the code of this paper in the future.
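The abstract only outlines the aggregation procedure, so below is a minimal PyTorch sketch of one such communication round. Everything specific here is an assumption rather than the paper's exact method: the names `client_encoders`, `public_x`, and `tau`, the cosine-similarity formulation, the simple-average ensemble, and the KL-divergence distillation loss are all illustrative choices.

```python
import torch
import torch.nn.functional as F

def similarity_rows(encoder, public_x, tau=0.1):
    """Scaled cosine-similarity matrix of the encoder's features on a
    public batch, with self-similarity on the diagonal masked out
    (hypothetical helper; FLESD's exact formulation may differ)."""
    z = F.normalize(encoder(public_x), dim=1)   # (N, d) unit-norm features
    sim = z @ z.t() / tau                       # (N, N) scaled similarities
    mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    return sim.masked_fill(mask, -1e9)          # drop self-similarity

def flesd_round(global_model, client_encoders, public_x, optimizer,
                tau=0.1, steps=100):
    """One hypothetical communication round: ensemble the participating
    clients' similarity matrices on the public data, then train the
    global model to match the ensembled rows (similarity distillation)."""
    # Server-side ensemble: a simple average of the clients' row-wise
    # similarity distributions (an assumed ensembling rule).
    with torch.no_grad():
        target = torch.stack(
            [F.softmax(similarity_rows(enc, public_x, tau), dim=1)
             for enc in client_encoders]
        ).mean(dim=0)

    for _ in range(steps):
        log_pred = F.log_softmax(
            similarity_rows(global_model, public_x, tau), dim=1)
        # Distillation: KL divergence between the global model's
        # similarity distributions and the ensembled client target.
        loss = F.kl_div(log_pred, target, reduction="batchmean")
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return global_model
```

Note that in this scheme only N x N similarity matrices on the public dataset cross the network, not model weights or raw data, which is consistent with the abstract's claims of architecture-agnostic training, reduced privacy exposure, and communication efficiency.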


Related research

05/25/2022  Federated Self-supervised Learning for Heterogeneous Clients
Federated Learning has become an important learning paradigm due to its ...

04/09/2022  Divergence-aware Federated Self-Supervised Learning
Self-supervised learning (SSL) is capable of learning remarkable represe...

10/27/2022  Federated Graph Representation Learning using Self-Supervision
Federated graph representation learning (FedGRL) brings the benefits of ...

11/14/2022  Feature Correlation-guided Knowledge Transfer for Federated Self-supervised Learning
To eliminate the requirement of fully-labeled data for supervised model ...

01/05/2023  MS-DINO: Efficient Distributed Training of Vision Transformer Foundation Model in Medical Domain through Masked Sampling
In spite of the recent success of deep learning in the medical domain, t...

07/27/2023  Federated Model Aggregation via Self-Supervised Priors for Highly Imbalanced Medical Image Classification
In the medical field, federated learning commonly deals with highly imba...

07/04/2023  SelfFed: Self-supervised Federated Learning for Data Heterogeneity and Label Scarcity in IoMT
Self-supervised learning in federated learning paradigm has been gaining...
