Homogeneous Learning: Self-Attention Decentralized Deep Learning

10/11/2021
by Yuwei Sun, et al.

Federated learning (FL) has been facilitating privacy-preserving deep learning in many walks of life, such as medical image classification and network intrusion detection. However, it necessitates a central parameter server for model aggregation, which brings about delayed model communication and vulnerability to adversarial attacks. A fully decentralized architecture like Swarm Learning allows peer-to-peer communication among distributed nodes, without the central server. One of the most challenging issues in decentralized deep learning is that the data owned by each node are usually non-independent and identically distributed (non-IID), causing slow convergence of model training. To this end, we propose a decentralized learning model called Homogeneous Learning (HL) for tackling non-IID data with a self-attention mechanism. In HL, training is performed on the node selected in each round, and at the end of the round the trained model is sent to the next selected node. Notably, for this selection, the self-attention mechanism leverages reinforcement learning to observe a node's inner state and the state of its surrounding environment, and to determine which node should be selected to optimize the training. We evaluate our method in various scenarios on an image classification task. The results suggest that HL achieves better performance than standalone learning and greatly reduces the total training rounds, by 50.8%, compared with random policy-based decentralized learning for training on non-IID data.
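The round-based flow described above, where the model travels from node to node and a learned policy picks the next node, can be sketched as follows. This is a minimal toy sketch, not the paper's implementation: the names `HLNode`, `train_on`, and `homogeneous_learning` are hypothetical, and the paper's self-attention selector trained with reinforcement learning is replaced here by a simple Q-value table with epsilon-greedy selection, using the per-round training improvement as the reward.

```python
import random


class HLNode:
    """One peer in the decentralized network (hypothetical sketch)."""

    def __init__(self, node_id, data_quality):
        self.node_id = node_id
        # data_quality stands in for how well this node's local (non-IID)
        # data complements the current model; a real system would derive
        # this from the model's inner state and local data statistics.
        self.data_quality = data_quality


def train_on(node, model_score):
    # Toy "local training": better-matching data improves the model more.
    return model_score + node.data_quality


def homogeneous_learning(nodes, rounds=20, eps=0.2, lr=0.5, seed=0):
    """Round-based training with a learned node-selection policy.

    A stand-in for the paper's RL-based selector: Q-values over nodes
    are updated from the observed training improvement, and each round
    the model is sent to the epsilon-greedily selected next node.
    """
    rng = random.Random(seed)
    q = {n.node_id: 0.0 for n in nodes}   # selection-policy values
    current = rng.choice(nodes)
    score = 0.0
    for _ in range(rounds):
        new_score = train_on(current, score)
        reward = new_score - score        # improvement as RL reward
        q[current.node_id] += lr * (reward - q[current.node_id])
        score = new_score
        # Pick the next node: explore with prob eps, else exploit Q.
        if rng.random() < eps:
            current = rng.choice(nodes)
        else:
            current = max(nodes, key=lambda n: q[n.node_id])
    return score, q
```

The key design point mirrored here is that selection is driven by observed training progress rather than a fixed or random schedule, which is what lets the policy avoid nodes whose non-IID data contributes little in the current round.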

Related research:
- MoDeST: Bridging the Gap between Federated and Decentralized Learning with Decentralized Sampling (02/27/2023)
- DeFL: Decentralized Weight Aggregation for Cross-silo Federated Learning (08/01/2022)
- PPT: A Privacy-Preserving Global Model Training Protocol for Federated Learning in P2P Networks (05/30/2021)
- Privacy-preserving Decentralized Federated Learning over Time-varying Communication Graph (10/01/2022)
- Tram-FL: Routing-based Model Training for Decentralized Federated Learning (08/09/2023)
- FedMCSA: Personalized Federated Learning via Model Components Self-Attention (08/23/2022)
- Peer-to-peer Federated Learning on Graphs (01/31/2019)
