On In-Network Learning: A Comparative Study with Federated and Split Learning

04/30/2021, by Matei Moldoveanu, et al.

In this paper, we consider a problem in which distributively extracted features are used for performing inference in wireless networks. We elaborate on our proposed architecture, which we refer to as "in-network learning", provide a suitable loss function, and discuss its optimization using neural networks. We compare its performance with both federated and split learning, and show that our architecture offers both better accuracy and bandwidth savings.
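The in-network inference pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the network shapes, the `EdgeNode`/`FusionNode` names, and the random placeholder weights are all assumptions made for the example. Each edge node compresses its local observation into a small feature vector, and a fusion node aggregates the received features into a class posterior.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

class EdgeNode:
    """Hypothetical edge device: extracts a low-dimensional feature
    from its local observation (weights are untrained placeholders)."""
    def __init__(self, in_dim, feat_dim):
        self.W = rng.normal(scale=0.1, size=(feat_dim, in_dim))
        self.b = np.zeros(feat_dim)

    def extract(self, x):
        # Only this short feature vector is transmitted, not the raw signal.
        return relu(self.W @ x + self.b)

class FusionNode:
    """Hypothetical fusion point: concatenates the received features
    and outputs a class posterior."""
    def __init__(self, feat_dim, n_nodes, n_classes):
        self.W = rng.normal(scale=0.1, size=(n_classes, feat_dim * n_nodes))

    def infer(self, features):
        return softmax(self.W @ np.concatenate(features))

# Three edge nodes, each observing a 16-dim signal, send 4-dim features.
nodes = [EdgeNode(16, 4) for _ in range(3)]
fusion = FusionNode(feat_dim=4, n_nodes=3, n_classes=5)

observations = [rng.normal(size=16) for _ in range(3)]
posterior = fusion.infer([n.extract(x) for n, x in zip(nodes, observations)])
print(posterior.shape)
```

Note how the bandwidth saving arises in this sketch: each node transmits a 4-dimensional feature instead of its 16-dimensional raw observation, so the wireless links carry a fraction of the data while inference still uses all nodes' information.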


Related research

01/04/2023, Privacy and Efficiency of Communications in Federated Split Learning
Every day, large amounts of sensitive data are distributed across mobile p...

05/25/2019, Fair Resource Allocation in Federated Learning
Federated learning involves training statistical models in massive, hete...

07/07/2021, In-Network Learning: Distributed Training and Inference in Networks
It is widely perceived that leveraging the success of modern machine lea...

07/25/2023, SplitFed resilience to packet loss: Where to split, that is the question
Decentralized machine learning has broadened its scope recently with the...

11/02/2021, Federated Split Vision Transformer for COVID-19 CXR Diagnosis using Task-Agnostic Training
Federated learning, which shares the weights of the neural network acros...

02/04/2023, GAN-based federated learning for label protection in binary classification
As an emerging technique, vertical federated learning collaborates with ...
