Federated Split Vision Transformer for COVID-19 CXR Diagnosis using Task-Agnostic Training

11/02/2021
by Sangjoon Park, et al.

Federated learning, which shares the weights of the neural network across clients, is gaining attention in the healthcare sector because it enables training on a large corpus of decentralized data while maintaining data privacy. For example, it allows a neural network for COVID-19 diagnosis to be trained on chest X-ray (CXR) images without collecting patient CXR data across multiple hospitals. Unfortunately, the exchange of weights quickly consumes the network bandwidth if a highly expressive network architecture is employed. So-called split learning partially solves this problem by dividing a neural network into a client part and a server part, so that the client part requires far less computation and bandwidth. However, it is not clear how to find the optimal split without sacrificing overall network performance. To amalgamate these methods and thereby maximize their distinct strengths, here we show that the Vision Transformer, a recently developed deep learning architecture with a straightforwardly decomposable configuration, is ideally suited for split learning without sacrificing performance. Even under a non-independent and identically distributed (non-IID) data distribution, which emulates a real collaboration between hospitals using CXR datasets from multiple sources, the proposed framework attained performance comparable to data-centralized training. In addition, when combined with heterogeneous multi-task clients, the proposed framework also improves the performance of the individual tasks, including the diagnosis of COVID-19, without the need to share large weights with enormous numbers of parameters. Our results affirm the suitability of the Transformer for collaborative learning in medical imaging and pave the way forward for future real-world implementations.
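To make the decomposable configuration concrete, the following is a minimal PyTorch sketch of how a Vision Transformer might be split into a small client-side part (patch embedding plus a few early blocks and a task-specific head) and a large shared server-side body, so that only intermediate features and the lightweight client-side weights need to cross the network. The layer counts, dimensions, split point, and class names (ClientHead, ServerBody, ClientTaskHead) are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: a client/server decomposition of a Vision Transformer
# for federated split learning. All hyperparameters are illustrative.
import torch
import torch.nn as nn


class ClientHead(nn.Module):
    """Client-side part: patch embedding + a few early Transformer blocks.
    Only this small module (and the task head) stays at the hospital."""
    def __init__(self, img_size=224, patch=16, dim=384, depth=2, heads=6):
        super().__init__()
        n_patches = (img_size // patch) ** 2
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, n_patches + 1, dim))
        self.blocks = nn.Sequential(*[
            nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
            for _ in range(depth)
        ])

    def forward(self, x):
        x = self.patch_embed(x).flatten(2).transpose(1, 2)  # (B, N, dim)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1) + self.pos_embed
        return self.blocks(x)                               # features sent to the server


class ServerBody(nn.Module):
    """Server-side part: the bulk of the Transformer, shared across all
    clients and tasks; only intermediate features travel over the network."""
    def __init__(self, dim=384, depth=10, heads=6):
        super().__init__()
        self.blocks = nn.Sequential(*[
            nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
            for _ in range(depth)
        ])
        self.norm = nn.LayerNorm(dim)

    def forward(self, feats):
        return self.norm(self.blocks(feats))


class ClientTaskHead(nn.Module):
    """Client-side task-specific head, e.g. COVID-19 vs. normal CXR."""
    def __init__(self, dim=384, n_classes=2):
        super().__init__()
        self.fc = nn.Linear(dim, n_classes)

    def forward(self, feats):
        return self.fc(feats[:, 0])  # classify from the [CLS] token


if __name__ == "__main__":
    x = torch.randn(2, 3, 224, 224)            # a toy CXR batch
    client, server, head = ClientHead(), ServerBody(), ClientTaskHead()
    logits = head(server(client(x)))           # client -> server -> client
    print(logits.shape)                        # torch.Size([2, 2])
```

In this arrangement, only the compact ClientHead and ClientTaskHead parameters would need to be exchanged or kept locally, while the heavyweight ServerBody is trained once on shared intermediate features, which is the bandwidth saving the abstract describes.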


