Visual Transformer Meets CutMix for Improved Accuracy, Communication Efficiency, and Data Privacy in Split Learning

07/01/2022
by   Sihun Baek, et al.

This article seeks a distributed learning solution for visual transformer (ViT) architectures. Compared to convolutional neural network (CNN) architectures, ViTs often have larger model sizes and are computationally expensive, making federated learning (FL) ill-suited. Split learning (SL) can detour this problem by splitting a model and communicating the hidden representations at the split layer, also known as smashed data. Notwithstanding, the smashed data of ViT are as large as and as similar to the input data, negating the communication efficiency of SL while violating data privacy. To resolve these issues, we propose a new form of CutSmashed data, created by randomly punching and compressing the original smashed data. Leveraging this, we develop a novel SL framework for ViT, coined CutMixSL, which communicates CutSmashed data. CutMixSL not only reduces communication costs and privacy leakage, but also inherently involves the CutMix data augmentation, improving accuracy and scalability. Simulations corroborate that CutMixSL outperforms baselines such as parallelized SL and SplitFed, which integrates FL with SL.
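To give a sense of the idea, here is a minimal sketch of CutMix-style mixing applied to smashed data. The function name `cutsmash`, the `keep_ratio` parameter, and the patch-wise masking scheme are illustrative assumptions, not the paper's actual implementation: two clients' split-layer ViT token representations are combined via a random binary mask over patches, so each client would only need to upload the patches it contributes, while the mixing ratio plays the role of the CutMix label weight.

```python
import numpy as np

def cutsmash(smashed_a, smashed_b, keep_ratio=0.5, seed=None):
    """Illustrative CutMix-style mixing of two clients' smashed data.

    smashed_a, smashed_b: (num_patches, dim) hidden representations at
    the split layer of a ViT. A random binary mask decides, per patch
    token, which client contributes it; each client uploads only its
    kept patches, cutting uplink traffic roughly by the keep ratio.
    NOTE: a hypothetical sketch, not the authors' actual algorithm.
    """
    rng = np.random.default_rng(seed)
    num_patches = smashed_a.shape[0]
    mask = rng.random(num_patches) < keep_ratio  # True -> take from client a
    mixed = np.where(mask[:, None], smashed_a, smashed_b)
    lam = mask.mean()  # CutMix-style label weight: fraction from client a
    return mixed, lam

# Example: mix token maps from two clients (16 patches, 8-dim tokens)
a, b = np.ones((16, 8)), np.zeros((16, 8))
mixed, lam = cutsmash(a, b, keep_ratio=0.5, seed=0)
```

In this sketch the server-side model would be trained on `mixed` with labels interpolated by `lam`, mirroring how CutMix blends labels in proportion to the mixed regions.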

