Differentially Private CutMix for Split Learning with Vision Transformer

10/28/2022
by Seungeun Oh et al.

Recently, the vision transformer (ViT) has begun to outpace conventional CNNs in computer vision tasks. For privacy-preserving distributed learning with ViT, federated learning (FL) communicates entire models, which is ill-suited to ViT's large model size and computing costs. Split learning (SL) sidesteps this by communicating smashed data at a cut layer, yet it suffers from data privacy leakage and large communication costs caused by the high similarity between ViT's smashed data and its input data. Motivated by this problem, we propose DP-CutMixSL, a differentially private (DP) SL framework built on DP patch-level randomized CutMix (DP-CutMix), a novel privacy-preserving inter-client interpolation scheme that replaces randomly selected patches in the smashed data. Experimentally, we show that DP-CutMixSL not only strengthens privacy guarantees and communication efficiency but also achieves higher accuracy than its Vanilla SL counterpart. Theoretically, we show that DP-CutMix amplifies Rényi DP (RDP), whose guarantee is upper-bounded by that of its Vanilla Mixup counterpart.
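To make the patch-level mixing concrete, the following is a minimal sketch of the DP-CutMix idea, assuming a Bernoulli patch mask, a fixed Gaussian noise scale, and exactly two participating clients; the paper's actual mask distribution, noise calibration, and multi-client protocol may differ, and the function name dp_cutmix is hypothetical.

```python
# Hypothetical sketch of DP patch-level randomized CutMix (DP-CutMix).
# Assumptions (not from the paper): a Bernoulli patch mask, a fixed
# Gaussian noise scale, and mixing exactly two clients' smashed data.
import torch


def dp_cutmix(smashed_a: torch.Tensor,
              smashed_b: torch.Tensor,
              cut_ratio: float = 0.5,
              noise_std: float = 0.1) -> torch.Tensor:
    """Mix two clients' ViT cut-layer activations at patch granularity.

    smashed_a, smashed_b: [num_patches, embed_dim] smashed data tensors.
    cut_ratio: probability that a given patch is taken from client B.
    noise_std: std of the Gaussian noise each client adds before upload.
    """
    assert smashed_a.shape == smashed_b.shape
    num_patches = smashed_a.size(0)

    # Each client perturbs its own smashed data (Gaussian mechanism),
    # which is the source of the differential privacy guarantee.
    noisy_a = smashed_a + noise_std * torch.randn_like(smashed_a)
    noisy_b = smashed_b + noise_std * torch.randn_like(smashed_b)

    # Randomly select which patches to replace: a patch-level CutMix
    # mask, broadcast across the embedding dimension.
    mask = torch.bernoulli(torch.full((num_patches, 1), cut_ratio))

    # Replace the selected patches of client A with client B's patches.
    return (1.0 - mask) * noisy_a + mask * noisy_b


if __name__ == "__main__":
    # Two clients' outputs for a 197-patch ViT with 768-dim embeddings.
    a, b = torch.randn(197, 768), torch.randn(197, 768)
    print(dp_cutmix(a, b).shape)  # torch.Size([197, 768])
```

In the full scheme, the mixed smashed data would then be forwarded through the server-side ViT layers, with labels presumably mixed in proportion to the patch mask as in standard CutMix; the added Gaussian noise is what underlies the Rényi DP analysis.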

Related Research

11/09/2021 · DP-REC: Private Communication-Efficient Federated Learning
Privacy and communication efficiency are important challenges in federat...

07/01/2022 · Visual Transformer Meets CutMix for Improved Accuracy, Communication Efficiency, and Data Privacy in Split Learning
This article seeks a distributed learning solution for the visual tr...

08/19/2023 · DPMAC: Differentially Private Communication for Cooperative Multi-Agent Reinforcement Learning
Communication lays the foundation for cooperation in human society and i...

06/14/2023 · Differentially Private Wireless Federated Learning Using Orthogonal Sequences
We propose a novel privacy-preserving uplink over-the-air computation (A...

01/08/2021 · Differentially Private Federated Learning for Cancer Prediction
Since 2014, the NIH funded iDASH (integrating Data for Analysis, Anonymi...

12/06/2022 · Straggler-Resilient Differentially-Private Decentralized Learning
We consider the straggler problem in decentralized learning over a logic...

12/15/2021 · One size does not fit all: Investigating strategies for differentially-private learning across NLP tasks
Preserving privacy in training modern NLP models comes at a cost. We kno...
