ResSFL: A Resistance Transfer Framework for Defending Model Inversion Attack in Split Federated Learning

by Jingtao Li, et al.

This work aims to tackle the Model Inversion (MI) attack on Split Federated Learning (SFL). SFL is a recent distributed training scheme in which multiple clients send intermediate activations (i.e., feature maps), instead of raw data, to a central server. While such a scheme helps reduce the computational load at the client end, it exposes the raw data to reconstruction from the intermediate activations by the server. Existing works on protecting SFL only consider attacks at inference time and do not handle attacks during training. We therefore propose ResSFL, a Split Federated Learning framework designed to be MI-resistant during training. It is based on deriving a resistant feature extractor via attacker-aware training, and using this extractor to initialize the client-side model prior to standard SFL training. This approach reduces both the computational complexity of using a strong inversion model in client-side adversarial training and the vulnerability to attacks launched in early training epochs. On the CIFAR-100 dataset, the proposed framework successfully mitigates the MI attack on a VGG-11 model, yielding a high reconstruction mean-square-error of 0.050 compared to 0.005 for the baseline system, while achieving 67.5% accuracy with very low computation overhead. Code is released at:
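To make the threat model concrete, the following is a minimal sketch (not the authors' implementation; all names and the toy linear models are illustrative) of the SFL data flow and of the reconstruction mean-square-error metric used above: the client sends an intermediate activation instead of raw data, a server-side attacker fits an inversion model mapping activations back to inputs, and the MSE between the reconstruction and the raw input measures how resistant the client-side feature extractor is.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative client-side feature extractor: one linear layer + ReLU
# standing in for the first few layers of a network such as VGG-11.
W_client = rng.standard_normal((16, 8)) * 0.1

def client_forward(x):
    """Client computes the intermediate activation (feature map)
    and sends it to the server instead of the raw input x."""
    return np.maximum(x @ W_client, 0.0)

def fit_inversion(acts, xs):
    """Hypothetical server-side attacker: fit a linear decoder from
    activations back to raw inputs (a stand-in for the trained
    inversion model an MI attacker would use)."""
    W_inv, *_ = np.linalg.lstsq(acts, xs, rcond=None)
    return W_inv

def reconstruction_mse(xs, acts, W_inv):
    """Reconstruction quality metric: higher MSE means the
    activations leak less about the raw data."""
    recon = acts @ W_inv
    return float(np.mean((recon - xs) ** 2))

X = rng.standard_normal((256, 16))      # stand-in for raw client data
A = client_forward(X)                   # what the server actually sees
W_inv = fit_inversion(A, X)
mse = reconstruction_mse(X, A, W_inv)
print(f"reconstruction MSE: {mse:.4f}")
```

Because the toy extractor bottlenecks 16-dimensional inputs into 8 dimensions, even the best linear decoder leaves a nonzero reconstruction error; attacker-aware training as described in the abstract pushes this error up further by explicitly penalizing invertibility while training the extractor.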




