Get your Foes Fooled: Proximal Gradient Split Learning for Defense against Model Inversion Attacks on IoMT data

01/12/2022
by Sunder Ali Khowaja, et al.

The past decade has seen rapid adoption of Artificial Intelligence (AI), specifically deep learning networks, in the Internet of Medical Things (IoMT) ecosystem. However, it has recently been shown that deep learning networks can be exploited by adversarial attacks, making IoMT vulnerable not only to data theft but also to the manipulation of medical diagnoses. Existing studies consider adding noise to the raw IoMT data or to model parameters, which not only reduces overall performance on medical inference tasks but is also ineffective against methods such as deep leakage from gradients. In this work, we propose the proximal gradient split learning (PGSL) method as a defense against model inversion attacks. The proposed method intentionally attacks the IoMT data while it undergoes the deep neural network training process at the client side. We propose using the proximal gradient method to recover gradient maps, and a decision-level fusion strategy to improve recognition performance. Extensive analysis shows that PGSL not only provides an effective defense mechanism against model inversion attacks but also helps improve recognition performance on publicly available datasets. We report gains of 17.9% and 36.9% in accuracy on reconstructed and adversarially attacked images, respectively.
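The abstract names two generic building blocks: a proximal gradient step for recovering gradient maps, and decision-level fusion of classifier outputs. Since the full paper is not reproduced here, the sketch below is only illustrative: it assumes an ISTA-style proximal gradient iteration with an L1 (soft-thresholding) prior as the recovery step and weighted probability averaging as the fusion rule. The names `soft_threshold`, `recover_gradient_map`, and `fuse_decisions`, and the recovery objective itself, are hypothetical and not taken from the paper.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the L1 norm (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def recover_gradient_map(observed, lam=0.1, step=0.5, iters=200):
    """Illustrative ISTA-style recovery of a gradient map z from a
    perturbed observation, solving (assumed objective, not the paper's):
        min_z 0.5 * ||z - observed||^2 + lam * ||z||_1
    """
    z = np.zeros_like(observed)
    for _ in range(iters):
        grad = z - observed                              # gradient of the smooth term
        z = soft_threshold(z - step * grad, step * lam)  # proximal step
    return z

def fuse_decisions(probs_a, probs_b, w=0.5):
    """Decision-level fusion: weighted average of two softmax outputs."""
    fused = w * probs_a + (1.0 - w) * probs_b
    return fused / fused.sum(axis=-1, keepdims=True)

# Example: recover a sparse gradient map from a perturbed observation,
# then fuse class probabilities from two classifier branches.
rng = np.random.default_rng(0)
true_map = np.zeros((8, 8))
true_map[2, 3], true_map[5, 6] = 1.0, -0.8
observed = true_map + 0.05 * rng.standard_normal(true_map.shape)
recovered = recover_gradient_map(observed, lam=0.05)

p_a = np.array([0.7, 0.2, 0.1])   # e.g., branch trained on perturbed data
p_b = np.array([0.5, 0.4, 0.1])   # e.g., branch using recovered gradient maps
print(fuse_decisions(p_a, p_b))   # fused class probabilities
```

In the actual PGSL pipeline the recovery would presumably operate on the intermediate activations and gradients exchanged between the split-learning client and server, but a proximal update and a fusion rule of this general form are what the abstract describes.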


