Combining Variational Modeling with Partial Gradient Perturbation to Prevent Deep Gradient Leakage

08/09/2022
by   Daniel Scheliga, et al.

Exploiting gradient leakage to reconstruct supposedly private training data, gradient inversion attacks are a ubiquitous threat in collaborative learning of neural networks. To prevent gradient leakage without suffering a severe loss in model performance, recent work proposed a PRivacy EnhanCing mODulE (PRECODE) based on variational modeling as an extension for arbitrary model architectures. In this work, we investigate the effect of PRECODE on gradient inversion attacks to reveal its underlying working principle. We show that variational modeling induces stochasticity in the gradients of PRECODE and its subsequent layers that prevents gradient attacks from converging. By purposefully omitting those stochastic gradients during attack optimization, we formulate an attack that can disable PRECODE's privacy-preserving effects. To ensure privacy preservation against such targeted attacks, we propose PRECODE with Partial Perturbation (PPP), a strategic combination of variational modeling and partial gradient perturbation. We conduct an extensive empirical study on four seminal model architectures and two image classification datasets. We find all architectures to be prone to gradient leakage, which can be prevented by PPP. As a result, we show that our approach requires less gradient perturbation to effectively preserve privacy without harming model performance.
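The partial gradient perturbation idea from the abstract can be sketched in a few lines: instead of noising every layer's gradients (as full differential-privacy-style perturbation would), noise is added only to a selected subset of layers while the rest are shared unchanged. The function below is a minimal illustrative sketch, not the paper's implementation; the layer names, noise scale, and Gaussian noise model are assumptions for demonstration.

```python
import random

def partially_perturb(gradients, protected_layers, sigma=0.1, seed=None):
    """Add Gaussian noise only to the gradients of selected layers.

    gradients: dict mapping layer name -> list of gradient values.
    protected_layers: set of layer names whose gradients receive noise.
    sigma: noise standard deviation (illustrative value, an assumption).
    """
    rng = random.Random(seed)
    perturbed = {}
    for name, grad in gradients.items():
        if name in protected_layers:
            # Perturb only the protected layers' gradients.
            perturbed[name] = [g + rng.gauss(0.0, sigma) for g in grad]
        else:
            # All other gradients are shared unchanged.
            perturbed[name] = list(grad)
    return perturbed

# Toy example: only the first layer's gradients are noised (hypothetical names).
grads = {"conv1": [0.5, -0.2], "fc": [1.0, 0.3]}
out = partially_perturb(grads, protected_layers={"conv1"}, sigma=0.05, seed=0)
```

Because only part of the gradient vector is perturbed, the total noise injected is smaller than with full-gradient perturbation, which is the intuition behind PPP needing less perturbation to preserve model performance.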


Related research

- PRECODE - A Generic Model Extension to Prevent Deep Gradient Leakage (08/10/2021)
  Collaborative training of neural networks leverages distributed data by ...
- Dropout is NOT All You Need to Prevent Gradient Leakage (08/12/2022)
  Gradient inversion attacks on federated learning systems reconstruct cli...
- Gradient Leakage Defense with Key-Lock Module for Federated Learning (05/06/2023)
  Federated Learning (FL) is a widely adopted privacy-preserving machine l...
- Temporal Gradient Inversion Attacks with Robust Optimization (06/13/2023)
  Federated Learning (FL) has emerged as a promising approach for collabor...
- Breaking the Chain of Gradient Leakage in Vision Transformers (05/25/2022)
  User privacy is of great concern in Federated Learning, while Vision Tra...
- Quantifying Information Leakage from Gradients (05/28/2021)
  Sharing deep neural networks' gradients instead of training data could f...
- P3SGD: Patient Privacy Preserving SGD for Regularizing Deep CNNs in Pathological Image Classification (05/30/2019)
  Recently, deep convolutional neural networks (CNNs) have achieved great ...
