Enhancing Privacy against Inversion Attacks in Federated Learning by using Mixing Gradients Strategies

04/26/2022
by Shaltiel Eloul, et al.

Federated learning reduces the risk of information leakage, but remains vulnerable to attacks. We investigate how several neural network design decisions can defend against gradient inversion attacks. We show that overlapping gradients provide numerical resistance to gradient inversion on the highly vulnerable dense layer. Specifically, we propose to leverage batching to maximise mixing of gradients by choosing an appropriate loss function and drawing identical labels. We show that, otherwise, all input vectors in a mini-batch can be recovered directly, without any numerical optimisation, owing to the de-mixing nature of the cross-entropy loss. To accurately assess data recovery, we introduce an absolute variation distance (AVD) metric for information leakage in images, derived from total variation. In contrast to standard metrics, e.g. Mean Squared Error or Structural Similarity Index, AVD offers a continuous measure of the information extractable from noisy images. Finally, our empirical results on information recovery from various inversion attacks and on training performance support our defense strategies. These strategies are also shown to be useful for deep convolutional neural networks such as LeNet for image recognition. We hope that this study will help guide the development of further strategies that achieve a trustworthy federation policy.
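The optimisation-free recovery mentioned above reflects a known property of dense layers under softmax cross-entropy: for a single sample, every row of the weight gradient is a scalar multiple of the layer input, so the input can be read off directly from the shared gradients. A minimal NumPy sketch of this per-sample behaviour (illustrative only, not the authors' code; variable names are assumptions):

```python
# Why dense-layer gradients "de-mix" under cross-entropy: for one sample,
# dL/dW = (softmax(z) - one_hot(y)) outer x, so any row of dL/dW divided by
# the matching entry of dL/db returns the private input x exactly.
import numpy as np

rng = np.random.default_rng(0)
num_classes, num_features = 10, 784

x = rng.standard_normal(num_features)          # private input vector
y = 3                                          # its label
W = rng.standard_normal((num_classes, num_features)) * 0.01
b = np.zeros(num_classes)

# Forward pass: single dense layer with softmax cross-entropy.
logits = W @ x + b
p = np.exp(logits - logits.max())
p /= p.sum()

# Analytic gradients that a federated client would share.
delta = p.copy()
delta[y] -= 1.0                                # dL/dlogits = softmax - one_hot
grad_W = np.outer(delta, x)                    # dL/dW
grad_b = delta                                 # dL/db

# Attack: pick any row with a non-zero bias gradient and divide.
row = np.argmax(np.abs(grad_b))
x_recovered = grad_W[row] / grad_b[row]

print(np.allclose(x_recovered, x))             # True: exact recovery, no optimisation
```

For a mini-batch, the shared gradient is the sum of such per-sample outer products; with distinct labels the per-row contributions remain largely separable, which is what the proposed strategy of drawing identical labels (together with an appropriate loss) is designed to prevent.

The abstract does not give the AVD formula. Purely as an assumption for illustration, the sketch below treats AVD as the mean absolute difference between discrete total-variation maps of the original and recovered images; the function names and normalisation are hypothetical and the paper's exact definition may differ:

```python
# Hypothetical total-variation-based distance in the spirit of the proposed
# AVD metric (assumption, not the authors' definition).
import numpy as np

def abs_variation_map(img: np.ndarray) -> np.ndarray:
    """Sum of absolute horizontal and vertical differences (discrete total variation)."""
    dx = np.abs(np.diff(img, axis=1, prepend=img[:, :1]))
    dy = np.abs(np.diff(img, axis=0, prepend=img[:1, :]))
    return dx + dy

def absolute_variation_distance(original: np.ndarray, recovered: np.ndarray) -> float:
    """Mean absolute difference between the variation maps of the two images."""
    return float(np.mean(np.abs(abs_variation_map(original) - abs_variation_map(recovered))))

# The distance grows smoothly with the noise level of the reconstruction.
rng = np.random.default_rng(1)
img = rng.random((28, 28))
for sigma in (0.0, 0.1, 0.5):
    noisy = img + sigma * rng.standard_normal(img.shape)
    print(sigma, absolute_variation_distance(img, noisy))
```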

Related research

11/30/2021 · Evaluating Gradient Inversion Attacks and Defenses in Federated Learning
Gradient inversion attack (or input recovery from gradient) is an emergi...

03/22/2022 · GradViT: Gradient Inversion of Vision Transformers
In this work we demonstrate the vulnerability of vision transformers (Vi...

05/19/2021 · User Label Leakage from Gradients in Federated Learning
Federated learning enables multiple users to build a joint model by shar...

12/05/2022 · Refiner: Data Refining against Gradient Leakage Attacks in Federated Learning
Federated Learning (FL) is pervasive in privacy-focused IoT environments...

01/12/2022 · Get your Foes Fooled: Proximal Gradient Split Learning for Defense against Model Inversion Attacks on IoMT data
The past decade has seen a rapid adoption of Artificial Intelligence (AI...

02/24/2021 · A Quantitative Metric for Privacy Leakage in Federated Learning
In the federated learning system, parameter gradients are shared among p...

12/02/2018 · Dual Objective Approach Using A Convolutional Neural Network for Magnetic Resonance Elastography
Traditionally, nonlinear inversion, direct inversion, or wave estimation...
