Dropout against Deep Leakage from Gradients

08/25/2021
by Yanchong Zheng, et al.

As the scale of data grows rapidly, federated learning (Bonawitz et al. [2019]) has become more important than ever for high-performance computing and machine learning (Abadi et al. [2016]). It was long believed that sharing gradients is safe and conceals the local training data during the training stage. However, Zhu et al. [2019] demonstrated that raw training data can be recovered from the shared gradients: they generate random dummy data and labels, then minimise the distance between the gradients these produce and the real gradients. Zhao et al. [2020] push this convergence further; by replacing the original loss function with cross-entropy loss, they achieve a better fidelity threshold. In this paper, we propose adding a dropout (Srivastava et al. [2014]) layer before feeding the data to the classifier. This is very effective in preventing leakage of the raw data: with the dropout rate set to 0.5, the reconstructed data cannot converge to a small RMSE even after 5,800 epochs.
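The interplay between the gradient-matching attack and the dropout defence can be illustrated with a short sketch. The PyTorch snippet below is a minimal, hypothetical reconstruction loop in the spirit of DLG, not the authors' code: the `ToyNet` architecture, its layer sizes, the LBFGS settings, and the iteration count are assumptions chosen for illustration. The key observation is that an active dropout layer resamples its mask on every forward pass, so the attacker's dummy gradients are compared against gradients computed under a different mask, and the reconstruction fails to converge to a small RMSE.

```python
# Hypothetical sketch of a DLG-style gradient-matching attack against a toy
# classifier, with the proposed dropout layer inserted before the classifier.
import torch
import torch.nn as nn

torch.manual_seed(0)

class ToyNet(nn.Module):
    """Toy model (assumed architecture): feature extractor -> dropout -> classifier."""
    def __init__(self, use_dropout=True, p=0.5):
        super().__init__()
        self.features = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256), nn.Sigmoid())
        # Proposed defence: dropout (rate 0.5) before the classifier.
        self.dropout = nn.Dropout(p) if use_dropout else nn.Identity()
        self.classifier = nn.Linear(256, 10)

    def forward(self, x):
        return self.classifier(self.dropout(self.features(x)))

model = ToyNet(use_dropout=True)
model.train()  # dropout masks are resampled on every forward pass
criterion = nn.CrossEntropyLoss()

# "Victim" gradients computed on one real example (random stand-in data here).
real_x = torch.rand(1, 1, 28, 28)
real_y = torch.tensor([3])
real_grads = [g.detach() for g in
              torch.autograd.grad(criterion(model(real_x), real_y), model.parameters())]

# Attacker: random dummy data and soft dummy label, optimised so that the
# gradients they induce match the shared real gradients.
dummy_x = torch.rand_like(real_x, requires_grad=True)
dummy_y = torch.randn(1, 10, requires_grad=True)
optimizer = torch.optim.LBFGS([dummy_x, dummy_y])

def closure():
    optimizer.zero_grad()
    # Cross-entropy with a soft (softmax-ed) dummy label, as in DLG.
    dummy_loss = torch.mean(torch.sum(
        -torch.softmax(dummy_y, -1) * torch.log_softmax(model(dummy_x), -1), dim=-1))
    dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(), create_graph=True)
    # Gradient-matching objective: squared distance between dummy and real gradients.
    grad_diff = sum(((dg - rg) ** 2).sum() for dg, rg in zip(dummy_grads, real_grads))
    grad_diff.backward()
    return grad_diff

for _ in range(100):
    optimizer.step(closure)

rmse = torch.sqrt(torch.mean((dummy_x.detach() - real_x) ** 2))
print(f"reconstruction RMSE: {rmse.item():.4f}")  # stays large when dropout is active
```

Running the same sketch with `use_dropout=False` lets one compare how quickly the reconstruction RMSE shrinks without the defence; the exact numbers will differ from the paper's, since the model and data here are placeholders.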
