Privacy-Preserving Distributed Learning with Secret Gradient Descent

06/27/2019
by Valentin Hartmann, et al.

In many important application domains of machine learning, data is a privacy-sensitive resource. In addition, due to the growing complexity of the models, single actors typically do not have sufficient data to train a model on their own. Motivated by these challenges, we propose Secret Gradient Descent (SecGD), a method for training machine learning models on data that is spread over different clients while preserving the privacy of the training data. We achieve this by letting each client add temporary noise to the information they send to the server during the training process. They also share this noise in separate messages with the server, which can then subtract it from the previously received values. By routing all data through an anonymization network such as Tor, we prevent the server from knowing which messages originate from the same client, which in turn allows us to show that breaking a client's privacy is computationally intractable as it would require solving a hard instance of the subset sum problem. This setup allows SecGD to work in the presence of only two honest clients and a malicious server, and without the need for peer-to-peer connections.
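The core noise-cancellation idea can be sketched as follows. This is a simplified illustration under assumed parameters, not the paper's actual protocol: real SecGD routes messages through an anonymization network such as Tor and derives its security from the hardness of the subset sum problem, whereas here a plain shuffle stands in for unlinkability and the helper `client_messages` is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def client_messages(gradient, noise_scale=10.0):
    """Split a client's gradient into two separately sent messages:
    the gradient masked with temporary noise, and the negated noise
    (which the server later adds to cancel the mask). Neither message
    alone reveals the gradient."""
    noise = rng.normal(scale=noise_scale, size=gradient.shape)
    return gradient + noise, -noise

# Three clients, each holding a private gradient.
gradients = [np.array([1.0, 2.0]), np.array([0.5, -1.0]), np.array([3.0, 0.0])]

messages = []
for g in gradients:
    masked, correction = client_messages(g)
    messages.append(masked)
    messages.append(correction)

# The anonymization network prevents the server from telling which
# messages came from the same client; a shuffle stands in for that here.
rng.shuffle(messages)

# The server sums all messages; the noise cancels, leaving the
# aggregate gradient without exposing any individual contribution.
aggregate = np.sum(messages, axis=0)
print(aggregate)  # ≈ sum of the clients' gradients: [4.5, 1.0]
```

To recover a single client's gradient, the server would have to find which subset of the shuffled messages sums to that client's pair, which is exactly the subset-sum-style obstacle the paper formalizes.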


