Dancing in the Dark: Private Multi-Party Machine Learning in an Untrusted Setting

11/23/2018
by Clement Fung, et al.

Distributed machine learning (ML) systems today use an unsophisticated threat model: data sources must trust a central ML process. We propose a brokered learning abstraction that allows data sources to contribute towards a globally-shared model with provable privacy guarantees in an untrusted setting. We realize this abstraction by building on federated learning, the state of the art in multi-party ML, to construct TorMentor: an anonymous hidden service that supports private multi-party ML. We define a new threat model by characterizing, developing and evaluating new attacks in the brokered learning setting, along with new defenses for these attacks. We show that TorMentor effectively protects data providers against known ML attacks while providing them with a tunable trade-off between model accuracy and privacy. We evaluate TorMentor with local and geo-distributed deployments on Azure/Tor. In an experiment with 200 clients and 14 MB of data per client, our prototype trained a logistic regression model using stochastic gradient descent in 65s.
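The brokered-learning workflow the abstract describes — clients compute gradients on local data, perturb them before sharing, and a broker aggregates the updates without ever seeing raw data — can be sketched as follows. This is a minimal illustration, not TorMentor's actual protocol: the function names are hypothetical, and the `1/epsilon` Gaussian noise scale is an illustrative stand-in for a properly calibrated differential-privacy mechanism (a real system would bound gradient sensitivity and route client traffic over Tor).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def client_gradient(w, X, y, epsilon, rng):
    # Local logistic-regression gradient, perturbed before leaving the client.
    preds = sigmoid(X @ w)
    grad = X.T @ (preds - y) / len(y)
    # Smaller epsilon -> more noise -> stronger privacy, lower model accuracy.
    noise = rng.normal(0.0, 1.0 / epsilon, size=grad.shape)
    return grad + noise

def broker_round(w, clients, lr, epsilon, rng):
    # The broker only averages noisy gradients; it never sees raw data.
    grads = [client_gradient(w, X, y, epsilon, rng) for X, y in clients]
    return w - lr * np.mean(grads, axis=0)

# Toy run: five synthetic clients, each holding a private shard of data.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(5):
    X = rng.normal(size=(100, 2))
    y = (X @ true_w > 0).astype(float)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(200):
    w = broker_round(w, clients, lr=0.5, epsilon=5.0, rng=rng)
```

Raising `epsilon` shrinks the injected noise and recovers ordinary federated SGD; lowering it trades model accuracy for privacy, which is the tunable trade-off the abstract refers to.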

