Inference Attacks Against Collaborative Learning

05/10/2018
by Luca Melis, et al.

Collaborative machine learning and related techniques such as distributed and federated learning allow multiple participants, each with its own training dataset, to build a joint model. Participants train local models and periodically exchange model parameters or gradient updates computed during training. We demonstrate that the training data used by participants in collaborative learning is vulnerable to inference attacks. First, we show that an adversarial participant can infer the presence of exact data points in others' training data (i.e., membership inference). Second, we show that the adversary can infer properties that hold only for a subset of the training data and are independent of the properties the joint model aims to capture. We evaluate the efficacy of our attacks on a variety of tasks, datasets, and learning configurations, and conclude with a discussion of possible defenses.
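The attacks operate on the model updates that participants already exchange during training. As a rough illustration of the setting and of a passive membership test, here is a minimal sketch in plain NumPy. The logistic-regression model, the two-participant setup, and the cosine-similarity test are illustrative assumptions for exposition; they are not the paper's actual attacks, which train dedicated inference models on the observed updates.

```python
import numpy as np

rng = np.random.default_rng(0)

def gradient(w, X, y):
    """Gradient of the logistic loss for weights w on a batch (X, y)."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (p - y) / len(y)

# Two participants with private data; a coordinator averages their updates.
d = 20
w = np.zeros(d)
data = [(rng.normal(size=(100, d)), rng.integers(0, 2, 100).astype(float))
        for _ in range(2)]

# A record the adversary wants to test for membership in participant 0's data.
target_x, target_y = data[0][0][0], data[0][1][0]

for _ in range(50):
    # Each participant computes a gradient update on its local data and
    # shares it; an adversarial participant or coordinator observes these.
    updates = [gradient(w, X, y) for X, y in data]
    w -= 0.5 * np.mean(updates, axis=0)  # federated averaging step

# Passive membership test (illustrative): compare participant 0's observed
# update with the gradient the target record alone would produce at the
# current weights. A high cosine similarity is evidence of membership.
observed = gradient(w, *data[0])
g_target = gradient(w, target_x[None, :], np.array([target_y]))
cos = observed @ g_target / (np.linalg.norm(observed) * np.linalg.norm(g_target))
print(f"cosine similarity with the target record's gradient: {cos:.3f}")
```

In the same setting, a property-inference attack would replace the cosine test with a classifier trained on updates produced by data with and without the property of interest, again using only the exchanged updates.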

Related research

10/28/2022
On the Vulnerability of Data Points under Multiple Membership Inference Attacks and Target Models
Membership Inference Attacks (MIAs) infer whether a data point is in the...

12/07/2018
Reaching Data Confidentiality and Model Accountability on the CalTrain
Distributed collaborative learning (DCL) paradigms enable building joint...

10/22/2021
On the Necessity of Auditable Algorithmic Definitions for Machine Unlearning
Machine unlearning, i.e. having a model forget about some of its trainin...

11/28/2019
Free-riders in Federated Learning: Attacks and Defenses
Federated learning is a recently proposed paradigm that enables multiple...

07/07/2020
Backdoor attacks and defenses in feature-partitioned collaborative learning
Since there are multiple parties in collaborative learning, malicious pa...

09/24/2019
Matrix Sketching for Secure Collaborative Machine Learning
Collaborative machine learning (ML), also known as federated ML, allows ...

03/31/2022
Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets
We introduce a new class of attacks on machine learning models. We show ...
