Understanding and Controlling User Linkability in Decentralized Learning

05/15/2018
by Tribhuvanesh Orekondy, et al.

Machine learning techniques are widely used by online services (e.g. Google, Apple) to analyze and make predictions on user data. Since many of these services are user-centric (e.g. personal photo collections, speech recognition, personal assistance), data generated on personal devices is key to providing them. To protect the data and the privacy of the user, federated learning techniques have been proposed, in which the data never leaves the user's device and "only" model updates are communicated back to the server. In our work, we propose a new threat model that is not concerned with learning the content of the data, but rather with the linkability of users in such decentralized learning scenarios. We show that model updates are characteristic of users and therefore lend themselves to linkability attacks. We demonstrate identification and matching of users across devices in both closed-world and open-world scenarios. In our experiments, we find these attacks to be highly effective, achieving 20x-175x chance-level performance. To mitigate the risks of linkability attacks, we study various strategies. As adding random noise does not offer convincing operating points, we propose strategies based on using calibrated domain-specific data; we find that these strategies offer substantial protection against linkability threats with little effect on utility.
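To make the threat concrete, the sketch below shows how a linkability attack on model updates could look in principle: per-round updates are treated as feature vectors and a classifier is trained to identify which user produced a held-out update. This is a minimal illustration, not the authors' implementation; the simulated update distribution, dimensions, and the logistic-regression attacker are assumptions for demonstration only.

```python
# Hypothetical sketch: linking federated model updates back to users
# by treating flattened updates as feature vectors.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, rounds_per_user, dim = 50, 20, 256

# Simulate updates: each user has a characteristic "signature"
# (e.g. induced by their local data distribution) plus per-round noise.
signatures = rng.normal(size=(n_users, dim))
updates = np.concatenate(
    [sig + 0.5 * rng.normal(size=(rounds_per_user, dim)) for sig in signatures]
)
labels = np.repeat(np.arange(n_users), rounds_per_user)

X_train, X_test, y_train, y_test = train_test_split(
    updates, labels, test_size=0.3, stratify=labels, random_state=0
)

# Identification attack: predict which user produced each held-out update.
attacker = LogisticRegression(max_iter=1000).fit(X_train, y_train)
acc = attacker.score(X_test, y_test)
print(f"identification accuracy: {acc:.2f} (chance level: {1 / n_users:.2f})")
```

If the updates are indeed user-characteristic, the attacker's accuracy will be far above the 1/n_users chance level, which is the kind of gap the paper quantifies.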
