CaPC Learning: Confidential and Private Collaborative Learning

02/09/2021
by Christopher A. Choquette-Choo, et al.

Machine learning benefits from large training datasets, which may not always be possible for any single entity to collect, especially when the data are privacy-sensitive. In many contexts, such as healthcare and finance, separate parties may wish to collaborate and learn from each other's data but are prevented from doing so by privacy regulations. Some regulations prevent explicit sharing of data between parties by joining datasets in a central location (confidentiality). Others also limit implicit sharing of data, e.g., through model predictions (privacy). There is currently no method that enables machine learning in such a setting, where both confidentiality and privacy must be preserved to prevent both explicit and implicit sharing of data. Federated learning provides only confidentiality, not privacy, since the gradients that are shared still contain private information. Differentially private learning assumes unreasonably large datasets. Furthermore, both of these learning paradigms produce a central model whose architecture was agreed upon by all parties in advance, rather than enabling collaborative learning where each party learns and improves its own local model. We introduce Confidential and Private Collaborative (CaPC) learning, the first method that provably achieves both confidentiality and privacy in a collaborative setting. We leverage secure multi-party computation (MPC), homomorphic encryption (HE), and other techniques in combination with privately aggregated teacher models. We demonstrate how CaPC allows participants to collaborate without explicitly joining their training sets or training a central model. Each party is able to improve the accuracy and fairness of its model, even when each party's model already performs well on its own dataset, when datasets are not IID, and when model architectures are heterogeneous across parties.
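To give a rough sense of the "privately aggregated teacher models" the abstract mentions, here is a minimal sketch of PATE-style noisy vote aggregation: each party's local model votes for a label, and the answer is the argmax of the noised vote histogram. The function name, noise scale, and use of Laplace noise are illustrative assumptions, not the paper's implementation — in CaPC the querying and vote aggregation are additionally protected with MPC and HE, which this sketch omits.

```python
import numpy as np

def noisy_vote_aggregation(teacher_predictions, num_classes, scale, rng=None):
    """PATE-style private aggregation of teacher votes (illustrative sketch).

    teacher_predictions: predicted class index from each party's local model.
    num_classes: size of the label space.
    scale: Laplace noise scale; larger scale gives stronger privacy at the
    cost of utility (the precise privacy accounting is out of scope here).
    """
    rng = rng if rng is not None else np.random.default_rng()
    # Histogram of votes over the label space.
    votes = np.bincount(np.asarray(teacher_predictions), minlength=num_classes)
    # Add independent Laplace noise to each count, then release only the argmax.
    noisy_votes = votes + rng.laplace(loc=0.0, scale=scale, size=num_classes)
    return int(np.argmax(noisy_votes))

# Example: three parties vote for class 0, one for class 1; with a strong
# majority and small noise, the aggregate label is almost surely the majority.
label = noisy_vote_aggregation([0, 0, 0, 1], num_classes=2, scale=0.1,
                               rng=np.random.default_rng(0))
```

Because only the noisy argmax is released, no single party's vote can be confidently inferred from the answer, which is what lets each party improve its own heterogeneous local model without implicit data leakage.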

