A Distributed Trust Framework for Privacy-Preserving Machine Learning

06/03/2020
by Will Abramson, et al.

When training a machine learning model, it is standard procedure for the researcher to have full knowledge of both the data and the model. However, this engenders a lack of trust between data owners and data scientists: data owners are justifiably reluctant to relinquish control of private information to third parties. Privacy-preserving techniques distribute computation so that data remains under the owner's control while learning takes place. However, architectures distributed amongst multiple agents introduce an entirely new set of security and trust complications, including data poisoning and model theft. This paper outlines a distributed infrastructure that facilitates peer-to-peer trust between the agents collaboratively performing a privacy-preserving workflow. Our prototype designates industry gatekeepers and governance bodies as credential issuers; any agent, including a malicious one, must first obtain valid credentials from these issuers before it can participate in the distributed learning workflow. We detail a proof of concept using Hyperledger Aries, Decentralised Identifiers (DIDs) and Verifiable Credentials (VCs) to establish a distributed trust architecture during a privacy-preserving machine learning experiment. Specifically, we use secure, authenticated DID communication channels to facilitate a federated learning workflow on mental health care data.
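The credential-gated federated learning flow described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `CredentialPresentation` class, `verify_presentation` helper, trusted-issuer DIDs, and linear model are all hypothetical stand-ins for a real Hyperledger Aries proof-presentation exchange over DIDComm. It shows data-owning agents accepting a training request only after the requesting researcher presents a valid credential from a trusted issuer, then averaging their locally computed updates.

```python
# Minimal sketch of one credential-gated federated averaging round.
# Assumptions (not from the paper's code): a hypothetical `verify_presentation`
# check stands in for an Aries proof-presentation exchange, and the model is a
# plain weight vector trained by least-squares gradient descent.

from dataclasses import dataclass
from typing import List
import numpy as np


@dataclass
class CredentialPresentation:
    """Stand-in for a Verifiable Credential proof presented over a DID channel."""
    holder_did: str
    issuer_did: str
    credential_type: str
    signature_valid: bool  # assume cryptographic verification already performed


TRUSTED_ISSUERS = {"did:example:governance-body"}  # hypothetical credential issuer
REQUIRED_CREDENTIAL = "AccreditedResearcher"


def verify_presentation(presentation: CredentialPresentation) -> bool:
    """Accept a peer only if it holds a valid credential from a trusted issuer."""
    return (
        presentation.signature_valid
        and presentation.issuer_did in TRUSTED_ISSUERS
        and presentation.credential_type == REQUIRED_CREDENTIAL
    )


@dataclass
class DataOwner:
    """Agent holding private data; trains locally and never shares raw data."""
    features: np.ndarray
    labels: np.ndarray

    def local_update(self, weights: np.ndarray,
                     presentation: CredentialPresentation,
                     lr: float = 0.1) -> np.ndarray:
        if not verify_presentation(presentation):
            raise PermissionError("Researcher failed credential verification")
        # One gradient-descent step on local data only.
        residual = self.features @ weights - self.labels
        grad = self.features.T @ residual / len(self.labels)
        return weights - lr * grad


def federated_round(owners: List[DataOwner], weights: np.ndarray,
                    presentation: CredentialPresentation) -> np.ndarray:
    """Researcher aggregates updates from credentialed data owners (FedAvg)."""
    updates = [owner.local_update(weights, presentation) for owner in owners]
    return np.mean(updates, axis=0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    owners = []
    for _ in range(3):
        X = rng.normal(size=(50, 2))
        y = X @ true_w + rng.normal(scale=0.1, size=50)
        owners.append(DataOwner(X, y))

    proof = CredentialPresentation(
        holder_did="did:example:researcher",
        issuer_did="did:example:governance-body",
        credential_type="AccreditedResearcher",
        signature_valid=True,
    )

    w = np.zeros(2)
    for _ in range(100):
        w = federated_round(owners, w, proof)
    print("learned weights:", w)
```

If the presentation fails verification (wrong issuer, wrong credential type, or an invalid signature), every data owner refuses to compute an update, so an uncredentialed researcher obtains nothing.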

