ESCAPED: Efficient Secure and Private Dot Product Framework for Kernel-based Machine Learning Algorithms with Applications in Healthcare

12/04/2020
by Ali Burak Ünal, et al.

To train sophisticated machine learning models, one usually needs many training samples. In healthcare settings in particular, such samples can be very expensive, so a single institution often does not have enough of them. Merging privacy-sensitive data from different sources, however, is usually restricted by data security and data protection measures. This can lead to approaches that reduce data quality, either by adding noise to the variables (e.g., in ϵ-differential privacy) or by omitting certain values (e.g., for k-anonymity). Other measures based on cryptographic methods can lead to very time-consuming computations, which is especially problematic for larger multi-omics data. We address this problem by introducing ESCAPED, which stands for Efficient SeCure And PrivatE Dot product framework. It enables a third party to compute the dot product of vectors from multiple sources and then train kernel-based machine learning algorithms on the result, without sacrificing privacy or adding noise. We evaluated our framework on drug resistance prediction for HIV-infected people and on multi-omics dimensionality reduction and clustering problems in precision medicine. In terms of execution time, our framework significantly outperforms the best-fitting existing approaches without sacrificing the performance of the algorithm. Even though we only show the benefit for kernel-based algorithms, our framework can open up new research opportunities for further machine learning models that require the dot product of vectors from multiple sources.
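To illustrate why a privately computed dot product is sufficient for kernel-based learning, the following sketch shows the plain (non-secure) arithmetic that such a framework protects: with vertically partitioned data, where each party holds a different feature block of the same samples, the full dot product decomposes into a sum of per-party partial dot products, so an aggregator can assemble the Gram matrix without ever seeing the combined feature vectors. This is only a minimal illustration of the decomposition, not the ESCAPED protocol itself; all names and the two-party setup are assumptions for the example.

```python
import numpy as np

# Hypothetical setup: two parties hold different feature blocks
# of the SAME five samples (vertically partitioned data).
rng = np.random.default_rng(0)
n_samples = 5
party_a = rng.normal(size=(n_samples, 3))  # features 0-2, held by party A
party_b = rng.normal(size=(n_samples, 4))  # features 3-6, held by party B

# Each party computes its partial Gram matrix locally, since
# x_i . x_j = (a_i . a_j) + (b_i . b_j) over the feature split ...
gram_a = party_a @ party_a.T
gram_b = party_b @ party_b.T

# ... and an aggregator sums the shares to obtain the Gram matrix
# of the full feature vectors, which were never assembled in one place.
gram = gram_a + gram_b

# Sanity check: identical to the Gram matrix of the pooled data.
pooled = np.hstack([party_a, party_b])
assert np.allclose(gram, pooled @ pooled.T)
```

Any dot-product-based kernel follows from this matrix; for instance, a linear-kernel SVM could be trained on `gram` via scikit-learn's `SVC(kernel="precomputed")`. In a secure deployment, the partial products would of course be exchanged under a cryptographic protocol rather than in the clear.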

