CECILIA: Comprehensive Secure Machine Learning Framework

by Ali Burak Ünal, et al.

Machine learning algorithms have proven successful in data mining tasks, but data containing sensitive information has spurred the development of privacy-preserving machine learning algorithms. Moreover, the growing number of data sources and the high computational power these algorithms require force individuals to outsource the training and/or inference of a machine learning model to clouds providing such services. To address this dilemma, we propose CECILIA, a secure three-party computation framework offering privacy-preserving building blocks that enable more complex operations to be computed privately. Among these building blocks are two novel methods: the exact exponentiation of a public base raised to the power of a secret value, and the inverse square root of a secret Gram matrix. We employ CECILIA to realize private inference on pre-trained recurrent kernel networks, which require more complex operations than other deep neural networks such as convolutional neural networks, for the structural classification of proteins; this is the first study to accomplish privacy-preserving inference on recurrent kernel networks. The results demonstrate that we compute the exponential exactly and fully privately, whereas the literature so far has relied on approximation. We can also compute the exact inverse square root of a secret Gram matrix up to a certain privacy level, which has not been addressed in the literature at all. We further analyze the scalability of CECILIA in various settings on a synthetic dataset. The framework shows great promise for making other machine learning algorithms, as well as further computations, privately computable via its building blocks.
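To give intuition for why an exact exponential of a public base raised to a secret exponent is tractable under additive secret sharing, note the identity b^(x1 + x2 + x3) = b^x1 · b^x2 · b^x3: if the secret exponent x is additively shared among three parties, each party can exponentiate its own share locally, yielding a multiplicative sharing of b^x with no approximation error beyond floating-point rounding. The sketch below is an illustration of this general principle only, not CECILIA's actual protocol (which operates over a ring with fixed-point encoding); the helper `share_real` is a hypothetical name introduced here for the demo.

```python
import random

def share_real(x, n=3):
    """Split a real value x into n additive shares (illustrative only;
    a real MPC framework would share over a finite ring instead)."""
    shares = [random.uniform(-5.0, 5.0) for _ in range(n - 1)]
    shares.append(x - sum(shares))  # last share makes the sum equal x
    return shares

b = 2.0          # public base
x = 1.234        # secret exponent
shares = share_real(x)

# Each party exponentiates its own share locally; the product of the
# local results is a multiplicative sharing of b ** x.
prod = 1.0
for s in shares:
    prod *= b ** s

# prod ≈ b ** x, exact up to floating-point rounding
```

No party learns x from its single share alone, yet the combined result equals b^x exactly; turning the resulting multiplicative shares back into additive ones is where a concrete protocol such as CECILIA does additional work.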




