Robust Representation Learning for Privacy-Preserving Machine Learning: A Multi-Objective Autoencoder Approach

by Sofiane Ouaari, et al.

Several domains increasingly rely on machine learning in their applications. The resulting heavy dependence on data has led to the emergence of laws and regulations around data ethics and privacy, and to growing awareness of the need for privacy-preserving machine learning (ppML). Current ppML techniques rely either on purely cryptographic methods, such as homomorphic encryption, or on injecting noise into the input, as in differential privacy. The main criticism of these techniques is that they are either too slow or trade off a model's performance for improved confidentiality. To address this performance reduction, we leverage robust representation learning to encode our data while optimizing the privacy-utility trade-off. Our method centers on training autoencoders in a multi-objective manner and then concatenating the latent and learned features from the encoder as the encoded form of the data. Such a deep-learning-powered encoding can then be safely sent to a third party for intensive training and hyperparameter tuning. With our proposed framework, we can share our data and use third-party tools without the threat of revealing its original form. We empirically validate our results in both unimodal and multimodal settings, the latter following a vertical splitting scheme, and show improved performance over the state of the art.
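The core idea in the abstract can be sketched as a small multi-objective autoencoder: one encoder feeds both a decoder (reconstruction objective) and a task head (utility objective), the two losses are combined during training, and the resulting latent code serves as the encoded form of the data that could be shared with a third party. The sketch below is a minimal illustration in PyTorch; the architecture, dimensions, loss weighting, and all names are assumptions for demonstration, not the authors' actual implementation.

```python
# Illustrative sketch only: a tiny autoencoder trained with two objectives
# (reconstruction + downstream classification), loosely following the idea
# described in the abstract. All layer sizes and hyperparameters are
# hypothetical choices, not taken from the paper.
import torch
import torch.nn as nn

class MultiObjectiveAE(nn.Module):
    def __init__(self, in_dim=20, latent_dim=4, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 16), nn.ReLU(),
                                     nn.Linear(16, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 16), nn.ReLU(),
                                     nn.Linear(16, in_dim))
        self.task_head = nn.Linear(latent_dim, n_classes)

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z), self.task_head(z)

torch.manual_seed(0)
x = torch.randn(64, 20)
y = (x[:, 0] > 0).long()              # toy labels for the utility objective
model = MultiObjectiveAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(50):                   # short illustrative training loop
    _, recon, logits = model(x)
    # multi-objective loss: reconstruction quality + task utility
    loss = (nn.functional.mse_loss(recon, x)
            + nn.functional.cross_entropy(logits, y))
    opt.zero_grad()
    loss.backward()
    opt.step()

# The latent code, not the raw data, is what would be shared externally.
encoded = model.encoder(x).detach()
print(encoded.shape)
```

In this toy setup the third party would receive only `encoded` (shape `[64, 4]` here), never the original 20-dimensional inputs, which is the privacy mechanism the abstract describes.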


