Data Stealing Attack on Medical Images: Is it Safe to Export Networks from Data Lakes?

06/07/2022
by Huiyu Li, et al.

In privacy-preserving machine learning, it is common that the owner of the learned model does not have any physical access to the data. Instead, the model owner is granted only secured remote access to a data lake, without any ability to retrieve data from it. Yet, the model owner may want to periodically export the trained model from the remote repository, and the question arises whether this poses a risk of data leakage. In this paper, we introduce the concept of a data stealing attack during the export of neural networks. It consists of hiding in the exported network information that allows the reconstruction, outside the data lake, of images initially stored in that data lake. More precisely, we show that it is possible to train a network that performs lossy image compression while simultaneously solving a utility task such as image segmentation. The attack then proceeds by exporting the compression decoder network together with some image codes that lead to image reconstruction outside the data lake. We explore the feasibility of such attacks on databases of CT and MR images, showing that it is possible to obtain perceptually meaningful reconstructions of the target dataset, and that the stolen dataset can in turn be used to solve a broad range of tasks. Comprehensive experiments and analyses show that data stealing attacks should be considered a threat to sensitive imaging data sources.
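To make the export step concrete, here is a minimal sketch of the attack's mechanics under simplified assumptions: the learned neural codec is replaced by a linear PCA-style encoder/decoder, and the "images" are random vectors. Everything here (array sizes, the code length `k`, the variable names) is hypothetical and only illustrates the principle that a compact decoder plus short per-image codes, smuggled out with a model export, suffice to reconstruct data outside the data lake.

```python
import numpy as np

# Hypothetical stand-in for the paper's learned compression network:
# a linear (PCA-like) encoder/decoder fitted inside the data lake.
rng = np.random.default_rng(0)
images = rng.random((20, 64))           # 20 toy "images", 64 pixels each

# Fit the linear codec via SVD on the centered data (inside the data lake)
mean = images.mean(axis=0)
U, S, Vt = np.linalg.svd(images - mean, full_matrices=False)
k = 8                                   # code length, much smaller than the image
encoder = Vt[:k]                        # (k, 64); decoder is encoder.T here

# Compact codes: this is the information hidden in the exported model
codes = (images - mean) @ encoder.T     # (20, k)

# Outside the data lake, the attacker reconstructs from decoder + codes
recon = codes @ encoder + mean

mse_recon = ((recon - images) ** 2).mean()
mse_mean_only = ((mean - images) ** 2).mean()
```

For a rank-`k` linear codec, `mse_recon` is guaranteed to be at most `mse_mean_only`, since projecting onto the top-`k` principal subspace minimizes the squared reconstruction error; the paper's point is that a learned nonlinear codec can achieve perceptually meaningful reconstructions while also performing a legitimate utility task.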

Related research:

01/26/2021  Property Inference From Poisoning
  Property inference attacks consider an adversary who has access to the t...

09/07/2020  Adversarial attacks on deep learning models for fatty liver disease classification by modification of ultrasound image reconstruction method
  Convolutional neural networks (CNNs) have achieved remarkable success in...

04/27/2021  Property Inference Attacks on Convolutional Neural Networks: Influence and Implications of Target Model's Complexity
  Machine learning models' goal is to make correct predictions for specifi...

06/01/2022  Adaptive Local Neighborhood-based Neural Networks for MR Image Reconstruction from Undersampled Data
  Recent medical image reconstruction techniques focus on generating high-...

10/31/2020  Evaluation of Inference Attack Models for Deep Learning on Medical Data
  Deep learning has attracted broad interest in healthcare and medical com...

08/16/2021  NeuraCrypt is not private
  NeuraCrypt (Yara et al. arXiv 2021) is an algorithm that converts a sens...
