A Comparative Study of Image Disguising Methods for Confidential Outsourced Learning

12/31/2022
by Sagar Sharma, et al.

Deep learning on images typically requires large training datasets and expensive model tuning. As a result, data owners often turn to cloud resources to develop large-scale, complex models, which raises privacy concerns. Existing solutions are either too expensive to be practical or do not sufficiently protect the confidentiality of data and models. In this paper, we study and compare two novel image disguising mechanisms, DisguisedNets and InstaHide, aiming for a better trade-off among the level of protection for outsourced DNN model training, the cost, and the utility of the data. DisguisedNets is a novel combination of image blocktization, block-level random permutation, and one of two block-level secure transformations: random multidimensional projection (RMT) or AES pixel-level encryption (AES). InstaHide mixes each private image with other images and randomly flips pixel signs. We analyze and evaluate both mechanisms under a multi-level threat model. RMT provides a stronger security guarantee than InstaHide under Level-1 adversarial knowledge while preserving model quality well; in contrast, AES provides a security guarantee under Level-2 adversarial knowledge, but it may degrade model quality more. The unique features of image disguising also help protect models from model-targeted attacks. We report an extensive experimental evaluation of how these methods perform in different settings on different datasets.
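To make the two mechanisms concrete, here is a minimal NumPy sketch of the transformations the abstract describes: a DisguisedNets-style pipeline (blocktize the image, apply a secret block permutation, multiply each block by a secret random projection matrix) and an InstaHide-style mixup with random pixel sign flipping. The function names, the use of square grayscale images, and the choice of per-block projection matrices are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def disguise_rmt(image, bs, perm, proj_mats):
    """DisguisedNets-style RMT disguising (sketch).

    image: 2-D grayscale array whose sides divide evenly by bs.
    perm: secret permutation of block indices.
    proj_mats: one secret random (bs*bs, bs*bs) matrix per block.
    Returns a stack of transformed block vectors.
    """
    h, w = image.shape
    # blocktization: flatten each bs x bs tile into a vector
    blocks = [image[i:i + bs, j:j + bs].flatten()
              for i in range(0, h, bs) for j in range(0, w, bs)]
    blocks = [blocks[p] for p in perm]  # secret block-level permutation
    # secret random multidimensional projection, one matrix per slot
    return np.stack([M @ b for M, b in zip(proj_mats, blocks)])

def instahide_mix(x, others, lam, rng):
    """InstaHide-style disguising (sketch): mix the private image with
    other images using weights lam, then flip pixel signs at random."""
    mixed = sum(l * im for l, im in zip(lam, [x] + list(others)))
    mask = rng.choice([-1.0, 1.0], size=x.shape)  # random sign flips
    return mixed * mask
```

Because the RMT projection matrices and the permutation are kept secret by the data owner, the owner can invert the transform (solve each linear system, then un-permute the blocks), while an adversary without the keys sees only projected, shuffled blocks.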

