NegDL: Privacy-Preserving Deep Learning Based on Negative Database

03/10/2021
by   Dongdong Zhao, et al.

In the era of big data, deep learning has become an increasingly popular topic, with outstanding achievements in fields such as image recognition, object detection, and natural language processing. The first priority of deep learning is to extract valuable information from large amounts of data, which inevitably raises privacy concerns that deserve attention. Several privacy-preserving deep learning methods have been proposed, but most of them suffer from a non-negligible degradation of either efficiency or accuracy. A negative database (NDB) is a type of data representation that protects privacy by storing and operating on the complementary form of the original data. In this paper, we propose a privacy-preserving deep learning method named NegDL based on NDB. Specifically, private data are first converted to NDBs by a generation algorithm called the QK-hidden algorithm, and sketches of the NDBs are then extracted as the input to deep learning models for training and inference. We show that the computational complexity of NegDL is the same as that of the original deep learning model without privacy protection. Experimental results on the Breast Cancer, MNIST, and CIFAR-10 benchmark datasets demonstrate that the accuracy of NegDL is comparable to that of the original deep learning model in most cases, and that it outperforms a method based on differential privacy.
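To make the negative-database idea concrete, the toy Python sketch below illustrates the general principle only, not the paper's QK-hidden algorithm: a private binary record is replaced by a set of partial strings that all fail to match it, and a per-bit frequency "sketch" is derived from those entries as a numeric vector that a model could consume instead of the raw data. The entry format, entry count, and sketch definition here are illustrative assumptions.

```python
import random

def generate_ndb(x, num_entries=200, k=3, seed=0):
    """Toy negative database for a binary vector x.

    Each entry specifies k positions (the rest are '*' wildcards) and is kept
    only if it does NOT cover x, i.e. at least one specified bit differs
    from the corresponding bit of x.
    """
    rng = random.Random(seed)
    m = len(x)
    ndb = []
    while len(ndb) < num_entries:
        positions = rng.sample(range(m), k)
        entry = ['*'] * m
        for p in positions:
            entry[p] = rng.choice('01')
        # keep the entry only if it mismatches x at some specified position
        if any(entry[p] != str(x[p]) for p in positions):
            ndb.append(''.join(entry))
    return ndb

def sketch(ndb, m):
    """Per-bit sketch: among entries specifying position i, the fraction
    that specify '1'. This vector, not x itself, would feed the model."""
    ones = [0] * m
    specified = [0] * m
    for entry in ndb:
        for i, c in enumerate(entry):
            if c != '*':
                specified[i] += 1
                ones[i] += (c == '1')
    return [ones[i] / specified[i] if specified[i] else 0.5 for i in range(m)]

if __name__ == "__main__":
    x = [1, 0, 1, 1, 0, 0, 1, 0]              # private binary record
    ndb = generate_ndb(x)                     # complementary representation
    print(ndb[:5])                            # no entry covers x
    print([round(v, 2) for v in sketch(ndb, len(x))])
```

In the actual method, the QK-hidden generation algorithm controls how often each position and value are specified so that the resulting sketch preserves enough statistical information for learning while the original record remains hard to recover.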
