Privacy for Rescue: A New Testimony Why Privacy is Vulnerable In Deep Models

12/31/2019
by Ruiyuan Gao, et al.

The huge computation demand of deep learning models and the limited computation resources on edge devices call for cooperation between the edge device and the cloud service, achieved by splitting the deep model into two halves. However, transferring the intermediate results of the partial models between the edge device and the cloud service leaves user privacy vulnerable, since an attacker can intercept the intermediate results and extract private information from them. Existing research relies on metrics that are either impractical or insufficient for measuring the effectiveness of privacy protection methods in this scenario, especially from the perspective of a single user. In this paper, we first present a formal definition of the privacy protection problem in an edge-cloud system running DNN models. Then, we analyze the state-of-the-art methods and point out their drawbacks, especially in the evaluation metrics such as Mutual Information (MI). In addition, we perform several experiments to demonstrate that although existing methods perform well under MI, they are not effective enough to protect the privacy of a single user. To address the drawbacks of these evaluation metrics, we propose two new metrics that measure the effectiveness of privacy protection methods more accurately. Finally, we highlight several potential research directions to encourage future efforts on the privacy protection problem.
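The split-inference setting the abstract describes can be illustrated with a minimal numpy sketch. This is not the paper's model: the two-layer toy network, the weights W1/W2, and the function names edge_half/cloud_half are all illustrative assumptions. The point is only that the intermediate activation z is what crosses the network, and is therefore what an interceptor sees.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network split into an "edge" half and a "cloud" half.
# W1/W2 are random illustrative weights, not a trained model.
W1 = rng.standard_normal((8, 4))   # edge-side weights
W2 = rng.standard_normal((4, 2))   # cloud-side weights

def edge_half(x):
    # Runs on the device; produces the intermediate representation
    # that is sent over the network (and could be intercepted).
    return np.maximum(x @ W1, 0.0)  # ReLU activation

def cloud_half(z):
    # Runs on the server; completes the inference from the intermediate.
    return z @ W2

x = rng.standard_normal(8)   # private user input, never leaves the device
z = edge_half(x)             # intermediate result: the attack surface
y = cloud_half(z)            # final output returned to the user
```

Privacy protection methods in this setting transform z (e.g. by adding noise) before transmission, trying to preserve the utility of y while limiting what an attacker can recover about x.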
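The MI metric discussed above can be estimated in many ways; the following is a minimal histogram-based sketch (the bin count and sample sizes are arbitrary assumptions, and this is not the estimator used in the paper). It shows the aggregate behavior MI captures: dependence between an input and a leaky intermediate yields a large value, while independent signals yield a value near zero. Being an average over the data distribution, such a score can look good even when individual users remain exposed, which is the gap the paper's proposed metrics target.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    # Histogram-based estimate of I(X; Y) in nats for 1-D samples.
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                 # joint distribution
    px = pxy.sum(axis=1, keepdims=True)       # marginal of X
    py = pxy.sum(axis=0, keepdims=True)       # marginal of Y
    nz = pxy > 0                              # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)       # stand-in for private inputs
noise = rng.standard_normal(10_000)

leaky = x + 0.1 * noise               # intermediate strongly tied to x
print(mutual_information(x, leaky))   # large: strong dependence
print(mutual_information(x, noise))   # near zero: independent signals
```

Note that this score is a single number over the whole sample; it says nothing about whether any particular user's input was recoverable, which is why an average-case metric can be misleading.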


Related research

- 12/07/2022, Privacy protection and service evaluation methods for location-based services in edge computing environments: This paper proposes a privacy protection and evaluation method for locat...
- 08/28/2018, Privacy-preserving Neural Representations of Text: This article deals with adversarial attacks towards deep learning system...
- 06/19/2018, Self-adaptive Privacy Concern Detection for User-generated Content: To protect user privacy in data analysis, a state-of-the-art strategy is...
- 08/28/2019, Confidential Deep Learning: Executing Proprietary Models on Untrusted Devices: Performing deep learning on end-user devices provides fast offline infer...
- 05/26/2019, Shredder: Learning Noise to Protect Privacy with Partial DNN Inference on the Edge: A wide variety of DNN applications increasingly rely on the cloud to per...
- 06/27/2023, Impact of User Privacy and Mobility on Edge Offloading: Offloading high-demanding applications to the edge provides better quali...
- 05/25/2023, Privacy Protectability: An Information-theoretical Approach: Recently, inference privacy has attracted increasing attention. The infe...
