On the Evaluation of User Privacy in Deep Neural Networks using Timing Side Channel

08/01/2022
by Shubhi Shukla, et al.

Recent Deep Learning (DL) advancements in solving complex real-world tasks have led to its widespread adoption in practical applications. However, this opportunity comes with significant underlying risks: many of these models rely on privacy-sensitive data for training across a variety of applications, making them a highly exposed threat surface for privacy violations. Furthermore, the widespread use of cloud-based Machine-Learning-as-a-Service (MLaaS), valued for its robust infrastructure support, has broadened the threat surface to include a variety of remote side-channel attacks. In this paper, we first identify and report a novel data-dependent timing side-channel leakage (termed Class Leakage) in DL implementations, originating from a non-constant-time branching operation in the widely used DL framework PyTorch. We further demonstrate a practical inference-time attack in which an adversary with user privilege and hard-label black-box access to an MLaaS deployment can exploit Class Leakage to compromise the privacy of MLaaS users. DL models are also vulnerable to Membership Inference Attacks (MIA), in which an adversary's objective is to deduce whether a particular data point was used to train the model. As a separate case study, we demonstrate that a DL model secured with differential privacy (a popular countermeasure against MIA) remains vulnerable to MIA when the adversary exploits Class Leakage. We develop an easy-to-implement countermeasure that makes the branching operation constant-time, which eliminates the Class Leakage and also helps mitigate MIA. To validate our approach, we train five state-of-the-art pre-trained DL models on two standard benchmark image classification datasets, CIFAR-10 and CIFAR-100, in two different computing environments with Intel Xeon and Intel i7 processors.
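To illustrate the kind of data-dependent branching the abstract refers to, the minimal sketch below contrasts a toy post-processing routine whose runtime depends on the predicted class with a constant-time rewrite that always performs the same work. This is a hypothetical, simplified example, not the actual PyTorch code path reported in the paper; the names branch_dependent, branch_constant_time, and time_it are introduced here purely for illustration.

```python
import time
import torch

# Hypothetical illustration of data-dependent vs. constant-time branching.
# This is NOT the PyTorch-internal branch identified in the paper.

def branch_dependent(logits: torch.Tensor) -> torch.Tensor:
    """Post-processing with extra work taken only for one predicted class."""
    if int(torch.argmax(logits)) == 0:          # data-dependent branch
        return torch.softmax(logits, dim=0)     # slow path
    return logits                               # fast path

def branch_constant_time(logits: torch.Tensor) -> torch.Tensor:
    """Both paths are always computed; the result is selected arithmetically."""
    mask = (torch.argmax(logits) == 0).float()  # 1.0 or 0.0, no early exit
    return mask * torch.softmax(logits, dim=0) + (1.0 - mask) * logits

def time_it(fn, x: torch.Tensor, reps: int = 1000) -> float:
    """Average wall-clock time per call, as a crude timing probe."""
    start = time.perf_counter()
    for _ in range(reps):
        fn(x)
    return (time.perf_counter() - start) / reps

x_class0 = torch.tensor([5.0, 0.0, 0.0])  # argmax == 0: takes the slow path
x_other  = torch.tensor([0.0, 5.0, 0.0])  # argmax != 0: takes the fast path

print("data-dependent :", time_it(branch_dependent, x_class0),
      time_it(branch_dependent, x_other))
print("constant-time  :", time_it(branch_constant_time, x_class0),
      time_it(branch_constant_time, x_other))
```

In the data-dependent version, the per-call timing gap between the two inputs is exactly the kind of class-correlated signal a remote adversary could measure; the constant-time variant computes both paths and selects the result arithmetically, trading a little extra work for input-independent timing, which is the general idea behind the constant-time countermeasure described above.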
