The Adversarial Implications of Variable-Time Inference

09/05/2023
by Dudi Biton, et al.

Machine learning (ML) models are known to be vulnerable to a number of attacks that target the integrity of their predictions or the privacy of their training data. To carry out these attacks, a black-box adversary must typically possess the ability to query the model and observe its outputs (e.g., labels). In this work, we demonstrate, for the first time, the ability to enhance such decision-based attacks. To accomplish this, we present an approach that exploits a novel side channel: the adversary simply measures the execution time of the algorithm used to post-process the predictions of the ML model under attack. The leakage of inference-state elements through algorithmic timing side channels has not been studied before, and we find that it can carry rich information, enabling timing-based attacks that significantly outperform attacks relying solely on label outputs. In a case study, we investigate leakage from the non-maximum suppression (NMS) algorithm, which plays a crucial role in the operation of object detectors. Examining the timing side-channel vulnerabilities of this algorithm, we identify the potential to enhance decision-based attacks. We demonstrate attacks against the YOLOv3 detector, leveraging the timing leakage to evade object detection with adversarial examples and to perform dataset inference. Our experiments show that our adversarial examples exhibit superior perturbation quality compared to a decision-based attack. In addition, we present a new threat model in which dataset inference is performed based solely on timing leakage. Finally, to address the timing leakage vulnerability inherent in the NMS algorithm, we explore the potential and limitations of constant-time inference passes as a mitigation strategy.
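To make the intuition behind the side channel concrete, the sketch below (not the authors' code, and independent of YOLOv3) times a plain greedy NMS implementation on differently sized candidate sets. Because the suppression loop does work proportional to the number of boxes that survive the confidence threshold, inputs that yield more candidates take measurably longer to post-process, and that timing gap is the kind of signal a query-only adversary could observe. All function names and parameters here are illustrative.

```python
# Minimal sketch: why NMS runtime can act as a timing side channel.
# Pure NumPy so it runs without an ML framework; numbers are synthetic.
import time
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression over [x1, y1, x2, y2] boxes."""
    order = scores.argsort()[::-1]  # indices sorted by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # IoU of the highest-scoring box against the remaining candidates
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                 (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_r - inter)
        order = order[1:][iou <= iou_thresh]  # drop suppressed boxes
    return keep

def median_nms_time(num_candidates, trials=50):
    """Median wall-clock time of NMS on `num_candidates` random boxes."""
    rng = np.random.default_rng(0)
    times = []
    for _ in range(trials):
        xy = rng.uniform(0, 400, size=(num_candidates, 2))
        wh = rng.uniform(10, 60, size=(num_candidates, 2))
        boxes = np.hstack([xy, xy + wh])          # [x1, y1, x2, y2]
        scores = rng.uniform(size=num_candidates)
        t0 = time.perf_counter()
        nms(boxes, scores)
        times.append(time.perf_counter() - t0)
    return float(np.median(times))

# The timing gap between "few candidates" and "many candidates" is the signal
# an adversary who can only query the detector could exploit.
for n in (10, 100, 1000):
    print(f"{n:5d} candidate boxes -> median NMS time {median_nms_time(n) * 1e3:.3f} ms")
```

One natural way to remove such a signal, in the spirit of the constant-time mitigation discussed in the abstract (though not necessarily the paper's exact approach), is to make the post-processing pass operate on a fixed-size, padded candidate set so that its runtime no longer depends on the input image.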

