Mitigating Query-Flooding Parameter Duplication Attack on Regression Models with High-Dimensional Gaussian Mechanism

02/06/2020
by Xiaoguang Li, et al.

Public intelligent services enabled by machine learning algorithms are vulnerable to model extraction attacks that can steal confidential information about the learning models through public queries. Differential privacy (DP) has been considered a promising technique for mitigating this attack. However, we find that the vulnerability persists when regression models are protected by current DP solutions: the adversary can launch a query-flooding parameter duplication (QPD) attack to infer the model information through repeated queries. To defend logistic and linear regression models against the QPD attack, we propose a novel High-Dimensional Gaussian (HDG) mechanism that prevents unauthorized information disclosure without interrupting the intended services. In contrast to prior work, the proposed HDG mechanism dynamically generates the privacy budget and random noise for different queries and their results to enhance the obfuscation. Moreover, HDG enables, for the first time, an optimal privacy budget allocation that automatically determines the minimum amount of noise to be added on each dimension for a user-desired privacy level. We comprehensively evaluate the performance of HDG on real-world datasets and show that HDG effectively mitigates the QPD attack while satisfying the privacy requirements. We also plan to open-source the relevant code to the community for further research.
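The abstract does not include an implementation, but the general idea it describes, calibrating Gaussian noise per dimension to a privacy budget before releasing a regression query result, can be illustrated with the textbook (epsilon, delta)-DP Gaussian mechanism. The sketch below is only an assumption-laden illustration: the function names, per-dimension sensitivities, and the fixed budget split are hypothetical and do not reproduce HDG's dynamic budget generation or its optimal allocation.

```python
import numpy as np

def gaussian_noise_sigma(sensitivity, epsilon, delta):
    """Classic (epsilon, delta)-DP Gaussian mechanism calibration:
    sigma >= sensitivity * sqrt(2 ln(1.25/delta)) / epsilon, valid for epsilon in (0, 1)."""
    return sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon

def perturb_response(response, per_dim_sensitivity, per_dim_epsilon, delta, rng=None):
    """Add independent Gaussian noise to each dimension of a query response.

    per_dim_epsilon[i] is the (hypothetical) budget spent on dimension i; by basic
    composition the release satisfies (sum(per_dim_epsilon), d*delta)-DP with
    respect to the chosen per-dimension sensitivities.
    """
    rng = rng or np.random.default_rng()
    response = np.asarray(response, dtype=float)
    noisy = np.empty_like(response)
    for i, (s, eps) in enumerate(zip(per_dim_sensitivity, per_dim_epsilon)):
        sigma = gaussian_noise_sigma(s, eps, delta)
        noisy[i] = response[i] + rng.normal(0.0, sigma)
    return noisy

# Example: a 3-dimensional regression output, spending more budget (hence less
# noise) on the first dimension.
y = [2.3, -0.7, 1.1]
print(perturb_response(y, per_dim_sensitivity=[0.1, 0.1, 0.1],
                       per_dim_epsilon=[0.5, 0.2, 0.3], delta=1e-5))
```

In this simplified picture, giving a dimension a larger share of the budget yields a smaller sigma and hence less distortion on that dimension; automating that per-dimension trade-off for a user-desired privacy level is the role the abstract attributes to HDG's optimal budget allocation.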

Related research:

- Monitoring-based Differential Privacy Mechanism Against Query-Flooding Parameter Duplication Attack (11/01/2020)
- Differential Privacy for Class-based Data: A Practical Gaussian Mechanism (06/08/2023)
- Adversarial Analysis of the Differentially-Private Federated Learning in Cyber-Physical Critical Infrastructures (04/06/2022)
- Privacy for All: Demystify Vulnerability Disparity of Differential Privacy against Membership Inference Attack (01/24/2020)
- An Optimized Privacy-Utility Trade-off Framework for Differentially Private Data Sharing in Blockchain-based Internet of Things (11/30/2022)
- A Shuffling Framework for Local Differential Privacy (06/11/2021)
- A Neural Database for Differentially Private Spatial Range Queries (08/03/2021)
