Quantifying (Hyper) Parameter Leakage in Machine Learning

10/31/2019
by Vasisht Duddu, et al.

Black-box machine learning models leak information about their proprietary parameters and architecture, both through side channels and through their output predictions. An adversary can exploit this leakage to reconstruct a substitute architecture similar to the target model, violating model privacy and intellectual property. However, each such attack infers only a subset of the target model's attributes, and identifying the remaining architecture and parameters (optimally) is a search problem. Extracting the exact target model is not possible owing to uncertainty in the inference attack outputs and the stochastic nature of the training process. In this work, we propose a probabilistic framework, Airavata, to estimate the leakage in such model extraction attacks. Specifically, we use Bayesian networks to capture the uncertainty, under a subjective notion of probability, in estimating the target model's attributes from the outputs of various model extraction attacks. We experimentally validate the framework under the adversary assumptions commonly adopted by different model extraction attacks to reason about attack efficacy. Further, this provides a practical approach to inferring actionable knowledge about extracting black-box models and to identifying the combination of attacks that maximises the knowledge extracted (information leaked) from the target model.
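To make the Bayesian idea concrete, the following toy sketch shows how uncertain evidence from two hypothetical extraction attacks can be combined into a posterior over candidate architecture attributes. It is not the Airavata implementation; the attribute values, attack names, and likelihoods are illustrative assumptions only.

# Minimal sketch (assumed example, not the paper's code): a discrete Bayesian
# update over candidate architecture attributes, combining evidence from two
# hypothetical extraction attacks.

from itertools import product

# Candidate values for two unknown target-model attributes.
depths = [2, 4, 8]              # number of hidden layers
activations = ["relu", "tanh"]  # activation function

# Uniform prior over the joint attribute space (maximum uncertainty).
prior = {(d, a): 1.0 / (len(depths) * len(activations))
         for d, a in product(depths, activations)}

# Likelihoods P(attack output | attribute) for two independent attacks:
# a timing side channel assumed informative about depth, and an
# output-prediction attack assumed informative about the activation.
def timing_likelihood(observed_bucket, depth):
    # Assumption: deeper networks are more likely to fall in the "slow" bucket.
    p_slow = {2: 0.1, 4: 0.5, 8: 0.9}[depth]
    return p_slow if observed_bucket == "slow" else 1.0 - p_slow

def boundary_likelihood(observed_shape, activation):
    # Assumption: piecewise-linear decision boundaries suggest ReLU.
    p_piecewise = 0.85 if activation == "relu" else 0.25
    return p_piecewise if observed_shape == "piecewise" else 1.0 - p_piecewise

# Hypothetical observed attack outputs.
evidence = {"timing": "slow", "boundary": "piecewise"}

# Posterior is proportional to prior times the product of attack likelihoods.
posterior = {}
for (d, a), p in prior.items():
    p *= timing_likelihood(evidence["timing"], d)
    p *= boundary_likelihood(evidence["boundary"], a)
    posterior[(d, a)] = p
z = sum(posterior.values())
posterior = {k: v / z for k, v in posterior.items()}

for (d, a), p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"depth={d}, activation={a}: P={p:.3f}")

Running the sketch shows how combining the two attacks concentrates probability mass on a deep ReLU network, mirroring the paper's point that the best combination of attacks is the one that most reduces the adversary's uncertainty about the target model.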

