Estimating g-Leakage via Machine Learning

05/09/2020
by Marco Romanelli, et al.

This paper considers the problem of estimating the information leakage of a system in the black-box scenario. It is assumed that the system's internals are unknown to the learner, or too complicated to analyze, and that the only available information consists of pairs of input-output data samples, possibly obtained by submitting queries to the system or provided by a third party. Previous research has mainly focused on counting frequencies to estimate the input-output conditional probabilities (the frequentist approach); however, this method is not accurate when the domain of possible outputs is large. To overcome this difficulty, the estimation of the Bayes error of the ideal classifier was recently investigated using Machine Learning (ML) models, and it has been shown to be more accurate thanks to the ability of those models to learn the input-output correspondence. However, the Bayes vulnerability is only suitable for describing one-try attacks. A more general and flexible measure of leakage is the g-vulnerability, which encompasses several types of adversaries with different goals and capabilities. In this paper, we propose a novel approach to black-box estimation of the g-vulnerability using ML. A feature of our approach is that it does not require estimating the conditional probabilities and that it is suitable for a large class of ML algorithms. First, we formally show learnability for all data distributions. Then, we evaluate the performance via various experiments using k-Nearest Neighbors and Neural Networks. Our results outperform the frequentist approach when the domain of observables is large.
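To make the contrast in the abstract concrete, below is a minimal sketch (not the authors' code) of two black-box estimators of the g-vulnerability V_g = sum_y max_w sum_x pi(x) C(y|x) g(w, x): a frequentist baseline that counts empirical frequencies, and an ML estimator in the spirit of the paper's data pre-processing idea, which relabels each sample with a guess drawn in proportion to its gain and then trains a standard classifier. It assumes finite secret and guess sets, a non-negative gain function, and i.i.d. input-output samples; the function names, the resampling step, and the k-NN choice are illustrative assumptions.

```python
# Illustrative sketch only: two black-box estimators of the g-vulnerability,
# assuming finite secrets/guesses, a non-negative gain g(w, x), and i.i.d.
# samples (x_i, y_i). Names and the k-NN choice are assumptions, not the
# paper's implementation.
from collections import Counter, defaultdict
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def g_vuln_frequentist(samples, guesses, g):
    """Frequentist baseline: estimate the joint distribution by counting,
    then take the best guess for each observed output y."""
    joint = defaultdict(Counter)
    for x, y in samples:
        joint[y][x] += 1
    n = len(samples)
    return sum(
        max(sum(c * g(w, x) for x, c in cnt.items()) for w in guesses)
        for cnt in joint.values()
    ) / n

def g_vuln_ml(train, test, guesses, g, k=5, rng=np.random.default_rng(0)):
    """ML estimator: relabel each training pair (x, y) with a guess w drawn
    with probability proportional to g(w, x), train a classifier y -> w,
    and report the empirical gain it achieves on held-out samples."""
    ys, ws = [], []
    for x, y in train:
        gains = np.array([g(w, x) for w in guesses], dtype=float)
        if gains.sum() == 0:
            continue  # this secret yields no gain for any guess
        ws.append(rng.choice(len(guesses), p=gains / gains.sum()))
        ys.append([y])
    clf = KNeighborsClassifier(n_neighbors=k).fit(ys, ws)
    preds = clf.predict([[y] for _, y in test])
    return float(np.mean([g(guesses[w], x) for (x, _), w in zip(test, preds)]))

# Toy usage: with the identity gain, V_g reduces to the Bayes vulnerability.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    secrets = [0, 1]
    def g_id(w, x): return 1.0 if w == x else 0.0
    data = [(x, x + rng.normal(scale=0.8)) for x in rng.integers(0, 2, 20000)]
    train, test = data[:15000], data[15000:]
    print(g_vuln_frequentist(train, secrets, g_id))  # overestimates: each y is unique
    print(g_vuln_ml(train, test, secrets, g_id))     # classifier generalizes across y
```

In the toy run, the output domain is effectively continuous, so the frequentist estimate degenerates (almost every observed y appears once and gets a perfect guess), while the classifier-based estimate should stay close to the true vulnerability, which is the kind of gap the paper's experiments examine.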
