F-BLEAU: Fast Black-box Leakage Estimation

02/04/2019
by Giovanni Cherubin, et al.

We consider the problem of measuring how much a system reveals about its secret inputs. We work under the black-box setting: we assume no prior knowledge of the system's internals, and we run the system for choices of secrets and measure its leakage from the respective outputs. Our goal is to estimate the Bayes risk, from which one can derive some of the most popular leakage measures (e.g., min-entropy, additive, and multiplicative leakage). The state-of-the-art method for estimating these leakage measures is the frequentist paradigm, which approximates the system's internals by looking at the frequencies of its inputs and outputs. Unfortunately, this does not scale for systems with large output spaces, where it would require too many input-output examples. Consequently, it also cannot be applied to systems with continuous outputs (e.g., time side channels, network traffic). In this paper, we exploit an analogy between Machine Learning (ML) and black-box leakage estimation to show that the Bayes risk of a system can be estimated by using a class of ML methods: the universally consistent learning rules; these rules can exploit patterns in the input-output examples to improve the estimates' convergence, while retaining formal optimality guarantees. We focus on a set of them, the nearest neighbor rules; we show that they significantly reduce the number of black-box queries required for a precise estimation whenever nearby outputs tend to be produced by the same secret; furthermore, some of them can tackle systems with continuous outputs. We illustrate the applicability of these techniques on both synthetic and real-world data, and we compare them with the state-of-the-art tool, leakiEst, which is based on the frequentist approach.
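To make the approach concrete, here is a minimal sketch (not the authors' F-BLEAU tool, and with illustrative function and variable names) of the estimation idea described above: train a k-nearest-neighbor rule on secret/output examples collected from the black box, use its held-out error as an estimate of the Bayes risk, and derive the multiplicative (min-entropy) leakage from that estimate under an assumed prior over secrets. The example relies on scikit-learn's k-NN classifier and NumPy.

# Minimal sketch of black-box Bayes risk estimation via a k-NN rule.
# Not the F-BLEAU implementation; names and defaults are illustrative.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def estimate_bayes_risk(secrets, outputs, n_neighbors=5, test_size=0.25, seed=0):
    """Estimate the Bayes risk as the held-out error of a k-NN rule.

    secrets: array of shape (n,) with the secret input used for each query.
    outputs: array of shape (n, d) with the observed (possibly continuous) outputs.
    """
    rng = np.random.default_rng(seed)
    n = len(secrets)
    idx = rng.permutation(n)
    n_test = int(n * test_size)
    test, train = idx[:n_test], idx[n_test:]

    knn = KNeighborsClassifier(n_neighbors=n_neighbors)
    knn.fit(outputs[train], secrets[train])
    # Held-out error; for a universally consistent rule this converges
    # to the Bayes risk as the number of examples grows.
    return 1.0 - knn.score(outputs[test], secrets[test])

def multiplicative_leakage(bayes_risk, secret_prior):
    """Multiplicative (min-entropy) Bayes leakage derived from the Bayes risk,
    assuming the given prior over secrets."""
    prior_vulnerability = max(secret_prior)       # best guess with no observation
    posterior_vulnerability = 1.0 - bayes_risk    # best guess after observing the output
    return posterior_vulnerability / prior_vulnerability

In this formulation, the min-entropy leakage is the logarithm of the returned ratio; additive leakage can be obtained analogously as the difference of the two vulnerabilities.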

