Learning, compression, and leakage: Minimizing classification error via meta-universal compression principles

10/14/2020
by Fernando E. Rosas, et al.

Learning and compression are driven by the common aim of identifying and exploiting statistical regularities in data, which opens the door to fertile collaboration between these areas. A promising group of compression techniques for learning scenarios is normalised maximum likelihood (NML) coding, which provides strong guarantees for the compression of small datasets, in contrast with more popular estimators whose guarantees hold only in the asymptotic limit. Here we put forward a novel NML-based decision strategy for supervised classification problems, and show that it attains heuristic PAC learning when applied to a wide variety of models. Furthermore, we show that the misclassification rate of our method is upper bounded by the maximal leakage, a recently proposed metric that quantifies the potential of data leakage in privacy-sensitive scenarios.
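The abstract only sketches the idea, so below is a minimal, illustrative Python sketch of an NML-style classification rule, not the construction from the paper: each class is modelled as a product of independent Bernoulli sources, and a test point is assigned to the class whose training data compresses it best, i.e. whose NML (Shtarkov) codelength grows the least when the point is appended. The function names and the choice of model class are assumptions made for this example.

```python
import numpy as np
from math import comb, log2


def bernoulli_nml_codelength(k: int, n: int) -> float:
    """NML (Shtarkov) codelength, in bits, of a binary sequence of length n
    that contains k ones, under the Bernoulli model class."""
    def ml_prob(j: int, m: int) -> float:
        # Maximum-likelihood probability of one particular sequence with j ones out of m.
        if m == 0:
            return 1.0
        p = j / m
        return p ** j * (1.0 - p) ** (m - j)

    # Shtarkov normalizer: total maximum-likelihood probability over all sequences of length n.
    normalizer = sum(comb(n, j) * ml_prob(j, n) for j in range(n + 1))
    return -log2(ml_prob(k, n)) + log2(normalizer)


def nml_classify(train_x: np.ndarray, train_y: np.ndarray, x_new: np.ndarray):
    """Assign x_new (a binary feature vector) to the class whose training data
    compresses it best: the class with the smallest increase in NML codelength
    when x_new is appended (features treated as independent Bernoulli sources)."""
    best_label, best_cost = None, float("inf")
    for label in np.unique(train_y):
        class_x = train_x[train_y == label]
        n = class_x.shape[0]
        cost = 0.0
        for d in range(train_x.shape[1]):
            k = int(class_x[:, d].sum())
            base = bernoulli_nml_codelength(k, n)
            augmented = bernoulli_nml_codelength(k + int(x_new[d]), n + 1)
            cost += augmented - base  # extra bits needed to describe x_new under this class
        if cost < best_cost:
            best_label, best_cost = label, cost
    return best_label
```

With train_x an (N, D) array of 0/1 features and train_y an (N,) array of labels, nml_classify(train_x, train_y, x_new) returns the label under which x_new costs the fewest additional bits to encode.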

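For context, the maximal leakage referred to in the abstract is usually defined as follows in the information-theoretic privacy literature (the abstract does not reproduce the definition, so the notation here is supplied for illustration): it measures the worst-case multiplicative gain in an adversary's probability of guessing any randomized function of X after observing Y.

```latex
% Maximal leakage from an input X to an output Y, with the maximum taken
% over all x in the support of X:
\[
  \mathcal{L}(X \to Y) \;=\; \log \sum_{y \in \mathcal{Y}} \max_{x :\, P_X(x) > 0} P_{Y \mid X}(y \mid x).
\]
```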

Related research

05/10/2022 · Pointwise Maximal Leakage
We introduce a privacy measure called pointwise maximal leakage, defined...

02/03/2021 · Information Leakage in Zero-Error Source Coding: A Graph-Theoretic Perspective
We study the information leakage to a guessing adversary in zero-error s...

10/12/2020 · Quantifying Membership Privacy via Information Leakage
Machine learning models are known to memorize the unique properties of i...

05/10/2023 · Maximal Leakage of Masked Implementations Using Mrs. Gerber's Lemma for Min-Entropy
A common countermeasure against side-channel attacks on secret key crypt...

09/12/2019 · Debreach: Mitigating Compression Side Channels via Static Analysis and Transformation
Compression is an emerging source of exploitable side-channel leakage th...

04/30/2019 · Universal Mutual Information Privacy Guarantees for Smart Meters
Smart meters enable improvements in electricity distribution system effi...

08/20/2020 · NoPeek: Information leakage reduction to share activations in distributed deep learning
For distributed machine learning with sensitive data, we demonstrate how...
