A Note On Interpreting Canary Exposure

05/31/2023
by Matthew Jagielski, et al.

Canary exposure, introduced in Carlini et al., is frequently used to empirically evaluate, or audit, the privacy of machine learning model training. The goal of this note is to provide some intuition on how to interpret canary exposure, including by relating it to membership inference attacks and differential privacy.
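For context, the exposure metric from Carlini et al. (The Secret Sharer) is defined as log2 |R| - log2 rank, where R is the space of candidate canaries and rank is the inserted canary's 1-indexed position when all candidates are sorted by the model's loss. A minimal sketch of that computation follows; the function name and the toy loss values are illustrative assumptions, not taken from the note:

```python
import math

def canary_exposure(canary_loss: float, candidate_losses: list[float]) -> float:
    """Exposure per Carlini et al.: log2 |R| - log2 rank(canary), where
    rank is the canary's 1-indexed position when all |R| candidates
    (the canary plus the reference candidates) are sorted by loss,
    lowest loss first."""
    num_candidates = len(candidate_losses) + 1  # |R| includes the canary itself
    # Rank 1 means the model assigns the canary the lowest loss of all candidates.
    rank = 1 + sum(1 for loss in candidate_losses if loss < canary_loss)
    return math.log2(num_candidates) - math.log2(rank)

# Toy numbers (illustrative only): canary loss 0.9 against 7 reference candidates.
print(canary_exposure(0.9, [2.1, 3.5, 2.8, 4.0, 3.1, 2.5, 3.9]))  # 3.0 = log2(8), the maximum
```

Exposure is maximized at log2 |R| when the canary has rank 1, i.e., the model prefers the inserted canary over every alternative candidate.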


Related research

- Learning across Data Owners with Joint Differential Privacy (05/25/2023): In this paper, we study the setting in which data owners train machine l...
- Provable Membership Inference Privacy (11/12/2022): In applications involving sensitive data, such as finance and healthcare...
- Towards Measuring Membership Privacy (12/25/2017): Machine learning models are increasingly made available to the masses th...
- Quantifying identifiability to choose and audit ε in differentially private deep learning (03/04/2021): Differential privacy allows bounding the influence that training data re...
- An Introduction to Johnson-Lindenstrauss Transforms (02/28/2021): Johnson–Lindenstrauss Transforms are powerful tools for reducing the dim...
- No Free Lunch in "Privacy for Free: How does Dataset Condensation Help Privacy" (09/29/2022): New methods designed to preserve data privacy require careful scrutiny. ...
- PAWL-Forced Simulated Tempering (05/22/2013): In this short note, we show how the parallel adaptive Wang-Landau (PAWL)...
