NeuraCrypt is not private

08/16/2021
by Nicholas Carlini et al.

NeuraCrypt (Yala et al., arXiv 2021) is an algorithm that converts a sensitive dataset to an encoded dataset so that (1) it is still possible to train machine learning models on the encoded data, but (2) an adversary who has access only to the encoded dataset cannot learn much about the original sensitive dataset. We break NeuraCrypt's privacy claims by perfectly solving the authors' public challenge, and by showing that NeuraCrypt does not satisfy the formal privacy definitions posed in the original paper. Our attack consists of a series of boosting steps that, coupled with various design flaws, turns a 1% advantage into a 100% complete break of the scheme.
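For readers unfamiliar with the scheme, here is a minimal NumPy sketch of a NeuraCrypt-style encoder as described above: each image patch is passed through a fixed, privately keyed random network, a keyed positional code is added, and the patch order is shuffled per image. The patch size, embedding dimension, and two-layer network are illustrative assumptions, not the authors' reference implementation.

```python
# Sketch of a NeuraCrypt-style encoder (illustrative, not the reference code).
import numpy as np

rng = np.random.default_rng(0)   # stands in for the data owner's private key

PATCH, DIM, IMG = 16, 128, 128   # assumed patch size, code dim, image size
N = (IMG // PATCH) ** 2          # patches per image

# Fixed random weights play the role of the private encoder network.
W1 = rng.standard_normal((PATCH * PATCH, DIM)) / np.sqrt(PATCH * PATCH)
W2 = rng.standard_normal((DIM, DIM)) / np.sqrt(DIM)
POS = rng.standard_normal((N, DIM))          # keyed positional codes (fixed)

def encode(image: np.ndarray) -> np.ndarray:
    """Encode one (IMG, IMG) grayscale image into a shuffled set of codes."""
    patches = np.stack([
        image[i:i + PATCH, j:j + PATCH].reshape(-1)
        for i in range(0, IMG, PATCH)
        for j in range(0, IMG, PATCH)
    ])                                        # (N, PATCH*PATCH)
    z = np.maximum(patches @ W1, 0.0) @ W2 + POS   # random net + position
    return rng.permutation(z)                 # fresh row shuffle per image

enc = encode(rng.random((IMG, IMG)))          # toy usage on a random image
```

Note that the same keyed map is applied to every image, which is the kind of structural weakness the attack amplifies: small statistical regularities of the inputs survive encoding, and the boosting steps described in the paper turn that marginal signal into a complete break.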


