Vicious Classifiers: Data Reconstruction Attack at Inference Time

12/08/2022
by Mohammad Malekzadeh, et al.

Privacy-preserving inference via edge or encrypted computing paradigms encourages users of machine learning services to confidentially run a model on their personal data for a target task and to share only the model's outputs with the service provider, e.g., to activate further services. Nevertheless, despite all confidentiality efforts, we show that a "vicious" service provider can approximately reconstruct its users' personal data by observing only the model's outputs, while keeping the target utility of the model very close to that of an "honest" service provider. We show the possibility of jointly training a target model (to be run at the users' side) and an attack model for data reconstruction (to be secretly used at the server's side). We introduce the "reconstruction risk": a new measure for assessing the quality of reconstructed data that better captures the privacy risk of such attacks. Experimental results on 6 benchmark datasets show that for low-complexity data types, or for tasks with a larger number of classes, a user's personal data can be approximately reconstructed from the outputs of a single target inference task. We propose a potential defense mechanism that helps to distinguish vicious from honest classifiers at inference time. We conclude the paper by discussing current challenges and open directions for future studies. We open-source our code and results as a benchmark for future work.
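To make the joint-training idea concrete, the sketch below illustrates how a target classifier and a server-side reconstruction model could be optimized together on a shared objective that trades off classification utility against reconstruction quality. This is a minimal illustration only, assuming PyTorch; the architectures, loss weighting (alpha), and dummy data are illustrative assumptions, not the paper's actual models, datasets, or training procedure.

```python
# Minimal sketch of joint training (assumption: PyTorch; all architectures,
# hyperparameters, and data below are illustrative, not from the paper).
import torch
import torch.nn as nn

class TargetClassifier(nn.Module):
    """Target model run at the user's side: input -> class logits."""
    def __init__(self, in_dim=784, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, n_classes))

    def forward(self, x):
        return self.net(x)

class ReconstructionAttack(nn.Module):
    """Attack model kept at the server's side: shared outputs -> reconstructed input."""
    def __init__(self, n_classes=10, out_dim=784):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_classes, 256), nn.ReLU(),
                                 nn.Linear(256, out_dim))

    def forward(self, y):
        return self.net(y)

target = TargetClassifier()
attacker = ReconstructionAttack()
opt = torch.optim.Adam(list(target.parameters()) + list(attacker.parameters()), lr=1e-3)
ce, mse = nn.CrossEntropyLoss(), nn.MSELoss()
alpha = 0.5  # illustrative trade-off between target utility and reconstruction quality

# Dummy batch for illustration: x = flattened inputs, y = class labels.
x = torch.rand(64, 784)
y = torch.randint(0, 10, (64,))

for step in range(100):
    logits = target(x)                               # what the user would share
    x_hat = attacker(torch.softmax(logits, dim=1))   # server-side reconstruction from outputs
    loss = ce(logits, y) + alpha * mse(x_hat, x)     # utility loss + reconstruction loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In this framing, the classifier still achieves good accuracy on the target task, but its output distribution is shaped so that the attack model can invert it back toward the input, which is the behavior the abstract attributes to a "vicious" provider.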

