Embedding is not Cipher: Understanding the risk of embedding leakages

01/28/2019
by Zhe Zhou, et al.

Machine Learning (ML) has already been integrated into all kinds of systems, helping developers solve problems with accuracy that can even exceed that of human beings. However, when integrating ML models into a system, developers may take too little care with the models' outputs, mainly because of their unfamiliarity with ML and AI, resulting in severe consequences such as harming data owners' privacy. In this work, we focus on understanding the risks of abusing the embeddings produced by ML models, an important and popular way of using ML. To show the consequences, we reveal several channels through which embeddings are accidentally leaked. As our study shows, a face verification system deployed by a government organization that leaks only the distance to authentic users allows an attacker to exactly recover the embedding of the verifier's pre-installed photo. Further, as we discovered, with the leaked embedding, attackers can easily recover the input photo with negligible quality loss, indicating devastating consequences for users' privacy. This is achieved with our devised GAN-like model, which achieved a 93.65% success rate against a popular face embedding model under a black-box assumption.
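
The exact-recovery claim is, at bottom, a multilateration problem: if the verifier leaks the Euclidean distance between its enrolled embedding and the embedding of each probe photo the attacker submits, enough probe queries pin the enrolled embedding down exactly. The sketch below is a minimal illustration of that geometry, not the paper's implementation; the oracle, the probe construction, and the 128-dimensional setup are all assumptions.

```python
import numpy as np

# Hypothetical setup: the verifier stores an enrolled embedding `target`
# and leaks the Euclidean distance between it and the embedding of any
# probe the attacker submits. All names here are illustrative.

rng = np.random.default_rng(0)
DIM = 128                       # assumed face-embedding dimensionality
target = rng.normal(size=DIM)   # the secret enrolled embedding

def distance_oracle(probe_embedding):
    """What a leaky verification API effectively exposes."""
    return np.linalg.norm(target - probe_embedding)

# Attacker: query with DIM + 1 probes whose embeddings are known
# (e.g. computed locally by running the same public face model).
probes = rng.normal(size=(DIM + 1, DIM))
dists = np.array([distance_oracle(p) for p in probes])

# Linearize: ||e - p_i||^2 - ||e - p_0||^2 reduces to a linear equation in e:
#   2 (p_i - p_0) . e = ||p_i||^2 - ||p_0||^2 - (d_i^2 - d_0^2)
A = 2.0 * (probes[1:] - probes[0])
b = (np.sum(probes[1:] ** 2, axis=1) - np.sum(probes[0] ** 2)
     - (dists[1:] ** 2 - dists[0] ** 2))
recovered = np.linalg.solve(A, b)

print(np.allclose(recovered, target))  # True: exact recovery
```

Squaring and subtracting pairs of distance equations cancels the unknown ||e||^2 term, which is why the system becomes linear and DIM + 1 queries in general position suffice.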
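For the second step, recovering the photo from the leaked embedding, the abstract names a "GAN-like" model but gives no details. The following is a hedged sketch of one standard way such an inverter can be trained: a generator maps embeddings back to images and is optimized with an adversarial realism loss plus an embedding-consistency loss against the frozen face encoder. Every architecture, dimension, and hyperparameter here is a placeholder, not the authors' design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

EMB, PIX = 128, 3 * 64 * 64   # assumed embedding size and image size

# Stand-in for the (frozen) public face encoder the attacker can query.
encoder = nn.Sequential(nn.Linear(PIX, EMB))
for p in encoder.parameters():
    p.requires_grad_(False)

# Generator: embedding -> image; discriminator: image -> realism score.
G = nn.Sequential(nn.Linear(EMB, 512), nn.ReLU(), nn.Linear(512, PIX), nn.Tanh())
D = nn.Sequential(nn.Linear(PIX, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

real = torch.rand(32, PIX) * 2 - 1   # stand-in for the attacker's face dataset
emb = encoder(real)                  # embeddings of those training faces

for step in range(200):
    # Discriminator: separate real photos from generated reconstructions.
    fake = G(emb).detach()
    loss_d = (F.binary_cross_entropy_with_logits(D(real), torch.ones(32, 1))
              + F.binary_cross_entropy_with_logits(D(fake), torch.zeros(32, 1)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool D while staying consistent with the source embedding.
    fake = G(emb)
    loss_g = (F.binary_cross_entropy_with_logits(D(fake), torch.ones(32, 1))
              + F.mse_loss(encoder(fake), emb))   # embedding-consistency term
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

Once trained on the attacker's own face dataset, the generator needs only a leaked embedding as input: G(leaked_embedding) yields the reconstructed photo.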
