Embedding is not Cipher: Understanding the risk of embedding leakages

01/28/2019
by Zhe Zhou, et al.

Machine Learning (ML) has already been integrated into all kinds of systems, helping developers solve problems, sometimes with higher accuracy than humans. However, when integrating ML models into a system, developers may not take sufficient care with the models' outputs, mainly because of their unfamiliarity with ML and AI, resulting in severe consequences such as violations of data owners' privacy. In this work, we focus on understanding the risks of abusing the embeddings produced by ML models, an important and popular way of using ML. To demonstrate the consequences, we reveal several channels through which embeddings are accidentally leaked. As our study shows, a face verification system deployed by a government organization that leaks only the distance to authentic users allows an attacker to exactly recover the embedding of the verifier's pre-installed photo. Further, we discovered that, with the leaked embedding, attackers can easily recover the input photo with negligible quality loss, indicating devastating consequences for users' privacy. This is achieved with a GAN-like model we devised, which attains 93.65% success against a popular face embedding model under the black-box assumption.
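
The claim that a distance-only oracle suffices for exact embedding recovery follows from simple geometry: n + 1 distance queries pin down an n-dimensional vector. Below is a minimal sketch of that idea, assuming the verifier leaks plain Euclidean distances; the function names and the 128-dimensional embedding size are illustrative, not taken from the paper.

```python
# Hedged sketch (not the paper's exact procedure): recovering an unknown
# embedding v from a verifier that leaks only d(q) = ||q - v||.
import numpy as np

def make_oracle(v):
    """Simulates a verifier that leaks the distance between a probe
    embedding q and the enrolled embedding v."""
    return lambda q: np.linalg.norm(q - v)

def recover_embedding(oracle, dim):
    """Exactly recovers v with dim + 1 distance queries.

    From d(0)^2 = ||v||^2 and d(e_i)^2 = ||v||^2 - 2*v_i + 1 we get
    v_i = (||v||^2 + 1 - d(e_i)^2) / 2.
    """
    d0_sq = oracle(np.zeros(dim)) ** 2           # ||v||^2
    v = np.empty(dim)
    for i in range(dim):
        e_i = np.zeros(dim)
        e_i[i] = 1.0                             # probe along basis vector i
        v[i] = (d0_sq + 1.0 - oracle(e_i) ** 2) / 2.0
    return v

# Demo: the attacker never sees v directly, only distances.
true_v = np.random.randn(128)                    # enrolled face embedding
oracle = make_oracle(true_v)
recovered = recover_embedding(oracle, 128)
print(np.allclose(recovered, true_v))            # True
```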
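
Once the embedding is in hand, inversion back to a photo can be learned. The sketch below trains a decoder on (embedding, image) pairs generated from data the attacker controls, assuming only black-box query access to the embedder. For brevity it uses a plain reconstruction (MSE) loss rather than the paper's GAN-like training, and the toy embedder and random tensors stand in for a real face model and face dataset.

```python
# Hypothetical embedding-inversion sketch; the paper's actual GAN-like
# architecture and training details are not reproduced here.
import torch
import torch.nn as nn

# Stand-in for the victim's black-box face embedder (query access only).
embedder = nn.Sequential(
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * 32 * 32, 128),
).eval()

class Inverter(nn.Module):
    """Decoder mapping a 128-d embedding back to a 64x64 RGB image."""
    def __init__(self, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(emb_dim, 512 * 4 * 4),
            nn.Unflatten(1, (512, 4, 4)),
            nn.ConvTranspose2d(512, 256, 4, 2, 1), nn.ReLU(),  # 4x4 -> 8x8
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(),  # 8x8 -> 16x16
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),   # 16x16 -> 32x32
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),     # 32x32 -> 64x64
        )

    def forward(self, z):
        return self.net(z)

inverter = Inverter()
opt = torch.optim.Adam(inverter.parameters(), lr=2e-4)
for step in range(200):
    x = torch.rand(16, 3, 64, 64) * 2 - 1        # stand-in for a public face dataset
    with torch.no_grad():
        z = embedder(x)                          # black-box queries
    loss = nn.functional.mse_loss(inverter(z), x)
    opt.zero_grad()
    loss.backward()
    opt.step()

# After training, a leaked embedding alone suffices to reconstruct the input image.
```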
