Information Leakage in Embedding Models

03/31/2020
by Congzheng Song, et al.

Embeddings are functions that map raw input data to low-dimensional vector representations while preserving important semantic information about the inputs. Pre-training embeddings on a large amount of unlabeled data and fine-tuning them for downstream tasks is now a de facto standard for achieving state-of-the-art learning in many domains. We demonstrate that embeddings, in addition to encoding generic semantics, often also leak sensitive information about the input data. We develop three classes of attacks to systematically study the information that embeddings might leak. First, embedding vectors can be inverted to partially recover the input data; for example, our attacks on popular sentence embeddings recover 50%–70% of the input words (F1 scores of 0.5–0.7). Second, embeddings may reveal sensitive attributes inherent in the inputs and independent of the underlying semantic task. Attributes such as the authorship of a text can be extracted by training an inference model on just a handful of labeled embedding vectors. Third, embedding models leak a moderate amount of membership information for infrequent training inputs. We extensively evaluate our attacks on various state-of-the-art embedding models in the text domain. We also propose and evaluate defenses that prevent the leakage to some extent, at a minor cost in utility.
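
To make the second attack class concrete, below is a minimal sketch (ours, not the authors' code) of attribute inference from embeddings: an ordinary classifier is trained on a handful of embedding vectors labeled with a sensitive attribute, here authorship. Everything in it is an illustrative assumption; the encoder is simulated with synthetic 768-dimensional vectors whose per-author shifts stand in for whatever authorship signal a real sentence encoder would carry.

# Sketch of an attribute-inference attack on embedding vectors.
# Assumption (not from the paper): synthetic 768-dim "embeddings" with
# author-dependent shifts stand in for the output of a real encoder.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_authors, per_author, dim = 3, 30, 768

# Each author's texts cluster around an author-specific shift.
shifts = rng.normal(scale=0.5, size=(n_authors, dim))
X = np.vstack([rng.normal(size=(per_author, dim)) + shifts[a]
               for a in range(n_authors)])
y = np.repeat(np.arange(n_authors), per_author)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# The attack model: a plain classifier over raw embedding vectors.
attack = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"authorship inference accuracy: {attack.score(X_te, y_te):.2f}")

In the actual attack setting, X would instead be embeddings obtained by querying the target model on texts whose authors are known to the adversary.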


Related research

10/05/2022 · Privacy-Preserving Text Classification on BERT Embeddings with Homomorphic Encryption
Embeddings, which compress information in raw text into semantics-preser...

06/21/2021 · Membership Inference on Word Embedding and Beyond
In the text processing context, most ML models are built on word embeddi...

05/04/2023 · Sentence Embedding Leaks More Information than You Expect: Generative Embedding Inversion Attack to Recover the Whole Sentence
Sentence-level representations are beneficial for various natural langua...

07/10/2023 · Substance or Style: What Does Your Image Embedding Know?
Probes are small networks that predict properties of underlying data fro...

10/20/2022 · Apple of Sodom: Hidden Backdoors in Superior Sentence Embeddings via Contrastive Learning
This paper finds that contrastive learning can produce superior sentence...

06/19/2018 · Non-deterministic Behavior of Ranking-based Metrics when Evaluating Embeddings
Embedding data into vector spaces is a very popular strategy of pattern ...

12/10/2019 · Embedding Comparator: Visualizing Differences in Global Structure and Local Neighborhoods via Small Multiples
Embeddings – mappings from high-dimensional discrete input to lower-dime...
