Humpty Dumpty: Controlling Word Meanings via Corpus Poisoning

01/14/2020
by Roei Schuster et al.

Word embeddings, i.e., low-dimensional vector representations of words learned by algorithms such as GloVe and SGNS (skip-gram with negative sampling), encode word "meaning" in the sense that distances between words' vectors correspond to their semantic proximity. This enables transfer learning of semantics for a variety of natural language processing tasks. Word embeddings are typically trained on large public corpora such as Wikipedia or Twitter. We demonstrate that an attacker who can modify the corpus on which the embedding is trained can control the "meaning" of new and existing words by changing their locations in the embedding space. We develop an explicit expression over corpus features that serves as a proxy for the distance between words and establish a causative relationship between its values and embedding distances. We then show how to use this relationship for two adversarial objectives: (1) make a word a top-ranked neighbor of another word, and (2) move a word from one semantic cluster to another. An attack on the embedding can affect diverse downstream tasks, demonstrating for the first time the power of data poisoning in transfer learning scenarios. We use this attack to manipulate query expansion in information retrieval systems such as resume search, make certain names more or less visible to named entity recognition models, and cause new words to be translated to a particular target word regardless of the language. Finally, we show how the attacker can generate linguistically plausible corpus modifications, thus fooling defenses that attempt to filter implausible sentences from the corpus using a language model.
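The mechanism the abstract describes rests on the fact that embedding algorithms derive word vectors from corpus co-occurrence statistics, so injecting sentences that change those statistics moves words in the embedding space. The sketch below is a minimal toy illustration of that idea, not the paper's attack: it trains tiny PPMI+SVD embeddings (a stand-in for GloVe/SGNS, which factorize closely related co-occurrence statistics), then appends attacker sentences that co-locate a source word with a target cluster and retrains. The corpus, the injected sentences, and all word choices are hypothetical.

import numpy as np

def train_embeddings(sentences, dim=4, window=2):
    """Build toy word vectors by factorizing a PPMI co-occurrence matrix."""
    vocab = sorted({w for s in sentences for w in s})
    idx = {w: i for i, w in enumerate(vocab)}
    counts = np.zeros((len(vocab), len(vocab)))
    for s in sentences:
        for i, w in enumerate(s):
            for j in range(max(0, i - window), min(len(s), i + window + 1)):
                if j != i:
                    counts[idx[w], idx[s[j]]] += 1.0
    total = counts.sum()
    p_w = counts.sum(axis=1, keepdims=True)   # marginal word counts
    p_c = counts.sum(axis=0, keepdims=True)   # marginal context counts
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(counts * total / (p_w * p_c))
    pmi[~np.isfinite(pmi)] = 0.0              # zero out log(0) entries
    ppmi = np.maximum(pmi, 0.0)               # positive PMI only
    u, sv, _ = np.linalg.svd(ppmi)            # truncated SVD -> dense vectors
    return u[:, :dim] * np.sqrt(sv[:dim]), idx

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

# Hypothetical clean corpus: "kiwi" co-occurs only with fruit words.
clean = [
    "kiwi is a sweet fruit".split(),
    "apple is a tasty fruit".split(),
    "banana is a yellow fruit".split(),
    "python is a programming language".split(),
    "java is a programming language".split(),
] * 5

vecs, idx = train_embeddings(clean)
print("clean:    cos(kiwi, language) =",
      round(cosine(vecs[idx["kiwi"]], vecs[idx["language"]]), 3))

# The attacker appends crafted sentences that co-locate "kiwi" with the
# target cluster; after retraining, "kiwi" moves toward that cluster.
poisoned = clean + ["kiwi is a programming language".split()] * 10
vecs, idx = train_embeddings(poisoned)
print("poisoned: cos(kiwi, language) =",
      round(cosine(vecs[idx["kiwi"]], vecs[idx["language"]]), 3))

On this toy corpus the second cosine similarity should come out noticeably higher than the first, which is the shape of the paper's objective (2): moving a word from one semantic cluster to another by corpus modification alone. The paper's actual attack works on large corpora, targets GloVe and SGNS directly, and optimizes its corpus-feature proxy rather than naively duplicating sentences.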

Related research

06/23/2021 · Clinical Named Entity Recognition using Contextualized Token Representations
The clinical named entity recognition (CNER) task seeks to locate and cl...

05/25/2016 · Query Expansion with Locally-Trained Word Embeddings
Continuous space word embeddings have received a great deal of attention...

02/06/2019 · Word Embeddings for Entity-annotated Texts
Many information retrieval and natural language processing tasks due to ...

10/04/2018 · Building a language evolution tree based on word vector combination model
In this paper, we try to explore the evolution of language through case ...

08/11/2017 · Semantic Word Clouds with Background Corpus Normalization and t-distributed Stochastic Neighbor Embedding
Many word clouds provide no semantics to the word placement, but use a r...

05/27/2019 · An Empirical Study on Post-processing Methods for Word Embeddings
Word embeddings learnt from large corpora have been adopted in various a...

10/27/2020 · Improving Word Recognition using Multiple Hypotheses and Deep Embeddings
We propose a novel scheme for improving the word recognition accuracy us...
