First is Better Than Last for Training Data Influence

02/24/2022
by Chih-Kuan Yeh, et al.

The ability to identify influential training examples enables us to debug training data and explain model behavior. Existing techniques are based on the flow of influence through the model parameters. For large models in NLP applications, it is often computationally infeasible to study this flow through all model parameters, so techniques usually restrict attention to the last layer of weights. Our first observation is that, for classification problems, the last layer is reductive and does not encode sufficient input-level information: deleting influential examples identified by this measure typically does not change the model's behavior much. We propose a technique called TracIn-WE that modifies the TracIn method to operate on the word embedding layer instead of the last layer. This raises the opposite concern, that the word embedding layer may not encode sufficient high-level information; however, we find that gradients (unlike embeddings) do not suffer from this, possibly because they chain through higher layers. We show that TracIn-WE outperforms other data influence methods applied on the last layer by a factor of 4-10 on the case-deletion evaluation across three language classification tasks. In addition, TracIn-WE can produce scores not just at the level of training examples, but at the level of individual words within training examples, a further aid in debugging.
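To make the idea concrete, below is a minimal sketch of TracIn restricted to the word-embedding layer: TracIn's influence score is a sum over training checkpoints of the dot product between the gradients of the training and test examples' losses, and TracIn-WE takes those gradients with respect to the word-embedding table rather than the last layer. This is an illustrative reconstruction, not the authors' code; the model structure (`model.embedding` as an `nn.Embedding`), the checkpoint format, the `(tokens, label)` example format, and the per-checkpoint learning rate are assumptions of the sketch.

# Minimal sketch of TracIn-WE, assuming PyTorch, a model whose word-embedding
# table is exposed as `model.embedding` (an nn.Embedding), and checkpoints
# saved as state dicts. Names and formats are illustrative placeholders.
import torch

def embedding_grad(model, loss_fn, tokens, label):
    """Gradient of one example's loss w.r.t. the word-embedding table.

    Only the rows for words appearing in `tokens` are nonzero."""
    model.zero_grad()
    loss = loss_fn(model(tokens.unsqueeze(0)), label.unsqueeze(0))
    loss.backward()
    return model.embedding.weight.grad.detach().clone()

def tracin_we(checkpoint_paths, make_model, loss_fn, train_ex, test_ex, lr=1.0):
    """TracIn influence of `train_ex` on `test_ex`, restricted to the
    word-embedding layer: a sum over checkpoints of gradient dot products."""
    score = 0.0
    for path in checkpoint_paths:
        model = make_model()
        model.load_state_dict(torch.load(path))
        g_train = embedding_grad(model, loss_fn, *train_ex)
        g_test = embedding_grad(model, loss_fn, *test_ex)
        # Full dot product over the table; only rows for words shared by the
        # two examples contribute, since every other row is zero in at least
        # one of the two gradients. Summing per row instead, via
        # (g_train * g_test).sum(dim=1), yields the word-level scores
        # mentioned in the abstract.
        score += lr * (g_train * g_test).sum().item()
    return score

Because an embedding gradient is nonzero only in the rows of words present in the example, the checkpoint-summed dot product is driven entirely by words the training and test examples share, which is what makes the word-level decomposition of the influence score fall out for free.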

