Assessing Word Importance Using Models Trained for Semantic Tasks

05/31/2023
by   Dávid Javorský, et al.

Many NLP tasks require automatically identifying the most significant words in a text. In this work, we derive word significance from models trained to solve semantic tasks: Natural Language Inference and Paraphrase Identification. Using an attribution method aimed at explaining the predictions of these models, we derive importance scores for each input token. We evaluate their relevance using a so-called cross-task evaluation: analyzing the performance of one model on an input masked according to the other model's importance scores, we show that our method is robust with respect to the choice of the initial task. Additionally, we investigate the scores from the syntactic point of view and observe interesting patterns, e.g. words closer to the root of a syntactic tree receive higher importance scores. Altogether, these observations suggest that our method can be used to identify important words in sentences without any explicit word importance labeling in training.
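The cross-task evaluation described above can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the importance scores would in practice come from an attribution method applied to an NLI or Paraphrase Identification model, while here they are hard-coded, and the masking helper (`mask_least_important`) is a hypothetical name.

```python
# Sketch of the masking step in a cross-task evaluation: token
# importance scores from one model decide which tokens are masked
# before the sentence is passed to the other model.

def mask_least_important(tokens, scores, keep_ratio=0.5, mask_token="[MASK]"):
    """Keep the highest-scoring tokens, replace the rest with mask_token."""
    n_keep = max(1, int(len(tokens) * keep_ratio))
    keep = set(sorted(range(len(tokens)), key=lambda i: -scores[i])[:n_keep])
    return [t if i in keep else mask_token for i, t in enumerate(tokens)]

tokens = ["the", "cat", "sat", "on", "the", "mat"]
# Toy importance scores standing in for per-token attributions.
scores = [0.1, 0.9, 0.8, 0.2, 0.1, 0.7]
masked = mask_least_important(tokens, scores, keep_ratio=0.5)
print(masked)  # ['[MASK]', 'cat', 'sat', '[MASK]', '[MASK]', 'mat']
```

The masked sequence would then be fed to the second model; if its performance degrades little, the first model's scores captured the genuinely important words.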


Related research

06/20/2022 · SynWMD: Syntax-aware Word Mover's Distance for Sentence Similarity Evaluation
Word Mover's Distance (WMD) computes the distance between words and mode...

02/11/2019 · LS-Tree: Model Interpretation When the Data Are Linguistic
We study the problem of interpreting trained classification models in th...

09/04/2017 · Learning Neural Word Salience Scores
Measuring the salience of a word is an essential step in numerous NLP ta...

01/31/2023 · Friend-training: Learning from Models of Different but Related Tasks
Current self-training methods such as standard self-training, co-trainin...

06/09/2019 · Is Attention Interpretable?
Attention mechanisms have recently boosted performance on a range of NLP...

11/06/2018 · Learning to Embed Sentences Using Attentive Recursive Trees
Sentence embedding is an effective feature representation for most deep ...

12/03/2021 · Evaluating NLP Systems On a Novel Cloze Task: Judging the Plausibility of Possible Fillers in Instructional Texts
Cloze task is a widely used task to evaluate an NLP system's language un...
