Monolingual word alignment is crucial to model semantic interactions bet...
The prediction head is a crucial component of Transformer language models. D...
Understanding the inner workings of neural network models is a crucial s...
Distributed representations of words encode lexical semantic information...
Measuring the semantic similarity between two sentences is still an impo...
Word embedding is a fundamental technology in natural language processin...
Interpretable rationales for model predictions are crucial in practical ...
The Transformer architecture has become ubiquitous in the natural language p...
It is well known that typical word embedding methods such as Word2Vec an...
The problem of estimating the probability distribution of labels has bee...
Understanding the influence of a training instance on a neural network m...
Events in a narrative differ in salience: some are more important to the...
Explaining predictions made by complex machine learning models helps use...
One key principle for assessing semantic similarity between texts is to ...
Interpretable rationales for model predictions play a critical role in p...
Filtering noisy training data is one of the key approaches to improving ...
Because attention modules are core components of Transformer-based model...
In this paper, we propose a new kernel-based co-occurrence measure that ...
This paper presents the first study aimed at capturing stylistic similar...