Revealing the Blind Spot of Sentence Encoder Evaluation by HEROS

06/08/2023
by Cheng-Han Chiang, et al.

Existing semantic textual similarity (STS) benchmark datasets use only a single number to summarize how closely a sentence encoder's (SE's) decisions match humans'. However, it remains unclear what kind of sentence pairs an SE would consider similar. Moreover, existing SE benchmarks mainly contain sentence pairs with low lexical overlap, so it is unclear how SEs behave when two sentences have high lexical overlap. We introduce a high-quality SE diagnostic dataset, HEROS. HEROS is constructed by transforming an original sentence into a new sentence based on certain rules to form a minimal pair with high lexical overlap. The rules include replacing a word with a synonym, an antonym, a typo, or a random word, and converting the original sentence into its negation. Different rules yield different subsets of HEROS. By systematically comparing the performance of over 60 supervised and unsupervised SEs on HEROS, we reveal that most unsupervised sentence encoders are insensitive to negation. We also find that the dataset used to train an SE is the main determinant of what kind of sentence pairs the SE considers similar. Furthermore, we show that even if two SEs have similar performance on STS benchmarks, they can behave very differently on HEROS. Our results reveal the blind spot of traditional STS benchmarks when evaluating SEs.
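As a rough illustration of the construction idea described in the abstract, the sketch below forms a minimal pair by substituting a single word (a synonym or antonym rule) and then scores the pair with an off-the-shelf sentence encoder. The example sentence, the toy lexicons, and the model name are assumptions for illustration only, not the paper's actual pipeline or released code.

```python
# Minimal sketch (not the authors' code): build HEROS-style minimal pairs
# with a rule-based word substitution and score them with a sentence encoder.
from sentence_transformers import SentenceTransformer, util

# Toy lexicons standing in for the paper's substitution rules (assumed).
SYNONYMS = {"happy": "glad"}
ANTONYMS = {"happy": "sad"}

def apply_rule(sentence: str, rule: dict) -> str:
    """Replace the first word covered by `rule`, leaving the rest unchanged,
    so the resulting pair has high lexical overlap."""
    words = sentence.split()
    for i, w in enumerate(words):
        if w.lower() in rule:
            words[i] = rule[w.lower()]
            break
    return " ".join(words)

original = "The child looks happy today."
pairs = {
    "synonym": (original, apply_rule(original, SYNONYMS)),
    "antonym": (original, apply_rule(original, ANTONYMS)),
}

# Any sentence encoder can be plugged in here; all-MiniLM-L6-v2 is just an example.
model = SentenceTransformer("all-MiniLM-L6-v2")
for name, (s1, s2) in pairs.items():
    e1, e2 = model.encode([s1, s2])
    print(name, float(util.cos_sim(e1, e2)))
```

Under the blind spot described in the abstract, an encoder that ignores meaning-changing edits would assign the antonym pair a cosine similarity nearly as high as the synonym pair, despite the opposite meaning.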

Related research:

02/23/2016 - Sentence Similarity Learning by Lexical Decomposition and Composition
Most conventional sentence similarity methods only focus on similar part...

09/09/2021 - ESimCSE: Enhanced Sample Building Method for Contrastive Learning of Unsupervised Sentence Embedding
Contrastive learning has been attracting much attention for learning uns...

09/13/2016 - An Experimental Study of LSTM Encoder-Decoder Model for Text Simplification
Text simplification (TS) aims to reduce the lexical and structural compl...

04/01/2019 - PAWS: Paraphrase Adversaries from Word Scrambling
Existing paraphrase identification datasets lack sentence pairs that hav...

06/14/2021 - Improving Paraphrase Detection with the Adversarial Paraphrasing Task
If two sentences have the same meaning, it should follow that they are e...

12/09/2022 - MED-SE: Medical Entity Definition-based Sentence Embedding
We propose Medical Entity Definition-based Sentence Embedding (MED-SE), ...

07/06/2023 - LEA: Improving Sentence Similarity Robustness to Typos Using Lexical Attention Bias
Textual noise, such as typos or abbreviations, is a well-known issue tha...
