The Daunting Dilemma with Sentence Encoders: Success on Standard Benchmarks, Failure in Capturing Basic Semantic Properties

09/07/2023
by   Yash Mahajan, et al.

In this paper, we adopted a retrospective approach to examine and compare five existing popular sentence encoders, i.e., Sentence-BERT, Universal Sentence Encoder (USE), LASER, InferSent, and Doc2vec, in terms of their performance on downstream tasks versus their capability to capture basic semantic properties. We first evaluated all five sentence encoders on the popular SentEval benchmark and found that several of them perform quite well on a variety of popular downstream tasks. However, since no single encoder won in all cases, we designed further experiments to gain a deeper understanding of their behavior. Specifically, we proposed four semantic evaluation criteria, i.e., Paraphrasing, Synonym Replacement, Antonym Replacement, and Sentence Jumbling, and evaluated the same five sentence encoders against these criteria. We found that the Sentence-BERT and USE models pass the Paraphrasing criterion, with SBERT being the superior of the two, while LASER dominates on the Synonym Replacement criterion. Interestingly, all the sentence encoders failed the Antonym Replacement and Sentence Jumbling criteria. These results suggest that although these popular sentence encoders perform quite well on the SentEval benchmark, they still struggle to capture some basic semantic properties, thus posing a daunting dilemma in NLP research.
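To make the evaluation criteria concrete, the sketch below illustrates the kind of similarity check they rely on: under Antonym Replacement, a robust encoder should assign a clearly lower similarity to an antonym-substituted sentence than to the original, and under Sentence Jumbling it should penalize a word-order shuffle. The `embed` function here is a deliberately simple stand-in (bag-of-words counts), not one of the evaluated encoders; in the paper's setting it would be replaced by a real model such as Sentence-BERT. The stand-in also demonstrates the failure mode directly, since an order-insensitive representation scores a jumbled sentence as identical to the original.

```python
from collections import Counter
import math

def embed(sentence):
    # Stand-in embedding: bag-of-words counts.
    # In practice, substitute a real encoder, e.g. SBERT's model.encode().
    return Counter(sentence.lower().split())

def cosine(u, v):
    # Cosine similarity between two sparse count vectors.
    dot = sum(u[w] * v[w] for w in u)
    norm_u = math.sqrt(sum(c * c for c in u.values()))
    norm_v = math.sqrt(sum(c * c for c in v.values()))
    return dot / (norm_u * norm_v)

original = "the movie was good"
antonym  = "the movie was bad"    # Antonym Replacement: meaning flips
jumbled  = "good was movie the"   # Sentence Jumbling: structure destroyed

# A criterion passes only if similarity drops when the meaning changes;
# the abstract reports that all five encoders fail these two checks.
sim_antonym = cosine(embed(original), embed(antonym))
sim_jumbled = cosine(embed(original), embed(jumbled))
print(f"antonym: {sim_antonym:.2f}")
print(f"jumbled: {sim_jumbled:.2f}")
```

Note that the bag-of-words stand-in gives the jumbled sentence a similarity of 1.0 despite the scrambled word order, which is exactly the kind of insensitivity the Sentence Jumbling criterion is designed to expose.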


Related research

05/03/2018
What you can cram into a single vector: Probing sentence embeddings for linguistic properties
Although much effort has recently been devoted to training high-quality ...

06/16/2018
Evaluation of sentence embeddings in downstream and linguistic probing tasks
Despite the fast developmental pace of new sentence embedding methods, i...

09/27/2020
What does it mean to be language-agnostic? Probing multilingual sentence encoders for typological properties
Multilingual sentence encoders have seen much success in cross-lingual m...

10/04/2019
Neural Language Priors
The choice of sentence encoder architecture reflects assumptions about h...

04/14/2023
Zero-Shot Multi-Label Topic Inference with Sentence Encoders
Sentence encoders have indeed been shown to achieve superior performance...

04/16/2021
Fast, Effective and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders
Pretrained Masked Language Models (MLMs) have revolutionised NLP in rece...

08/29/2018
On Tree-Based Neural Sentence Modeling
Neural networks with tree-based sentence encoders have shown better resu...
