The Devil is in the Details: Evaluating Limitations of Transformer-based Methods for Granular Tasks

11/02/2020
by Brihi Joshi, et al.

Contextual embeddings derived from transformer-based neural language models have shown state-of-the-art performance for various tasks such as question answering, sentiment analysis, and textual similarity in recent years. Extensive work shows how accurately such models can represent abstract, semantic information present in text. In this expository work, we explore a tangent direction and analyze such models' performance on tasks that require a more granular level of representation. We focus on the problem of textual similarity from two perspectives: matching documents on a granular level (requiring embeddings to capture fine-grained attributes in the text), and an abstract level (requiring embeddings to capture overall textual semantics). We empirically demonstrate, across two datasets from different domains, that despite high performance in abstract document matching as expected, contextual embeddings are consistently (and at times, vastly) outperformed by simple baselines like TF-IDF for more granular tasks. We then propose a simple but effective method to incorporate TF-IDF into models that use contextual embeddings, achieving relative improvements of up to 36% on granular tasks.
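The abstract does not spell out how TF-IDF is incorporated. As a minimal sketch of one way such a hybrid could look, the snippet below interpolates TF-IDF cosine similarity with contextual-embedding cosine similarity; the function combined_similarity, the weight alpha, and the model name all-MiniLM-L6-v2 are illustrative assumptions, not the paper's actual method.

```python
# Minimal sketch (assumed, not the paper's method): blend lexical and
# semantic similarity scores for document matching.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sentence_transformers import SentenceTransformer  # contextual embeddings


def combined_similarity(docs_a, docs_b, alpha=0.5,
                        model_name="all-MiniLM-L6-v2"):
    """Pairwise similarity matrix mixing TF-IDF and contextual signals."""
    # Lexical channel: TF-IDF captures fine-grained token-level attributes.
    tfidf = TfidfVectorizer().fit(docs_a + docs_b)
    sim_lexical = cosine_similarity(tfidf.transform(docs_a),
                                    tfidf.transform(docs_b))

    # Semantic channel: contextual embeddings capture overall meaning.
    model = SentenceTransformer(model_name)
    sim_semantic = cosine_similarity(model.encode(docs_a),
                                     model.encode(docs_b))

    # Convex combination; alpha weights the granular (lexical) signal.
    return alpha * sim_lexical + (1 - alpha) * sim_semantic


docs = ["revenue rose 4.2% in Q3", "the model is a 12-layer transformer"]
queries = ["Q3 revenue increased 4.2%"]
print(combined_similarity(queries, docs, alpha=0.5))
```

With alpha near 1 the score is dominated by exact token overlap (the granular signal the paper finds TF-IDF handles well), while alpha near 0 falls back to purely semantic matching.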


Related research:

- Improving Entity Linking through Semantic Reinforced Entity Embeddings (06/16/2021)
- Dual Attention Networks for Multimodal Reasoning and Matching (11/02/2016)
- A Closer Look at Linguistic Knowledge in Masked Language Models: The Case of Relative Clauses in American English (11/02/2020)
- Sentiment Analysis with Contextual Embeddings and Self-Attention (03/12/2020)
- LMMS Reloaded: Transformer-based Sense Embeddings for Disambiguation and Beyond (05/26/2021)
- Divide and Conquer: Text Semantic Matching with Disentangled Keywords and Intents (03/06/2022)
- CrisisBERT: a Robust Transformer for Crisis Classification and Contextual Crisis Embedding (05/11/2020)
