Evaluating Pretrained Transformer Models for Entity Linking in Task-Oriented Dialog

12/15/2021
by Sai Muralidhar Jayanthi, et al.

The wide applicability of pretrained transformer models (PTMs) to natural language tasks is well demonstrated, but their ability to comprehend short phrases of text is less explored. To this end, we evaluate different PTMs through the lens of unsupervised Entity Linking in task-oriented dialog across 5 characteristics – syntactic, semantic, short-forms, numeric and phonetic. Our results demonstrate that several of the PTMs produce sub-par results when compared to traditional techniques, albeit competitive with other neural baselines. We find that some of these shortcomings can be addressed by using PTMs fine-tuned for text-similarity tasks, which show an improved ability to comprehend semantic and syntactic correspondences, as well as some improvement on short-forms, numeric and phonetic variations in entity mentions. We perform a qualitative analysis to understand nuances in their predictions and discuss scope for further improvements. Code can be found at https://github.com/murali1996/el_tod
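The unsupervised setup the abstract describes can be sketched as follows: embed the entity mention and every catalog entity with the same encoder, then link the mention to the entity with the highest similarity score. This is a minimal illustration only — a character-trigram vectorizer stands in for a PTM encoder (the paper evaluates actual PTM embeddings), and the catalog entries are made-up examples.

```python
import math
from collections import Counter

def char_ngrams(text, n=3):
    # Character trigrams crudely stand in for a PTM encoder's output vector.
    text = f"#{text.lower()}#"
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def link(mention, entities):
    # Unsupervised entity linking: no training signal, just pick the
    # catalog entity most similar to the mention under the representation.
    vec = char_ngrams(mention)
    return max(entities, key=lambda e: cosine(vec, char_ngrams(e)))

catalog = ["Pizza Hut", "Pizza My Heart", "Papa John's"]
print(link("pizzahut", catalog))  # → Pizza Hut
```

Swapping the trigram vectorizer for a PTM sentence encoder turns this into the kind of system the paper probes: the 5 characteristics (syntactic, semantic, short-forms, numeric, phonetic) then test whether the encoder's similarity scores survive each style of mention variation.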


