A Short Survey of Pre-trained Language Models for Conversational AI - A New Age in NLP

04/22/2021
by Munazza Zaib, et al.

Building a dialogue system that can communicate naturally with humans is a challenging yet interesting problem in agent-based computing. Rapid progress in this area is usually hindered by the long-standing problem of data scarcity, as these systems are expected to learn syntax, grammar, decision making, and reasoning from insufficient amounts of task-specific data. The recently introduced pre-trained language models have the potential to address the issue of data scarcity and bring considerable advantages by generating contextualized word embeddings. These models are considered the counterpart of ImageNet in NLP and have been shown to capture different facets of language, such as hierarchical relations, long-term dependency, and sentiment. In this short survey paper, we discuss the recent progress made in the field of pre-trained language models. We also discuss how the strengths of these language models can be leveraged in designing more engaging and more eloquent conversational agents. This paper, therefore, intends to establish whether these pre-trained models can overcome the challenges pertinent to dialogue systems, and how their architectures could be exploited in order to overcome these challenges. Open challenges in the field of dialogue systems are also discussed.

Related research

07/12/2019
Hello, It's GPT-2 -- How Can I Help You? Towards the Use of Pretrained Language Models for Task-Oriented Dialogue Systems
Data scarcity is a long-standing and crucial challenge that hinders quic...

07/17/2022
Effectiveness of French Language Models on Abstractive Dialogue Summarization Task
Pre-trained language models have established the state-of-the-art on var...

02/12/2023
Stabilized In-Context Learning with Pre-trained Language Models for Few Shot Dialogue State Tracking
Prompt-based methods with large pre-trained language models (PLMs) have ...

07/30/2021
Towards Continual Entity Learning in Language Models for Conversational Agents
Neural language models (LM) trained on diverse corpora are known to work...

02/09/2021
AuGPT: Dialogue with Pre-trained Language Models and Data Augmentation
Attention-based pre-trained language models such as GPT-2 brought consid...

08/11/2023
Neural Conversation Models and How to Rein Them in: A Survey of Failures and Fixes
Recent conditional language models are able to continue any kind of text...

10/10/2021
Language Models As or For Knowledge Bases
Pre-trained language models (LMs) have recently gained attention for the...
