Logical Reasoning for Task Oriented Dialogue Systems

02/08/2022
by Sajjad Beygi, et al.

In recent years, large pretrained models have been used in dialogue systems to improve successful task completion rates. However, the lack of reasoning capabilities in dialogue platforms makes it difficult to provide relevant and fluent responses, unless the designers of a conversational experience spend considerable time implementing these capabilities in external rule-based modules. In this work, we propose a novel method to fine-tune pretrained transformer models such as RoBERTa and T5 to reason over a set of facts in a given dialogue context. Our method includes a synthetic data generation mechanism that helps the model learn logical relations, such as comparison between lists of numerical values, inverse relations (and negation), inclusion and exclusion for categorical attributes, application of combinations of attributes over both numerical and categorical values, and spoken forms of numerical values, without the need for an additional training dataset. We show that the transformer-based model can perform logical reasoning to answer questions when the dialogue context contains all the required information; otherwise, it is able to extract appropriate constraints to pass to downstream components (e.g., a knowledge base) when only partial information is available. We observe that transformer-based models such as UnifiedQA-T5 can be fine-tuned to perform logical reasoning, such as comparison of numerical and categorical attributes, over attributes seen at training time (e.g., accuracy of 90%+ for comparisons of fewer than k_max = 5 values on a held-out test dataset).
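To make the synthetic data generation mechanism concrete, below is a minimal sketch of how comparison examples over a numerical attribute could be produced in the question-plus-context format that UnifiedQA-T5 expects. This is not the authors' released pipeline: the attribute name, entity names, value range, spoken-form lookup, and the exact "question \n context" formatting are illustrative assumptions.

```python
import random

# Minimal spoken-form lookup for illustration; covers 0..20 only (assumption).
SPOKEN = ["zero", "one", "two", "three", "four", "five", "six", "seven",
          "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
          "fifteen", "sixteen", "seventeen", "eighteen", "nineteen", "twenty"]

ATTRIBUTE = "price"   # hypothetical numerical attribute
ENTITIES = ["restaurant A", "restaurant B", "restaurant C",
            "restaurant D", "restaurant E"]  # hypothetical entities
K_MAX = 5             # abstract reports 90%+ accuracy for fewer than k_max = 5 values

def make_comparison_example(rng, spoken=False):
    """Build one (input, target) pair asking for the minimum of k values."""
    k = rng.randint(2, K_MAX)
    entities = rng.sample(ENTITIES, k)
    values = rng.sample(range(1, 21), k)          # distinct values in 1..20
    render = (lambda v: SPOKEN[v]) if spoken else str
    # Facts as they might appear after extraction from the dialogue context.
    facts = " ".join(f"the {ATTRIBUTE} of {e} is {render(v)}."
                     for e, v in zip(entities, values))
    question = f"which option has the lowest {ATTRIBUTE}?"
    answer = entities[values.index(min(values))]
    # UnifiedQA-style input: question, a literal "\n" separator, then context.
    return f"{question} \\n {facts}", answer

rng = random.Random(0)
src, tgt = make_comparison_example(rng, spoken=True)
print(src)   # e.g. which option has the lowest price? \n the price of ...
print(tgt)   # e.g. restaurant C
```

Feeding such (input, target) pairs into a standard sequence-to-sequence fine-tuning loop is one plausible way to teach the comparison skill; varying k up to k_max controls how many values must be compared, and toggling the spoken flag exposes the model to spoken-form numerals.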

