Do Response Selection Models Really Know What's Next? Utterance Manipulation Strategies for Multi-turn Response Selection

09/10/2020
by Taesun Whang, et al.

In this paper, we study the task of selecting the optimal response given the user and system utterance history in retrieval-based multi-turn dialog systems. Recently, pre-trained language models (e.g., BERT, RoBERTa, and ELECTRA) have shown significant improvements on various natural language processing tasks. Response selection can also be approached with such language models by formulating it as a dialog-response binary classification task. Although existing works using this approach have obtained state-of-the-art results, we observe that language models trained in this manner tend to make predictions based on the relatedness of the history and the candidate, ignoring the sequential nature of multi-turn dialog. This suggests that the response selection task alone is insufficient for learning temporal dependencies between utterances. To this end, we propose utterance manipulation strategies (UMS) to address this problem. Specifically, UMS consist of several strategies (i.e., insertion, deletion, and search) that aid the response selection model in maintaining dialog coherence. Further, UMS are self-supervised methods that require no additional annotation and can therefore be easily incorporated into existing approaches. Extensive evaluation across multiple languages and models shows that UMS are highly effective in teaching dialog consistency, leading to models that push the state of the art by significant margins on multiple public benchmark datasets.
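As a rough illustration of the dialog-response binary classification formulation mentioned in the abstract, the following Python sketch pairs a concatenated dialog history with a candidate response and scores it with a pre-trained BERT encoder via HuggingFace Transformers. The model name ("bert-base-uncased"), the use of [SEP] to join turns, and the label convention are illustrative assumptions rather than the authors' released setup, the classification head is untrained here and would need fine-tuning on dialog-response pairs, and the UMS auxiliary objectives (insertion, deletion, search) are not shown.

# Minimal sketch (assumptions noted above): score a candidate response
# against a multi-turn dialog history with a BERT sequence classifier.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # assumed convention: 1 = next utterance, 0 = not
)
model.eval()

# Multi-turn dialog history (user and system turns) and a candidate response.
history = [
    "Hi, my laptop won't turn on.",
    "Have you tried holding the power button for ten seconds?",
]
candidate = "Yes, I tried that but nothing happens."

# Join the history into one context segment and pair it with the candidate;
# the model then classifies whether the candidate is the true next turn.
context = " [SEP] ".join(history)
inputs = tokenizer(context, candidate, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits
score = torch.softmax(logits, dim=-1)[0, 1].item()  # probability of "next utterance"
print(f"relevance score: {score:.3f}")

In this formulation, candidates are ranked by the classifier's score; the paper's observation is that a model trained only on this objective tends to rely on topical relatedness, which motivates the additional self-supervised UMS objectives.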

Related research

03/03/2020
Sequential Neural Networks for Noetic End-to-End Response Selection
The noetic end-to-end response selection challenge as one track in the 7...

01/09/2019
Sequential Attention-based Network for Noetic End-to-End Response Selection
The noetic end-to-end response selection challenge as one track in Dialo...

04/07/2020
Speaker-Aware BERT for Multi-Turn Response Selection in Retrieval-Based Chatbots
In this paper, we study the problem of employing pre-trained language mo...

10/24/2020
Measuring the 'I don't know' Problem through the Lens of Gricean Quantity
We consider the intrinsic evaluation of neural generative dialog models ...

03/10/2020
Learning to Respond with Stickers: A Framework of Unifying Multi-Modality in Multi-Turn Dialog
Stickers with vivid and engaging expressions are becoming increasingly p...

05/24/2023
Frugal Prompting for Dialog Models
The use of large language models (LLMs) in natural language processing (...

01/24/2021
Does Dialog Length matter for Next Response Selection task? An Empirical Study
In the last few years, the release of BERT, a multilingual transformer b...
