"No, they did not": Dialogue response dynamics in pre-trained language models

10/05/2022
by Sanghee J. Kim, et al.

A critical component of linguistic competence is the ability to identify the relevant parts of an utterance and reply appropriately. In this paper, we examine the extent of such dialogue response sensitivity in pre-trained language models, conducting a series of experiments focused in particular on the dynamics of at-issueness and ellipsis. We find that models show clear sensitivity to the distinctive role of embedded clauses, and a general preference for responses that target the main clause content of prior utterances. However, the results show mixed and generally weak trends with respect to capturing the full range of dynamics involved in targeting at-issue versus not-at-issue content. Additionally, models show fundamental limitations in their grasp of the dynamics governing ellipsis, and their response selections show clear interference from superficial factors that outweigh the influence of principled discourse constraints.
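The abstract does not spell out the experimental protocol, but a common way to probe this kind of response sensitivity is to compare the log-probabilities a pre-trained language model assigns to competing follow-up responses given the same context. The sketch below is a minimal, hypothetical illustration of that setup using GPT-2 via the Hugging Face transformers library; the model choice, the context sentence, and the two candidate responses (one denying the main clause, one denying the embedded clause) are illustrative assumptions, not the paper's actual stimuli or scoring procedure.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Hypothetical probe: does the model prefer a response that denies the main
# clause over one that denies the embedded clause? (Illustrative only; not
# the paper's actual stimuli, models, or scoring procedure.)
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def response_logprob(context: str, response: str) -> float:
    """Sum of log-probabilities the model assigns to the response tokens,
    conditioned on the context (assumes the context tokenization is a prefix
    of the full-string tokenization, which holds for these simple items)."""
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    full_ids = tokenizer(context + " " + response, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)  # predictions for tokens 1..n-1
    targets = full_ids[0, 1:]
    start = ctx_ids.shape[1] - 1  # position whose prediction is the first response token
    idx = torch.arange(start, targets.shape[0])
    return log_probs[idx, targets[start:]].sum().item()

context = "Mary believes that the students passed the exam."
for resp in ["No, she doesn't.", "No, they didn't."]:
    print(f"{resp!r}: {response_logprob(context, resp):.2f}")
```

Under this kind of setup, a model sensitive to at-issueness would be expected to score the main-clause denial higher; the mixed and weak trends reported in the paper suggest such contrasts are not reliably captured.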

Related research

06/04/2021
Conversations Are Not Flat: Modeling the Dynamic Information Flow across Dialogue Utterances
Nowadays, open-domain dialogue models can generate acceptable responses ...

09/27/2021
Pragmatic competence of pre-trained language models through the lens of discourse connectives
As pre-trained language models (LMs) continue to dominate NLP, it is inc...

07/30/2021
Towards Continual Entity Learning in Language Models for Conversational Agents
Neural language models (LM) trained on diverse corpora are known to work...

10/06/2020
StyleDGPT: Stylized Response Generation with Pre-trained Language Models
Generating responses following a desired style has great potentials to e...

09/25/2021
Sorting through the noise: Testing robustness of information processing in pre-trained language models
Pre-trained LMs have shown impressive performance on downstream NLP task...

09/14/2021
Challenging Instances are Worth Learning: Generating Valuable Negative Samples for Response Selection Training
Retrieval-based chatbot selects the appropriate response from candidates...
