
Robots-Dont-Cry: Understanding Falsely Anthropomorphic Utterances in Dialog Systems

by David Gros, et al.

Dialog systems are often designed or trained to output human-like responses. However, some responses may be impossible for a machine to truthfully say (e.g. "that movie made me cry"). Highly anthropomorphic responses might make users uncomfortable or implicitly deceive them into thinking they are interacting with a human. We collect human ratings on the feasibility of approximately 900 two-turn dialogs sampled from 9 diverse data sources. Ratings are for two hypothetical machine embodiments: a futuristic humanoid robot and a digital assistant. We find that for some data sources commonly used to train dialog systems, 20-30% of utterances are rated as infeasible for a machine to say, and that feasibility is only marginally affected by machine embodiment. We explore qualitative and quantitative reasons for these ratings. Finally, we build classifiers and explore how modeling configuration might affect output permissibility, and discuss implications for building less falsely anthropomorphic dialog systems.
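The paper builds learned classifiers for this task; purely as a minimal illustration of what "flagging falsely anthropomorphic responses" means (a keyword heuristic sketch, not the authors' method, with an illustrative pattern list invented here), one might write:

```python
import re

# Illustrative patterns for first-person claims of embodied or lived
# experience that a machine could not truthfully make. This list is a
# hypothetical example, not taken from the paper.
INFEASIBLE_PATTERNS = [
    r"\bI (cried|ate|slept|grew up|was born)\b",
    r"\bmade me (cry|hungry|sleepy)\b",
    r"\bmy (childhood|parents|body)\b",
]

def flag_infeasible(response: str) -> bool:
    """Return True if the response matches a falsely anthropomorphic pattern."""
    return any(re.search(p, response, re.IGNORECASE)
               for p in INFEASIBLE_PATTERNS)

print(flag_infeasible("That movie made me cry"))        # True
print(flag_infeasible("That movie has great reviews"))  # False
```

A real system, as studied in the paper, would instead learn such judgments from the collected human feasibility ratings.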


Affect-Driven Dialog Generation

The majority of current systems for end-to-end dialog generation focus o...

TicketTalk: Toward human-level performance with end-to-end, transaction-based dialog systems

We present a data-driven, end-to-end approach to transaction-based dialo...

Multi-turn Dialog System on Single-turn Data in Medical Domain

Recently there has been a huge interest in dialog systems. This interest...

Unsupervised Enrichment of Persona-grounded Dialog with Background Stories

Humans often refer to personal narratives, life experiences, and events ...

Information Seeking in the Spirit of Learning: a Dataset for Conversational Curiosity

Open-ended human learning and information-seeking are increasingly media...

"I'd rather just go to bed": Understanding Indirect Answers

We revisit a pragmatic inference problem in dialog: understanding indire...