Large-scale Hybrid Approach for Predicting User Satisfaction with Conversational Agents

05/29/2020
by   Dookun Park, et al.

Measuring user satisfaction is a challenging task and a critical component in developing large-scale conversational agent systems that serve the needs of real users. A widely used approach is to collect human annotation data and use it for evaluation or modeling. Human-annotation-based approaches are easier to control but hard to scale. A novel alternative is to collect users' direct feedback via a feedback elicitation system embedded in the conversational agent, and use the collected feedback to train a machine-learned model for generalization. User feedback is the best proxy for user satisfaction, but it is unavailable for ineligible intents and in certain situations. Thus, these two types of approaches are complementary. In this work, we tackle the user satisfaction assessment problem with a hybrid approach that fuses explicit user feedback with user satisfaction predictions inferred by two machine-learned models, one trained on user feedback data and the other on human annotation data. The hybrid approach is based on a waterfall policy, and experimental results on Amazon Alexa's large-scale datasets show significant improvements in inferring user satisfaction. This paper presents the detailed hybrid architecture, an in-depth analysis of user feedback data, and an algorithm that generates datasets to properly simulate live traffic.
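The waterfall policy described above can be sketched as a simple cascade: prefer explicit user feedback when it exists, otherwise fall back to the model trained on feedback data (for eligible intents), and finally to the model trained on human annotation data. The sketch below is illustrative only; the function and parameter names (`explicit_feedback`, `intent_eligible`, `feedback_model_score`, `annotation_model_score`) are assumptions, not identifiers from the paper.

```python
from typing import Optional


def waterfall_satisfaction(
    explicit_feedback: Optional[bool],
    intent_eligible: bool,
    feedback_model_score: float,
    annotation_model_score: float,
) -> float:
    """Return a satisfaction estimate in [0, 1] from the first available signal.

    A hypothetical sketch of a waterfall policy: each stage is consulted
    only when the preceding, more trusted signal is unavailable.
    """
    if explicit_feedback is not None:
        # Direct user feedback is the best proxy for satisfaction; use it first.
        return 1.0 if explicit_feedback else 0.0
    if intent_eligible:
        # Next, defer to the model trained on user feedback data,
        # which only covers feedback-eligible intents.
        return feedback_model_score
    # Finally, fall back to the model trained on human annotation data.
    return annotation_model_score
```

The cascade ordering encodes the paper's premise that explicit feedback is the most reliable signal, while the annotation-trained model provides coverage where feedback cannot be collected.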
