
A Scalable Framework for Learning From Implicit User Feedback to Improve Natural Language Understanding in Large-Scale Conversational AI Systems

by Sunghyun Park, et al.

Natural Language Understanding (NLU) is an established component of conversational AI and digital assistant systems, responsible for producing a semantic understanding of each user request. We propose a scalable, automatic approach for improving NLU in a large-scale conversational AI system by leveraging implicit user feedback, based on the insight that user interaction data and dialog context carry rich embedded information from which user satisfaction and intention can be inferred. In particular, we propose a general, domain-agnostic framework for curating new supervision data from live production traffic to improve NLU. With an extensive set of experiments, we show the results of applying the framework to improve NLU in a large-scale production system and demonstrate its impact across 10 domains.
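The idea of mining implicit feedback for supervision can be sketched as follows. This is a minimal, hypothetical illustration (not the paper's actual pipeline): every name, signal, and threshold below is an assumption, showing one common self-learning pattern in which a dissatisfied turn followed by a successful rephrase yields a new training pair for NLU.

```python
# Hypothetical sketch: infer a satisfaction label for each turn from
# implicit feedback signals in the dialog context, then curate
# (utterance, target_interpretation) pairs as new NLU supervision data.
# All field names and heuristics here are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Turn:
    utterance: str                 # what the user said
    nlu_interpretation: str        # domain/intent the NLU produced
    user_interrupted: bool         # e.g. barge-in on the system response
    rephrased_next: Optional[str]  # the user's immediate rephrase, if any

def infer_feedback(turn: Turn) -> str:
    """Heuristic implicit-feedback labeler (illustrative signals only)."""
    if turn.user_interrupted or turn.rephrased_next is not None:
        return "dissatisfied"
    return "satisfied"

def curate_supervision(dialog: list[Turn]) -> list[tuple[str, str]]:
    """Emit new (utterance, target_interpretation) training pairs.

    When a turn is judged unsatisfactory and the following rephrase
    succeeded, project the rephrase's interpretation back onto the
    original utterance -- a common self-learning pattern.
    """
    pairs = []
    for i, turn in enumerate(dialog):
        if infer_feedback(turn) == "dissatisfied" and i + 1 < len(dialog):
            nxt = dialog[i + 1]
            if infer_feedback(nxt) == "satisfied":
                pairs.append((turn.utterance, nxt.nlu_interpretation))
    return pairs
```

For example, a dialog in which "play thriller" was routed to video, interrupted, and rephrased as "play the song thriller" would yield the pair ("play thriller", Music intent) as new supervision, without any manual annotation.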

