Hybrid Supervised Reinforced Model for Dialogue Systems

11/04/2020
by   Carlos Miranda, et al.

This paper presents a recurrent hybrid model and training procedure for task-oriented dialogue systems based on Deep Recurrent Q-Networks (DRQN). The model handles both tasks required for dialogue management: state tracking and decision making. It models the human-machine interaction as a latent representation that embeds the interaction context and guides the discussion. The model achieves greater performance, faster learning, and better robustness than a non-recurrent baseline. Moreover, the results make it possible to interpret and validate the evolution of the policy and the information content of the latent representations.
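The core idea of a DRQN-style dialogue manager is that a recurrent cell accumulates the dialogue history into a latent state, and Q-values over system actions are read out from that state at each turn. The minimal sketch below illustrates this with a plain numpy recurrent cell; all dimensions, parameter names, and the random initialisation are illustrative assumptions, not the paper's architecture or trained weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: per-turn observation, latent dialogue state, system actions.
OBS_DIM, HID_DIM, N_ACTIONS = 8, 16, 4

# Randomly initialised parameters of a simple recurrent Q-network (illustrative only).
W_in = rng.normal(scale=0.1, size=(HID_DIM, OBS_DIM))    # observation -> latent
W_rec = rng.normal(scale=0.1, size=(HID_DIM, HID_DIM))   # latent -> latent (recurrence)
W_q = rng.normal(scale=0.1, size=(N_ACTIONS, HID_DIM))   # latent -> Q-values

def drqn_step(obs, h):
    """One dialogue turn: fold the new observation into the latent state
    (state tracking), then read out Q-values per action (decision making)."""
    h_next = np.tanh(W_in @ obs + W_rec @ h)
    q = W_q @ h_next
    return q, h_next

# Roll the network over a 5-turn dialogue of random observations.
h = np.zeros(HID_DIM)
for _ in range(5):
    q, h = drqn_step(rng.normal(size=OBS_DIM), h)

# Greedy decision for the final turn.
action = int(np.argmax(q))
print(q.shape, action)
```

Because the latent state `h` is threaded through every turn, the Q-values at each step depend on the whole interaction context rather than on the current observation alone, which is what distinguishes DRQN from a feed-forward DQN baseline.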


Related research

03/31/2017 · Frames: A Corpus for Adding Memory to Goal-Oriented Dialogue Systems
This paper presents the Frames dataset (Frames is available at http://da...

05/16/2022 · Taming Continuous Posteriors for Latent Variational Dialogue Policies
Utilizing amortized variational inference for latent-action reinforcemen...

09/22/2020 · Deep Reinforcement Learning for On-line Dialogue State Tracking
Dialogue state tracking (DST) is a crucial module in dialogue management...

05/01/2018 · Memory-augmented Dialogue Management for Task-oriented Dialogue Systems
Dialogue management (DM) decides the next action of a dialogue system ac...

11/28/2018 · Few-Shot Generalization Across Dialogue Tasks
Machine-learning based dialogue managers are able to learn complex behav...

09/09/2021 · Uncertainty Measures in Neural Belief Tracking and the Effects on Dialogue Policy Performance
The ability to identify and resolve uncertainty is crucial for the robus...

08/08/2012 · Hybrid systems modeling for gas transmission network
Gas Transmission Networks are large-scale complex systems, and correspon...
